id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2305.19858 | Enhancing image quality prediction with self-supervised visual masking | Full-reference image quality metrics (FR-IQMs) aim to measure the visual differences between a pair of reference and distorted images, with the goal of accurately predicting human judgments. However, existing FR-IQMs, including traditional ones like PSNR and SSIM and even perceptual ones such as HDR-VDP, LPIPS, and DISTS, still fall short in capturing the complexities and nuances of human perception. In this work, rather than devising a novel IQM model, we seek to improve upon the perceptual quality of existing FR-IQM methods. We achieve this by considering visual masking, an important characteristic of the human visual system that changes its sensitivity to distortions as a function of local image content. Specifically, for a given FR-IQM metric, we propose to predict a visual masking model that modulates reference and distorted images in a way that penalizes the visual errors based on their visibility. Since the ground truth visual masks are difficult to obtain, we demonstrate how they can be derived in a self-supervised manner solely based on mean opinion scores (MOS) collected from an FR-IQM dataset. Our approach results in enhanced FR-IQM metrics that are more in line with human prediction both visually and quantitatively. | Uğur Çoğalan, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski | 2023-05-31T13:48:51Z | http://arxiv.org/abs/2305.19858v2 | # Enhancing image quality prediction with self-supervised visual masking
###### Abstract
Full-reference image quality metrics (FR-IQMs) aim to measure the visual differences between a pair of reference and distorted images, with the goal of accurately predicting human judgments. However, existing FR-IQMs, including traditional ones like PSNR and SSIM and even perceptual ones such as HDR-VDP, LPIPS, and DISTS, still fall short in capturing the complexities and nuances of human perception. In this work, rather than devising a novel IQM model, we seek to improve upon the perceptual quality of existing FR-IQM methods. We achieve this by considering visual masking, an important characteristic of the human visual system that changes its sensitivity to distortions as a function of local image content. Specifically, for a given FR-IQM metric, we propose to predict a visual masking model that modulates reference and distorted images in a way that penalizes the visual errors based on their visibility. Since the ground truth visual masks are difficult to obtain, we demonstrate how they can be derived in a self-supervised manner solely based on mean opinion scores (MOS) collected from
an FR-IQM dataset. Our approach results in enhanced FR-IQM metrics that are more in line with human prediction both visually and quantitatively.
specific distribution of pixel values, models based on information theory (Sheikh and Bovik, 2005, 2006) measure the mutual information between images by comparing their joint histograms and taking into account the statistical dependencies between neighboring pixels. Classical metrics can offer either a single overall quality score or a visibility map indicating the distortion intensity. VDM (Lubin, 1995), VDP (Daly, 1993), and HDR-VDP (Mantiuk et al., 2011, 2021) measure the visibility of distortions, the perceived distortion magnitude, or both, by considering various visual aspects such as luminance adaptation, contrast sensitivity, and visual masking. A more recent metric, FLIP (Andersson et al., 2020), emphasizes color differences and is sensitive to even subtle distortions by emulating flipping between the compared image pair.
#### Deep learning-based metrics
In recent years, research in FR-IQA has been placing greater emphasis on perceptual comparisons in deep feature space rather than image space to enhance the alignment with human judgments. Prashnani et al. (2018) are among the first to utilize deep feature models learned from human-labeled data to predict perceptual errors. However, Zhang et al. (2018) demonstrate that internal image representations from classification networks can be used for image comparison. They propose the Learned Perceptual Image Patch Similarity (LPIPS) index, which quantifies image similarity by measuring the \(\ell_{2}\) distances between pre-trained VGG features. To further improve the correlation with human judgments, they learn per-channel weights for selected VGG features using their collected perceptual similarity dataset. Recognizing that simple \(\ell_{p}\)-norm measures fail to consider the statistical dependency of errors across different locations, Ding et al. (2020) introduce DISTS, which aims to measure the texture and structure similarity between feature pairs by comparing their global mean, variance, and correlations in the form of SSIM. Building upon this work, A-DISTS (Ding et al., 2021) extends the approach to incorporate local structure and texture comparisons. Moving away from deterministic point-wise feature comparisons, DeepWSD (Liao et al., 2022) compares the overall distributions of features using the Wasserstein distance, a statistical measure for comparing two distributions. Nevertheless, the majority of the proposed IQMs are targeted toward producing a single quality score and are not primarily
Figure 3. Our proposed visual masking approach for enhancing classic metrics such as MAE and SSIM (left) and learning-based metrics such as DISTS or LPIPS (right). For classic metrics, the inputs to our mask predictor network \(\mathcal{F}\) are sRGB images, while for learning-based metrics, the inputs are the VGG features extracted from the images. We learn the visual masks in a self-supervised fashion by minimizing the difference between the metric final score and human scores collected from an FR-IQM dataset.
Figure 2. Agreement of metric predictions with human judgments. We consider the classic (MAE and SSIM) and learning-based (LPIPS and DISTS) metrics, and we compare their predictions to those of their enhanced versions (E-MAE, E-SSIM, E-DISTS, and E-LPIPS) obtained with our approach. On the left, we see a situation where MAE and SSIM favor JPEG-like artifacts over slightly resampled textures. On the right, we encounter a scenario where LPIPS and DISTS prefer blur over a subtle color shift. Our extended metric versions are always aligned with the human choice. The images have been extracted from the PIPAL dataset (Gu et al., 2020).
designed to generate per-pixel error maps. In this regard, Wolski et al. (2018) employ a custom CNN model trained in a fully supervised way using coarse user marking data to predict an error visibility map that highlights the regions where distortions are more likely to be noticeable.
In this work, we extend the classic and deep learning-based metrics by introducing a learnable component trained on perceptual MOS data in a self-supervised way. By implicitly analyzing local image content, our model derives per-pixel maps which mimic visual masking, effectively modeling the visual significance of distortions.
## 3. Self-supervised visual masking
In this section, we elaborate on our methodology for perceptually calibrating the existing FR-IQMs. Given a reference and distorted pair (\(X\) and \(Y\)) \(\in R^{H\times W\times C}\), we first learn a visual mask, \(M\)\(\in R^{H\times W\times 1}\), which has the same spatial dimensions as the inputs. For classical metrics (Fig. 3-left), the inputs \(X\) and \(Y\) are sRGB images (\(C=3\)), while for learning-based metrics such as LPIPS or DISTS, the inputs are the VGG features extracted from the images and \(C\) is the number of channels in a given VGG layer (Fig. 3-right). The predicted mask is then element-wise multiplied with \(X\) and \(Y\) before being fed into an FR quality metric, \(\mathcal{D}\). Note that the same mask is applied to both the reference and distorted inputs. For estimating the mask \(M\), we utilize a lightweight CNN denoted as \(\mathcal{F}\), which takes both \(X\) and \(Y\) as input. Mathematically, this can be expressed as:
\[M=\mathcal{F}(X,Y) \tag{1}\]
It is important to note that the network \(\mathcal{F}\) is trained specifically for a metric \(\mathcal{D}\). In the case of metrics such as LPIPS and DISTS, we follow their specific architecture and compute a mask for each layer using a separate \(\mathcal{F}\), and the same mask is applied for all channels in a given layer (Fig. 3-right). Since we cannot directly supervise the output of the mask generator network, we adopt a self-supervised approach to train it using an IQM dataset with a single quality score. The network's parameters are optimized by minimizing the \(\ell_{2}\) difference between the metric output value and human scores. Our loss is formulated as follows:
\[Loss=\|\mathcal{G}(\mathcal{D}(M\odot X,M\odot Y))-q\|_{2}^{2} \tag{2}\]
Here, \(q\in[0,1]\) represents the normalized mean opinion score when comparing the images \(X\) and \(Y\). As the metric response can vary in an arbitrary range, following a similar approach in Zhang et al. (2018), a small network \(\mathcal{G}\) is jointly trained to map the metric response to the human ratings. Note that the network \(\mathcal{G}\) is not applied during inference time but rather during the training process.
### Training and network details
For training, we use the KADID dataset (Lin et al., 2019), which comprises 81 natural images that have been distorted using 25 types of traditional distortions, each at five different levels, making roughly 10k training pairs. Note that we train our mask generator network \(\mathcal{F}\) for all the distortion categories together rather than for one specific category. We find that a simple CNN with three convolutional layers, each consisting of 64 channels, suffices for successful training. ReLU activation is applied after each layer, while we use Sigmoid activation for the final layer to keep the mask values in the range between 0 and 1. Our mapping network \(\mathcal{G}\) consists of two 32-channel fully connected (FC) ReLU layers, followed by a 1-channel FC layer with Sigmoid activation. The batch size for training is set to 4. We employ the Adam optimizer (Kingma and Ba, 2017) with an initial learning rate of \(10^{-4}\) and a weight decay of \(10^{-6}\).
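To make this concrete, the following is a minimal PyTorch sketch of the pixel-space variant used for E-MAE; the module and function names (MaskPredictor, ScoreMapper, base_metric, training_step) are ours, and details such as the input concatenation and the per-image averaging are our reading of the description above rather than the authors' released code.

```python
import torch
import torch.nn as nn

class MaskPredictor(nn.Module):
    """F: predicts a single-channel visual mask in [0, 1] from a reference/distorted pair."""
    def __init__(self, in_ch=6):  # 3 sRGB channels for X plus 3 for Y (assumed concatenation)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

class ScoreMapper(nn.Module):
    """G: maps the raw metric response to the normalized MOS range (used during training only)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, s):
        return self.net(s)

def base_metric(x, y):
    """D: per-image MAE, standing in for any differentiable FR-IQM."""
    return (x - y).abs().mean(dim=(1, 2, 3)).unsqueeze(1)

mask_net, mapper = MaskPredictor(), ScoreMapper()
opt = torch.optim.Adam(list(mask_net.parameters()) + list(mapper.parameters()),
                       lr=1e-4, weight_decay=1e-6)

def training_step(x, y, q):
    """x, y: (B, 3, H, W) reference/distorted batch; q: (B, 1) normalized MOS."""
    m = mask_net(x, y)                        # M = F(X, Y), Eq. (1)
    score = base_metric(m * x, m * y)         # D(M ⊙ X, M ⊙ Y)
    loss = ((mapper(score) - q) ** 2).mean()  # Eq. (2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```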
## 4. Results
In this section, we first present our experimental setup, which we use then for our method evaluation and ablations of different training strategies.
### Experimental setup
We employ our visual masking approach to enhance some of the classical metrics (MAE, PSNR, SSIM, MS-SSIM, and FLIP) and recent learning-based methods (VGG, LPIPS, DISTS, and DeepWSD). Note that for MS-SSIM, we use the same \(\mathcal{F}\) across all scales, while the inputs are images at different scales. Moreover, the metric called VGG is computed by simply taking the \(\ell_{1}\) difference between VGG features for the same layers as originally chosen for LPIPS and DISTS. We assess the performance of our proposed approach on three well-established IQM datasets, including CSIQ (Larson and
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{CSIQ} & \multicolumn{3}{c}{TID} & \multicolumn{3}{c}{PIPAL} \\ \cline{2-10}
**Metric** & PLCC & SRCC & KRCC & PLCC & SRCC & KRCC & PLCC & SRCC & KRCC \\ \hline
FSIM & 0.900 & 0.913 & 0.740 & 0.847 & 0.789 & 0.611 & 0.651 & 0.617 & 0.441 \\
VIF & 0.826 & 0.841 & 0.642 & 0.820 & 0.813 & 0.616 & 0.584 & 0.538 & 0.378 \\
HDR-VDP-2 & 0.912 & 0.815 & 0.724 & 0.815 & 0.775 & 0.588 & 0.592 & 0.514 & 0.363 \\
PieAPP & 0.827 & 0.840 & 0.653 & 0.832 & 0.849 & 0.652 & **0.729** & **0.709** & **0.521** \\ \hline
MAE & 0.819 & 0.801 & 0.599 & 0.639 & 0.627 & 0.409 & 0.458 & 0.443 & 0.304 \\
E-MAE & 0.871 & 0.917 & 0.738 & 0.857 & 0.863 & 0.673 & 0.597 & 0.066 & 0.429 \\
PSNR & 0.851 & 0.837 & 0.645 & 0.726 & 0.714 & 0.540 & 0.468 & 0.456 & 0.314 \\
E-PSNR & 0.901 & 0.910 & 0.728 & 0.855 & 0.844 & 0.656 & 0.637 & 0.629 & 0.446 \\
SSIM & 0.848 & 0.863 & 0.665 & 0.697 & 0.663 & 0.479 & 0.550 & 0.534 & 0.373 \\
E-SSIM & 0.869 & 0.910 & 0.732 & 0.842 & 0.868 & 0.677 & 0.671 & 0.656 & 0.469 \\
MS-SSIM & 0.826 & 0.841 & 0.642 & 0.820 & 0.813 & 0.616 & 0.584 & 0.538 & 0.379 \\
E-MS-SSIM & 0.862 & 0.895 & 0.709 & 0.806 & 0.825 & 0.621 & 0.642 & 0.634 & 0.453 \\
FLIP & 0.731 & 0.724 & 0.527 & 0.591 & 0.537 & 0.413 & 0.498 & 0.442 & 0.306 \\
E-FLIP & 0.871 & 0.902 & 0.715 & 0.859 & 0.858 & 0.666 & 0.621 & 0.612 & 0.434 \\
VGG & 0.938 & 0.952 & 0.804 & 0.853 & 0.820 & 0.634 & & 0.610 & 0.432 \\
E-VGG & 0.914 & 0.938 & 0.776 & 0.895 & 0.889 & 0.710 & 0.695 & 0.675 & 0.485 \\
LPIPS & 0.944 & 0.929 & 0.769 & 0.803 & 0.756 & 0.586 & 0.640 & 0.598 & 0.424 \\
E-LPIPS & 0.922 & 0.933 & 0.771 & 0.884 & 0.876 & 0.689 & 0.705 & 0.678 & 0.490 \\
DISTS & 0.947 & 0.947 & 0.796 & 0.839 & 0.811 & 0.619 & 0.645 & 0.626 & 0.445 \\
E-DISTS & 0.932 & 0.925 & 0.753 & **0.903** & **0.915** & **0.725** & 0.725 & 0.697 & 0.507 \\
DeepWSD & **0.949** & **0.961** & **0.821** & 0.879 & 0.861 & 0.674 & 0.593 & 0.584 & 0.409 \\
E-DeepWSD & 0.937 & 0.937 & 0.775 & 0.905 & **0.892** & 0.710 & 0.704 & 0.672 & 0.485 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Performance comparison of existing FR-IQMs (top part) and their enhanced versions using our approach (specified by the prefix E in the bottom part) on three standard IQM datasets. Higher values of SRCC, PLCC, and KRCC indicate better quality prediction. The first and second best metrics for each dataset are indicated in bold and underlined, respectively. Additionally, the version with superior correlation is highlighted in dark gray for each metric.
Chandler 2010), TID2013 (Ponomarenko et al., 2015), and PIPAL (Gu et al., 2020). The first two datasets mainly consist of synthetic distortions, ranging from 1k to 3k images. On the other hand, PIPAL is the most comprehensive IQA dataset due to its diverse and complex distortions, consisting of 23k images. Each reference image in this dataset was subjected to 116 distortions, including 19 GAN-type distortions. For evaluation, following Ding et al. (2020), we resize the smaller side of the input images to 224 pixels while maintaining the aspect ratio. For each dataset, three metrics are used for evaluation: Spearman's rank correlation coefficient (SRCC), Pearson linear correlation coefficient (PLCC), and the Kendall rank correlation coefficient (KRCC). The PLCC measures the accuracy of the predictions, while the SRCC indicates the monotonicity of the predictions, and the KRCC measures the ordinal association. When computing the PLCC, we map the metric scores to the MOS values using a four-parameter function (Ding et al., 2020).
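As a sketch of this evaluation protocol (assuming SciPy; the four-parameter logistic below is one common choice for the monotonic mapping and is only assumed to match the function used by Ding et al. (2020)):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau
from scipy.optimize import curve_fit

def logistic4(s, b1, b2, b3, b4):
    """Monotonic four-parameter mapping from raw metric scores to the MOS scale."""
    return (b1 - b2) / (1.0 + np.exp(-(s - b3) / np.abs(b4))) + b2

def evaluate(scores, mos):
    """scores, mos: 1-D arrays of metric values and subjective ratings."""
    srcc = spearmanr(scores, mos)[0]
    krcc = kendalltau(scores, mos)[0]
    # Fit the nonlinear mapping before computing PLCC.
    p0 = [mos.max(), mos.min(), np.median(scores), np.std(scores) + 1e-6]
    popt, _ = curve_fit(logistic4, scores, mos, p0=p0, maxfev=20000)
    plcc = pearsonr(logistic4(scores, *popt), mos)[0]
    return plcc, srcc, krcc
```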
### Evaluations
In this section, we present the outcome of the quantitative (agreement with the MOS data) and qualitative (the quality of error maps) evaluation of our method. We also analyze the mask content and relate it with perceptual models of contrast and blur perception. Finally, we analyze the error map prediction of different distortion levels, and we consider the potential use of our enhanced E-MAE metric as a loss in an image restoration task.
**Quality prediction** The experimental results are presented in Tbl. 1, where the prefix E denotes our proposed extension for each specific IQM. Our extension of traditional metrics, such as MAE, PSNR, SSIM, and FLIP, consistently improves their performance on all datasets. This is remarkable as those metrics are commonly used, and our simple extension can make their distortion prediction closer to the human observer. Interestingly, the enhanced E-MAE and E-PSNR outperform the recent learning-based VGG, LPIPS, and DISTS on the TID dataset while showing a comparable performance on the PIPAL dataset. Notable improvements are also observed in both datasets for the recent learning-based metrics (E-VGG, E-LPIPS, E-DISTS, and E-DeepWSD), positioning them at a level comparable to other state-of-the-art IQMs, such as PieAPP (Prashnani et al., 2018). The only exception is the small-scale CSIQ dataset, where the original learning-based metrics already achieve high correlations with the MOS data and leave little space for further improvements.
**Error map prediction** In Fig. 1, we show the error maps predicted by various existing IQMs and their enhanced versions for a set of images featuring different types of distortions. Note that the error map for VGG is visualized for the first layer. Moreover, the scores for MAE and VGG metrics exist in an unbounded range, and following Andersson et al. (2020), we apply a Sigmoid function to normalize them within the range from zero to one. Fig. 11 presents more examples for the SSIM and VGG metrics. Additionally, Fig. 4 showcases two examples where our E-MAE metric achieves more visually accurate error maps compared to well-established visibility metrics such as HDR-VDP-2 (Mantiuk et al., 2011), LocVis (Wolski et al., 2018), and FovVideoVDP (Mantiuk et al., 2021). Please refer to our supplementary material for more comprehensive results.
**Mask visualization** It is also intriguing to see the learned mask, i.e., the output of the network \(\mathcal{F}\), and to compare it with a traditional visual contrast masking model, such as the one used in JPEG2000 compression (Zeng et al., 2002). To this end, Fig. 5 presents our masks generated for noise and blur distortions. We consider the same distortion level and three levels of image contrast enhancement (\(\times 0.5,\times 1\), and \(\times 2\)). In the case of noise distortion, our learned masks predict stronger visual masking in the high-contrast butterfly and better noise visibility in the out-of-focus smooth background. Increasing image contrast (\(\times 2\)) leads to even stronger visual masking in the butterfly area and the plant behind it. Reducing image contrast (\(\times 0.5\)) leads to the inverse effect. Such behavior is compatible with the visual contrast masking model (Zeng et al., 2002; Tursun et al., 2019), where due to self-contrast masking, the higher the contrast of the original signal (e.g., on edges), the stronger the distortion should be to make it visible. Along a similar line,
Figure 4. The visual comparisons of distortion visibility maps for superresolution (upper row) and joint denoising and superresolution (bottom row) tasks as acquired from the PIPAL dataset. The first two columns present the reference and distorted images, followed by the respective metric predictions: MAE, HDR-VDP-2 (Mantiuk et al., 2011), LocVis (Wolski et al., 2018), FovVideoVDP (Mantiuk et al., 2021), and our E-MAE. As can be seen, the existing metrics tend to overestimate the distortion visibility. Note that LocVis and E-MAE have not seen such distortions in their training.
due to neighborhood masking, the higher the contrast of a texture, the stronger the visual masking as well. In the case of blur distortion, our learned mask predicts its strong visibility on high-contrast edges. The stronger the image contrast (\(\times 2\)), the better the blur visibility. The higher weight assigned by our mask to high-contrast regions agrees with perceptual models of blur detection and discrimination (Watson and Ahumada, 2011; Sebastian et al., 2015).
Note that we derive each mask taking as an input both the reference and distorted images; the mask can resolve even per-pixel distortions, as in the case of noise (Fig. 5), and accordingly informs the E-MAE metric on the perceptual importance of such distortions. What is also remarkable is that the HVS might impose contradictory requirements on hand-crafted visual models that become specific for a given distortion. This is well illustrated in Fig. 5, where noise can be better masked by strong contrast patterns (Zeng et al., 2002; Tursun et al., 2019) while blur is actually better revealed by strong contrast patterns (Watson and Ahumada, 2011). Our learned E-MAE mask somehow recognizes the distortion context and reacts as expected by penalizing noise distortions less in high-contrast and textured regions while penalizing blur distortions more at high-contrast edges. Interestingly, such local, seemingly contradictory behavior has been learned solely by providing multiple pairs of reference and distorted images along with the corresponding quality MOS rating, which is just a single number. No annotation on specific distortion types has been required in our training. Fig. 8 shows further examples that our learned masking is also informed about contrast masking by texture (Ferwerda et al., 1997) and the contrast sensitivity function (CSF) (Daly, 1993; Barten, 1999; Wuerger et al., 2020).
**Sensitivity to different distortion levels** Our predicted mask can effectively highlight both the presence of a perceived error between two images at each pixel as well as its corresponding magnitude. In this regard, Fig. 9 illustrates the predicted mask for a set of Monte Carlo-rendered images with a progressively increased number of samples per pixel (SPP). As can be seen, the perceived error decreases as the sample count rises, which is better reflected in our predicted mask and the E-MAE error map compared to the MAE metric. Note that the predicted mask correctly reacts to the emergence of new fireflies as the sample count increases.
**Employing the enhanced metric as the loss** In this part, we investigate the benefit of the enhanced IQM metrics in optimizing image restoration algorithms. To this end, we employ MAE and E-MAE as loss functions for training image denoising using the state-of-the-art image restoration method, Restormer (Zamir et al., 2022). For our training set, we select the images in the BSD400 dataset (Martin et al., 2001) and introduce synthetic noise to these images by applying additive white Gaussian noise with a randomly chosen standard deviation ranging between 0 and 50. Then, we evaluate the trained models on five benchmark datasets, consistent with the ones used in (Zamir et al., 2022). We conduct our evaluation for various noise levels and report the results in Tbl. 2. We can observe that training with MAE leads to a higher PSNR value; however, training with E-MAE yields better scores when assessing with LPIPS and E-MAE metrics. The visual results are provided in Fig. 10.
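A minimal sketch of the noise synthesis and loss swap assumed in this experiment is given below; the restoration network (Restormer) and the trained E-MAE module are taken as given, and the function names are ours.

```python
import torch

def add_awgn(clean, max_sigma=50.0):
    """Additive white Gaussian noise with a per-image std drawn uniformly from [0, max_sigma]
    (8-bit scale; images assumed normalized to [0, 1])."""
    sigma = torch.rand(clean.shape[0], 1, 1, 1, device=clean.device) * max_sigma / 255.0
    return (clean + sigma * torch.randn_like(clean)).clamp(0.0, 1.0)

def restoration_loss(restored, clean, emae_metric=None):
    """E-MAE loss when a trained E-MAE module is supplied, plain MAE otherwise."""
    if emae_metric is not None:
        return emae_metric(restored, clean).mean()
    return (restored - clean).abs().mean()
```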
### Ablations
We perform a set of ablations to investigate the impact of reduced training data in terms of distortion levels, reference image number, and distortion type diversity on the E-MAE metric prediction accuracy.
**Distortion levels** The first experiment analyzes the importance of incorporating various distortion levels into our training set. In this regard, we train our network for the E-MAE metric using only one distortion level per category, and the results are reported in Fig. 6. Interestingly, for all the datasets (except PIPAL), an inverse U-shape trend emerged across five different distortion levels, where we observe the lowest correlation when training with the minimum and maximum distortion levels (levels 1 and 5). Conversely, a
Figure 5. Comparison of our E-MAE metric masks for the noise (fifth row) and blur (sixth row) distortions as a function of different image contrast (\(\times 0.5,\times 1\), and \(\times 2\)). In the fourth row, we also show a map with the human sensitivity to local contrast changes as predicted by a traditional model of visual contrast masking (Tursun et al., 2019, Eq.4). In all cases, darker means less sensitive. However, the scale of our masks that serve as weights in the E-MAE metric is different than the one for the visual contrast masking model that directly denotes the hypothetical HVS response to local distortion contrast.
moderate amount of distortion (level 3) appears to be sufficiently representative for each distortion category and achieves a correlation comparable to training with all five levels. This behavior can be anticipated because, at the lowest and highest distortion levels, the distortions are either barely visible or strongly visible, leading to the consistent selection of mostly extreme rating scores. Consequently, when the network is exclusively exposed to images with one such extreme distortion and rating level, it fails to learn to differentiate between them. On the other hand, at moderate distortion levels, where distortions are visible in some regions and invisible in others, the network has a better opportunity to learn masks that behave differently at varying spatial locations.
**Dataset size** Although we employ a large-scale KADID dataset in our training (25 distortion types \(\times\) 5 distortion levels), the number of reference images is limited to 81. This ablation aims to investigate the training performance by even further reducing the number of reference images. To this end, we perform multiple runs of E-MAE metric training using randomly selected subsets of 20, 40, and 60 reference images. Fig. 7 presents the SRCC correlations averaged over multiple runs. The correlation differences between 40, 60, and the full set of 81 reference images are minor. In the case of 20 reference images, the performance is slightly lower and the variance higher, which indicates that 20 scenes might not be enough to capture image content variability.
## 5. Limitations
Actual visual contrast masking is a function of the viewing conditions and the display size (Chandler, 2013), which are taken into account in some perceptual quality metrics (Daly, 1993; Mantiuk et al., 2011, 2021; Andersson et al., 2020) but are otherwise mostly ignored. However, the effectiveness of our visual masking model is limited to the experimental setup under which the human scores in the KADID dataset were obtained. As shown in Fig. 8, in the context of the CSF reproduction, our metric might not be well calibrated for near-threshold contrast stimuli, whose visibility is also affected by the viewing distance and display conditions. We relegate improving on these aspects to future work.
## 6. Conclusion
In this paper, we present a new approach towards reducing the notorious gap between existing quality metric predictions and the actual distortion visibility as perceived by a human observer. We achieve this by self-supervised training of a metric-specific network using existing distortion datasets labeled with mean opinion scores (MOS). We show that although the overall image quality is rated with a single MOS value in the training data, by securing sufficient diversity of such training, as detailed in our ablation study, the network can turn the global MOS into a meaningful per-pixel mask. The mask, through different weighting of local distortion visibility, helps a given metric to aggregate such local information into the comprehensive MOS value, as imposed by the training data. The mask can be learned directly in the image space for traditional metrics or in the feature space for recent learning-based metrics. In either case, it is trivial to incorporate into existing metrics. Remarkably, our approach improves the performance of commonly used metrics, such as MAE, PSNR, SSIM, and FLIP, on all datasets we tested. The prediction accuracy of recent learning-based metrics is also typically substantially enhanced.
|
2305.19680 | Spectral theory of Jacobi operators with increasing coefficients. The critical case | Spectral properties of Jacobi operators $J$ are intimately related to an asymptotic behavior of the corresponding orthogonal polynomials $P_{n}(z)$ as $n\to\infty$. We study the case where the off-diagonal coefficients $a_{n}$ and, eventually, diagonal coefficients $b_{n}$ of $J$ tend to infinity in such a way that the ratio $\gamma_{n}:=2^{-1}b_{n}(a_{n}a_{n-1})^{-1/2}$ has a finite limit $\gamma$. In the case $|\gamma| < 1$ asymptotic formulas for $P_{n}(z)$ generalize those for the Hermite polynomials and the corresponding Jacobi operators $J$ have absolutely continuous spectra covering the whole real line. If $|\gamma| > 1$, then spectra of the operators $J$ are discrete. Our goal is to investigate the critical case $|\gamma|=1$ that occurs, for example, for the Laguerre polynomials. The formulas obtained depend crucially on the rate of growth of the coefficients $a_{n}$ (or $b_{n}$) and are qualitatively different in the cases where $a_{n}\to\infty$ faster or slower than $n$. For the fast growth of $a_{n}$, we also have to distinguish the cases $|\gamma_{n}|\to 1-0$ and $|\gamma_{n}|\to 1+0$. Spectral properties of the corresponding Jacobi operators are quite different in all these cases. Our approach works for an arbitrary power growth of the Jacobi coefficients. | D. R. Yafaev | 2023-05-31T09:23:57Z | http://arxiv.org/abs/2305.19680v1 | # Spectral theory of Jacobi operators with increasing coefficients. The critical case
###### Abstract.
Spectral properties of Jacobi operators \(J\) are intimately related to an asymptotic behavior of the corresponding orthogonal polynomials \(P_{n}(z)\) as \(n\to\infty\). We study the case where the off-diagonal coefficients \(a_{n}\) and, eventually, diagonal coefficients \(b_{n}\) of \(J\) tend to infinity in such a way that the ratio \(\gamma_{n}:=2^{-1}b_{n}(a_{n}a_{n-1})^{-1/2}\) has a finite limit \(\gamma\). In the case \(|\gamma|<1\) asymptotic formulas for \(P_{n}(z)\) generalize those for the Hermite polynomials and the corresponding Jacobi operators \(J\) have absolutely continuous spectra covering the whole real line. If \(|\gamma|>1\), then spectra of the operators \(J\) are discrete. Our goal is to investigate the critical case \(|\gamma|=1\) that occurs, for example, for the Laguerre polynomials. The formulas obtained depend crucially on the rate of growth of the coefficients \(a_{n}\) (or \(b_{n}\)) and are qualitatively different in the cases where \(a_{n}\to \infty\) faster or slower than \(n\). For the fast growth of \(a_{n}\), we also have to distinguish the cases \(|\gamma_{n}|\to 1-0\) and \(|\gamma_{n}|\to 1+0\). Spectral properties of the corresponding Jacobi operators are quite different in all these cases. Our approach works for an arbitrary power growth of the Jacobi coefficients.
Key words and phrases: Increasing Jacobi coefficients, difference equations, Jost solutions, limiting absorption principle, absolutely continuous spectrum.

2000 Mathematics Subject Classification: 33C45, 39A70, 47A40, 47B39.

Supported by Russian Science Foundation project 22-11-00070.
## 1. Introduction. Basic definitions
### Jacobi operators
We consider Jacobi operators defined by three-diagonal matrices
\[\mathcal{J}=\begin{pmatrix}b_{0}&a_{0}&0&0&0&\cdots\\ a_{0}&b_{1}&a_{1}&0&0&\cdots\\ 0&a_{1}&b_{2}&a_{2}&0&\cdots\\ 0&0&a_{2}&b_{3}&a_{3}&\cdots\\ \vdots&\vdots&\vdots&\ddots&\ddots&\ddots\end{pmatrix}\]
in the canonical basis of the space \(\ell^{2}(\mathbb{Z}_{+})\). Thus, if \(u=(u_{0},u_{1},\ldots)^{\top}=:(u_{n})\) is a column, then
\[(\mathcal{J}u)_{0}=b_{0}u_{0}+a_{0}u_{1}\quad\text{and}\quad(\mathcal{J}u)_{n }=a_{n-1}u_{n-1}+b_{n}u_{n}+a_{n}u_{n+1}\quad\text{for}\quad n\geq 1.\]
It is always supposed that \(a_{n}>0\), \(b_{n}=\bar{b}_{n}\) so that the matrix \(\mathcal{J}\) is symmetric and commutes with the complex conjugation. The minimal Jacobi operator \(J_{\min}\) is
defined by the equality \(J_{\min}u=\mathcal{J}u\) on the set \(\mathcal{D}\subset\ell^{2}(\mathbb{Z}_{+})\) of vectors \(u=(u_{n})\) with only a finite number of non-zero components \(u_{n}\). The operator \(J_{\min}\) is symmetric in the space \(\ell^{2}(\mathbb{Z}_{+})\) and \(J_{\min}:\mathcal{D}\to\mathcal{D}\). Its adjoint \(J_{\min}^{*}\) coincides with the maximal operator \(J_{\max}\) given by the same formula \(J_{\max}u=\mathcal{J}u\) on the set \(\mathcal{D}(J_{\max})\) of all vectors \(u\in\ell^{2}(\mathbb{Z}_{+})\) such that \(\mathcal{J}u\in\ell^{2}(\mathbb{Z}_{+})\).
The operator \(J_{\min}\) is bounded if and only if both sequences \(a_{n}\) and \(b_{n}\) are in \(\ell^{\infty}(\mathbb{Z}_{+})\). In general, \(J_{\min}\) may have deficiency indices \((0,0)\) (that is, it is essentially self-adjoint) or \((1,1)\). Its essential self-adjointness depends on a behavior of solutions to the difference equation
\[a_{n-1}F_{n-1}(z)+b_{n}F_{n}(z)+a_{n}F_{n+1}(z)=zF_{n}(z),\quad n\geq 1. \tag{1.1}\]
Recall that the theory developed by H. Weyl for differential equations can be naturally adapted to equations (1.1) (see, e.g., §3 of Chapter 1 in the book [1] and references therein). For \(\operatorname{Im}z\neq 0\), equation (1.1) always has a non-trivial solution \(F_{n}(z)\in\ell^{2}(\mathbb{Z}_{+})\). This solution is either unique (up to a constant factor) or all solutions of equation (1.1) belong to \(\ell^{2}(\mathbb{Z}_{+})\). The first instance is known as the limit point case and the second one as the limit circle case. It turns out that the operator \(J_{\min}\) is essentially self-adjoint if and only if the limit point case occurs; then the closure \(\operatorname{clos}J_{\min}\) of \(J_{\min}\) equals \(J_{\max}\). In the limit circle case, the operator \(J_{\min}\) has deficiency indices \((1,1)\).
It is well known that the limit point case occurs if \(a_{n}\to\infty\) as \(n\to\infty\) but not too rapidly. For example, the condition
\[\sum_{n=0}^{\infty}a_{n}^{-1}=\infty \tag{1.2}\]
(introduced by T. Carleman in his book [4]) is sufficient for the essential self-adjointness of the operator \(J_{\min}\). Under this condition no assumptions on the diagonal elements \(b_{n}\) are required. In general, the essential self-adjointness of \(J_{\min}\) is determined by a competition between sequences \(a_{n}\) and \(b_{n}\). For example, if \(b_{n}\) are much larger than \(a_{n}\), then \(J_{\min}\) is close to a diagonal operator so that it is essentially self-adjoint independently of the growth of \(a_{n}\).
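As a quick check that will be relevant for the power-like coefficients (1.8) considered below, if \(a_{n}\sim n^{\sigma}\), then

\[\sum_{n=1}^{\infty}a_{n}^{-1}\sim\sum_{n=1}^{\infty}n^{-\sigma}\begin{cases}=\infty&\text{for}\quad\sigma\leq 1,\\ <\infty&\text{for}\quad\sigma>1,\end{cases}\]

so condition (1.2) holds for \(\sigma\leq 1\) and fails for \(\sigma>1\); this is exactly the dividing line that reappears in Sects. 1.3 and 1.4 below.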
### Orthogonal polynomials
Orthogonal polynomials \(P_{n}(z)\) can be formally defined as "eigenvectors" of the Jacobi operators. This means that a column
\[P(z)=\left(P_{0}(z),P_{1}(z),\ldots\right)^{\top}\]
satisfies the equation \(\mathcal{J}P(z)=zP(z)\) with \(z\in\mathbb{C}\) being an "eigenvalue". This equation is equivalent to the recurrence relation
\[a_{n-1}P_{n-1}(z)+b_{n}P_{n}(z)+a_{n}P_{n+1}(z)=zP_{n}(z),\quad n\in\mathbb{Z}_ {+}=\{0,1,2,\ldots\}, \tag{1.3}\]
complemented by boundary conditions \(P_{-1}(z)=0\), \(P_{0}(z)=1\). Determining \(P_{n}(z)\), \(n=1,2,\ldots\), successively from (1.3), we see that \(P_{n}(z)\) is a polynomial with real coefficients of degree \(n\): \(P_{n}(z)=p_{n}z^{n}+\cdots\) where \(p_{n}=(a_{0}a_{1}\cdots a_{n-1})^{-1}\).
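As a purely illustrative sketch (ours, not part of the original text), the polynomials \(P_{n}(z)\) can be tabulated directly from recurrence (1.3):

```python
import numpy as np

def orthonormal_polynomials(z, a, b, N):
    """Evaluate P_0(z), ..., P_N(z) from the three-term recurrence (1.3),
    with boundary conditions P_{-1} = 0, P_0 = 1; a[n], b[n] are the Jacobi coefficients."""
    P = np.zeros(N + 1, dtype=complex)
    P_prev, P[0] = 0.0, 1.0
    for n in range(N):
        # a_{n-1} P_{n-1} + b_n P_n + a_n P_{n+1} = z P_n, solved for P_{n+1}
        a_prev = a[n - 1] if n >= 1 else 0.0
        P[n + 1] = ((z - b[n]) * P[n] - a_prev * P_prev) / a[n]
        P_prev = P[n]
    return P

# Example: Hermite coefficients a_n = sqrt((n+1)/2), b_n = 0 (cf. (1.4) below)
N = 10
a = np.sqrt((np.arange(N + 1) + 1) / 2.0)
b = np.zeros(N + 1)
print(orthonormal_polynomials(0.5, a, b, N))
```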
The spectra of all self-adjoint extensions \(J\) of the minimal operator \(J_{\min}\) are simple with \(e_{0}=(1,0,0,\ldots)^{\top}\) being a generating vector. Therefore it is natural to define the spectral measure of \(J\) by the relation \(d\Xi_{J}(\lambda)=d\langle E_{J}(\lambda)e_{0},e_{0}\rangle\) where \(E_{J}(\lambda)\) is the spectral family of the operator \(J\) and \(\langle\cdot,\cdot\rangle\) is the scalar product in the space \(\ell^{2}(\mathbb{Z}_{+})\). For all extensions \(J\) of the operator \(J_{\min}\), the polynomials \(P_{n}(\lambda)\) are orthogonal and normalized in the spaces \(L^{2}(\mathbb{R};d\Xi_{J})\):
\[\int_{-\infty}^{\infty}P_{n}(\lambda)P_{m}(\lambda)d\Xi_{J}(\lambda)=\delta_{ n,m};\]
as usual, \(\delta_{n,n}=1\) and \(\delta_{n,m}=0\) for \(n\neq m\). We always consider normalized polynomials \(P_{n}(\lambda)\). They are often called orthonormal. If the operator \(J_{\min}\) is essentially self-adjoint and \(J=\operatorname{clos}J_{\min}\), we write \(d\Xi(\lambda)\) instead of \(d\Xi_{J}(\lambda)\).
It is useful to keep in mind the following elementary observation.
**Proposition 1.1**.: _If a sequence \(F_{n}(z)\) satisfies equation (1.1), then_
\[F_{n}^{\sharp}(z)=(-1)^{n}F_{n}(-z)\]
_satisfies the same equation with the Jacobi coefficients \((a_{n}^{\sharp},b_{n}^{\sharp})=(a_{n},-b_{n})\). In particular, \(P_{n}^{\sharp}(z)=(-1)^{n}P_{n}(-z)\) are the orthonormal polynomials for the coefficients \((a_{n}^{\sharp},b_{n}^{\sharp})\). In the limit point case, if \(J^{\sharp}\) is the Jacobi operator in the space \(\ell^{2}(\mathbb{Z}_{+})\) with matrix elements \((a_{n}^{\sharp},b_{n}^{\sharp})\), then \(J^{\sharp}=-\mathcal{U}^{*}J\,\mathcal{U}\) where the unitary operator \(\mathcal{U}\) is defined by \((\mathcal{U}F)_{n}=(-1)^{n}F_{n}\) for \(n\in\mathbb{Z}_{+}\). The corresponding spectral measures are linked by the relation \(d\Xi^{\sharp}(\lambda)=d\Xi(-\lambda)\). In particular, if \(b_{n}=0\) for all \(n\), then the operators \(J\) and \(-J\) are unitarily equivalent._
A comprehensive presentation of the results briefly described above can be found in the books [1, 5, 21] and the surveys [13, 20, 22, 23].
### Asymptotic results
We study the case \(a_{n}\to\infty\) as \(n\to\infty\) and are interested in the asymptotic behavior of the polynomials \(P_{n}(z)\) as \(n\to\infty\). The condition \(a_{n}\to\infty\) is fulfilled for the Hermite polynomials where the Jacobi coefficients are
\[a_{n}=\sqrt{(n+1)/2}\quad\text{and}\quad b_{n}=0 \tag{1.4}\]
and the Laguerre polynomials \(L_{n}^{(p)}(z)\) where
\[a_{n}=\sqrt{(n+1)(n+1+p)}\quad\text{and}\quad b_{n}=2n+p+1,\quad p>-1. \tag{1.5}\]
In the general case there are two essentially different approaches to this problem. The first one derives asymptotic formulas for \(P_{n}(z)\) from the spectral measure \(d\Xi(\lambda)\), and the second proceeds directly from the coefficients \(a_{n}\), \(b_{n}\). The first method goes back to S. Bernstein (see his pioneering paper [3] or Theorem 12.1.4 in the G. Szego book [21]) who obtained formulas generalizing those for the Jacobi polynomials. In terms of the coefficients \(a_{n}\), \(b_{n}\), the assumptions of [3] correspond to the conditions
\[a_{n}\to a_{\infty}>0,\quad b_{n}\to 0\quad\text{as}\quad n\to\infty. \tag{1.6}\]
Generalizations of the asymptotic formulas for the Hermite polynomials are known as the Plancherel-Rotach formulas.
A study of an asymptotic behavior of the orthonormal polynomials for given coefficients \(a_{n}\), \(b_{n}\) was initiated by P. Nevai in his book [17]. He (see also the papers [14] and [24]) investigated the case of stabilizing coefficients satisfying condition (1.6), but, in contrast to [3], the results of [17, 14, 24] were stated directly in terms of the Jacobi coefficients. The case of the coefficients \(a_{n}\to\infty\) was later studied in [10] by J. Janas and S. Naboko and in [2] by A. Aptekarev and J. Geronimo. It was assumed in these papers that there exists a finite limit
\[\frac{b_{n}}{2\sqrt{a_{n-1}a_{n}}}=:\gamma_{n}\to\gamma,\quad n\to\infty, \tag{1.7}\]
where \(|\gamma|<1\) so that \(b_{n}\) are relatively small compared to \(a_{n}\). The Carleman condition (1.2) was also required. The famous example of this type is given by the Hermite coefficients (1.4). In the general case the results are qualitatively similar to this particular case. Asymptotics of \(P_{n}(\lambda)\) are oscillating for \(\lambda\in\mathbb{R}\) and \(P_{n}(z)\) exponentially grow as \(n\to\infty\) if \(\operatorname{Im}z\neq 0\). Spectra of the operators \(J\) are absolutely continuous and fill the whole real axis. If (1.7) is satisfied with \(|\gamma|>1\), then diagonal elements \(b_{n}\) dominate off-diagonal elements \(a_{n}\). This ensures that the spectra of such operators \(J\) are discrete. Note (see, e.g., [30]) that algebraic structures of asymptotic formulas for the orthonormal polynomials are quite similar in the cases \(|\gamma|<1\) and \(|\gamma|>1\), but in the second case \(P_{n}(z)\) exponentially grow as \(n\to\infty\) even for \(z\in\mathbb{R}\) (unless \(z\) is an eigenvalue of \(J\)).
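For illustration (our own numerical check), the Laguerre coefficients (1.5) give \(\gamma_{n}\to 1\), i.e. exactly the critical case, while the Hermite coefficients (1.4) trivially give \(\gamma_{n}=0\):

```python
import numpy as np

def gamma_n(a, b, n):
    """gamma_n = b_n / (2 sqrt(a_{n-1} a_n)), cf. (1.7)."""
    return b(n) / (2.0 * np.sqrt(a(n - 1) * a(n)))

p = 0.5  # Laguerre parameter, p > -1
a_lag = lambda n: np.sqrt((n + 1) * (n + 1 + p))  # off-diagonal coefficients (1.5)
b_lag = lambda n: 2 * n + p + 1                   # diagonal coefficients (1.5)

for n in (10, 100, 1000, 10000):
    print(n, gamma_n(a_lag, b_lag, n))  # approaches 1 as n grows
```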
The case of rapidly increasing coefficients \(a_{n}\) when the Carleman condition (1.2) is violated, so that
\[\sum_{n=0}^{\infty}a_{n}^{-1}<\infty,\]
was investigated in a recent paper [27] where it was also assumed that \(|\gamma|\neq 1\). Astonishingly, the asymptotics of the orthogonal polynomials in this a priori highly singular case is particularly simple and general.
### Critical case
In the critical case \(|\gamma|=1\), the coefficients \(a_{n}\) and \(b_{n}\) are of the same order and asymptotic formulas for \(P_{n}(z)\) are determined by details of their behavior as \(n\to\infty\).
Thus, one has to impose assumptions on the coefficients \(a_{n}\) and \(b_{n}\) that are more specific than (1.7). To make our presentation as simple as possible, we assume that, asymptotically,
\[a_{n}=n^{\sigma}(1+\alpha n^{-1}+O(n^{-2})),\quad n\to\infty, \tag{1.8}\]
and
\[b_{n}=2\gamma n^{\sigma}(1+\beta n^{-1}+O(n^{-2})),\quad n\to\infty, \tag{1.9}\]
for some \(\alpha,\beta,\gamma\in\mathbb{R}\) and \(\sigma>0\).\({}^{1}\) Thus, the operators with periodically modulated coefficients (see, e.g., [9] and references therein) are out of the scope of this paper. The critical case is distinguished by the condition \(|\gamma|=1\). In view of Proposition 1.1 the results for \(\gamma=1\) and \(\gamma=-1\) are equivalent. It turns out that the asymptotic formulas for \(P_{n}(z)\) depend crucially on the parameter
\[\tau=2\beta-2\alpha+\sigma. \tag{1.10}\]
Footnote 1: The case \(\sigma>3/2\) was considered earlier in [28].
Roughly speaking, the cases \(\tau<0\) (or \(\tau>0\)) correspond to dominating off-diagonal \(a_{n}\) (resp., diagonal \(b_{n}\)) Jacobi coefficients.
All the results of this paper can be extended to a more general situation where the terms \(\alpha n^{-1}\) and \(\beta n^{-1}\) in (1.8), (1.9) are replaced by \(\alpha n^{-p}\) and \(\beta n^{-p}\) for some \(p\in(0,2)\) and the error term \(O(n^{-2})\) is replaced by \(O(n^{-r})\) for \(r>\max\{1,p\}\).
The classical example where the critical case occurs is given by the Laguerre coefficients (1.5). In this case, we have \(\gamma=1\), \(\sigma=1\) and \(\alpha=1+p/2\), \(\beta=(1+p)/2\) so that \(\tau=0\). The corresponding Jacobi operators \(J=J^{(p)}\) have absolutely continuous spectra coinciding with \([0,\infty)\). Another example is given by the Jacobi operators describing birth and death processes investigated in [11] and [15]. The recurrence coefficients of such operators are rather close to (1.5) so that spectral and asymptotic results for these two classes of operators are similar.
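For the reader's convenience, these values follow from a routine expansion (our own verification): since

\[a_{n}=\sqrt{(n+1)(n+1+p)}=n\Bigl(1+\frac{1+p/2}{n}+O(n^{-2})\Bigr)\quad\text{and}\quad b_{n}=2n+p+1=2n\Bigl(1+\frac{(1+p)/2}{n}\Bigr),\]

comparison with (1.8), (1.9) yields \(\sigma=1\), \(\gamma=1\), \(\alpha=1+p/2\), \(\beta=(1+p)/2\), whence, by (1.10), \(\tau=2\beta-2\alpha+\sigma=(1+p)-(2+p)+1=0\).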
Probably, a study of Jacobi operators in the critical case was initiated by J. Dombrowski and S. Pedersen in the papers [6, 7] where spectral properties of such operators were investigated under sufficiently general assumptions on the coefficients \(a_{n}\) and \(b_{n}\). Asymptotics of the orthogonal polynomials in this situation were studied by J. Janas, S. Naboko and E. Sheronova in the pioneering paper [12]. They accepted conditions (1.8), (1.9) with \(\sigma\in(1/2,2/3)\), \(\alpha=\beta=0\) and studied equation (1.1) for real \(z=\lambda\). Both oscillating for \(\lambda>0\) and exponentially growing (or decaying) for \(\lambda<0\) (for \(\gamma=1\)) asymptotics of solutions of equation (1.1) were investigated in [12]. The results of this paper imply that positive spectra of the operators \(J\) are absolutely continuous and negative spectra are discrete. Recently the results of [12] were generalized and supplemented in [16] by some ideas of [2]; see Remark 7.9 below.
We note also the paper [19] by J. Sahbani where interesting spectral results were obtained avoiding a study of asymptotics of the orthogonal polynomials. The paper [19] relies on the Mourre method.
In the non-critical case \(|\gamma|\neq 1\), asymptotic formulas are qualitatively different for \(\sigma\leq 1\) when the Carleman condition is satisfied and for \(\sigma>1\) when the Carleman condition fails. In the critical case the borderline is \(\sigma=3/2\). The case of rapidly increasing coefficients where \(\sigma>3/2\) was studied in [28]. For such \(\sigma\), the limit circle case is realized (if \(\tau<0\)) and the corresponding Jacobi operators have discrete spectra.
Our goal is to consistently study the regular critical case where \(|\gamma|=1\) and \(\sigma\leq 3/2\). Then the Jacobi operator \(J_{\min}\) is essentially self-adjoint, even if the Carleman condition (1.2) fails. Its spectral properties turn out to be qualitatively different in the cases \(\sigma\in(0,1)\), \(\sigma=1\) and \(\sigma\in(1,3/2]\). Moreover, for \(\sigma\in(1,3/2]\) the answers depend crucially on the sign of the parameter \(\tau\) defined by (1.10). In all cases, our asymptotic formulas are constructed in terms of the sequence
\[t_{n}(z)=-\tau n^{-1}+zn^{-\sigma}. \tag{1.11}\]
Note that the critical situation studied here is morally similar to a threshold behavior of orthogonal polynomials for case (1.6). For such coefficients, the role of (1.7) is played (see [17, 14, 26]) by the relation
\[\lim_{n\to\infty}\frac{b_{n}-\lambda}{2a_{n}}=-\frac{\lambda}{2a_{\infty}}.\]
Since the essential spectrum of the operator \(J\) is now \([-2a_{\infty},2a_{\infty}]\), the values \(\lambda=\pm 2a_{\infty}\) are the threshold values of the spectral parameter \(\lambda\). The parameter \(-\lambda/(2a_{\infty})\) plays the role of \(\gamma\) so that the cases \(|\gamma|<1\) (resp., \(|\gamma|>1\)) correspond to \(\lambda\) lying inside the essential spectrum of \(J\) (resp., outside of it).
### Scheme of the approach
We use the traditional approach developed for differential equations
\[-(a(x)f^{\prime}(x,z))^{\prime}+b(x)f(x,z)=zf(x,z),\quad x>0,\quad a(x)>0. \tag{1.12}\]
To a large extent, \(x\), \(a(x)\) and \(b(x)\) in (1.12) play the roles of the parameters \(n\), \(a_{n}\) and \(b_{n}\) in the Jacobi equation (1.1). The regular solution \(\psi(x,z)\) of the differential equation (1.12) is distinguished by the conditions
\[\psi(0,z)=0,\quad\psi^{\prime}(0,z)=1.\]
It plays the role of the polynomial solution \(P_{n}(z)\) of equation (1.1) fixed by the conditions \(P_{-1}(z)=0\), \(P_{0}(z)=1\).
A study of an asymptotics of the regular solution \(\psi(x,z)\) relies on a construction of special solutions of the differential equation (1.12) distinguished by their asymptotics as \(x\to\infty\). For example, in the case \(a(x)=1\), \(b\in L^{1}(\mathbb{R}_{+})\), equation (1.12) has a solution \(f(x,z)\), known as the Jost solution, behaving like \(e^{i\sqrt{z}x}\), \(\operatorname{Im}\sqrt{z}\geq 0\), as \(x\to\infty\). Under fairly general assumptions equation (1.12) has a solution \(f(x,z)\) (we also call it the Jost solution) whose asymptotics is given by the classical Liouville-Green formula (see Chapter 6 of the book [18])
\[f(x,z)\sim\mathcal{G}(x,z)^{-1/2}\exp\Big{(}i\int_{x_{0}}^{x}\mathcal{G}(y,z) dy\Big{)}=:\mathcal{A}(x,z) \tag{1.13}\]
as \(x\to\infty\). Here \(x_{0}\) is some fixed number and
\[\mathcal{G}(x,z)=\sqrt{\frac{z-b(x)}{a(x)}},\quad\operatorname{Im}\mathcal{G} (x,z)\geq 0.\]
Note that the function \(\mathcal{A}(x,z)\) (the Ansatz for the Jost solution \(f(x,z)\)) satisfies equation (1.12) with a sufficiently good accuracy.
For real \(\lambda\) in the absolutely continuous spectrum of the operator
\[-\frac{d}{dx}\big{(}a(x)\frac{d}{dx}\big{)}+b(x),\]
the regular solution \(\psi(x,\lambda)\) of (1.12) is a linear combination of the Jost solutions \(f(x,\lambda+i0)\) and \(f(x,\lambda-i0)\) which yields asymptotics of \(\psi(x,\lambda)\) as \(x\to\infty\). For example, in the case \(a(x)=1\), \(b\in L^{1}(\mathbb{R}_{+})\) and \(\lambda>0\), one has
\[\psi(x,\lambda)=\kappa(\lambda)\sin(\sqrt{\lambda}x+\eta(\lambda))+o(1),\quad x \to\infty,\]
where \(\kappa(\lambda)\) and \(\eta(\lambda)\) are known as the scattering (or limit) amplitude and phase, respectively. If \(\operatorname{Im}z\neq 0\), then one additionally constructs, by an explicit formula, a solution \(g(x,z)\) of (1.12) exponentially growing as \(x\to\infty\). This yields asymptotics of \(\psi(x,z)\) for \(\operatorname{Im}z\neq 0\).
An analogy between equations (1.1) and (1.12) is of course very well known. However, it seems never to have been consistently exploited before. In particular, the papers cited above also use methods specific to difference equations. For example, the absolute continuity of the spectrum is often deduced from the subordinacy theory, the asymptotics of the orthonormal polynomials are calculated by studying infinite products of transfer matrices, etc. Some of these tools are quite ingenious, but, in the author's opinion, the standard approach of differential equations works perfectly well and allows one to study the asymptotic behavior of orthonormal polynomials in a very direct way. It permits an arbitrary growth of the coefficients \(a_{n}\) and \(b_{n}\) (all values of \(\sigma\) in formulas (1.8), (1.9)) and naturally leads to a variety of new results, for example, to a construction of the resolvents of Jacobi operators and to the limiting absorption principle. For Jacobi operators with increasing coefficients, this approach was already used in the non-critical case \(|\gamma|\neq 1\) in [30].
We are applying the same scheme to the regular critical case when conditions (1.8) and (1.9) are satisfied with \(\sigma\leq 3/2\) and \(|\gamma|=1\) in (1.9). Under these assumptions the limit point case occurs although for \(\sigma>1\) the Carleman condition (1.2) is violated.
Let us briefly describe the main steps of our approach. In the non-critical case \(|\gamma|\neq 1\), it was presented in [30].
A. First, we distinguish solutions (the Jost solutions) \(f_{n}(z)\) of the difference equation (1.1) by their asymptotics as \(n\to\infty\). This requires a construction of an Ansatz \(\mathcal{A}_{n}(z)\) for the Jost solutions such that the relative remainder
\[\mathbf{r}_{n}(z):=(\sqrt{a_{n-1}a_{n}}\mathcal{A}_{n}(z))^{-1}\big{(}a_{n-1} \mathcal{A}_{n-1}(z)+(b_{n}-z)\mathcal{A}_{n}(z)+a_{n}\mathcal{A}_{n+1}(z) \big{)} \tag{1.14}\]
belongs at least to the space \(\ell^{1}(\mathbb{Z}_{+})\).
B. We seek \(\mathcal{A}_{n}(z)\) in the form
\[\mathcal{A}_{n}(z)=(-\gamma)^{n}n^{-\rho}e^{i\varphi_{n}(\gamma z)},\quad \gamma=\pm 1, \tag{1.15}\]
where the power \(\rho\) in the amplitude and the phases \(\varphi_{n}\) are determined by the coefficients \(a_{n}\), \(b_{n}\). Post factum, \({\mathcal{A}}_{n}(z)\) turns out to be the leading term of the asymptotics of \(f_{n}(z)\) as \(n\to\infty\):
\[f_{n}(z)={\mathcal{A}}_{n}(z)(1+o(1)). \tag{1.16}\]
Actually, the Ansatzen we use are only distantly similar to the Liouville-Green Ansatz (1.13). On the other hand, for \(\sigma=1\), relation (1.15) is close to formulas of the Birkhoff-Adams method significantly polished in [25] (see also Theorem 8.36 in the book [8]).
C. Then we make a multiplicative change of variables
\[f_{n}(z)={\mathcal{A}}_{n}(z)u_{n}(z) \tag{1.17}\]
which permits us to reduce the Jacobi equation (1.1) for \(f_{n}(z)\) to a Volterra "integral" equation for the sequence \(u_{n}(z)\). This equation depends of course on the parameters \(a_{n}\), \(b_{n}\). In particular, for \(\sigma>1\), it is qualitatively different in the cases \(\tau<0\) and \(\tau>0\). However in all cases the Volterra equation for \(u_{n}(z)\) is standardly solved by iterations which allows us to prove that it has a solution such that \(u_{n}(z)\to 1\) as \(n\to\infty\). Then the Jost solutions \(f_{n}(z)\) are defined by formula (1.17).
D. To find an asymptotics of all solutions of the Jacobi equation (1.1) and, in particular, of the orthonormal polynomials \(P_{n}(z)\), we have to construct a solution linearly independent with \(f_{n}(z)\). If a real \(z=\lambda\) belongs to the absolutely continuous spectrum of the operator \(J\), then the solutions \(f_{n}(\lambda+i0)\) and its complex conjugate \(f_{n}(\lambda-i0)\) are linearly independent. For regular points \(z\), a solution \(g_{n}(z)\) of (1.1) linearly independent with \(f_{n}(z)\) is constructed (see, e.g., Theorem 2.2 in [30]) by an explicit formula
\[g_{n}(z)=f_{n}(z)\sum_{m=n_{0}}^{n}(a_{m-1}f_{m-1}(z)f_{m}(z))^{-1},\quad n \geq n_{0}, \tag{1.18}\]
where \(n_{0}=n_{0}(z)\) is a sufficiently large number. It follows from (1.15), (1.16) that this solution grows exponentially (for \(\sigma<3/2\)) as \(n\to\infty\):
\[g_{n}(z)=i\varkappa(z)(-\gamma)^{n+1}n^{-\rho}e^{-i\varphi_{n}(\gamma z)} \bigl{(}1+o(1)\bigr{)}; \tag{1.19}\]
the factor \(\varkappa(z)\) here is given by equality (2.13), but it is inessential in (1.19). Since \(g_{n}(z)\) is linearly independent with \(f_{n}(z)\), the polynomials \(P_{n}(z)\) are linear combinations of \(f_{n}(z)\) and \(g_{n}(z)\) which yields asymptotics of \(P_{n}(z)\).
E. Our results on the Jost solutions \(f_{n}(z)\) allow us to determine the spectral structure of the operator \(J\) and to construct its resolvent \(R(z)\). At the same time, we obtain the limiting absorption principle for the operator \(J\) stating that matrix elements of its resolvent \(R(z)\), that is the scalar products \(\langle R(z)u,v\rangle\), \(\operatorname{Im}z\neq 0\), are continuous functions of \(z\) up to the absolutely continuous spectrum of the operator \(J\) if elements \(u\) and \(v\) belong to a suitable dense subset of \(\ell^{2}({\mathbb{Z}}_{+})\).
All these steps, except possibly the construction of the exponentially growing solution \(g_{n}(z)\), are rather standard. No more specific tools are required in the problem considered.
Actually, the scheme described above works virtually in all asymptotic problems in the limit point case, both for difference and differential operators. In the limit circle case, some modifications are required; see [27, 28]. The important differences are that, in the limit circle case, one has two natural Ansatzen \(\mathcal{A}_{n}^{(\pm)}=n^{-\rho}e^{\pm i\varphi_{n}}\) where \(\varphi_{n}=\bar{\varphi}_{n}\) does not depend on the spectral parameter \(z\in\mathbb{C}\) and \(\rho>1/2\) so that \(\mathcal{A}_{n}^{(\pm)}\in\ell^{2}(\mathbb{Z}_{+})\).
To emphasize the analogy between differential and difference equations, we often use the "continuous" terminology (Volterra integral equations, integration by parts, etc.) for sequences labelled by the discrete variable \(n\).
Our plan is the following. The main results of the paper are stated in Sect. 2. In Sect. 3, we define the number \(\rho\) and the phases \(\varphi_{n}\) in formula (1.15) for the Ansatz \(\mathcal{A}_{n}(z)\) and check an estimate
\[\mathbf{r}_{n}(z)=O(n^{-\delta}),\quad n\to\infty, \tag{1.20}\]
with an appropriate \(\delta=\delta(\rho)>1\) for remainder (1.14). A Volterra integral equation for \(u_{n}(z)\) is introduced and investigated in Sect. 4. This leads to a construction of the Jost solutions \(f_{n}(z)\) in Sect. 5. In this section, the proofs of Theorems 2.1, 2.3 and 2.4 are concluded. Asymptotics of the orthonormal polynomials \(P_{n}(z)\) are found in Sect. 6. The results for regular points \(z\) and for \(z\) in the absolutely continuous spectrum of the Jacobi operator \(J\) are stated in Theorems 6.6 and 6.11, respectively. The results on spectral properties of the Jacobi operators are collected in Theorem 2.11. Its proof is given in Sect. 7.
## 2. Main results
Our goal is to study the critical case when assumptions (1.8) and (1.9) are satisfied with \(|\gamma|=1\). In proofs, we may suppose that \(\gamma=1\). The results for \(\gamma=-1\) then follow from Proposition 1.1.
The results stated below crucially depend on the values of \(\sigma\) and \(\tau\). In the cases \(\sigma\in(1,3/2]\) (\(\sigma\in(0,1)\)) the first (resp., the second) term in (1.11) is dominating so that the asymptotic formulas are qualitatively different in these cases.
### Jost solutions
Our approach relies on a study of solutions \(f_{n}(z)\) of the Jacobi equation (1.1) distinguished by their behavior for \(n\to\infty\). Actually, we determine the sequences \(f_{n}(z)\) by their asymptotics
\[f_{n}(z)=(-\gamma)^{n}n^{-\rho}e^{i\varphi_{n}(\gamma z)}\big{(}1+o(1)\big{)},\quad n\to\infty. \tag{2.1}\]
Here
\[\rho=\begin{cases}\sigma/2-1/4\quad\text{for}\quad\sigma\geq 1\\ \sigma/4\quad\text{for}\quad\sigma\leq 1\end{cases} \tag{2.2}\]
(observe that \(\rho\) takes the critical value \(\rho=1/2\) for the critical value \(\sigma=3/2\)) and
\[\varphi_{n}(z)=\sum_{m=0}^{n}\theta_{m}(z). \tag{2.3}\]
The terms \(\theta_{n}(z)\) will be defined by explicit formulas below in this subsection. Note that
\[\operatorname{Im}\theta_{n}(z)\geq 0. \tag{2.4}\]
By analogy with differential equations, it is natural to use the term "Jost solutions" for \(f_{n}(z)\). In the situation we consider, formula (2.1) plays the role of the Liouville-Green formula (1.13). Observe that, for an arbitrary constant \(C(z)\), the sequence \(C(z)f_{n}(z)\) can also be taken for the Jost solution. In particular, changing a finite number of terms in equality (2.3) is inessential.
We denote \(\Pi=\mathbb{C}\setminus\mathbb{R}\) and \(\Pi_{0}=\mathbb{C}\setminus\mathbb{R}_{+}\). The sequence \(t_{n}(z)\) is given by formula (1.11) where \(\tau\) is number (1.10). The analytic function \(\sqrt{t}\) is defined on \(\Pi_{0}\) and \(\operatorname{Im}\sqrt{t}>0\) for \(t\in\Pi_{0}\). Below \(C\), sometimes with indices, and \(c\) are different positive constants whose precise values are of no importance.
We state the results about the Jost solutions \(f_{n}(z)\) separately for the cases \(\sigma\in(1,3/2]\), \(\sigma\in(0,1)\) and \(\sigma=1\). Let us start with the case \(\sigma>1\).
**Theorem 2.1**.: _Let assumptions (1.8), (1.9) with \(|\gamma|=1\) and \(\sigma\in(1,3/2]\) be satisfied. Set \(\rho=\sigma/2-1/4\),_
\[\theta_{n}(z)=\sqrt{t_{n}(z)} \tag{2.5}\]
_and let \(\varphi_{n}(z)\) be sum (2.3)._
_If \(\tau<0\), then for every \(z\in\operatorname{clos}\Pi\) equation (1.1) has a solution \(f_{n}(z)\) with asymptotics (2.1). For all \(n\in\mathbb{Z}_{+}\), the functions \(f_{n}(z)\) are analytic in \(\Pi\) and are continuous up to the cut along the real axis._
_If \(\tau>0\), then asymptotic formula (2.1) is true for all \(z\in\mathbb{C}\). In this case the functions \(f_{n}(z)\) are analytic in the whole complex plane \(\mathbb{C}\)._
_For all \(\tau\neq 0\), formula (2.1) is uniform in \(z\) from compact subsets of \(\mathbb{C}\)._
We emphasize that the asymptotic behavior of the solutions \(f_{n}(z)\) as \(n\to\infty\) is drastically different for small diagonal elements \(b_{n}\) when \(\tau<0\) and for large \(b_{n}\) when \(\tau>0\) - cf. formulas (2.17) and (2.18), below. This manifests itself in spectral properties of the corresponding Jacobi operators \(J\) - see part \(1^{0}\) of Theorem 2.11.
**Remark 2.2**.: Formula (2.1) is true for all \(\sigma>3/2\), but in this case it can be simplified by setting \(z=0\) in the right-hand side of (2.1). Thus, the leading term of the asymptotics of \(f_{n}(z)\) does not depend on \(z\in\mathbb{C}\), and the power \(\rho>1/2\), so that \(f_{n}(z)\in\ell^{2}(\mathbb{Z}_{+})\). This leads to important spectral consequences: for \(\sigma>3/2\) the deficiency indices of the minimal Jacobi operator \(J_{\min}\) are \((1,1)\), and the spectra of all its self-adjoint extensions are discrete. The case \(\sigma>3/2\) was investigated in [28].
Let us pass to the case \(\sigma<1\). The phases \(\theta_{n}(z)\) are again defined by formula (2.5) for \(\sigma>2/3\), but their construction is more complicated for \(\sigma\leq 2/3\). Let us set
\[T_{n}(z)=t_{n}(z)+\sum_{l=2}^{L}p_{l}t_{n}^{l}(z) \tag{2.6}\]
where a sufficiently large \(L\) depends on \(\sigma\) and the real numbers \(p_{l}\) are defined in Lemma 3.5. In particular, \(T_{n}(z)=t_{n}(z)\) for \(\sigma>2/3\). Given \(T_{n}(z)\), the phases \(\theta_{n}(z)\) are defined by the formula
\[\theta_{n}(z)=\sqrt{T_{n}(z)} \tag{2.7}\]
playing the role of (2.5). It is easy to show (see Remark 3.7, for details) that \(T_{n}(z)\in\Pi_{0}\); thus, \(\theta_{n}(z)\) are correctly defined.
**Theorem 2.3**.: _Let assumptions (1.8), (1.9) with \(|\gamma|=1\) and \(\sigma\in(0,1)\) be satisfied. Set \(\rho=\sigma/4\) and define the functions \(\theta_{n}(z)\) by formulas (2.6), (2.7). Let \(\varphi_{n}(z)\) be sum (2.3). Then for every \(z\neq 0\) such that \(z\in\gamma\operatorname{clos}\Pi_{0}\), equation (1.1) has a solution \(f_{n}(z)\) with asymptotics (2.1). For all \(n\in\mathbb{Z}_{+}\), the functions \(f_{n}(z)\) are analytic in \(z\in\gamma\Pi_{0}\) and are continuous up to the cut along the half-axis \(\gamma\mathbb{R}_{+}\), with a possible exception of the boundary point \(z=0\)._
In the intermediary case \(\sigma=1\), the definition of the phases \(\theta_{n}(z)\) is particularly explicit and the construction of the Jost solutions is simpler than for \(\sigma\neq 1\).
**Theorem 2.4**.: _Let assumptions (1.8), (1.9) with \(|\gamma|=1\) and \(\sigma=1\) be satisfied. Set \(\rho=1/4\), define the functions \(\theta_{n}(z)\) by the formula_
\[\theta_{n}(z)=\sqrt{-\tau+\gamma z}\,n^{-1/2},\]
_and let \(\varphi_{n}(z)\) be sum (2.3). Then for every \(z\) such that \(z\in\gamma(\tau+\operatorname{clos}\Pi_{0})\), \(z\neq\gamma\tau\), equation (1.1) has a solution \(f_{n}(z)\) with asymptotics (2.1). For all \(n\in\mathbb{Z}_{+}\), the functions \(f_{n}(z)\) are analytic in \(z\in\gamma(\tau+\Pi_{0})\) and are continuous up to the cut along the half-axis \(\gamma(\tau+\mathbb{R}_{+})\), with a possible exception of the boundary point \(\gamma\tau\)._
We emphasize that in the case \(\sigma\leq 1\) the condition \(\tau\neq 0\) is not required.
It is convenient to introduce a notation
\[\mathcal{S}=\begin{cases}\mathbb{R}&\text{if}\quad\sigma\in(1,3/2],\,\tau<0\\ \emptyset&\text{if}\quad\sigma\in(1,3/2],\,\tau>0\\ \gamma(0,\infty)&\text{if}\quad\sigma\in(0,1)\\ \gamma(\tau,\infty)&\text{if}\quad\sigma=1.\end{cases} \tag{2.8}\]
We will see in Sect. 2.4 that the spectrum of the operator \(J\) is absolutely continuous on the closed interval \(\operatorname{clos}\mathcal{S}\), and it may be only discrete on \(\mathbb{R}\setminus\operatorname{clos}\mathcal{S}\). Note that Theorems 2.1, 2.3 and 2.4 give asymptotic formulas for the Jost solutions \(f_{n}(z)\) for all \(z\) in the complex plane with the cut along \(\mathcal{S}\), except the thresholds in the absolutely continuous spectrum (\(z=0\) if \(\sigma\in(0,1)\) and \(z=\gamma\tau\) if \(\sigma=1\)). For \(\lambda\in\mathcal{S}\)
equation (1.1) has two linearly independent solutions \(f_{n}(\lambda+i0)\) and its complex conjugate
\[f_{n}(\lambda-i0)=\overline{f_{n}(\lambda+i0)}.\]
Under the assumptions of any of these theorems the solution \(f_{n}(z)\) of equation (1.1) is determined essentially uniquely by its asymptotics (2.1). This is discussed in Sect. 5.5 (see Propositions 5.9, 5.11 and Remark 5.10).
Note that the values of \(u_{m-1}\) and \(u_{m}\) for some \(m\in\mathbb{Z}_{+}\) determine the whole sequence \(u_{n}\) satisfying the difference equation (1.1). Therefore it suffices to construct sequences \(f_{n}(z)\) for sufficiently large \(n\) only. Then they are extended to all \(n\) as solutions of equation (1.1).
We also mention that \(f_{n}^{\sharp}(z)=(-1)^{n}f_{n}(-z)\) is the Jost solution for the Jacobi equation (1.1) with the coefficients \((a_{n}^{\sharp},b_{n}^{\sharp})=(a_{n},-b_{n})\).
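Indeed, writing equation (1.1) in the form \(a_{n-1}u_{n-1}+(b_{n}-z)u_{n}+a_{n}u_{n+1}=0\) (as in the proof of Lemma 4.1 below), this is verified directly:

\[a_{n-1}f_{n-1}^{\sharp}(z)+(-b_{n}-z)f_{n}^{\sharp}(z)+a_{n}f_{n+1}^{\sharp}(z)=(-1)^{n-1}\Big{(}a_{n-1}f_{n-1}(-z)+\big{(}b_{n}-(-z)\big{)}f_{n}(-z)+a_{n}f_{n+1}(-z)\Big{)}=0.\]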
### Asymptotics at infinity
Here we find explicit asymptotic formulas for the phases \(\theta_{n}(z)\) and then for their sums \(\varphi_{n}(z)\) as \(n\to\infty\). These formulas depend crucially on the values of the parameters \(\sigma\) and \(\tau\).
Suppose first that \(\sigma\in(1,3/2]\) and that \(z\in\operatorname{clos}\Pi\) for \(\tau<0\) and \(z\in\mathbb{C}\) for \(\tau>0\). Then the term \(-\tau n^{-1}\) is dominating in (1.11) so that according to definition (2.5)
\[\theta_{n}(z)=n^{-1/2}\sqrt{|\tau|+zn^{1-\sigma}}=\pm\sqrt{|\tau|}n^{-1/2}\pm \frac{z}{2\sqrt{|\tau|}}n^{1/2-\sigma}+O(n^{3/2-2\sigma}) \tag{2.9}\]
for \(\pm\operatorname{Im}z\geq 0\) if \(\tau<0\) and
\[\theta_{n}(z)=in^{-1/2}\sqrt{\tau-zn^{1-\sigma}}=i\sqrt{\tau}n^{-1/2}-i\frac{ z}{2\sqrt{\tau}}n^{1/2-\sigma}+O(n^{3/2-2\sigma}) \tag{2.10}\]
for all \(z\in\mathbb{C}\) if \(\tau>0\).
In the case \(\sigma<1\), the term \(zn^{-\sigma}\) is dominating in (1.11). Moreover, for \(\sigma\leq 2/3\), the phases \(\theta_{n}(z)\) are given by formula (2.7) more general than (2.5). The last circumstance is however inessential because the terms \(t_{n}^{l}\) with \(l>1\) in (2.6) are negligible compared to \(t_{n}\). This yields an asymptotics
\[\theta_{n}(z)=\sqrt{z}n^{-\sigma/2}\big{(}1+O(n^{-\epsilon})\big{)},\quad \epsilon>0. \tag{2.11}\]
In particular, these results imply the following assertion.
**Proposition 2.5**.: _Set_
\[\nu=\begin{cases}1/2&\text{if}\quad\sigma\geq 1\\ \sigma/2&\text{if}\quad\sigma\leq 1\end{cases} \tag{2.12}\]
_and_
\[\varkappa(z)=\begin{cases}\pm\sqrt{|\tau|}&\text{if}\quad\sigma>1,\,\tau<0, \,\pm\operatorname{Im}z\geq 0\\ i\sqrt{\tau}&\text{if}\quad\sigma>1,\,\tau>0,\,z\in\mathbb{C}\\ \sqrt{z}&\text{if}\quad\sigma<1,\,z\in\operatorname{clos}\Pi_{0},\,z\neq 0\\ \sqrt{z-\tau}&\text{if}\quad\sigma=1,\,z\in\tau+\operatorname{clos}\Pi_{0},\,z \neq\tau.\end{cases} \tag{2.13}\]
_Then_
\[\theta_{n}(z)=\varkappa(z)n^{-\nu}(1+o(1)). \tag{2.14}\]
To pass to asymptotics of sums (2.3), we use the Euler-Maclaurin formula
\[\sum_{m=1}^{n}F(m)=\int_{1}^{n}F(x)dx+\frac{F(n)+F(1)}{2}+\int_{1}^{n}F^{\prime} (x)\big{(}x-[x]-\frac{1}{2}\big{)}dx \tag{2.15}\]
where \([x]\) is the integer part of \(x\). This formula is true for arbitrary functions \(F\in C^{1}\).
Formula (2.15) allows one to deduce an asymptotics as \(n\to\infty\) of sum (2.3) from that of the phases \(\theta_{n}\). For example, for \(\sigma=1\), we apply (2.15) to \(F(x)=x^{-1/2}\) which yields
\[\varphi_{n}(z)=2\sqrt{-\tau+z}\,n^{1/2}+C+o(1) \tag{2.16}\]
with some constant \(C\). The remainder \(C+o(1)\) here can be neglected in asymptotics (2.1) because the Jost solutions are defined up to a constant factor.
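As an illustration of how (2.15) is used here, the following short numerical sketch (parameters chosen purely for illustration) checks that, for \(F(x)=x^{-1/2}\), the partial sums differ from the integral term \(2\sqrt{n}\) by a quantity with a finite limit; this is exactly the structure of (2.16).

```python
import numpy as np

# partial sums of F(m) = m^{-1/2} compared with the integral term in (2.15)
for n in [10**2, 10**4, 10**6]:
    m = np.arange(1, n + 1)
    s = np.sum(m**-0.5)
    print(n, s - 2 * np.sqrt(n))   # approaches a constant (about -1.4604, i.e. zeta(1/2))
```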
Next, we consider the case \(\sigma\in(1,3/2)\). If \(\tau<0\), it follows from (2.9) and the Euler-Maclaurin formula (2.15) that
\[\varphi_{n}(z)=\pm 2\sqrt{|\tau|n}\pm\frac{z}{\sqrt{|\tau|}(3-2\sigma)}n^{3/ 2-\sigma}+O(n^{5/2-2\sigma})\quad\text{for}\pm\operatorname{Im}z\geq 0. \tag{2.17}\]
So, up to error terms, the functions \(e^{i\varphi_{n}(z)}\) where \(z=\lambda+i\varepsilon\) contain oscillating
\[\exp\big{(}\pm 2i\sqrt{|\tau|n}\pm\frac{i\lambda}{\sqrt{|\tau|}(3-2\sigma)}n^{ 3/2-\sigma}\big{)}\]
and exponentially decaying2
\[\exp\big{(}-\frac{|\varepsilon|}{\sqrt{|\tau|}(3-2\sigma)}n^{3/2-\sigma}\big{)}\]
factors.
Footnote 2: We say that a sequence \(x_{n}\) tends to zero exponentially if \(x_{n}=O(e^{-n^{a}})\) for some \(a>0\).
Note that the strongly oscillating factor \(\exp(\pm 2i\sqrt{|\tau|n})\) in the asymptotics of \(f_{n}(z)\) as \(n\to\infty\) does not depend on \(z\). In the case \(\tau>0\), we have
\[\varphi_{n}(z)=2i\sqrt{\tau n}-\frac{iz}{\sqrt{\tau}(3-2\sigma)}n^{3/2-\sigma }+O(n^{5/2-2\sigma})\quad\text{if}\quad\sigma\in(1,3/2). \tag{2.18}\]
Thus, the Jost solutions \(f_{n}(z)\) contain an exponentially decaying factor \(e^{-2\sqrt{\tau n}}\) for all \(z\in\mathbb{C}\).
Formulas (2.17) and (2.18) remain true also for \(\sigma=3/2\) if \((3-2\sigma)^{-1}n^{3/2-\sigma}\) is replaced by \(\ln n\). For example, for \(\tau<0\), we have
\[\varphi_{n}(z)=\pm 2\sqrt{|\tau|n}\pm\frac{z}{\sqrt{|\tau|}}\ln n+C+o(1)\quad \text{for}\pm\operatorname{Im}z\geq 0. \tag{2.19}\]
In the case \(\sigma<1\), asymptotics of the phases is given by relation (2.11). Therefore using formula (2.15), we find that
\[\varphi_{n}(z)=2\sqrt{z}(2-\sigma)^{-1}n^{1-\sigma/2}+O(n^{\sigma/2}). \tag{2.20}\]
So, \(e^{i\varphi_{n}(z)}\) exponentially decays if \(z\not\in[0,\infty)\) and oscillates if \(z=\lambda\pm i0\) for \(\lambda>0\). In the case \(\sigma=1\), relation (2.20) is true if \(\sqrt{z}\) is replaced by \(\sqrt{-\tau+z}\).
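A short numerical sketch (with an arbitrary illustrative choice \(\sigma=0.8\), \(\tau=1\), \(z=2+i\), so that \(\theta_{n}=\sqrt{t_{n}}\) by (2.5)) confirming the growth rate in (2.20): the ratio of \(\varphi_{n}\) to its leading term approaches \(1\), while the difference scales like \(n^{\sigma/2}\).

```python
import numpy as np

sigma, tau, z = 0.8, 1.0, 2.0 + 1.0j          # illustrative values, sigma in (2/3, 1)
for n in [10**4, 10**5, 10**6]:
    m = np.arange(1, n + 1)
    phi = np.sum(np.sqrt(-tau / m + z * m**(-sigma)))   # phases (2.5), branch Im >= 0
    leading = 2 * np.sqrt(z) / (2 - sigma) * n**(1 - sigma / 2)
    # ratio tends to 1; the scaled difference stays bounded, cf. the O(n^{sigma/2}) term
    print(n, abs(phi / leading - 1), abs(phi - leading) * n**(-sigma / 2))
```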
Note that explicit formulas for \(\theta_{n}(z)\) allow one to find all power terms of asymptotic expansion of \(\theta_{n}(z)\) as \(n\to\infty\). In view of formula (2.15) this yields all growing terms of the phases \(\varphi_{n}(z)\) as \(n\to\infty\).
It follows from asymptotic formula (2.1) for the Jost solutions \(f_{n}(z)\) and the results about the phases \(\varphi_{n}(z)\) stated above that for all \(\sigma\in(0,3/2)\) (for \(\sigma>1\) it is also required that \(\tau\neq 0\)) and \(\operatorname{Im}z\neq 0\), the solutions \(f_{n}(z)\) tend to zero exponentially as \(n\to\infty\). In the critical case \(\sigma=3/2\), the same is true if \(\tau>0\). If \(\sigma=3/2\) and \(\tau<0\), then relations (2.1), (2.19) show that
\[f_{n}(\lambda+i\varepsilon)=(-\gamma)^{n}e^{\pm 2i\sqrt{|\tau|n}}n^{\pm i \gamma\lambda_{1}}n^{-1/2-\varepsilon_{1}}\big{(}1+o(1)\big{)}\quad\text{for} \pm\gamma\varepsilon>0, \tag{2.21}\]
where \(\lambda_{1}=\lambda/\sqrt{|\tau|}\), \(\varepsilon_{1}=|\varepsilon|/\sqrt{|\tau|}\).
In particular, we have
**Proposition 2.6**.: _Under the assumptions of any of Theorems 2.1, 2.3 or 2.4 the inclusion_
\[f_{n}(z)\in\ell^{2}(\mathbb{Z}_{+}),\quad z\not\in\operatorname{clos}\mathcal{ S}, \tag{2.22}\]
_holds. In particular, (2.22) is true for \(\operatorname{Im}z\neq 0\)._
Let us compare relation (2.21) with asymptotic formula (2.6) in [28] for the singular case \(\sigma>3/2\), \(\tau<0\). The formula in [28] is true for all \(z\in\mathbb{C}\), the oscillating factor \(e^{\pm 2i\sqrt{|\tau|n}}\) is the same as in (2.21), but the power of \(n\) is \(1/4-\sigma/2\). This agrees with expression (2.2), since the exponent equals \(-\rho\), but \(1/4-\sigma/2<-1/2\) for \(\sigma>3/2\). In this case all solutions of equation (1.1) are in \(\ell^{2}(\mathbb{Z}_{+})\) so that the deficiency indices of the operator \(J_{\min}\) are \((1,1)\).
Finally, we note that, on the absolutely continuous spectrum, formula (2.1) is consistent with a universal relation found in [29]. Indeed, let the assumptions of Theorems 2.1, 2.3 or 2.4 be satisfied. Using asymptotic formulas (2.16), (2.17) or (2.20) and calculating derivatives of the phases \(\varphi_{n}(\lambda\pm i0)\) in \(\lambda\) we see that, with some constant factor \(c_{\pm}(\lambda)\),
\[d\varphi_{n}(\lambda\pm i0)/d\lambda=c_{\pm}(\lambda)n^{\varsigma}(1+o(1)) \tag{2.23}\]
where \(\varsigma=3/2-\sigma\) for \(\sigma\in[1,3/2)\) and \(\varsigma=1-\sigma/2\) for \(\sigma\leq 1\); if \(\sigma=3/2\), then \(n^{\varsigma}\) in (2.23) should be replaced by \(\ln n\). In view of definition (2.2) in all cases the powers of \(n\) in the amplitude and phase in formula (2.1) are linked by the equality
\[2\rho+\varsigma=1. \tag{2.24}\]
This is one of the relations found in [29]; in the case \(\sigma=3/2\), this relation reduces to the equality \(\rho=1/2\).
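Indeed, with \(\rho\) given by (2.2) and \(\varsigma\) as above,

\[2\rho+\varsigma=2\Big{(}\frac{\sigma}{2}-\frac{1}{4}\Big{)}+\Big{(}\frac{3}{2}-\sigma\Big{)}=1\quad\text{for}\quad\sigma\in[1,3/2),\qquad 2\rho+\varsigma=2\cdot\frac{\sigma}{4}+\Big{(}1-\frac{\sigma}{2}\Big{)}=1\quad\text{for}\quad\sigma\leq 1.\]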
For a comparison, we mention that, in the non-critical case \(|\gamma|<1\), we have \(\rho=\sigma/2\) and \(\varsigma=1-\sigma\) (see [30]) which is again consistent with equality (2.24).
### Exponentially growing solutions
For regular points \(z\in\mathbb{C}\), the solution \(g_{n}(z)\) of equation (1.1) linearly independent with \(f_{n}(z)\) is constructed by formula (1.18). Using the asymptotic formulas of Sect. 2.1 for the Jost solutions, we find a behavior of \(g_{n}(z)\) as \(n\to\infty\).
**Theorem 2.7**.: _Let one of the following three assumptions be satisfied:_
\(1^{0}\) _the conditions of Theorem_ 2.1 _where either_ \(\tau<0\)_,_ \(\sigma<3/2\) _and_ \(\operatorname{Im}z\neq 0\) _or_ \(\tau>0\) _and_ \(z\in\mathbb{C}\) _is arbitrary;_
\(2^{0}\) _the conditions of Theorem_ 2.3 _where either_ \(\gamma=1\) _and_ \(z\not\in[0,\infty)\) _or_ \(\gamma=-1\) _and_ \(z\not\in(-\infty,0]\);
\(3^{0}\) _the conditions of Theorem_ 2.4 _where either_ \(\gamma=1\) _and_ \(z\not\in[\tau,\infty)\) _or_ \(\gamma=-1\) _and_ \(z\not\in(-\infty,-\tau]\).
_Then the asymptotics of the solution \(g_{n}(z)\) of equation (1.1) is given by formula (1.19). In particular,_
\[g_{n}(z)\notin\ell^{2}(\mathbb{Z}_{+})\quad\text{if}\quad z\not\in\operatorname {clos}\mathcal{S}. \tag{2.25}\]
We emphasize that the definitions of the numbers \(\rho\) and of the sequences \(\varphi_{n}(z)\) are different under assumptions \(1^{0}\), \(2^{0}\) and \(3^{0}\), but asymptotic formula (1.19) is true in all these cases.
In the critical case \(\sigma=3/2\) (and \(\tau<0\)) the solution \(g_{n}(z)\) of equation (1.1) behaves as a power of \(n\) as \(n\to\infty\).
**Proposition 2.8**.: _If \(\sigma=3/2\), \(\tau<0\) and \(\pm\gamma\varepsilon>0\), then_
\[g_{n}(\lambda+i\varepsilon)=(-\gamma)^{n}e^{\pm 2i\sqrt{|\tau|n}}n^{\pm i \gamma\lambda_{1}}n^{-1/2+\varepsilon_{1}}\big{(}1+o(1)\big{)} \tag{2.26}\]
_where \(\lambda_{1}=\lambda/\sqrt{|\tau|}\), \(\varepsilon_{1}=|\varepsilon|/\sqrt{|\tau|}\). In particular, relation (2.25) is preserved._
Theorem 2.7 and Proposition 2.8 will be proven in Sect. 6.1.
All solutions of equation (1.1) and, in particular, the orthonormal polynomials \(P_{n}(z)\), are linear combinations of the solutions \(f_{n}(z)\) and \(g_{n}(z)\) for \(z\not\in\operatorname{clos}\mathcal{S}\) or of the solutions \(f_{n}(\lambda+i0)\) and \(f_{n}(\lambda-i0)\) for \(z=\lambda\in\mathcal{S}\). Therefore the results stated above yield an asymptotics of \(P_{n}(z)\) as \(n\to\infty\). This is discussed in Sect. 6; see Theorems 6.6 and 6.11.
### Spectral results
First, we discuss the essential self-adjointness of the minimal operator \(J_{\min}\). According to the limit point/circle theory this is equivalent to the existence of solutions of equation (1.1) where \(\operatorname{Im}z\neq 0\) not belonging to \(\ell^{2}(\mathbb{Z}_{+})\). Therefore the following result is a direct consequence of Theorem 2.7 and Proposition 2.8.
**Proposition 2.9**.: _Let assumptions (1.8), (1.9) with \(|\gamma|=1\) and some \(\sigma\in(0,3/2]\) be satisfied; for \(\sigma>1\) we additionally suppose that \(\tau\neq 0\). Then the minimal operator \(J_{\min}\) is essentially self-adjoint._
Of course, for \(\sigma\leq 1\) one can refer to the Carleman condition (1.2), but for \(\sigma>1\) the series in (1.2) is convergent.
The case \(\sigma>3/2\) was investigated in [28]; see Theorem 2.3 in that paper. According to Part \(2^{0}\) of this theorem, for \(\tau>0\), the operator \(J_{\min}\) remains essentially self-adjoint. The results for the case \(\tau<0\) are more interesting. Combining Proposition 2.9 with Part \(1^{0}\) of Theorem 2.3 in [28], we can state the following result.
**Proposition 2.10**.: _Suppose that assumptions (1.8), (1.9) with \(|\gamma|=1\), \(\tau<0\) and some \(\sigma>0\) are satisfied. Then the minimal operator \(J_{\min}\) is essentially self-adjoint if and only if \(\sigma\leq 3/2\)._
Note that Proposition 2.10 does not contradict Theorem 2.1 of [7] because the assumptions of [7] correspond to the case \(\tau=0\).
Below we always suppose that \(\sigma\leq 3/2\) and denote by \(J=\operatorname{clos}J_{\min}\) the closure of the essentially self-adjoint operator \(J_{\min}\).
Spectral properties of Jacobi operators are determined by a behavior of solutions of equation (1.1) for real \(z=\lambda\). In particular, oscillating solutions correspond to the absolutely continuous spectrum. On the contrary, for regular \(\lambda\) or eigenvalues of \(J\), one solution of (1.1) exponentially decays and another one exponentially grows. On the heuristic level, the results of Sect. 2.1 imply that the absolutely continuous spectrum of a Jacobi operator \(J\) consists of \(\lambda\) where \(-\tau n^{-1}+\gamma\lambda n^{-\sigma}\geq 0\) (for large \(n\)). On the contrary, the points \(\lambda\) where \(-\tau n^{-1}+\gamma\lambda n^{-\sigma}<0\) (again, for large \(n\)) are regular or, eventually, are eigenvalues of \(J\). This intuitive picture turns out to be correct.
**Theorem 2.11**.: _Suppose that assumptions (1.8), (1.9) with \(|\gamma|=1\) are satisfied._
\(1^{0}\) _Let \(\sigma\in(1,3/2]\). If \(\tau<0\), then the spectrum of the operator \(J\) is absolutely continuous and covers the whole real line. If \(\tau>0\), then the spectrum of the operator \(J\) is discrete._
\(2^{0}\) _Let \(\sigma\in(0,1)\). If \(\gamma=1\), then the absolutely continuous spectrum of the operator \(J\) coincides with the half-axis \([0,\infty)\) and the negative spectrum of \(J\) is discrete. If \(\gamma=-1\), then the absolutely continuous spectrum of the operator \(J\) coincides with the half-axis \((-\infty,0]\) and its positive spectrum is discrete._
\(3^{0}\) _Let \(\sigma=1\). If \(\gamma=1\), then the absolutely continuous spectrum of the operator \(J\) coincides with the half-axis \([\tau,\infty)\) and its spectrum below the point \(\tau\) is discrete. If \(\gamma=-1\), then the absolutely continuous spectrum of the operator \(J\) coincides with the half-axis \((-\infty,-\tau]\) and its spectrum above the point \(-\tau\) is discrete._
Parts \(2^{0}\) and \(3^{0}\) of Theorem 2.11 can be considered as generalizations of the classical results about the Jacobi operators with the Laguerre coefficients (1.5). We
emphasize that, in the case \(\sigma\leq 1\), there are no conditions on the parameter \(\tau\). The results of Part \(1^{0}\) seem to be of a new nature.
The results stated above apply to Jacobi operators with the coefficients \(a_{n}\), \(b_{n}\) growing as \(n^{\sigma}\) where \(\sigma\) is an arbitrary number in the interval \((0,3/2]\). Together with the results of [28] where the case \(\sigma>3/2\) was considered, they cover an arbitrary power growth of the Jacobi coefficients.
Thus, our results show that, in the critical case \(|\gamma|=1\), there are two "phase transitions": for \(\sigma=1\) and for \(\sigma=3/2\). Indeed, the absolutely continuous spectrum of the Jacobi operator \(J\) coincides with a half-axis for \(\sigma\leq 1\). In the case \(\sigma\in(1,3/2]\), the spectrum of \(J\) is either absolutely continuous and covers the whole real-axis for \(\tau<0\) or it is discrete for \(\tau>0\). If \(\sigma>3/2\), then the minimal Jacobi operator \(J_{\min}\) has deficiency indices \((1,1)\) and the spectra of all its self-adjoint extensions are discrete.
Our spectral results can be summarized in the following table where \(\boldsymbol{\Sigma}_{\rm ac}\) and \(\boldsymbol{\Sigma}_{\rm ess}\) are the absolutely continuous and essential spectra of the operator \(J\). For definiteness, we choose \(\gamma=1\):
\[\begin{array}{ccc}\sigma\in(0,1)&\Longrightarrow&\boldsymbol{\Sigma}_{\rm ac}=\boldsymbol{\Sigma}_{\rm ess}=[0,\infty)\\ \sigma=1&\Longrightarrow&\boldsymbol{\Sigma}_{\rm ac}=\boldsymbol{\Sigma}_{\rm ess}=[\tau,\infty)\\ \sigma\in(1,3/2],\;\tau<0&\Longrightarrow&\boldsymbol{\Sigma}_{\rm ac}=\mathbb{R}\\ \sigma\in(1,3/2],\;\tau>0&\Longrightarrow&\boldsymbol{\Sigma}_{\rm ess}=\emptyset\\ \sigma>3/2&\Longrightarrow&\boldsymbol{\Sigma}_{\rm ess}=\emptyset\end{array}\]
## 3. Ansatz
As usual, we suppose that the recurrence coefficients \(a_{n}\), \(b_{n}\) obey conditions (1.8), (1.9) with \(|\gamma|=1\). We define the Ansatz \(\mathcal{A}_{n}=\mathcal{A}_{n}(z)\) by formula (1.15) where the power \(\rho\) and the phases \(\varphi_{n}=\varphi_{n}(\gamma z)\) will be found in this section.
### Construction
Our goal here is to determine \(\rho\) and \(\varphi_{n}\) in such a way that remainder (1.14) satisfies condition (1.20) for
\[\delta=1/2+\sigma\quad\text{if}\quad\sigma>1\quad\text{and some}\quad\delta>1+\sigma/2\quad\text{if}\quad\sigma<1. \tag{3.1}\]
If \(\sigma=1\), then \(\delta=2\), so that the estimate of the remainder is more precise in this particular case. We emphasize that estimate (1.20) with \(\delta>1\) used in the non-critical case \(|\gamma|\neq 1\) in [30] is not sufficient now.
Put
\[\mathcal{B}_{n}=\frac{\mathcal{A}_{n+1}}{\mathcal{A}_{n}}. \tag{3.2}\]
Then expression (1.14) for the remainder can be rewritten as
\[{\bf r}_{n}(z)=\sqrt{\frac{a_{n-1}}{a_{n}}}\mathcal{B}_{n-1}^{-1}+\sqrt{\frac{a_{ n}}{a_{n-1}}}\mathcal{B}_{n}+2\gamma_{n}-\frac{z}{\sqrt{a_{n-1}a_{n}}}. \tag{3.3}\]
Assumption (1.8) on \(a_{n}\) implies that
\[\sqrt{\frac{a_{n}}{a_{n-1}}}=(n+1)^{\sigma/2}n^{-\sigma/2}(1+O(n^{-2}))=1+ \frac{\sigma/2}{n}+O(n^{-2}) \tag{3.4}\]
and
\[(a_{n}a_{n-1})^{-1/2}=n^{-\sigma}\big{(}1+O(n^{-1})\big{)}.\]
Using also assumption (1.9) on \(b_{n}\), we see that sequence (1.7) satisfies a relation
\[\gamma_{n}=1+(\tau/2)n^{-1}+O(n^{-2}) \tag{3.5}\]
where \(\tau\) is defined by equality (1.10).
We seek \(\mathcal{A}_{n}\) in form (1.15) where the phases \(\varphi_{n}\) are defined as sums (2.3). The power \(\rho\) and the differences
\[\theta_{n}=\varphi_{n+1}-\varphi_{n}\]
will be determined by condition (1.20). The sequences \(\theta_{n}\) constructed below tend to zero as \(n\to\infty\) and satisfy condition (2.4). It follows from (1.15) and (3.2) that
\[\mathcal{B}_{n}=-(n+1)^{-\rho}n^{\rho}e^{i\theta_{n}}=-(1-\rho n^{-1}+O(n^{-2} ))e^{i\theta_{n}}. \tag{3.6}\]
According to relations (3.4) - (3.5) and (3.6) the following intermediary assertion is a direct consequence of expression (3.3).
**Lemma 3.1**.: _Relative remainder (1.14) admits a representation_
\[{\bf r}_{n}=-\big{(}1-(\nu/2)n^{-1}\big{)}e^{-i\theta_{n-1}}-\big{(}1+(\nu/2) n^{-1}\big{)}e^{i\theta_{n}}+2+\tau n^{-1}-zn^{-\sigma}+O(n^{-\delta}) \tag{3.7}\]
_where_
\[\nu=\sigma-2\rho \tag{3.8}\]
_and \(\delta=\min\{2,1+\sigma\}\)._
Note that in view of (2.2) expressions (2.12) and (3.8) for \(\nu\) are equivalent.
For all \(\sigma\in(2/3,3/2]\), the phases \(\theta_{n}\) are defined by the same formulas (1.11) and (2.5), that is,
\[\theta_{n}=\theta_{n}(z)=\sqrt{-\tau n^{-1}+zn^{-\sigma}},\quad\operatorname{ Im}\theta_{n}(z)\geq 0, \tag{3.9}\]
although the estimates of the remainder \({\bf r}_{n}\) are rather different in the cases \(\sigma>1\), \(\sigma=1\) and \(\sigma<1\). For \(\sigma\leq 2/3\), expression (3.9) requires some corrections.
### The case \(\sigma>1\)
For such \(\sigma\), we suppose that \(\tau\neq 0\). We treat the cases \(\tau<0\) and \(\tau>0\) in parallel, setting \(\sqrt{-\tau}>0\) if \(\tau<0\) and \(\sqrt{-\tau}=i\sqrt{|\tau|}\) if \(\tau>0\).
It follows from definition (3.9) that \(\theta_{n}=O(n^{-1/2})\), whence
\[e^{i\theta_{n}}=\sum_{k=0}^{3}\frac{i^{k}}{k!}\theta_{n}^{k}+O(n^{-2}). \tag{3.10}\]
Substituting (3.10) into representation (3.7), we see that
\[\mathbf{r}_{n}=\sum_{k=0}^{3}r_{n}^{(k)}+O(n^{-2}) \tag{3.11}\]
where
\[r_{n}^{(0)}= -\big{(}1+(\nu/2)n^{-1}\big{)}-\big{(}1-(\nu/2)n^{-1}\big{)}+2+ \tau n^{-1}-zn^{-\sigma}\] \[= \tau n^{-1}-zn^{-\sigma}=-t_{n}, \tag{3.12}\]
by definition (1.11), and
\[r_{n}^{(1)}= i\big{(}1-(\nu/2)n^{-1}\big{)}\theta_{n-1}-i\big{(}1+(\nu/2)n^{- 1}\big{)}\theta_{n}, \tag{3.13}\] \[2r_{n}^{(2)}= \big{(}1-(\nu/2)n^{-1}\big{)}\theta_{n-1}^{2}+\big{(}1+(\nu/2)n^ {-1}\big{)}\theta_{n}^{2},\] (3.14) \[6r_{n}^{(3)}= -i\big{(}1-(\nu/2)n^{-1}\big{)}\theta_{n-1}^{3}+i\big{(}1+(\nu/2) n^{-1}\big{)}\theta_{n}^{3}. \tag{3.15}\]
Since \(\theta_{n}^{2}=t_{n}\), it follows from (3.14) that
\[2r_{n}^{(2)}= \big{(}1-(\nu/2)n^{-1}\big{)}t_{n-1}+\big{(}1+(\nu/2)n^{-1}\big{)} t_{n}\] \[= 2t_{n}+\big{(}1-(\nu/2)n^{-1}\big{)}(t_{n-1}-t_{n}). \tag{3.16}\]
Comparing this equality with (3.12), we find that
\[r_{n}^{(0)}+r_{n}^{(2)}=2^{-1}\big{(}1-(\nu/2)n^{-1}\big{)}(t_{n-1}-t_{n})=O(n ^{-2}). \tag{3.17}\]
The power \(\rho\) in (1.15) is determined by linear term (3.13) which we write as
\[r_{n}^{(1)}=i(\theta_{n-1}-\theta_{n})-i(\nu/2)n^{-1}(\theta_{n}+\theta_{n-1}). \tag{3.18}\]
Let us distinguish the leading term in (3.9) setting
\[\theta_{n}=\sqrt{-\tau n^{-1}}+\tilde{\theta}_{n} \tag{3.19}\]
where
\[\tilde{\theta}_{n}=\sqrt{-\tau n^{-1}+zn^{-\sigma}}-\sqrt{-\tau n^{-1}}=\frac {zn^{1/2-\sigma}}{\sqrt{-\tau+zn^{1-\sigma}}+\sqrt{-\tau}}=O(n^{1/2-\sigma}). \tag{3.20}\]
Let us substitute (3.19) into (3.18) and observe that
\[(\sqrt{(n-1)^{-1}}-\sqrt{n^{-1}})-(\nu/2)n^{-1}(\sqrt{(n-1)^{-1}}+\sqrt{n^{-1 }})=(2^{-1}-\nu)n^{-3/2}+O(n^{-5/2}).\]
According to (3.20) we have
\[\tilde{\theta}_{n}-\tilde{\theta}_{n-1}=O(n^{-\sigma-1/2}). \tag{3.21}\]
Thus, it follows from (3.18) that
\[r_{n}^{(1)}=i\sqrt{-\tau}(2^{-1}-\nu)n^{-3/2}+O(n^{-\sigma-1/2}).\]
The coefficient of \(n^{-3/2}\) here is zero if \(\nu=1/2\) which, by (3.8), yields \(\rho=\sigma/2-1/4\); in this case \(r_{n}^{(1)}=O(n^{-\sigma-1/2})\).
It remains to consider the term \(r_{n}^{(3)}\). In view of (3.15), it equals
\[6r_{n}^{(3)}=i(\theta_{n}^{3}-\theta_{n-1}^{3})+i(\nu/2)n^{-1}(\theta_{n}^{3}+ \theta_{n-1}^{3}). \tag{3.22}\]
Observe that
\[\theta_{n}^{3}-\theta_{n-1}^{3}=(\theta_{n}-\theta_{n-1})(\theta_{n}^{2}+ \theta_{n}\theta_{n-1}+\theta_{n-1}^{2}). \tag{3.23}\]
It follows from relations (3.19) and (3.21) that the first factor here is \(O(n^{-3/2})\). The second factor is \(O(n^{-1})\) because \(\theta_{n}=O(n^{-1/2})\). Therefore expression (3.23) is \(O(n^{-5/2})\). Obviously, the second term in the right-hand side of (3.22) satisfies the same estimate.
Let us state the result obtained.
**Proposition 3.2**.: _Let the assumptions of Theorem 2.1 be satisfied, and let the phases \(\theta_{n}(z)\) be given by formula (3.9). Define the Ansatz \(\mathcal{A}_{n}(z)\) by formula (1.15) where \(\rho=\sigma/2-1/4\). Then remainder (1.14) satisfies estimate (1.20) where \(\delta=\sigma+1/2\)._
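As a numerical illustration of Proposition 3.2, one can evaluate the explicitly written part of representation (3.7) (its own remainder \(O(n^{-2})\) is smaller than \(n^{-\sigma-1/2}\) for \(\sigma<3/2\)) with the phases (3.9) and \(\nu=1/2\). The parameter values below are an arbitrary illustrative choice; the scaled quantity stays bounded, in agreement with \(\mathbf{r}_{n}=O(n^{-\sigma-1/2})\).

```python
import numpy as np

sigma, tau, z, nu = 1.2, -1.0, 0.3 + 0.5j, 0.5   # nu = 1/2, i.e. rho = sigma/2 - 1/4

def theta(n):
    # phases (3.9): sqrt(-tau/n + z n^{-sigma}) with Im theta >= 0
    return np.sqrt(-tau / n + z * n**(-sigma))

for n in [10**2, 10**3, 10**4, 10**5]:
    r = (-(1 - nu / (2 * n)) * np.exp(-1j * theta(n - 1))
         - (1 + nu / (2 * n)) * np.exp(1j * theta(n))
         + 2 + tau / n - z * n**(-sigma))
    print(n, abs(r) * n**(sigma + 0.5))   # stays bounded, cf. (1.20) with delta = sigma + 1/2
```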
### The intermediary case \(\sigma=1\)
The results of this subsection are a particular case of Proposition 3.2, but the construction of the phases is now simpler:
\[t_{n}=(z-\tau)n^{-1}\quad\text{and}\quad\theta_{n}=\sqrt{z-\tau}n^{-1/2}. \tag{3.24}\]
The estimate of the remainder \(\mathbf{r}_{n}\) is also simpler and more precise than in the general case. Indeed, according to (3.24), we now have
\[\theta_{n-1}-\theta_{n}=2^{-1}\sqrt{z-\tau}n^{-3/2}+O(n^{-5/2}).\]
Therefore, it follows from (3.13) where \(\nu=1/2\) that \(r_{n}^{(1)}=O(n^{-5/2})\). The same estimate for \(r_{n}^{(3)}\) is a direct consequence of (3.23). Estimate (3.17) remains of course true. Thus, using equality (3.11) we can state the limit case of Proposition 3.2.
**Proposition 3.3**.: _Let the assumptions of Theorem 2.4 be satisfied, and let the phases \(\theta_{n}(z)\) be given by formula (3.24). Define the Ansatz \(\mathcal{A}_{n}(z)\) by formula (1.15) where \(\rho=1/4\). Then remainder (1.14) satisfies estimate (1.20) where \(\delta=2\)._
### The case \(\sigma\in(2/3,1)\)
We again define the phases \(\theta_{n}\) by formula (3.9), but now the term \(zn^{-\sigma}\) is dominating so that, instead of (3.19), (3.20), we have a relation
\[\theta_{n}=\sqrt{t_{n}}=\sqrt{z}n^{-\sigma/2}\big{(}1+O(n^{\sigma-1})\big{)}. \tag{3.25}\]
Therefore the scheme exposed in Sect. 3.2 for the case \(\sigma>1\) requires some modifications.
It again suffices to keep 4 terms in expansion of \(e^{i\theta_{n}}\), but the remainders in formulas (3.10) and (3.11) are now \(O(n^{-2\sigma})\). Estimates of \(r_{n}^{(k)}\) where \(k=0,1,2,3\) are the same as in Sect. 3.2 if the roles of the terms \(-\tau n^{-1}\) and \(zn^{-\sigma}\) are interchanged. Relations (3.12) and (3.16) are preserved, but the remainder \(O(n^{-2})\) in (3.17) is replaced by \(O(n^{-1-\sigma})\). It directly follows from definition (1.11) that
\[t_{n-1}-t_{n}=z\sigma n^{-1-\sigma}\big{(}1+O(n^{-1+\sigma})\big{)}. \tag{3.26}\]
Similarly to (3.21), it follows from (3.25), (3.26) that
\[\theta_{n-1}-\theta_{n}=2^{-1}\sqrt{z}\sigma n^{-1-\sigma/2}\big{(}1+O(n^{-1+ \sigma})\big{)}. \tag{3.27}\]
Therefore expression (3.18) equals
\[r_{n}^{(1)}=i\sqrt{z}(\sigma/2-\nu)n^{-1-\sigma/2}+O(n^{-2+\sigma/2}). \tag{3.28}\]
The coefficient at \(n^{-1-\sigma/2}\) is zero if \(\nu=\sigma/2\) which yields \(2\rho=\sigma-\nu=\sigma/2\); in this case \(r_{n}^{(1)}=O(n^{-2+\sigma/2})\). Putting together equality (3.17) and estimate (3.26), we see that \(r_{n}^{(0)}+r_{n}^{(2)}=O(n^{-1-\sigma})\) which is \(O(n^{-2\sigma})\) because \(\sigma<1\). According to (3.25) and (3.27) expression (3.23) is estimated by \(Cn^{-1-3\sigma/2}\). In view of (3.22) the same bound is true for \(r_{n}^{(3)}\).
Thus, we arrive at the following result.
**Proposition 3.4**.: _Let the assumptions of Theorem 2.3 be satisfied with \(\sigma\in(2/3,1)\), and let the phases \(\theta_{n}(z)\) be given by formula (3.9). Define the Ansatz \(\mathcal{A}_{n}(z)\) by formula (1.15) where \(\rho=\sigma/4\). Then remainder (1.14) satisfies estimate (1.20) with \(\delta=\min\{2\sigma,2-\sigma/2\}>1+\sigma/2\)._
We emphasize that for all \(\sigma\in(2/3,3/2]\) the phases \(\theta_{n}\) are given by the same formula (3.9). However asymptotics of \(\theta_{n}\) are different for \(\sigma>1\) and for \(\sigma<1\) - cf. (3.19), (3.20) with (3.25).
### The case \(\sigma\leq 2/3\)
**Eikonal equation.** The leading term of the asymptotics of the phases \(\theta_{n}\) is again given by formula (3.25), but, additionally, lower order terms appear. Now, we need to keep more terms in expansion (3.10) setting
\[e^{i\theta_{n}}=\sum_{k=0}^{K}\frac{i^{k}}{k!}\theta_{n}^{k}+O(n^{-(K+1)\sigma/ 2}). \tag{3.29}\]
Substituting (3.29) into representation (3.7), we see that
\[{\bf r}_{n}=\sum_{k=0}^{K}r_{n}^{(k)}+O(n^{-(K+1)\sigma/2}), \tag{3.30}\]
where \(r_{n}^{(0)}\) are again given by equality (3.12) and
\[-i^{k}k!r_{n}^{(k)}= \big{(}1-(\nu/2)n^{-1}\big{)}\theta_{n-1}^{k}+(-1)^{k}\big{(}1+( \nu/2)n^{-1}\big{)}\theta_{n}^{k}\] \[= \theta_{n-1}^{k}+(-1)^{k}\theta_{n}^{k}-(\nu/2)n^{-1}\big{(} \theta_{n-1}^{k}-(-1)^{k}\theta_{n}^{k}\big{)},\quad k\geq 1. \tag{3.31}\]
Of course, for \(k=1,2,3\), this expression coincides with (3.13), (3.14), (3.15), respectively. It is convenient to choose an even \(K=2L\) with a sufficiently large \(L\). We suppose that
\[(L+1/2)\sigma>1. \tag{3.32}\]
Let us distinguish the terms corresponding to \(k=0\) and \(k=1\) in sum (3.30) and then split it into the sums over even and odd \(k\):
\[{\bf r}_{n}=r_{n}^{(0)}+r_{n}^{(1)}+{\bf r}_{n}^{(ev)}+{\bf r}_{n}^{(odd)}+O( n^{-(L+1/2)\sigma}),\]
where \(r_{n}^{(0)}\), \(r_{n}^{(1)}\) are given by formulas (3.12), (3.18) and
\[{\bf r}_{n}^{(ev)}=\sum_{l=1}^{L}r_{n}^{(2l)},\quad{\bf r}_{n}^{(odd)}=\sum_{ l=1}^{L-1}r_{n}^{(2l+1)}. \tag{3.33}\]
To satisfy estimate (1.20) with a suitable \(\delta\), we now have to take the even terms \(r_{n}^{(2l)}\) for all \(l\leq L\) into account. The odd terms \(r_{n}^{(2l+1)}\) turn out to be negligible. To be precise, we define the phases \(\theta_{n}\) by formula (2.7) where \(T_{n}\) is sum (2.6). The coefficients \(p_{l}\) will be found from the relation
\[r_{n}^{(0)}+{\bf r}_{n}^{(ev)}=O(n^{-1-\sigma}) \tag{3.34}\]
generalizing (3.17). To satisfy this relation, we use that the differences between \(\theta_{n}\) and \(\theta_{n-1}\) in the expression
\[(-1)^{l+1}(2l)!r_{n}^{(2l)}=\theta_{n-1}^{2l}+\theta_{n}^{2l}-(\nu/2)n^{-1} \big{(}\theta_{n-1}^{2l}-\theta_{n}^{2l}\big{)}\]
(it is a particular case of (3.31)) can be neglected. Thus, we set
\[\Theta_{n}=2\sum_{l=1}^{L}\frac{(-1)^{l+1}}{(2l)!}\theta_{n}^{2l}.\]
Since \(r_{n}^{(0)}=-t_{n}\), we find that
\[r_{n}^{(0)}+{\bf r}_{n}^{(ev)}=(-t_{n}+\Theta_{n})+(1-(\nu/2)n^{-1})\sum_{l=1} ^{L}(-1)^{l}(2l)!^{-1}(\theta_{n}^{2l}-\theta_{n-1}^{2l}). \tag{3.35}\]
As we will see the sum here is negligible, and hence we can replace (3.34) by the (approximate) eikonal equation
\[\Theta_{n}=t_{n}+O(n^{-1-\sigma}). \tag{3.36}\]
Our goal is to solve this equation with respect to \(\theta_{n}^{2}\). Note that \(\Theta_{n}=\theta_{n}^{2}\) if \(L=1\) so that (3.36) again yields expression \(\theta_{n}^{2}=t_{n}\). The following elementary assertion shows that equation (3.36) can be efficiently solved for all \(L\geq 1\). It is convenient to consider this problem in a somewhat more general setting. Denote by \(\mathcal{P}\) the set of all polynomials (of the variable \(t\)), and let \(\mathcal{P}_{L}=t^{L+1}\mathcal{P}\), that is, \(\mathcal{P}_{L}\subset\mathcal{P}\) consists of polynomials with zero coefficients at powers \(t^{k}\) for all \(k=0,1,\ldots,L\).
**Lemma 3.5**.: _Let \(L\geq 2\) and \(a_{2},\ldots,a_{L}\) be arbitrary given numbers. Then there exists a polynomial_
\[P_{L}(t)=\sum_{l=2}^{L}p_{l}t^{l}\]
_such that the polynomial_
\[Q_{L}(t):=P_{L}(t)+\sum_{k=2}^{L}a_{k}(P_{L}(t)+t)^{k}\in\mathcal{P}_{L}. \tag{3.37}\]
Proof.: For arbitrary \(p_{2},\ldots,p_{L}\), the polynomial \(Q_{L}(t)\) defined by (3.37) has degree \(L^{2}\) and it does not contain terms with zero and first powers of \(t\). We have to choose the numbers \(p_{2},\ldots,p_{L}\) in such a way that the coefficients of \(Q_{L}(t)\) at \(t^{l}\) are zeros for all \(l=2,\ldots,L\). This assertion is obvious for \(L=2\) because
\[Q_{2}(t)=P_{2}(t)+a_{2}(P_{2}(t)+t)^{2}=(p_{2}+a_{2})t^{2}+2a_{2}p_{2}t^{3}+a_ {2}p_{2}^{2}t^{4},\]
and, so, \(Q_{2}(t)\in\mathcal{P}_{2}\) if \(p_{2}=-a_{2}\).
Let us pass to the general case. Suppose that (3.37) is satisfied. Then there exists a number \(q_{L+1}\) such that
\[Q_{L}(t)-q_{L+1}t^{L+1}\in\mathcal{P}_{L+1}. \tag{3.38}\]
We will find a number \(p_{L+1}\) such that the polynomial
\[P_{L+1}(t)=P_{L}(t)+p_{L+1}t^{L+1} \tag{3.39}\]
satisfies (3.37) for \(L+1\), that is,
\[Q_{L+1}(t):=P_{L+1}(t)+\sum_{k=2}^{L+1}a_{k}(P_{L+1}(t)+t)^{k}\in\mathcal{P}_{ L+1}. \tag{3.40}\]
Let us calculate the polynomial \(Q_{L+1}(t)\) neglecting terms in \(\mathcal{P}_{L+1}\). First, we observe that, for all \(k=2,\ldots,L,L+1\), the difference
\[(P_{L+1}(t)+t)^{k}-(P_{L}(t)+t)^{k}=\sum_{n=1}^{k}\binom{k}{n}p_{L+1}^{n}t^{( L+1)n}(P_{L}(t)+t)^{k-n}\in\mathcal{P}_{L+1}.\]
Using also (3.39), we see that, up to terms in \(\mathcal{P}_{L+1}\), polynomial (3.40) equals
\[Q_{L+1}(t)=P_{L}(t)+p_{L+1}t^{L+1}+\sum_{k=2}^{L}a_{k}(P_{L}(t)+t)^{k}+a_{L+1}(P_ {L}(t)+t)^{L+1}\]
whence, by assumption (3.37),
\[Q_{L+1}(t)=Q_{L}(t)+(p_{L+1}+a_{L+1})t^{L+1}\in\mathcal{P}_{L+1}.\]
It follows from (3.38) that this relation is equivalent to
\[Q_{L+1}(t)-(p_{L+1}+q_{L+1}+a_{L+1})t^{L+1}\in\mathcal{P}_{L+1}.\]
Thus, inclusion \(Q_{L+1}(t)\in\mathcal{P}_{L+1}\) is true if \(p_{L+1}=-a_{L+1}-q_{L+1}\). This proves (3.37) for \(L+1\).
Note the particular cases
\[p_{2}=-a_{2},\quad p_{3}=2a_{2}^{2}-a_{3}.\]
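The recursive construction in the proof of Lemma 3.5 is straightforward to carry out symbolically. The following sketch (the helper name eikonal_coeffs is ours) reproduces the particular cases above and, for the coefficients \(a_{l}=2(-1)^{l+1}/(2l)!\) used below, gives \(p_{2}=1/12\), \(p_{3}=1/90\), \(p_{4}=1/560\).

```python
import sympy as sp

t = sp.symbols('t')

def eikonal_coeffs(a, L):
    """Coefficients p_2, ..., p_L of Lemma 3.5 for given a_2, ..., a_L (dict a[k])."""
    P = -a[2] * t**2                                  # p_2 = -a_2
    for M in range(2, L):                             # pass from P_M to P_{M+1}
        Q = sp.expand(P + sum(a[k] * (P + t)**k for k in range(2, M + 1)))
        q = Q.coeff(t, M + 1)                         # q_{M+1} in (3.38)
        P = P + (-a[M + 1] - q) * t**(M + 1)          # p_{M+1} = -a_{M+1} - q_{M+1}
    return [sp.simplify(P.coeff(t, l)) for l in range(2, L + 1)]

a2, a3 = sp.symbols('a2 a3')
print(eikonal_coeffs({2: a2, 3: a3}, 3))              # [-a2, 2*a2**2 - a3]

# the coefficients a_l = 2(-1)^{l+1}/(2l)! used for the eikonal equation (3.36)
aa = {l: sp.Integer(2 * (-1)**(l + 1)) / sp.factorial(2 * l) for l in range(2, 5)}
print(eikonal_coeffs(aa, 4))                          # [1/12, 1/90, 1/560]
```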
Let us come back to relation (3.36) and apply Lemma 3.5 with the coefficients \(a_{l}=2(-1)^{l+1}/(2l)!\) and \(t=t_{n}\) defined by equality (1.11); let \(p_{l}\) be the coefficients constructed in this lemma. It follows from equality (3.37) that the phases
\[\theta_{n}^{2}=t_{n}+\sum_{l=2}^{L}p_{l}t_{n}^{l}=:T_{n} \tag{3.41}\]
satisfy, for some coefficients \(q_{l}\), the equation
\[2\sum_{l=1}^{L}\frac{(-1)^{l+1}}{(2l)!}\theta_{n}^{2l}-t_{n}=\sum_{l=L+1}^{L^{2}}q_{l}t_{n}^{l}=O(t_{n}^{L+1}). \tag{3.42}\]
Since \(t_{n}=O(n^{-\sigma})\), the right-hand side here is \(O(n^{-(L+1)\sigma})\) which is \(O(n^{-\delta})\) with \(\delta>1+\sigma/2\) if condition (3.32) is satisfied.
The definition of the phases by formula (3.41) coincides of course with their definition by relations (2.6), (2.7). The asymptotics of \(\theta_{n}\) as \(n\to\infty\) is given by formula (2.11) generalizing (3.25). Next, we estimate the differences
\[\theta_{n-1}-\theta_{n}=\frac{T_{n-1}-T_{n}}{\theta_{n-1}+\theta_{n}}.\]
According to (2.6) we have
\[T_{n-1}-T_{n}=t_{n-1}-t_{n}+\sum_{l=2}^{L}p_{l}(t_{n-1}^{l}-t_{n}^{l})\]
so that it satisfies the same relation (3.26) as \(t_{n}\):
\[T_{n-1}-T_{n}=z\sigma n^{-1-\sigma}\big{(}1+O(n^{-1+\sigma})\big{)}.\]
Combining this relation with (2.11), we see that
\[\theta_{n-1}-\theta_{n}=2^{-1}\sqrt{z}\sigma n^{-1-\sigma/2}\big{(}1+O(n^{- \epsilon})\big{)} \tag{3.43}\]
for some \(\epsilon>0\) (compared with (3.27) only the estimate of the remainder is changed).
It easily follows from (2.11) and (3.43) that
\[|\theta_{n-1}^{k}-\theta_{n}^{k}|\leq C_{k}n^{-1-k\sigma/2} \tag{3.44}\]
for all \(k=1,2,\ldots\).
Let us come back to Ansatz (1.15). Similarly to Sect. 3.4, the power \(\rho\) in (1.15) is determined by the linear term \(r_{n}^{(1)}\) given by equality (3.18). It again satisfies relation (3.28) (with the remainder \(O(n^{-2+\sigma/2})\) replaced by \(O(n^{-\delta})\) for some \(\delta>1+\sigma/2\)). The coefficient at \(n^{-1-\sigma/2}\) is zero if \(\nu=\sigma/2\) which yields \(\rho=\sigma/4\); in this case \(r_{n}^{(1)}=O(n^{-\delta})\).
Given inequalities (2.11) and (3.44), we can estimate the remainder \(\mathbf{r}_{n}\) essentially similarly to Proposition 3.4. The only differences are that estimates of the remainders are slightly weaker and that we have to take into account higher powers of \(\theta_{n}\). First, we consider term (3.35) with even powers of \(\theta_{n}\). Both the first term \(-t_{n}+\Theta_{n}\) and the sum on the right are \(O(n^{-1-\sigma})\) by virtue of relations (3.42) and (3.44), respectively. The term \(\mathbf{r}_{n}^{(odd)}\) is also negligible. Indeed, according to (3.31) and (3.33) it equals
\[\mathbf{r}_{n}^{(odd)}=i\sum_{l=1}^{L-1}\frac{(-1)^{l}}{(2l+1)!}\Big{(}( \theta_{n-1}^{2l+1}-\theta_{n}^{2l+1})-(\nu/2)n^{-1}(\theta_{n-1}^{2l+1}+ \theta_{n}^{2l+1})\Big{)}.\]
Relations (2.11) and (3.44) allow us to estimate all terms here by \(n^{-1-3\sigma/2}\).
Thus, we arrive at the following assertion generalizing Proposition 3.4.
**Proposition 3.6**.: _Let the assumptions of Theorem 2.3 be satisfied, and let the phases \(\theta_{n}(z)\) be given by formulas (2.6), (2.7) with \((L+1/2)\sigma>1\) and the coefficients \(p_{2},\ldots,p_{L}\) constructed in Lemma 3.5. Let the Ansatz \(\mathcal{A}_{n}(z)\) be defined by formula (1.15) where \(\rho=\sigma/4\). Then remainder (1.14) satisfies estimate (1.20) with some \(\delta>1+\sigma/2\)._
**Remark 3.7**.: In all estimates, we suppose that \(z\in\operatorname{clos}\Pi_{0}\), \(0<r\leq|z|\leq R<\infty\) for some \(r\) and \(R\) and \(n\geq N=N(r,R)\). Then it follows from equality (2.6) that \(\pm\operatorname{Im}T_{n}(z)>0\) as long as \(\pm\operatorname{Im}t_{n}(z)>0\), that is, \(\pm\operatorname{Im}z>0\). Therefore \(T_{n}(z)\in\operatorname{clos}\Pi_{0}\), and hence condition (2.4) is satisfied.
Note two particular cases. If \(\sigma>2/3\), then we can take \(L=1\); this is the case considered in Proposition 3.4. If \(\sigma>2/5\), then \(L=2\) so that the formula for \(\theta_{n}\) contains only one additional (compared with (2.5)) term:
\[\theta_{n}=\sqrt{t_{n}+t_{n}^{2}/12}.\]
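A one-line symbolic check (sympy) that this choice of \(\theta_{n}\) satisfies the eikonal relation (3.42) for \(L=2\):

```python
import sympy as sp

t = sp.symbols('t')
theta2 = t + t**2 / sp.Integer(12)                                      # theta_n^2 for L = 2
Theta = 2 * (theta2 / sp.factorial(2) - theta2**2 / sp.factorial(4))    # Theta_n in (3.36) with L = 2
print(sp.series(Theta - t, t, 0, 3))          # the left-hand side of (3.42); prints O(t**3)
```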
Finally, we note that constructions of Ansatzen were also important steps in the papers [12, 16]. However, the form of the Ansatz \(\mathcal{A}_{n}(z)\) suggested in this section is different from [12, 16]; in particular, the phases \(\varphi_{n}(z)\) in (1.15) are simplest in the case \(\sigma>2/3\) while this case was excluded in [12]. They are also different from [16]; see Remark 7.9.
## 4. Difference and Volterra equations
Here we reduce a construction of the Jost solutions \(f_{n}(z)\) of the Jacobi equation (1.1) to a Volterra "integral" equation which is then solved by iterations. In this section, we do not make any specific assumptions about the recurrence coefficients \(a_{n}\), \(b_{n}\) and the Ansatz \(\mathcal{A}_{n}(z)\) except of course that \(\mathcal{A}_{n}(z)\neq 0\); for definiteness, we set \(\mathcal{A}_{-1}=1\). We present a general scheme of investigation and then, in Section 5, apply it to Jacobi operators with coefficients \(a_{n}\) and \(b_{n}\) satisfying conditions (1.8) and (1.9) with \(|\gamma|=1\).
### Multiplicative change of variables
For a construction of \(f_{n}(z)\), we will reformulate the problem introducing a sequence
\[u_{n}(z)=\mathcal{A}_{n}(z)^{-1}f_{n}(z),\quad n\in\mathbb{Z}_{+}. \tag{4.1}\]
In proofs, we usually omit the dependence on \(z\) in notation; for example, we write \(f_{n}\), \(u_{n}\), \(\mathbf{r}_{n}\).
First, we derive a difference equation for \(u_{n}(z)\).
**Lemma 4.1**.: _Let the remainder \(\mathbf{r}_{n}(z)\) be defined by formula (1.14). Set_
\[\Lambda_{n}(z)=\frac{a_{n}}{a_{n-1}}\frac{\mathcal{A}_{n+1}(z)}{\mathcal{A}_{ n-1}(z)} \tag{4.2}\]
_and_
\[\mathcal{R}_{n}(z)=-\sqrt{\frac{a_{n}}{a_{n-1}}\frac{\mathcal{A}_{n}(z)}{ \mathcal{A}_{n-1}(z)}}\mathbf{r}_{n}(z). \tag{4.3}\]
_Then equation (1.1) for a sequence \(f_{n}(z)\) is equivalent to the equation_
\[\Lambda_{n}(z)(u_{n+1}(z)-u_{n}(z))-(u_{n}(z)-u_{n-1}(z))=\mathcal{R}_{n}(z)u _{n}(z),\quad n\in\mathbb{Z}_{+}, \tag{4.4}\]
_for sequence (4.1)._
Proof.: Substituting expression \(f_{n}=\mathcal{A}_{n}u_{n}\) into (1.1) and using definition (1.14), we see that
\[(\sqrt{a_{n-1}a_{n}}\mathcal{A}_{n})^{-1} \Big{(}a_{n-1}f_{n-1}+(b_{n}-z)f_{n}+a_{n}f_{n+1}\Big{)}\] \[= \sqrt{\frac{a_{n-1}}{a_{n}}}\frac{\mathcal{A}_{n-1}}{\mathcal{A}_ {n}}u_{n-1}+\frac{b_{n}-z}{\sqrt{a_{n-1}a_{n}}}u_{n}+\sqrt{\frac{a_{n}}{a_{n-1 }}}\frac{\mathcal{A}_{n+1}}{\mathcal{A}_{n}}u_{n+1}\] \[= \sqrt{\frac{a_{n-1}}{a_{n}}}\frac{\mathcal{A}_{n-1}}{\mathcal{A}_ {n}}(u_{n-1}-u_{n})+\sqrt{\frac{a_{n}}{a_{n-1}}}\frac{\mathcal{A}_{n+1}}{ \mathcal{A}_{n}}(u_{n+1}-u_{n})+\mathbf{r}_{n}u_{n}\] \[= \sqrt{\frac{a_{n-1}}{a_{n}}}\frac{\mathcal{A}_{n-1}}{\mathcal{A}_ {n}}\Big{(}(u_{n-1}-u_{n})+\Lambda_{n}(u_{n+1}-u_{n})-\mathcal{R}_{n}u_{n} \Big{)}\]
where the coefficients \(\Lambda_{n}\) and \(\mathcal{R}_{n}\) are defined by equalities (4.2) and (4.3), respectively. Therefore equations (1.1) and (4.4) are equivalent.
Our next goal is to construct a solution of difference equation (4.4) such that
\[\lim_{n\to\infty}u_{n}(z)=1. \tag{4.5}\]
To that end, we will reduce equation (4.4) to a Volterra "integral" equation which can be standardly solved by successive approximations.
### Volterra equation
It is convenient to consider this problem in a more general setting. We now do not make any specific assumptions about the sequences \(\Lambda_{n}\) and \(\mathcal{R}_{n}\) in (4.4) except that \(\Lambda_{n}\neq 0\). Denote
\[X_{n}=\Lambda_{1}\Lambda_{2}\cdots\Lambda_{n} \tag{4.6}\]
and
\[G_{n,m}=X_{m-1}\sum_{p=n}^{m-1}X_{p}^{-1},\quad m\geq n+1. \tag{4.7}\]
The sequence \(u_{n}\) will be constructed as a solution of a discrete Volterra integral equation
\[u_{n}=1+\sum_{m=n+1}^{\infty}G_{n,m}\mathcal{R}_{m}u_{m}. \tag{4.8}\]
Under natural assumptions, this equation can be standardly solved by successive approximations. First, we estimate its iterations.
**Lemma 4.2**.: _Let us set_
\[h_{m}=\sup_{n\leq m-1}|G_{n,m}\mathcal{R}_{m}| \tag{4.9}\]
_and suppose that_
\[(h_{m})\in\ell^{1}(\mathbb{Z}_{+}). \tag{4.10}\]
_Put \(u_{n}^{(0)}=1\) and_
\[u_{n}^{(k+1)}=\sum_{m=n+1}^{\infty}G_{n,m}\mathcal{R}_{m}u_{m}^{(k)},\quad k\geq 0, \tag{4.11}\]
_for all \(n\in\mathbb{Z}_{+}\). Then estimates_
\[|u_{n}^{(k)}|\leq\frac{H_{n}^{k}}{k!},\quad\forall k\in\mathbb{Z}_{+}, \tag{4.12}\]
_where_
\[H_{n}=\sum_{p=n+1}^{\infty}h_{p}, \tag{4.13}\]
_are true._
Proof.: Suppose that (4.12) is satisfied for some \(k\in\mathbb{Z}_{+}\). We have to check the same estimate (with \(k\) replaced by \(k+1\) in the right-hand side) for \(u_{n}^{(k+1)}\). According to definitions (4.9) and (4.11), it follows from estimate (4.12) that
\[|u_{n}^{(k+1)}|\leq\sum_{m=n+1}^{\infty}h_{m}|u_{m}^{(k)}|\leq\frac{1}{k!}\sum _{m=n+1}^{\infty}h_{m}H_{m}^{k}. \tag{4.14}\]
Since \(H_{m-1}=H_{m}+h_{m}\), we have an inequality
\[H_{m}^{k+1}+(k+1)h_{m}H_{m}^{k}\leq(H_{m}+h_{m})^{k+1}=H_{m-1}^{k+1},\]
and hence, for all \(N\in\mathbb{Z}_{+}\),
\[(k+1)\sum_{m=n+1}^{N}h_{m}H_{m}^{k}\leq\sum_{m=n+1}^{N}(H_{m-1}^{k+1}-H_{m}^{k +1})=H_{n}^{k+1}-H_{N}^{k+1}\leq H_{n}^{k+1}.\]
Substituting this bound into (4.14), we obtain estimate (4.12) for \(u_{n}^{(k+1)}\).
Now we are in a position to solve equation (4.8) by iterations.
**Theorem 4.3**.: _Let assumption (4.10) be satisfied. Then equation (4.8) has a bounded solution \(u_{n}\). This solution satisfies an estimate_
\[|u_{n}-1|\leq e^{H_{n}}-1\leq CH_{n} \tag{4.15}\]
_where \(H_{n}\) is sum (4.13). In particular, condition (4.5) holds._
Proof.: Set
\[u_{n}=\sum_{k=0}^{\infty}u_{n}^{(k)} \tag{4.16}\]
where \(u_{n}^{(k)}\) are defined by recurrence relations (4.11). Estimate (4.12) shows that this series is absolutely convergent. Using the Fubini theorem to interchange the order of summations in \(m\) and \(k\), we see that
\[\sum_{m=n+1}^{\infty}G_{n,m}\mathcal{R}_{m}u_{m}=\sum_{k=0}^{\infty}\sum_{m=n+1 }^{\infty}G_{n,m}\mathcal{R}_{m}u_{m}^{(k)}=\sum_{k=0}^{\infty}u_{n}^{(k+1)}=-1 +\sum_{k=0}^{\infty}u_{n}^{(k)}=-1+u_{n}.\]
This is equation (4.8) for sequence (4.16). Estimate (4.15) also follows from (4.12) and (4.16).
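A minimal numerical sketch of the iteration scheme (4.11), (4.16), on a truncated index range and with toy kernels chosen so that assumption (4.10) clearly holds (\(G_{n,m}\equiv 1\), \(\mathcal{R}_{m}\sim m^{-2}\)); the choices are illustrative and not tied to the Jacobi setting.

```python
import numpy as np

Nmax = 400                                   # truncation of the index range
G = np.ones((Nmax, Nmax))                    # toy kernel G_{n,m}
R = 0.5 * (np.arange(Nmax) + 1.0) ** -2.0    # toy R_m ~ m^{-2}, so h_m is summable

u = np.ones(Nmax)                            # accumulates the series (4.16)
term = np.ones(Nmax)                         # u^{(0)} = 1
for k in range(30):                          # iterations (4.11)
    term = np.array([np.sum(G[n, n + 1:] * R[n + 1:] * term[n + 1:])
                     for n in range(Nmax)])
    u += term

# residual of the (truncated) equation (4.8); u_n stays close to 1, cf. (4.5), (4.15)
res = max(abs(u[n] - 1.0 - np.sum(G[n, n + 1:] * R[n + 1:] * u[n + 1:]))
          for n in range(Nmax))
print(res, u[0], u[Nmax // 2])
```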
**Remark 4.4**.: A bounded solution \(u_{n}\) of (4.8) is of course unique. Indeed, suppose that \((v_{n})\in\ell^{\infty}(\mathbb{Z}_{+})\) satisfies homogeneous equation (4.8), that is,
\[v_{n}=\sum_{m=n+1}^{\infty}G_{n,m}\mathcal{R}_{m}v_{m}.\]
Then, by assumption (4.10), we have
\[|v_{n}|\leq\sum_{m=n+1}^{\infty}h_{m}|v_{m}|.\]
Iterating this estimate, we find that
\[|v_{n}|\leq\frac{1}{k!}H_{n}^{k}\max_{n\in\mathbb{Z}_{+}}\{|v_{n}|\},\quad \forall k\in\mathbb{Z}_{+}.\]
Taking the limit \(k\to\infty\), we see that \(v_{n}=0\). Note however that we do not use the uniqueness in our construction.
### Back to the difference equation
It turns out that the construction above yields a solution of difference equation (4.4).
**Lemma 4.5**.: _Under assumption (4.10) a solution \(u_{n}\) of integral equation (4.8) satisfies an identity_
\[u_{n+1}-u_{n}=-X_{n}^{-1}\sum_{m=n+1}^{\infty}X_{m-1}\mathcal{R}_{m}u_{m} \tag{4.17}\]
_and difference equation (4.4)._
Proof.: It follows from (4.8) that
\[u_{n+1}-u_{n}=\sum_{m=n+2}^{\infty}(G_{n+1,m}-G_{n,m})\mathcal{R}_{m}u_{m}-G_{ n,n+1}\mathcal{R}_{n+1}u_{n+1}. \tag{4.18}\]
According to (4.7) we have
\[G_{n+1,m}-G_{n,m}=-X_{n}^{-1}X_{m-1}\quad\text{and}\quad G_{n,n+1}=1.\]
Therefore relation (4.18) can be rewritten as (4.17).
Putting together equality (4.17) with the same equality where \(n+1\) is replaced by \(n\), we see that
\[\Lambda_{n}(u_{n+1}-u_{n})-(u_{n}-u_{n-1})=-\Lambda_{n}X_{n}^{-1}\sum_{m=n+1}^{ \infty}X_{m-1}\mathcal{R}_{m}u_{m}+X_{n-1}^{-1}\sum_{m=n}^{\infty}X_{m-1} \mathcal{R}_{m}u_{m}.\]
Since \(X_{n}=\Lambda_{n}X_{n-1}\), the right-hand side here equals \(\mathcal{R}_{n}u_{n}\), and hence the equation obtained coincides with (4.4).
**Corollary 4.6**.: _It follows from (4.17) that_
\[|u_{n+1}-u_{n}|\leq\max_{n\in\mathbb{Z}_{+}}\{|u_{n}|\}\,|X_{n}|^{-1}\sum_{m=n }^{\infty}|X_{m}\mathcal{R}_{m+1}|. \tag{4.19}\]
Lemma 4.5 allows us to reformulate Theorem 4.3 in terms of solutions of equation (4.4).
**Theorem 4.7**.: _Let assumption (4.10) be satisfied. Then difference equation (4.4) has a solution \(u_{n}(z)\) satisfying estimates (4.15) and (4.19). In particular, condition (4.5) holds._
Let us now discuss the dependence on the spectral parameter \(z\). Suppose that the coefficients \(\Lambda_{n}(z)\) and \(\mathcal{R}_{n}(z)\) in equation (4.8) are functions of \(z\in\Omega\) on some open set \(\Omega\subset\mathbb{C}\).
**Lemma 4.8**.: _Let the coefficients \(\Lambda_{n}(z)\) and \(\mathcal{R}_{n}(z)\) be analytic functions of \(z\in\Omega\). Suppose that assumption (4.10) is satisfied uniformly in \(z\) on compact subsets of \(\Omega\). Then the solutions \(u_{n}(z)\) of integral equation (4.8) are also analytic in \(z\in\Omega\). Moreover, if \(\Lambda_{n}(z)\) and \(\mathcal{R}_{n}(z)\) are continuous up to the boundary of \(\Omega\) and assumption (4.10) is satisfied uniformly on \(\Omega\), then the same is true for the functions \(u_{n}(z)\)._
Proof.: Consider series (4.16) for a solution \(u_{n}(z)\) of integral equation (4.8). Observe that if the functions \(u_{m}^{(k)}(z)\) in (4.11) depend analytically (continuously) on \(z\), then the function \(u_{n}^{(k+1)}(z)\) is also analytic (continuous). Since series (4.16) converges uniformly, its sums \(u_{n}(z)\) are also analytic (continuous) functions.
In view of Lemma 4.5 this result applies also to solutions of difference equation (4.4).
## 5. Jost solutions
Here we use the results of the previous section to construct the Jost solutions \(f_{n}(z)\) of the Jacobi equation (1.1) with the coefficients \(a_{n}\) and \(b_{n}\) satisfying conditions (1.8) and (1.9) where \(|\gamma|=1\). This leads to Theorems 2.1, 2.3 and 2.4.
First, in Sect. 5.1 and 5.2, we state some necessary technical results.
### Discrete derivatives
Let us collect standard formulas for "derivatives"
\[x_{n}^{\prime}=x_{n+1}-x_{n}\]
of various sequences \(x_{n}\):
\[(x_{n}^{-1})^{\prime}= -x_{n}^{-1}x_{n+1}^{-1}x_{n}^{\prime},\] \[(e^{x_{n}})^{\prime}= (e^{x_{n}^{\prime}}-1)e^{x_{n}}, \tag{5.1}\] \[(\sqrt{x_{n}})^{\prime}= x_{n}^{\prime}(\sqrt{x_{n}}+\sqrt{x_{n+1}})^{-1}\]
and
\[(x_{n}y_{n})^{\prime}=x_{n+1}y_{n}^{\prime}+x_{n}^{\prime}y_{n}. \tag{5.2}\]
Note the Abel summation formula ("integration by parts"):
\[\sum_{p=n}^{m}x_{p}y_{p}^{\prime}=x_{m}y_{m+1}-x_{n-1}y_{n}-\sum_{p=n}^{m}x_{p -1}^{\prime}y_{p}; \tag{5.3}\]
here \(m\geq n\geq 0\) are arbitrary (we set \(x_{-1}=0\) so that \(x_{-1}^{\prime}=x_{0}\)).
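A quick numerical check of (5.3) for randomly generated sequences (a purely illustrative sanity check):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(22)    # x_0, ..., x_21
y = rng.standard_normal(22)
n, m = 3, 15

lhs = sum(x[p] * (y[p + 1] - y[p]) for p in range(n, m + 1))
rhs = x[m] * y[m + 1] - x[n - 1] * y[n] \
      - sum((x[p] - x[p - 1]) * y[p] for p in range(n, m + 1))
print(np.isclose(lhs, rhs))    # True
```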
We mention also an obvious estimate
\[|f(x_{n+1})-f(x_{n})|\leq\big{(}\max_{|x|\leq 1}|f^{\prime}(x)|\big{)}|x_{n}^{ \prime}| \tag{5.4}\]
valid for an arbitrary function \(f\in C^{1}\), an arbitrary sequence \(x_{n}\to 0\) as \(n\to\infty\) and sufficiently large \(n\).
Let us now consider equation (1.1). A direct calculation shows that, for two solutions \(f=(f_{n})_{n=-1}^{\infty}\) and \(g=(g_{n})_{n=-1}^{\infty}\) of this equation, the Wronskian
\[W[f,g]:=a_{n}(f_{n}g_{n+1}-f_{n+1}g_{n}) \tag{5.5}\]
does not depend on \(n=-1,0,1,\ldots\). In particular, for \(n=-1\) and \(n=0\), we have
\[W[f,g]=a_{-1}(f_{-1}g_{0}-f_{0}g_{-1})\quad\text{and}\quad W[f,g]=a_{0}(f_{0} g_{1}-f_{1}g_{0})\]
(the number \(a_{-1}\neq 0\) is arbitrary, but the products \(a_{-1}f_{-1}\) do not depend on its choice). Clearly, the Wronskian \(W[f,g]=0\) if and only if the solutions \(f\) and \(g\) are proportional.
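For illustration, a short numerical check that (5.5) does not depend on \(n\); here equation (1.1) is written as \(a_{n-1}u_{n-1}+(b_{n}-z)u_{n}+a_{n}u_{n+1}=0\) (as in the proof of Lemma 4.1), and the coefficients are random illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30
a = 1.0 + rng.random(N)                  # a_0, ..., a_{N-1} > 0
b = rng.standard_normal(N)               # b_0, ..., b_{N-1}
z = 0.7 + 0.2j

def solution(u0, u1):
    """Iterate a_{n-1}u_{n-1} + (b_n - z)u_n + a_n u_{n+1} = 0 forward from u_0, u_1."""
    u = [u0, u1]
    for n in range(1, N - 1):
        u.append(-(a[n - 1] * u[n - 1] + (b[n] - z) * u[n]) / a[n])
    return np.array(u)

f = solution(1.0, 0.3 - 0.1j)
g = solution(0.2j, 1.5)
W = a[:N - 1] * (f[:-1] * g[1:] - f[1:] * g[:-1])   # a_n (f_n g_{n+1} - f_{n+1} g_n)
print(np.allclose(W, W[0]))              # True: W[f,g] is independent of n
```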
### Oscillating sums
Below we need to estimate sums of oscillating or exponentially growing terms. First, we note an integration-by-parts formula. The following elementary assertion does not require specific assumptions about amplitudes \(\boldsymbol{\kappa}_{n}\) and phases \(\boldsymbol{\varphi}_{n}\).
**Lemma 5.1**.: _Set \(\boldsymbol{\theta}_{n}=\boldsymbol{\varphi}_{n+1}-\boldsymbol{\varphi}_{n}\) and_
\[\boldsymbol{\zeta}_{n}=\boldsymbol{\kappa}_{n}(e^{-i\boldsymbol{\theta}_{n}}- 1)^{-1}. \tag{5.6}\]
_Then_
\[\sum_{p=n}^{m}\boldsymbol{\kappa}_{p}e^{-i\boldsymbol{\varphi}_{p}}=\boldsymbol {\zeta}_{m}e^{-i\boldsymbol{\varphi}_{m+1}}-\boldsymbol{\zeta}_{n-1}e^{-i \boldsymbol{\varphi}_{n}}-\sum_{p=n}^{m}\boldsymbol{\zeta}_{p-1}^{\prime}e^{-i \boldsymbol{\varphi}_{p}} \tag{5.7}\]
_for all \(n\) and \(m\)._
Proof.: According to (5.1) the left-hand side of (5.7) can be rewritten as
\[\sum_{p=n}^{m}\boldsymbol{\zeta}_{p}(e^{-i\boldsymbol{\varphi}_{p}})^{\prime}.\]
It follows from formula (5.3) that this sum equals the right-hand side of (5.7).
**Corollary 5.2**.: _Suppose that_
\[\boldsymbol{\zeta}_{n}^{\prime}\in\ell^{1}(\mathbb{Z}_{+}) \tag{5.8}\]
_and \(\operatorname{Im}\boldsymbol{\theta}_{n}\geq 0\). Then_
\[\big{|}\sum_{p=n}^{m}\boldsymbol{\kappa}_{p}e^{-i\boldsymbol{\varphi}_{p}} \big{|}\leq Ce^{\operatorname{Im}\boldsymbol{\varphi}_{m+1}} \tag{5.9}\]
_where the constant \(C\) does not depend on \(n\) and \(m\)._
**Remark 5.3**.: If
\[\boldsymbol{\theta}_{n}^{\prime}\in\ell^{1}(\mathbb{Z}_{+}), \tag{5.10}\]
then condition (5.8) can be replaced by more convenient ones:
\[\frac{\boldsymbol{\kappa}_{n}}{\boldsymbol{\theta}_{n}}\in\ell^{\infty}( \mathbb{Z}_{+})\quad\text{and}\quad\big{(}\frac{\boldsymbol{\kappa}_{n}}{ \boldsymbol{\theta}_{n}}\big{)}^{\prime}\in\ell^{1}(\mathbb{Z}_{+}). \tag{5.11}\]
Proof.: It follows from (5.2) that
\[\boldsymbol{\zeta}_{n}^{\prime}=\big{(}\frac{\boldsymbol{\kappa}_{n}}{ \boldsymbol{\theta}_{n}}\big{)}^{\prime}\frac{\boldsymbol{\theta}_{n+1}}{e^{ -i\boldsymbol{\theta}_{n+1}}-1}+\frac{\boldsymbol{\kappa}_{n}}{\boldsymbol{ \theta}_{n}}\big{(}\frac{\boldsymbol{\theta}_{n}}{e^{-i\boldsymbol{\theta}_{n }}-1}\big{)}^{\prime}. \tag{5.12}\]
Note that the function \(f(\boldsymbol{\theta})=\boldsymbol{\theta}(e^{-i\boldsymbol{\theta}}-1)^{-1}\) is \(C^{1}\) in a neighbourhood of the point \(\boldsymbol{\theta}=0\). Therefore the sequence \(f(\boldsymbol{\theta}_{n})\) is bounded as \(n\to\infty\) and \(\big{(}f(\boldsymbol{\theta}_{n})\big{)}^{\prime}\in\ell^{1}(\mathbb{Z}_{+})\) according to estimate (5.4) and condition (5.10). Thus conditions (5.11) imply that both terms in the right-hand side of (5.12) are in \(\ell^{1}(\mathbb{Z}_{+})\).
### Estimate of the "integral" kernel
Recall that the sequences \(\mathcal{A}_{n}=\mathcal{A}_{n}(z)\) and \(\Lambda_{n}=\Lambda_{n}(z)\) are given by relations (1.15) and (4.2), respectively. Our goal is to estimate the matrix elements \(G_{n,m}\) defined by equalities (4.6) and (4.7) and to prove inclusion (4.10). Our estimates apply to all values of \(\sigma\).
Putting together formulas (4.2) and (4.6), we see that
\[X_{n}=ca_{n}\mathcal{A}_{n+1}\mathcal{A}_{n}\]
where the constant \(c=(a_{0}\mathcal{A}_{1}\mathcal{A}_{0})^{-1}\). According to definition (1.15) this yields equality
\[X_{n}^{-1}=-c\boldsymbol{\kappa}_{n}e^{-i\boldsymbol{\varphi}_{n}} \tag{5.13}\]
where
\[\boldsymbol{\kappa}_{n}=n^{\rho}(n+1)^{\rho}a_{n}^{-1} \tag{5.14}\]
and
\[\boldsymbol{\varphi}_{n}=\varphi_{n}+\varphi_{n+1}. \tag{5.15}\]
It follows from condition (1.8) that
\[\boldsymbol{\kappa}_{n}=n^{-\nu}\big{(}1+(\rho-\alpha)n^{-1}+O(n^{-2})\big{)} \tag{5.16}\]
where \(\nu=\sigma-2\rho\) satisfies (2.12).
First, we reformulate Lemma 5.1 and its consequences in a particular form adapted to our problem.
**Lemma 5.4**.: _Let the assumptions of one of Theorems 2.1, 2.3 or 2.4 be satisfied. Define the sequences \(\boldsymbol{\kappa}_{n}\) and \(\boldsymbol{\varphi}_{n}\) by equalities (5.14) and (5.15). Then estimate (5.9) holds._
Proof.: Set
\[\boldsymbol{\theta}_{n}=\boldsymbol{\varphi}_{n+1}-\boldsymbol{\varphi}_{n}= \theta_{n}+\theta_{n+1}. \tag{5.17}\]
It follows from relations (3.21) or (3.43) that inclusion (5.10) holds. Therefore in view of Remark 5.3, it suffices to check inclusions (5.11). By definition (2.7), we have
\[\boldsymbol{\kappa}_{n}\boldsymbol{\theta}_{n}^{-1}=\big{(}n^{\nu}\boldsymbol {\kappa}_{n}\big{)}\big{(}n^{\nu}S_{n}\big{)}^{-1},\quad S_{n}=\sqrt{T_{n}}+ \sqrt{T_{n+1}}, \tag{5.18}\]
where \(T_{n}\) is defined by equality (2.6) (in particular, \(T_{n}=t_{n}\) if \(\sigma>2/3\)). Inclusions \(\big{(}n^{\nu}\boldsymbol{\kappa}_{n}\big{)}\in\ell^{\infty}(\mathbb{Z}_{+})\) and \(\big{(}n^{\nu}\boldsymbol{\kappa}_{n}\big{)}^{\prime}\in\ell^{1}(\mathbb{Z}_{+})\) are direct consequences of formula (5.16). It follows from relations (2.9), (2.10) or (2.11) that the product \(n^{\nu}S_{n}\) has a finite non-zero limit (it is used here that \(z\neq 0\) under the assumptions of Theorem 2.3 and that \(z\neq\gamma\tau\) under the assumptions of Theorem 2.4). The inclusion \(\big{(}n^{\nu}S_{n}\big{)}^{\prime}\in\ell^{1}(\mathbb{Z}_{+})\) is again a consequence of (3.21) or (3.43). Therefore (5.18) implies inclusions (5.11) which yields (5.9).
Now we are in a position to estimate the matrix elements \(G_{n,m}\). First, we note that
\[C_{1}m^{\nu}e^{-\operatorname{Im}\boldsymbol{\varphi}_{m}}\leq|X_{m}|\leq C_ {2}m^{\nu}e^{-\operatorname{Im}\boldsymbol{\varphi}_{m}} \tag{5.19}\]
according to definition (5.13) and relation (5.16). Next, we apply inequality (5.9) to elements (5.13) which yields
\[\big{|}\sum_{p=n}^{m-1}X_{p}^{-1}\big{|}\leq Ce^{\operatorname{Im}\boldsymbol{\varphi}_{m}}. \tag{5.20}\]
Combining (5.19) and (5.20), we obtain a convenient estimate on product (4.7).
**Lemma 5.5**.: _Under the assumptions of any of Theorems 2.1, 2.3 or 2.4, we have an estimate_
\[|G_{n,m}|\leq Cm^{\nu} \tag{5.21}\]
_where \(\nu\) is given by (2.12) and the constant \(C\) does not depend on \(n\) and \(m\)._
### Solutions of the integral equation
Next, we consider integral equation (4.8). Observe that remainder (4.3) obeys the same estimate (1.20) as \(\mathbf{r}_{n}\). Thus, according to the results of Sect. 3 (see Propositions 3.2, 3.3, 3.4 and 3.6)
\[|R_{m}|\leq Cm^{-\delta} \tag{5.22}\]
where \(\delta\) satisfies conditions (3.1).
Putting together (5.21) and (5.22), we obtain an estimate on sequence (4.9):
\[h_{m}\leq Cm^{\nu-\delta}.\]
Comparing (2.12) and (3.1), we see that \(\nu-\delta<-1\). It follows that sum (4.13) satisfies an estimate
\[H_{n}\leq Cn^{\nu-\delta+1}.\]
Therefore condition (4.10) holds, and Theorem 4.7 applies in our case. This yields estimates (4.15) and (4.19). Moreover, the right-hand side of (4.19) can be estimated explicitly. Indeed, note that \(\operatorname{Im}\boldsymbol{\varphi}_{n}\leq\operatorname{Im}\boldsymbol{ \varphi}_{m}\) for \(m\geq n\) according to (2.4). Thus, it follows from (5.13) and (5.16) that
\[|X_{n}^{-1}X_{m}|\leq|\boldsymbol{\kappa}_{n}\boldsymbol{\kappa}_{m}^{-1}|e^{\operatorname{Im}(\boldsymbol{\varphi}_{n}-\boldsymbol{\varphi}_{m})}\leq Cn^{-\nu}m^{\nu},\quad m\geq n,\]
so that inequality (4.19) yields an estimate
\[|u_{n+1}-u_{n}|\leq Cn^{-\nu}\sum_{m=n}^{\infty}m^{\nu-\delta}\leq C_{1}n^{- \delta+1}.\]
We see that, under the assumptions of any of Theorems 2.1, 2.3 or 2.4, condition (4.10) is satisfied. Hence the following three results are direct consequences of Theorem 4.7 (see also Lemma 4.8). Recall that the number \(\nu\) is defined by relations (2.12) and \(\delta\) satisfies conditions (3.1).
**Theorem 5.6**.: _Let the assumptions of Theorem 2.1 be satisfied._
_If \(\tau<0\), then for every \(z\in\operatorname{clos}\Pi\) equation (4.8) has a solution \(u_{n}(z)\) satisfying asymptotic relations_
\[u_{n}(z)=1+O(n^{\nu-\delta+1}) \tag{5.23}\]
_and_
\[u_{n}^{\prime}(z)=O(n^{-\delta+1}) \tag{5.24}\]
_where \(\nu=1/2\) and \(\delta>1/2+\sigma\). For all \(n\in\mathbb{Z}_{+}\), the functions \(u_{n}(z)\) are analytic in \(\Pi\) and are continuous up to the cut along the real axis._
_If \(\tau>0\), then relations (5.23) and (5.24) are true for all \(z\in\mathbb{C}\). In this case the functions \(u_{n}(z)\) are analytic in the whole complex plane \(\mathbb{C}\)._
_For all \(\tau\neq 0\), asymptotic formula (4.5) is uniform in \(z\) from compact subsets of \(\mathbb{C}\)._
**Theorem 5.7**.: _Let the assumptions of Theorem 2.3 be satisfied. Then for every \(z\neq 0\) such that \(z\in\gamma\operatorname{clos}\Pi_{0}\), equation (4.8) has a solution \(u_{n}(z)\) with asymptotics (5.23), (5.24) where \(\nu=\sigma/2\) and \(\delta>1+\sigma/2\). For all \(n\in\mathbb{Z}_{+}\), the functions \(u_{n}(z)\) are analytic in \(z\in\gamma\Pi_{0}\) and are continuous up to the cut along the half-axis \(\gamma\mathbb{R}_{+}\), with a possible exception of the point \(z=0\)._
**Theorem 5.8**.: _Let the assumptions of Theorem 2.4 be satisfied. Then for every \(z\) such that \(z\in\gamma(\tau+\operatorname{clos}\Pi_{0})\), \(z\neq\gamma\tau\), equation (4.8) has a solution \(u_{n}(z)\) with asymptotics (5.23), (5.24) where \(\nu=1/2\) and \(\delta=2\). For all \(n\in\mathbb{Z}_{+}\), the functions \(u_{n}(z)\) are analytic in \(z\in\gamma(\tau+\Pi_{0})\) and are continuous up to the cut along the half-axis \(\gamma(\tau+\mathbb{R}_{+})\), with a possible exception of the point \(\gamma\tau\)._
### The Jost solutions
Now it is easy to construct solutions of the Jacobi equation (1.1) with asymptotics (2.1) as \(n\to\infty\). We call them the Jost solutions.
According to Lemma 4.1 equation (4.4) for the sequence \(u_{n}(z)\) and equation (1.1) for the sequence
\[f_{n}(z)=(-\gamma)^{n}n^{-\rho}e^{i\varphi_{n}(\gamma z)}u_{n}(z) \tag{5.25}\]
are equivalent. Therefore Theorems 2.1, 2.3 and 2.4 are direct consequences of Theorems 5.6, 5.7 and 5.8, respectively.
Finally, we show that the Jost solutions are determined uniquely by their asymptotics (2.1). This is quite simple for regular \(z\). Recall that the set \(\mathcal{S}\) was defined by relations (2.8).
**Proposition 5.9**.: _Let the assumptions of one of Theorems 2.1, 2.3 or 2.4 be satisfied. If \(\sigma=3/2\), we also assume that \(\tau>0\). Suppose that \(z\not\in\operatorname{clos}\mathcal{S}\). Then the solution \(f_{n}(z)\) of equation (1.1) satisfying condition (2.1) is unique._
Proof.: Suppose that solutions \(f_{n}\) and \(\tilde{f}_{n}\) of equation (1.1) are given by equality (5.25) where \(u_{n}\) and \(\tilde{u}_{n}\) obey condition (4.5). Then their Wronskian (5.5) equals
\[W[f,\tilde{f}]=-\gamma a_{n}n^{-\rho}(n+1)^{-\rho}e^{i\varphi_{n}}e^{i\varphi_ {n+1}}\big{(}u_{n}\tilde{u}_{n+1}-u_{n+1}\tilde{u}_{n}\big{)}. \tag{5.26}\]
As explained in Sect. 2.2, under the assumptions of Proposition 5.9 the sequence \(e^{i\varphi_{n}}\) tends to zero exponentially as \(n\to\infty\) whence \(W[f,\tilde{f}]=0\) and consequently \(\tilde{f}_{n}=Cf_{n}\) for some constant \(C\). It now follows from (5.25) that \(\tilde{u}_{n}=Cu_{n}\) where \(C=1\) by (4.5).
**Remark 5.10**.: If \(\sigma=3/2\) and \(\tau<0\), then instead of (4.5) we have to require a stronger condition
\[u_{n}=1+O(n^{-1/2}). \tag{5.27}\]
Note that in view of (2.21) this condition is satisfied for the Jost solution \(f_{n}(\lambda+i\varepsilon)\) of equation (1.1) constructed in Theorem 2.1. Suppose that two solutions \(f_{n}\) and \(\tilde{f}_{n}\) are given by formula (5.25) where \(u_{n}\) and \(\tilde{u}_{n}\) satisfy (5.27), whence \(u_{n}\tilde{u}_{n+1}-u_{n+1}\tilde{u}_{n}=O(n^{-1/2})\). Since \(\rho=1/2\) now, it follows from asymptotic formula (2.21) and relation (5.26) that
\[|W[f,\tilde{f}]|=O\big{(}a_{n}n^{-1-2|\varepsilon|/\sqrt{|\tau|}}(u_{n}\tilde{u} _{n+1}-u_{n+1}\tilde{u}_{n})\big{)}=O(n^{-2|\varepsilon|/\sqrt{|\tau|}})=0\]
because \(\varepsilon\neq 0\). This implies that \(\tilde{f}_{n}=f_{n}\).
The results for \(z\) in the spectrum of the operator \(J\) are slightly weaker.
**Proposition 5.11**.: _Let the assumptions of one of Theorems 2.1, 2.3 or 2.4 be satisfied. Suppose that \(z=\lambda\pm i0\) where \(\lambda\in\mathcal{S}\). Then the solution \(f_{n}(z)\) of equation (1.1) satisfying relation (5.25) with \(u_{n}\) obeying conditions (4.5) and (5.24) is unique._
Proof.: Suppose that two solutions \(f_{n}\) and \(\tilde{f}_{n}\) of equation (1.1) satisfy these conditions. Their Wronskian is given by equality (5.26) where
\[u_{n}\tilde{u}_{n+1}-u_{n+1}\tilde{u}_{n}=u_{n}(\tilde{u}_{n+1}-\tilde{u}_{n} )+(u_{n}-u_{n+1})\tilde{u}_{n}=O(n^{-\delta+1}).\]
It follows that
\[W[f,\tilde{f}]=O(n^{\nu-\delta+1}),\quad\nu=\sigma-2\rho,\quad n\to\infty.\]
Putting together relations (2.12) and (3.1), we see that \(\nu-\delta+1<0\) for all \(\sigma\in(0,3/2]\). Therefore \(W[f,\tilde{f}]=0\) and, consequently, \(\tilde{f}=f\).
## 6. Orthogonal polynomials
Here we describe an asymptotic behavior as \(n\to\infty\) of all solutions \(F_{n}(z)\) of equation (1.1). In particular, these results apply to the orthonormal polynomials \(P_{n}(z)\). We have to distinguish values of \(z=\lambda\in\mathcal{S}\) (this set was defined by relations (2.8)) in the absolutely continuous spectrum of a Jacobi operator and regular points \(z\in\mathbb{C}\setminus\operatorname{clos}\mathcal{S}\).
### Regular points
Our goal in this subsection is to prove Theorem 2.7 and Proposition 2.8. Let us proceed from the following assertion.
**Proposition 6.1**.: _[_30_, Theorem 2.2]_ _Let \(f(z)=(f_{n}(z))\) be an arbitrary solution of the Jacobi equation (1.1) such that \(f_{n}(z)\neq 0\) for sufficiently large \(n\), say \(n\geq n_{0}\). Then sequence \(g(z)=(g_{n}(z))\) defined by (1.18) also satisfies equation (1.1), and the Wronskian_
\[W[f(z),g(z)]=1,\]
_so that the solutions \(f(z)\) and \(g(z)\) are linearly independent._
In this subsection, we suppose that \(z\in\mathbb{C}\setminus\operatorname{clos}\mathcal{S}\) and \(f_{n}=f_{n}(z)\) is the Jost solution of equation (1.1). Its asymptotics is given by formulas (1.15), (1.16). Our
aim is to find an asymptotic behavior of the solution \(g_{n}=g_{n}(z)\) as \(n\to\infty\). The dependence on \(z\) will be omitted in notation. Let us set
\[\Sigma_{n}=\sum_{m=n_{0}}^{n}(a_{m-1}f_{m-1}f_{m})^{-1},\quad n\geq n_{0}; \tag{6.1}\]
then (1.18) reads as
\[g_{n}=f_{n}\Sigma_{n}. \tag{6.2}\]
Using equalities (1.15), (1.17) and notation (5.14), (5.15), we can rewrite sum (6.1) as
\[\Sigma_{n}=-\gamma\sum_{m=n_{0}-1}^{n-1}\boldsymbol{\kappa}_{m}\mathbf{u}_{m}e ^{-i\boldsymbol{\varphi}_{m}}\quad\text{where}\quad\mathbf{u}_{m}=(u_{m}u_{m+1 })^{-1}. \tag{6.3}\]
In view of identity (5.1), we have
\[e^{-i\boldsymbol{\varphi}_{m}}=\big{(}e^{-i\boldsymbol{\theta}_{m}}-1\big{)}^ {-1}\big{(}e^{-i\boldsymbol{\varphi}_{m}}\big{)}^{\prime},\]
with \(\boldsymbol{\theta}_{m}\) given by (5.17). This allows us to integrate by parts in (6.3). Indeed, using formula (5.3), we find that
\[-\gamma\Sigma_{n}e^{i\boldsymbol{\varphi}_{n}}=\boldsymbol{\zeta}_{n-1} \mathbf{u}_{n-1}-\boldsymbol{\zeta}_{n_{0}-2}\mathbf{u}_{n_{0}-2}e^{-i \boldsymbol{\varphi}_{n_{0}-1}}e^{i\boldsymbol{\varphi}_{n}}+\widetilde{ \Sigma}_{n}e^{i\boldsymbol{\varphi}_{n}} \tag{6.4}\]
where \(\boldsymbol{\zeta}_{n}\) is defined by equality (5.6) and
\[\widetilde{\Sigma}_{n}=-\sum_{m=n_{0}-1}^{n-1}\big{(}\boldsymbol{\zeta}_{m-1} \mathbf{u}_{m-1}\big{)}^{\prime}e^{-i\boldsymbol{\varphi}_{m}}. \tag{6.5}\]
We will see that asymptotics of \(\Sigma_{n}\) as \(n\to\infty\) is determined by the first term in the right-hand side of expression (6.4). Let us calculate it. Recall that \(\mathbf{u}_{n}\to 1\) as \(n\to\infty\) according to Theorem 4.3. Therefore putting together asymptotic formulas (2.14) for \(\boldsymbol{\theta}_{n}\) and (5.16) for \(\boldsymbol{\kappa}_{n}\), we find that
\[\lim_{n\to\infty}\boldsymbol{\zeta}_{n}\mathbf{u}_{n}=\lim_{n\to\infty} \boldsymbol{\zeta}_{n}=i\varkappa \tag{6.6}\]
with the coefficient \(\varkappa=\varkappa(z)\) given by (2.13).
The second term in the right-hand side of (6.4) tends to zero as \(n\to\infty\) due to the factor \(e^{i\boldsymbol{\varphi}_{n}}\). The same is true for the third term. To show this, we need to estimate the derivatives in (6.5).
**Lemma 6.2**.: _Let the sequence \(\boldsymbol{\zeta}_{n}\) be defined by equality (5.6). Then_
\[\boldsymbol{\zeta}_{n}^{\prime}=O(n^{-1-\varepsilon}) \tag{6.7}\]
_for some \(\varepsilon>0\)._
Proof.: Let us write \(\boldsymbol{\zeta}_{n}\) as a product
\[\boldsymbol{\zeta}_{n}=(\boldsymbol{\kappa}_{n}n^{\nu})(n^{\nu}\boldsymbol{ \theta}_{n})^{-1}(\boldsymbol{\theta}_{n}\big{(}e^{-i\boldsymbol{\theta}_{n}}- 1\big{)}^{-1}),\quad\nu=\sigma-2\rho, \tag{6.8}\]
and estimate all factors separately. It follows from relation (5.16) that the product \(\boldsymbol{\kappa}_{n}n^{\nu}\) tends to \(1\) and its derivative is \(O(n^{-2})\) as \(n\to\infty\). Next, we consider \((n^{\nu}\boldsymbol{\theta}_{n})^{-1}\). According to definitions (2.6) and (2.7) we have
\[n^{\nu}\theta_{n}=(n^{\nu}\sqrt{t_{n}})\sqrt{1+\sum_{l=1}^{L-1}p_{l+1}t_{n}^{ l}}. \tag{6.9}\]
By definition (1.11), the factor \(n^{\nu}\sqrt{t_{n}}\) has a finite non-zero limit as \(n\to\infty\). Moreover, its derivative is \(O(n^{-\sigma})\) for \(\sigma>1\) and \(O(n^{\sigma-2})\) for \(\sigma<1\) (it is zero if \(\sigma=1\)). Similarly, the derivative of the second factor in (6.9) is \(O(n^{-2})\) for \(\sigma>1\) and \(O(n^{-1-\sigma})\) for \(\sigma<1\). These arguments also show that \(\theta_{n}^{\prime}=O(n^{-3/2})\) for \(\sigma>1\) and \(\theta_{n}^{\prime}=O(n^{-\sigma/2-1})\) for \(\sigma<1\). Therefore the derivative of the third factor in (6.8) is also \(O(n^{-1-\varepsilon})\), \(\varepsilon>0\). This proves estimate (6.7) on product (6.8).
To estimate sum (6.5), we use the following elementary assertion of a general nature.
**Lemma 6.3**.: _Suppose that a sequence \(x_{n}\in\ell^{1}(\mathbb{Z}_{+})\) and a sequence \(\vartheta_{n}\geq 0\). Set_
\[\phi_{n}=\sum_{m=0}^{n}\vartheta_{m} \tag{6.10}\]
_and assume that_
\[\lim_{n\to\infty}\phi_{n}=\infty. \tag{6.11}\]
_Then_
\[\lim_{n\to\infty}e^{-\phi_{n}}\sum_{m=0}^{n}x_{m}e^{\phi_{m}}=0.\]
Proof.: By definition (6.10), we have
\[e^{-\phi_{n}}\sum_{m=0}^{n}x_{m}e^{\phi_{m}}=\sum_{m=0}^{\infty}X_{m}(n) \tag{6.12}\]
where
\[X_{m}(n)=x_{m}\exp\big{(}-\sum_{p=m}^{n}\vartheta_{p}\big{)}\quad\text{if} \quad m\leq n\]
and \(X_{m}(n)=0\) if \(m>n\). Clearly, \(X_{m}(n)\leq x_{m}\) because \(\vartheta_{n}\geq 0\) and \(X_{m}(n)\to 0\) as \(n\to\infty\) for fixed \(m\) by virtue of condition (6.11). Therefore, by the dominated convergence theorem, sum (6.12) tends to zero as \(n\to\infty\).
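For illustration, here is a small numerical instance of Lemma 6.3 with the made-up choices \(x_{n}=n^{-2}\) and \(\vartheta_{n}=n^{-1/2}\), for which \(\phi_{n}\to\infty\); all exponents are kept non-positive so that the evaluation is stable.

```python
import numpy as np

N = 200000
n = np.arange(1, N + 1)
x = 1.0 / n**2                      # summable sequence x_n
vartheta = 1.0 / np.sqrt(n)         # vartheta_n >= 0 with divergent partial sums
phi = np.cumsum(vartheta)           # phi_n -> infinity, cf. (6.10), (6.11)

def weighted_tail(k):
    """e^{-phi_k} * sum_{m<=k} x_m e^{phi_m}, written with non-positive exponents."""
    return float(np.sum(x[:k] * np.exp(phi[:k] - phi[k - 1])))

for k in (100, 1000, 10000, 100000):
    print(k, weighted_tail(k))      # decreases towards zero, as the lemma asserts
```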
Now we are in a position to estimate the third term in (6.4).
**Lemma 6.4**.: _Sum (6.5) satisfies the condition_
\[\lim_{n\to\infty}\widetilde{\Sigma}_{n}e^{i\boldsymbol{\varphi}_{n}}=0. \tag{6.13}\]
Proof.: It follows from estimates (5.24) and (6.7) that
\[|(\boldsymbol{\zeta}_{n}\mathbf{u}_{n})^{\prime}|\leq|\boldsymbol{\zeta}_{n}^ {\prime}||\mathbf{u}_{n}|+|\boldsymbol{\zeta}_{n+1}||\mathbf{u}_{n}^{\prime}| \leq Cn^{-\delta+1} \tag{6.14}\]
where the value of \(\delta\) is indicated in Theorems 5.6, 5.7 and 5.8. Therefore, by definition (6.5) and the differentiation formula (5.1), we have
\[|\widetilde{\Sigma}_{n}|\leq C\sum_{m=n_{0}-1}^{n-1}m^{-\delta+1}e^{\phi_{m}}= C\sum_{m=n_{0}-1}^{n-1}y_{m}\big{(}e^{\phi_{m}}\big{)}^{\prime},\quad\phi_{m}= \operatorname{Im}\boldsymbol{\varphi}_{m}, \tag{6.15}\]
where
\[y_{m}=m^{-\delta+1}\big{(}e^{\vartheta_{m}}-1\big{)}^{-1},\quad\vartheta_{m}= \phi_{m+1}-\phi_{m}. \tag{6.16}\]
Using relation (5.3) and integrating in the right-hand side of (6.15) by parts, we find that
\[|\widetilde{\Sigma}_{n}|\leq C\Big{(}y_{n-1}e^{\phi_{n}}-y_{n_{0}-2}e^{\phi_{ n_{0}-1}}-\sum_{m=n_{0}-1}^{n-1}y_{m-1}^{\prime}e^{\phi_{m}}\Big{)}. \tag{6.17}\]
Let us estimate expression (6.16). It follows from relations (2.9), (2.10) and (2.11) that
\[\vartheta_{n}=cn^{-\mu}(1+o(1))\]
for some \(c=c_{\sigma,\tau}>0\). Here \(\mu=\sigma/2\) if \(\sigma\leq 1\), \(\mu=1/2\) if \(\sigma\in[1,3/2]\), \(\tau>0\) and \(\mu=\sigma-1/2\) if \(\sigma\in[1,3/2]\), \(\tau<0\). Therefore product (6.16) is estimated as
\[|y_{n}|\leq Cn^{-\delta+1+\mu}. \tag{6.18}\]
Note that \(-\delta+1+\mu<0\) for all values of \(\sigma\) and \(\tau\). Moreover, estimate (6.18) can be differentiated which yields
\[|y_{n}|\leq Cn^{-\varepsilon},\quad|y_{n}^{\prime}|\leq Cn^{-1-\varepsilon}\]
for some \(\varepsilon>0\).
Thus, it follows from inequality (6.17) that
\[e^{-\phi_{n}}|\widetilde{\Sigma}_{n}|\leq C\Big{(}n^{-\varepsilon}+\sum_{m=n_ {0}-1}^{n-1}m^{-1-\varepsilon}e^{-\phi_{n}+\phi_{m}}\Big{)}\]
which in view of Lemma 6.3 implies relation (6.13).
Let us now recall equality (6.4) and put relations (6.6) and (6.13) together. This leads to the following result.
**Lemma 6.5**.: _Sum (6.1) satisfies the condition_
\[\lim_{n\to\infty}\Sigma_{n}e^{i\boldsymbol{\varphi}_{n}}=-i\gamma\varkappa. \tag{6.19}\]
Using equality (6.2) we can now conclude the _proofs_ of Theorem 2.7 and Proposition 2.8. Indeed, combining asymptotics (2.1) and (6.19), we obtain relation (1.19). This implies both formulas (2.21) and (2.26). \(\quad\square\)
Recall (see Sect. 1.1, for more details) that equation (1.1) is in the limit point case if, for \(\operatorname{Im}z\neq 0\), it has a unique, up to a constant factor, non-trivial solution \(f_{n}(z)\) such that inclusion (2.22) is satisfied. This is equivalent to the essential self-adjointness of the minimal Jacobi operator \(J_{\min}\) in the space \(\ell^{2}(\mathbb{Z}_{+})\). In this case we set \(\operatorname{clos}J_{\min}=J_{\max}=:J\).
According to Theorem 2.7 for \(\operatorname{Im}z\neq 0\), the sequences \(g_{n}(z)\) tend to infinity exponentially as \(n\to\infty\) and according to Proposition 2.8 they tend to infinity as a power of \(n\) (or to zero but slower than \(n^{-1/2}\)). In all cases relation (2.25) is satisfied. Therefore it follows from the limit point/circle theory that under our assumptions the operators \(J_{\min}\) are essentially self-adjoint. This proves Proposition 2.9.
Now it is easy to find the asymptotics of all solutions \(F=(F_{n})\) of equation (1.1). Indeed, using Proposition 6.1, we see that
\[F_{n}=-W[F,f]g_{n}+cf_{n}\]
for some constant \(c\). The asymptotics of the solutions \(g_{n}\) and \(f_{n}\) are given by formulas (1.19) and (2.1). Obviously, \(f_{n}\) makes no contribution to the asymptotics of \(F_{n}\). This leads to the following result.
**Theorem 6.6**.: _Let one of the following three assumptions be satisfied:_
\(1^{0}\) _the conditions of Theorem_ 2.1 _where either \(\tau<0\) and \(\operatorname{Im}z\neq 0\) or \(\tau>0\) and \(z\in\mathbb{C}\) is arbitrary_
\(2^{0}\) _the conditions of Theorem_ 2.3 _where either \(\gamma>0\) and \(z\not\in[0,\infty)\) or \(\gamma<0\) and \(z\not\in(-\infty,0]\)_
\(3^{0}\) _the conditions of Theorem_ 2.4 _where either \(\gamma>0\) and \(z\not\in[\tau,\infty)\) or \(\gamma<0\) and \(z\not\in(-\infty,-\tau]\)_
_Then an arbitrary solution \(F(z)=(F_{n}(z))\) has an asymptotics, as \(n\to\infty\),_
\[F_{n}(z)=-iW[F(z),f(z)]\varkappa(z)(-\gamma)^{n+1}n^{-\rho}e^{-i\varphi_{n}( \gamma z)}\big{(}1+o(1)\big{)},\quad z\not\in\operatorname{clos}\mathcal{S},\]
_where the coefficient \(\varkappa(z)\) is given by formula (2.13)._
In particular, Theorem 6.6 applies to the orthonormal polynomials \(P_{n}(z)\). Apparently, in the critical case \(|\gamma|=1\), the asymptotic behavior of the orthonormal polynomials \(P_{n}(z)\) for regular points \(z\in\mathbb{C}\) was never investigated before (except for the Laguerre polynomials). This is technically the most difficult part of this paper.
### Continuous spectrum
First, we check that, on the continuous spectrum of the operator \(J\), the Jost solutions \(f_{n}(\lambda+i0)\) and \(f_{n}(\lambda-i0)=\overline{f_{n}(\lambda+i0)}\) of equation (1.1) are linearly independent. Recall that the Wronskian of two solutions of this equation is given by formula (5.5), the number \(\rho\) is defined by equalities (2.2) and the sequences \(\theta_{n}(\lambda)\), \(\varphi_{n}(\lambda)\) are constructed in Theorems 2.1, 2.3 and 2.4. Observe
that boundary values of the coefficient \(\varkappa(z)\) defined by formula (2.13) are given by the equalities
\[\varkappa(\lambda+i0)=\begin{cases}\sqrt{|\tau|}&\text{if}\quad\sigma>1,\,\tau<0,\,\lambda\in\mathbb{R}\\ \sqrt{\lambda}&\text{if}\quad\sigma<1,\,\lambda>0\\ \sqrt{\lambda-\tau}&\text{if}\quad\sigma=1,\,\lambda>\tau\end{cases} \tag{6.20}\]
and \(\varkappa(\lambda-i0)=-\varkappa(\lambda+i0).\)
**Lemma 6.7**.: _Let one of the following three assumptions be satisfied:_
\(1^{0}\) _the conditions of Theorem_ 2.1 _with_ \(\tau<0\) _and_ \(\lambda\in\mathbb{R}\)__
\(2^{0}\) _the conditions of Theorem_ 2.3 _with_ \(\gamma\lambda>0\)__
\(3^{0}\) _the conditions of Theorem_ 2.4 _with_ \(\gamma\lambda>\tau\)_._
_Then the Wronskian_
\[w(\lambda):=\frac{1}{2i}W[f(\lambda+i0),f(\lambda-i0)]=\gamma\varkappa(\gamma (\lambda+i0))>0. \tag{6.21}\]
Proof.: Set \(\varphi_{n}=\varphi_{n}(\gamma(\lambda+i0))\), \(u_{n}=u_{n}(\gamma(\lambda+i0))\). It follows from formulas (1.15) and (1.17) that
\[2iw(\lambda)=-\gamma a_{n}n^{-\rho}(n+1)^{-\rho}\Big{(}e^{i\varphi_{n}}e^{-i \varphi_{n+1}}u_{n}\bar{u}_{n+1}-e^{-i\varphi_{n}}e^{i\varphi_{n+1}}\bar{u}_ {n}u_{n+1}\Big{)}.\]
Using condition (1.8), we see that
\[w(\lambda)=-\gamma n^{\nu}\operatorname{Im}\big{(}e^{-i\theta_{n+1}}u_{n} \bar{u}_{n+1}\big{)}(1+o(1)) \tag{6.22}\]
where \(\theta_{n}=\varphi_{n+1}-\varphi_{n}\) and \(\nu=\sigma-2\rho\). Observe that
\[\operatorname{Im}\big{(}e^{-i\theta_{n+1}}u_{n}\bar{u}_{n+1}\big{)}= \operatorname{Im}\big{(}(u_{n}-u_{n+1})\bar{u}_{n+1}\big{)}-\theta_{n+1} \operatorname{Re}(u_{n}\bar{u}_{n+1})+O(\theta_{n+1}^{2}). \tag{6.23}\]
According to Theorems 5.6, 5.7 or 5.8 the first term in the right-hand side of (6.23) is \(O(n^{-\delta+1})\) where \(\delta-1>\nu\). It follows from (2.14) that the second term is \(-\varkappa(\gamma(\lambda+i0))n^{-\nu}(1+o(1))\). Finally, the contribution of \(O(\theta_{n+1}^{2})\) to (6.22) is zero. Therefore equality (6.21) is a direct consequence of (6.22) and (6.23).
Let us introduce the Wronskian of the solutions \(P(z)=(P_{n}(z))\) and \(f(z)=(f_{n}(z))\) of equation (1.1):
\[\Omega(z):=W[P(z),f(z)]=a_{-1}(P_{-1}(z)f_{0}(z)-P_{0}(z)f_{-1}(z))=-a_{-1}f_{ -1}(z),\quad z\not\in\mathcal{S}. \tag{6.24}\]
**Lemma 6.8**.: _The function \(\Omega(z)\) is analytic in \(\mathbb{C}\setminus\operatorname{clos}\mathcal{S}\) and \(\Omega(z)=0\) if and only if \(z\) is an eigenvalue of the operator \(J\). In particular, \(\Omega(z)\neq 0\) for \(\operatorname{Im}z\neq 0\)._
Proof.: The analyticity of \(\Omega(z)\) is a direct consequence of definition (6.24) because \(f_{-1}(z)\), as well as all functions \(f_{n}(z)\), is analytic. If \(\Omega(z)=0\), then \(P(z)\) and \(f(z)\) are proportional whence \(P(z)\in\ell^{2}(\mathbb{Z}_{+})\) by virtue of Proposition 2.6. Since \(P_{-1}(z)=0\), it follows that \(JP(z)=zP(z)\) so that \(z\) is an eigenvalue of the operator \(J\). For \(\operatorname{Im}z\neq 0\), this is impossible because \(J\) is a self-adjoint operator. Conversely, if \(z\) is an eigenvalue of \(J\), then \(P(z)\in\ell^{2}(\mathbb{Z}_{+})\), and hence \(f(z)\) and \(P(z)\) are proportional.
Now we are in a position to find an asymptotic behavior of the polynomials \(P_{n}(\lambda)\) for \(\lambda\) in the absolutely continuous spectrum (except thresholds) of the Jacobi operator \(J\). Since the Jost solutions \(f_{n}(\lambda\pm i0)\) are linearly independent and \(\overline{P_{n}(\lambda)}=P_{n}(\lambda)\), we see that
\[P_{n}(\lambda)=\overline{c(\lambda)}f_{n}(\lambda+i0)+c(\lambda)f_{n}(\lambda- i0) \tag{6.25}\]
for some complex constant \(c(\lambda)\). Taking the Wronskian of this equation with \(f(\lambda+i0)\), we can express \(c(\lambda)\) via Wronskian (6.24):
\[-c(\lambda)W[f(\lambda+i0),f(\lambda-i0)]=W[P(\lambda),f(\lambda+i0)]=\Omega( \lambda+i0)\]
whence
\[c(\lambda)=-\frac{\Omega(\lambda+i0)}{2iw(\lambda)}.\]
In view of formula (6.25), this yields the following result.
**Lemma 6.9**.: _For all \(\lambda\in\mathcal{S}\), we have the representation_
\[P_{n}(\lambda)=\frac{\Omega(\lambda-i0)f_{n}(\lambda+i0)-\Omega(\lambda+i0)f_ {n}(\lambda-i0)}{2iw(\lambda)},\quad n\in\mathbb{Z}_{+}. \tag{6.26}\]
Properties of the Wronskians \(\Omega(\lambda\pm i0)\) are summarized in the following statement.
**Theorem 6.10**.: _Let the assumptions of Lemma 6.7 be satisfied. Then the Wronskians \(\Omega(\lambda+i0)\) and \(\Omega(\lambda-i0)=\overline{\Omega(\lambda+i0)}\) are continuous functions of \(\lambda\in\mathcal{S}\) and_
\[\Omega(\lambda\pm i0)\neq 0,\quad\lambda\in\mathcal{S}. \tag{6.27}\]
Proof.: The functions \(\Omega(\lambda\pm i0)\) are continuous in the same region as the Jost solutions. If \(\Omega(\lambda\pm i0)=0\), then according to (6.26) \(P_{n}(\lambda)=0\) for all \(n\in\mathbb{Z}_{+}\). However, \(P_{0}(\lambda)=1\) for all \(\lambda\).
Let us set
\[\kappa(\lambda)=|\Omega(\lambda+i0)|,\quad\Omega(\lambda\pm i0)=\kappa(\lambda )e^{\pm i\eta(\lambda)}. \tag{6.28}\]
In the theory of short-range perturbations of the Schrödinger operator, the functions \(\kappa(\lambda)\) and \(\eta(\lambda)\) are known as the limit amplitude and the limit phase, respectively; the function \(\eta(\lambda)\) is also called the scattering phase or the phase shift. Definition (6.28) fixes the phase \(\eta(\lambda)\) only up to a term \(2\pi m\) where \(m\in\mathbb{Z}\). We emphasize that the amplitude \(\kappa(\lambda)\) and the phase \(\eta(\lambda)\) depend on the values of the coefficients \(a_{n}\) and \(b_{n}\) for all \(n\), and hence they are not determined by the asymptotic behavior of \(a_{n}\), \(b_{n}\) as \(n\to\infty\).
Combined together, relations (2.1) and (6.26) yield asymptotics of the orthonormal polynomials \(P_{n}(\lambda)\).
**Theorem 6.11**.: _Let one of the following three assumptions be satisfied:_
\(1^{0}\) _the conditions of Theorem_ 2.1 _with_ \(\tau<0\) _and_ \(\lambda\in\mathbb{R}\)__
\(2^{0}\) _the conditions of Theorem_ 2.3 _with_ \(\gamma\lambda>0\)__
\(3^{0}\) _the conditions of Theorem_ 2.4 _with_ \(\gamma\lambda>\tau\)_._
_Let the number \(\rho\) be defined by equalities (2.2), and let \(\Phi_{n}(\lambda)=\varphi_{n}(\gamma(\lambda+i0))\) where the sequences \(\varphi_{n}(\lambda)\) are constructed in Theorems 2.1, 2.3 and 2.4. Then, for \(\lambda\in\mathcal{S}\),_
\[P_{n}(\lambda)=\kappa(\lambda)w(\lambda)^{-1}(-\gamma)^{n}n^{-\rho}\sin(\Phi_ {n}(\lambda)-\eta(\lambda))\big{(}1+o(1)\big{)},\quad n\to\infty, \tag{6.29}\]
_where the Wronskian \(w(\lambda)\) is given by equalities (6.20), (6.21) and the amplitude \(\kappa(\lambda)\) and the phase \(\eta(\lambda)\) are defined by relations (6.28)._
We emphasize that the definitions of the numbers \(\rho\) and \(\Phi_{n}(\lambda)\) are different under assumptions \(1^{0}\), \(2^{0}\) and \(3^{0}\), but relation (6.29) is true in all these cases. Under the assumptions of Theorem 6.11 the functions \(\Phi_{n}(\lambda)\) are real and \(\Phi_{n}(\lambda)\to\infty\) so that \(P_{n}(\lambda)\) are oscillating as \(n\to\infty\).
A formula completely similar to (6.29) is true for all real solutions of equation (1.1). Only the coefficients \(\kappa(\lambda)\) and \(\eta(\lambda)\) are changed.
## 7. Spectral results
### Resolvent. Discrete spectrum
If the minimal Jacobi operator \(J_{\min}\) is essentially self-adjoint in the space \(\ell^{2}(\mathbb{Z}_{+})\), then, for \(\operatorname{Im}z\neq 0\), equation (1.1) has a unique (up to a constant factor) solution \(f_{n}(z)\in\ell^{2}(\mathbb{Z}_{+})\). Let \(I\) be the identity operator in the space \(\ell^{2}(\mathbb{Z}_{+})\), and let \(R(z)=(J-zI)^{-1}\) be the resolvent of the operator \(J=\operatorname{clos}J_{\min}\). Recall that the Wronskian \(\Omega(z)\) of the solutions \(P_{n}(z)\) and \(f_{n}(z)\) of equation (1.1) was defined by formula (6.24). The following statement is very close to the corresponding result for differential operators.
**Proposition 7.1**.: _[_30_, Proposition 2.1]_ _In the limit point case, for all \(h=(h_{n})\in\ell^{2}(\mathbb{Z}_{+})\), we have_
\[(R(z)h)_{n}=\Omega(z)^{-1}\Big{(}f_{n}(z)\sum_{m=0}^{n}P_{m}(z)h_{m}+P_{n}(z) \sum_{m=n+1}^{\infty}f_{m}(z)h_{m}\Big{)},\quad\operatorname{Im}z\neq 0. \tag{7.1}\]
**Remark 7.2**.: Let \(e_{0},e_{1},\ldots,e_{n},\ldots\) be the canonical basis in the space \(\ell^{2}(\mathbb{Z}_{+})\). Then representation (7.1) can be equivalently rewritten as
\[\langle R(z)e_{n},e_{m}\rangle=\Omega(z)^{-1}P_{n}(z)f_{m}(z)\text{ if }n\leq m \text{ and }\langle R(z)e_{n},e_{m}\rangle=\langle R(z)e_{m},e_{n}\rangle. \tag{7.2}\]
According to Theorem 2.7 and Proposition 2.8, under our assumptions the operator \(J_{\min}\) is essentially self-adjoint in the space \(\ell^{2}(\mathbb{Z}_{+})\). In view of Proposition 2.6, in this case \(f_{n}(z)\) is the Jost solution. Thus, the resolvent of the Jacobi operator \(J\) admits representation (7.1) where \(f_{n}(z)\) is the Jost solution.
Spectral results about the Jacobi operators \(J\) are direct consequences of representation (7.1). As far as the discrete spectrum is concerned, we use that according to
Theorems 2.1, 2.3 and 2.4, the functions \(f_{n}(z)\), \(n=-1,0,1,\ldots\), and, in particular, \(\Omega(z)\) are analytic functions of \(z\in\mathbb{C}\setminus\operatorname{clos}\mathcal{S}\). In view of Lemma 6.8 this yields the part of Theorem 2.11 concerning the discrete spectrum. Let us state it explicitly.
**Theorem 7.3**.: _Let assumptions (1.8), (1.9) with \(|\gamma|=1\) be satisfied._
\(1^{0}\) _If \(\sigma\in(1,3/2]\) and \(\tau>0\), then the spectrum of the operator \(J\) is discrete._
\(2^{0}\) _If \(\sigma\in(0,1)\), then the spectrum of the operator \(J\) is discrete on the half-axis \((-\infty,0)\) for \(\gamma=1\), and it is discrete on \((0,\infty)\) for \(\gamma=-1\)._
\(3^{0}\) _If \(\sigma=1\), then the spectrum of the operator \(J\) is discrete on the half-axis \((-\infty,\tau)\) for \(\gamma=1\), and it is discrete on \((-\tau,\infty)\) for \(\gamma=-1\)._
### Limiting absorption principle. Continuous spectrum
Next, we consider the absolutely continuous spectrum. According to Theorems 2.1, 2.3 and 2.4, the functions \(f_{n}(z)\), \(n=-1,0,1,\ldots\), and, in particular, \(\Omega(z)\) are continuous up to the cut along the interval \(\mathcal{S}\). Therefore the following result is a direct consequence of relation (6.27) and representation (7.1). Recall that the set \(\mathcal{D}\subset\ell^{2}(\mathbb{Z}_{+})\) consists of finite linear combinations of the basis vectors \(e_{0},e_{1},\ldots\).
**Theorem 7.4**.: _Let the assumptions of Theorems 2.1 for \(\tau<0\), 2.3 or 2.4 be satisfied. Then for all \(u,v\in\mathcal{D}\), the functions \(\langle R(z)u,v\rangle\) are continuous in \(z\) up to the cut along the interval \(\mathcal{S}\) as \(z\) approaches \(\mathcal{S}\) from upper or lower half-planes._
This result is known as the limiting absorption principle. It implies
**Corollary 7.5**.: _The spectrum of the operator \(J\) is absolutely continuous on the closed interval \(\operatorname{clos}\mathcal{S}\), except, possibly, eigenvalues at its endpoints. In particular, it is absolutely continuous and coincides with the whole real axis \(\mathbb{R}\) if \(\sigma\in(1,3/2]\) and \(\tau<0\)._
Let us now consider the spectral projector \(E(\lambda)\) of the operator \(J\). By the Cauchy-Stieltjes-Privalov formula for \(u,v\in\mathcal{D}\), its matrix elements satisfy the identity
\[2\pi i\frac{d\langle E(\lambda)u,v\rangle}{d\lambda}=\langle R(\lambda+i0)u,v \rangle-\langle R(\lambda-i0)u,v\rangle,\quad\lambda\in\mathcal{S}. \tag{7.3}\]
Therefore, the following assertion is a direct consequence of Theorem 7.4.
**Corollary 7.6**.: _For all \(u,v\in\mathcal{D}\), the functions \(\langle E(\lambda)u,v\rangle\) are continuously differentiable in \(\lambda\in\mathcal{S}\)._
Formulas (7.2) and (7.3) allow us to calculate the spectral family \(dE(\lambda)\) in terms of the orthonormal polynomials and the Jost function. Indeed, substituting the expression
\[\langle R(\lambda\pm i0)e_{n},e_{m}\rangle=\Omega(\lambda\pm i0)^{-1}P_{n}( \lambda)f_{m}(\lambda\pm i0),\quad n\leq m,\quad\lambda\in\mathcal{S},\]
into (7.3) and using the identity \(\Omega(\lambda-i0)=\overline{\Omega(\lambda+i0)}\), we find that
\[2\pi i\frac{d\langle E(\lambda)e_{n},e_{m}\rangle}{d\lambda}=P_{n}(\lambda) \frac{\Omega(\lambda-i0)f_{m}(\lambda+i0)-\Omega(\lambda+i0)f_{m}(\lambda-i0)} {|\Omega(\lambda\pm i0)|^{2}}.\]
Combining this representation with formula (6.26) for \(P_{m}(\lambda)\), we obtain the following result.
**Theorem 7.7**.: _Let the assumptions of Theorems 2.1 for \(\tau<0\), 2.3 or 2.4 be satisfied. Then for all \(n,m\in\mathbb{Z}_{+}\), we have the representation_
\[\frac{d\langle E(\lambda)e_{n},e_{m}\rangle}{d\lambda}=(2\pi)^{-1}w(\lambda)| \Omega(\lambda\pm i0)|^{-2}P_{n}(\lambda)P_{m}(\lambda),\quad\lambda\in\mathcal{ S}, \tag{7.4}\]
_where \(w(\lambda)\) and \(\Omega(z)\) are the Wronskians (6.21) and (6.24), respectively. In particular, the spectral measure of the operator \(J\) equals_
\[d\Xi(\lambda):=d\langle E(\lambda)e_{0},e_{0}\rangle=\xi(\lambda)d\lambda, \quad\lambda\in\mathcal{S},\]
_where the weight \(\xi(\lambda)\) is given by the formula_
\[\xi(\lambda)=(2\pi)^{-1}w(\lambda)|\Omega(\lambda\pm i0)|^{-2}. \tag{7.5}\]
**Remark 7.8**.: Formulas (7.4), (7.5) are also true (see [30]) in the non-critical case \(|\gamma|<1\) with \(w=\sqrt{1-\gamma^{2}}\) and \(\mathcal{S}=\mathbb{R}\) as well as (see [26]) for stabilizing coefficients satisfying (1.6) with \(w(\lambda)=2^{-1}\sqrt{1-\lambda^{2}}\) and \(\mathcal{S}=(-1,1)\) (if \(a_{\infty}=1/2\)).
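As a purely illustrative cross-check of the non-critical situation recalled in Remark 7.8, one may take the stabilizing case \(a_{n}\equiv 1/2\), \(b_{n}\equiv 0\), where the spectral measure of \(e_{0}\) is known to be the semicircle \(\xi(\lambda)=\tfrac{2}{\pi}\sqrt{1-\lambda^{2}}\) on \((-1,1)\). The sketch below, which does not concern the critical case \(|\gamma|=1\) studied here, recovers this weight from the eigenvectors of a large truncated Jacobi matrix.

```python
import numpy as np

N = 2000                                   # truncation size; increase for better agreement
off = 0.5 * np.ones(N - 1)                 # a_n = 1/2, b_n = 0 (stabilizing case)
J = np.diag(off, 1) + np.diag(off, -1)
evals, evecs = np.linalg.eigh(J)
weights = evecs[0, :] ** 2                 # |<e_0, v_k>|^2: spectral measure of e_0

bins = np.linspace(-1, 1, 41)
hist, edges = np.histogram(evals, bins=bins, weights=weights, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
semicircle = (2.0 / np.pi) * np.sqrt(1.0 - centers**2)

# interior bins approach the semicircle weight as N and the bin count grow
print(np.max(np.abs(hist[5:-5] - semicircle[5:-5])))
```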
**Remark 7.9**.: For the case \(\sigma\in(0,1)\), another representation for the weight \(\xi(\lambda)\) was obtained in [16]; see formula (4.12) of that paper. It is difficult to compare the two representations because the Jost solutions were defined in [16] in terms of infinite products and formula (4.12) there contains an implicit factor (4.8).
Putting together Theorem 6.10 and formula (7.5), we obtain
**Theorem 7.10**.: _Under the assumptions of Theorem 7.7 the weight \(\xi(\lambda)\) is a continuous strictly positive function of \(\lambda\in\mathcal{S}\)._
Note that this result was deduced in [12] from the subordinacy theory. The assumptions of [12] are more restrictive compared to Theorem 7.7; in particular, it was required in [12] that \(\sigma\in(1/2,2/3)\).
In view of (7.5) the scattering amplitude \(\kappa(\lambda)\) defined by (6.28) can be expressed via the weight \(\xi(\lambda)\):
\[\kappa(\lambda)=(2\pi)^{-1/2}w(\lambda)^{1/2}\xi(\lambda)^{-1/2}.\]
Hence asymptotic formula (6.29) can be rewritten as
\[P_{n}(\lambda)=\big{(}2\pi w(\lambda)\xi(\lambda)\big{)}^{-1/2}(-\gamma)^{n}n ^{-\rho}\big{(}\sin(\Phi_{n}(\lambda)-\eta(\lambda))+o(1)\big{)}\]
as \(n\to\infty\). This form seems to be more common in the orthogonal polynomials literature.
2309.14071 | Charmonium $χ_{c0}$ and $χ_{c2}$ resonances in coupled-channel
scattering from lattice QCD | In order to explore the spectrum of hidden-charm scalar and tensor
resonances, we study meson-meson scattering with $J^{PC}=0^{++}, 2^{++}$ in the
charmonium energy region using lattice QCD. Employing a light-quark mass
corresponding to $m_\pi \approx 391$ MeV, we determine coupled-channel
scattering amplitudes up to around 4100 MeV considering all kinematically
relevant channels consisting of a pair of open-charm mesons or a charmonium
meson with a light meson. A single isolated scalar resonance near 4000 MeV is
found with large couplings to $D\bar{D}$, $D_s \bar{D}_s$ and the kinematically
closed $D^* \bar{D}^*$ channel. A single tensor resonance at a similar mass
couples strongly to $D\bar{D}$, $D\bar{D}^*$ and $D^* \bar{D}^*$. We compare
the extracted resonances to contemporary experimental candidate states,
previous lattice results and theoretical modeling. In contrast to several other
studies, we do not find any significant feature in the scalar amplitudes
between the ground state $\chi_{c0}(1P)$ and the resonance found around 4000
MeV. | David J. Wilson, Christopher E. Thomas, Jozef J. Dudek, Robert G. Edwards | 2023-09-25T12:05:07Z | http://arxiv.org/abs/2309.14071v1 | Charmonium \(\chi_{c0}\) and \(\chi_{c2}\) resonances in coupled-channel scattering from lattice QCD
###### Abstract
In order to explore the spectrum of hidden-charm scalar and tensor resonances, we study meson-meson scattering with \(J^{PC}=0^{++},2^{++}\) in the charmonium energy region using lattice QCD. Employing a light-quark mass corresponding to \(m_{\pi}\approx 391\) MeV, we determine coupled-channel scattering amplitudes up to around \(4100\) MeV considering all kinematically relevant channels consisting of a pair of open-charm mesons or a charmonium meson with a light meson. A single isolated scalar resonance near \(4000\) MeV is found with large couplings to \(D\bar{D}\), \(D_{s}\bar{D}_{s}\) and the kinematically closed \(D^{*}\bar{D}^{*}\) channel. A single tensor resonance at a similar mass couples strongly to \(D\bar{D}\), \(D\bar{D}^{*}\) and \(D^{*}\bar{D}^{*}\). We compare the extracted resonances to contemporary experimental candidate states, previous lattice results and theoretical modeling. In contrast to several other studies, we do not find any significant feature in the scalar amplitudes between the ground state \(\chi_{c0}(1P)\) and the resonance found around \(4000\) MeV.
## I Introduction
In 2003, the discovery of the \(X/\chi_{c1}(3872)\) thrust hadron spectroscopy into a new era. Beginning with the \(B\)-factories and later at BES-III and LHCb, many hadrons with masses consistent with containing a charm-anticharm quark pair have been found in places that were not expected within existing models of charmonium. To date, no theoretical picture has explained the complete pattern of observed states, which have been dubbed the "XYZ" [1].
One key question concerns the relationship between newly observed hadrons and nearby hadron-hadron thresholds. Many states are found in close proximity to thresholds, but is this merely a coincidence or is it the presence of the threshold that drives the existence of the state? One might anticipate that the simplest place to begin to answer this would be close to the lowest open-charm threshold, i.e. where \(D\bar{D}\) is produced. However presently neither the experimental landscape, nor our theoretical understanding of this energy region is clear. In the \(\chi_{c0}\) and \(\chi_{c2}\) channels, where isoscalar \(D\bar{D}\) interact in \(S\) and \(D\)-wave respectively, several candidate states have been reported experimentally.
In the \(\chi_{c0}\) channel, above the unambiguous ground state \(\chi_{c0}(1P)\) at \(3415\) MeV, while simple \(c\bar{c}\) quark models would expect an isolated \(2P\) state near \(3920\) MeV [2], recent experimental analyses suggest multiple candidate states. The lightest, \(\chi_{c0}(3860)\), is so far claimed by only one experiment [3], in \(e^{+}e^{-}\to J/\psi\,D\bar{D}\), appearing as a rather broad resonance. Suggestions of a similar feature have also been made for the \(\gamma\gamma\to D\bar{D}\) process measured at both Belle [4] and BaBar [5], although the resonant interpretation is ambiguous, with much of the structure apparently driven by the Born-term. The \(\chi_{c0}(3860)\) is _not_ seen in a recent evaluation of \(B\) decays to \(D^{+}D^{-}K^{+}\) by the LHCb experiment [6], where such a state might be expected to play significant role. Inclusive production of \(D\bar{D}\) close to threshold shows an enhancement close to \(D^{0}\bar{D}^{0}\) threshold, but this is explained in terms of "feed-down" from decays of the \(X/\chi_{c1}(3872)\)[7], and no additional scalar resonance is needed to describe the data. Theoretical re-analyses of the experimental data, in particular \(\gamma\gamma\to D\bar{D}\), suggests the energy-dependence used to motivate a broad resonance \(\chi_{c0}(3860)\), may actually belong to a _sub-threshold_ pole that behaves like a bound-state in the \(D\bar{D}\) channel [8; 9].
Above \(3900\) MeV there are strong experimental signals for new resonances, although it is not clear how features observed in different hadron-hadron channels relate to each other. The LHCb study of the \(D^{+}D^{-}K^{+}\) final state identifies overlapping narrow \(J^{PC}=0^{++}\) and \(2^{++}\) resonances decaying to \(D^{+}D^{-}\) with masses \(3924\) MeV and \(3927\) MeV [6] respectively. Analysis of three-body final states requires the cross-channel amplitudes, in this case \(DK\), to be modeled, and it is not obvious how sensitive the need for both scalar and tensor \(\chi_{c}\) resonances is to the details of this modeling. A recent LHCb analysis proposes a state decaying to \(D_{s}\bar{D}_{s}\) around \(3960\) MeV [10], which is suggested to be a separate state to the one reported in \(D^{+}D^{-}\), although other studies suggest the \(D\bar{D}\) and \(D_{s}\bar{D}_{s}\) enhancements could be related to a single resonance pole [11; 12]. Earlier results from Belle [13] indicate a low-statistics enhancement in \(\gamma\gamma\to J/\psi\,\omega\) around \(3915\) MeV. This final state can be populated in \(S\)-wave by either \(J^{PC}=0^{++}\) or \(2^{++}\) owing to the vector nature of the \(J/\psi\) and the \(\omega\).
While the experimental situation for excited \(\chi_{c0}\) states, as described above, is currently rather unclear, with even
the number of states not settled, the situation for \(2^{++}\) is a little better. The single \(\chi_{c2}(3930)\) claimed in \(\gamma\gamma\to D\bar{D}\) is the leading candidate [5], although whether some of the enhancements currently assigned to \(0^{++}\) could actually be due to \(2^{++}\) remains to be seen.
Experimental analyses are typically performed final-state by final-state, with descriptions of the resonance content of a particular \(J^{PC}\) being inferred from that single data set, sometimes with inspiration from observations in other final-states, but typically not with analysis of multiple final-states simultaneously. Theoretically the relevant fundamental object is a partial-wave _scattering amplitude_, which is a matrix in the space of coupled multihadron channels. The enhancements seen in experiment for real values of the scattering energy correspond to poles of this scattering amplitude present at complex energies, and it is the pole locations and residues that provide a model-independent description of the resonance content and channel couplings. The scattering matrix is subject to important fundamental constraints, such as unitarity and analyticity, which are not always respected in practical data analysis, and which can give rise to important effects, particularly at kinematic thresholds.
Quantum Chromodynamics (QCD) is the fundamental theory of hadrons, but connecting the strong interactions of quarks and gluons to the presence of resonances in meson-meson scattering is not simple, and for want of a rigorous approach, models have been developed that incorporate some of the features of QCD. While the simplest approaches have only \(c\bar{c}\) bound-states, other approaches include compact tetraquark constructions, or molecular meson-meson bound-states, and in these pictures, many more states are expected. Early suggestions of the importance of meson-meson contributions were put forward long before any XYZ states were discovered [14; 15], and the various possibilities have been discussed in several recent reviews [16; 17; 18; 19; 1]. While models are useful to inform our understanding of the possible mechanisms at work, ultimately they are not QCD, and we must turn to a first-principles approach like _lattice QCD_.
In recent years, approaches for computing scattering processes using lattice QCD have undergone rapid development, reviewed in Ref. [20], such that we are now in a position to consider the challenging \(\chi_{c0}\) and \(\chi_{c2}\) systems. We benefit from several recent breakthroughs, including the ability to compute _coupled-channel_ scattering amplitudes [21; 22], and to consider final states with _mesons of nonzero spin_[23; 24; 25; 26].
This paper reports on a computation in QCD of the coupled-channel scattering matrix in the energy region where the above-mentioned resonance candidates lie. We use an approximation in which \(c\bar{c}\) annihilation is forbidden, and a larger-than-physical light-quark mass corresponding to \(m_{\pi}\approx 391\) MeV. With these choices, no hadron in the energy region we will consider can decay to more than two lighter hadrons. The scattering amplitudes resulting from this approach can be analytically continued to complex energies to determine resonance poles and their channel couplings.
The mass scale of charmonium systems brings in several difficulties that increase the complexity of this calculation with respect to calculations considering lighter hadrons composed primarily of light and strange quarks. The relevant discrete spectra are compressed relative to light quark spectra, and the small energy gap between \(D\) and \(D^{*}\) mesons means that several significant thresholds open almost simultaneously. Considering also closed-charm channels, involving a charmonium meson and a light meson, we are forced to account for physics in a large number of coupled-channels. Much of this article is dedicated to disentangling those channels in which strong scattering effects occur from those which are decoupled and weak.
We will report coupled-channel amplitudes with \(J^{PC}=0^{++}\) and \(2^{++}\), constrained using large numbers of discrete energy levels taken from three lattice volumes, in both the rest frame and moving frames. One key new feature of this work is computation of a "complete" \(S\)-matrix in these quantum numbers, one which includes all kinematically accessible channels, with no a priori assumptions of which might be "weak enough to ignore".
Comparing with previous attempts to study this scattering system in lattice QCD [27; 28], we find that previously claimed near-threshold features at \(D\bar{D}\) and \(D_{s}\bar{D}_{s}\) do not appear in the present study. We find no broad resonance low in the \(S\)-wave \(D\bar{D}\) amplitude, nor any bound state between the ground state \(\chi_{c0}(1P)\) and the first hadron-hadron threshold.
A quite simple picture arises from our study, with only a single relatively narrow resonance in each of \(J^{PC}=0^{++}\) and \(2^{++}\). Each is found to have large couplings to several open-charm \(D\)-meson decay modes, channels consisting of pairs of open-charm mesons, with only relatively small couplings to closed-charm channels such as \(J/\psi\,\omega\).
This article is organized as follows. In Section II, we describe the lattices and methods used to compute the finite-volume spectra, present the masses of the stable scattering hadrons, give the partial-waves that feature in the irreducible representations of the lattice symmetry, and outline the operators that are required to access the spectrum. The determined finite volume energy levels are presented in Section III. Section IV explains how these energies are translated into scattering amplitudes through extensions of the Luscher formalism. In Section V we describe how the amplitudes are determined, in increasing complexity, beginning with just a few energies at rest, before ultimately making use of more than 200 energy levels including systems with nonzero total momentum. We discuss the pole singularities present in the determined scattering amplitudes in Section VI. In Section VII, we offer some interpretations and comparisons of our results to experiment, prior lattice calculations, and other theoretical approaches. We conclude with a brief summary in Section VIII. A concise description of this work may be found in Ref. [29].
## II Lattice QCD setup
Within lattice QCD, the discrete spectrum of eigenstates of QCD in a finite-volume can be obtained from the time-dependence of two-point correlation functions. For the current calculation, we use \(N_{f}=2+1\) flavours of dynamical quarks with exact isospin symmetry, and opt to work with a larger-than-physical value of the degenerate \(u,d\) quark mass, while the strange quark mass is approximately at the physical value. The quenched charm quark mass is tuned to approximately reproduce the physical \(\eta_{c}\) mass [30]. We disallow \(c\bar{c}\) annihilation so that low-lying charmonia like the \(\eta_{c}\) and \(J/\psi\) are absolutely stable.1
Footnote 1: In effect we implement two degenerate non-dynamical charm quarks, and study only the charm-isospin=1 sector. In this way the approximation is self-consistent.
The quark dynamics is implemented by an anisotropic Wilson-clover action as described in Ref. [31; 32], with a temporal lattice spacing \(a_{t}\), finer than that in space \(a_{s}\), by a factor \(\xi=a_{s}/a_{t}\approx 3.5\). Distillation is used to smear the quark fields and enable efficient computation of all contributions, including those where light quarks or strange quarks annihilate [33]. The three lattice volumes used are summarized in Table 1, where \(L\) and \(T\) are the spatial and temporal extent of the lattice respectively. Correlation functions are averaged over several (\(N_{\rm tsrcs}\)) source timeslices. The lattice scale is set using the physical \(\Omega\) baryon mass, leading to \(a_{t}^{-1}=5667\) MeV [34]. The pion mass is determined to be \(a_{t}m_{\pi}=0.06906(13)\), corresponding to \(m_{\pi}\approx 391\) MeV [35].
In the finite cubic volume defined by the lattice, continuous rotation symmetry is broken, and states are characterized as lying in irreducible representations (irreps) of the cubic group at rest, and of relevant little groups at non-zero momentum, rather than by spin, \(J\). Charge-conjugation, \(C\), remains a good symmetry, but states are only of definite parity, \(P\), at rest. The finite periodic boundary implies that momentum is quantized, \(\vec{p}=\frac{2\pi}{L}\vec{n}\), where \(\vec{n}=(i,j,k)\) is a triplet of integers (we will often use a shorthand notation \(\vec{p}=[ijk]\)).
The finite-volume quantization condition which relates the discrete spectrum to continuous scattering amplitudes is sensitive to the total momentum of the scattering system, and as such we compute spectra in moving frames (nonzero overall momentum \(\vec{P}\)) as well as in the rest frame to obtain more constraint on the scattering amplitudes. We refer to irreps at rest with the labels \([000]\Lambda^{P}\), and to moving-frame irreps as \([ijk]\Lambda\).
For each irrep, we compute a matrix of correlation functions constructed using a wide range of operators resembling single-, two- and three-hadron constructions. The resulting correlation matrix as a function of Euclidean time, \(C_{ij}(t)=\left\langle 0\middle|\mathcal{O}_{i}(t)\mathcal{O}_{j}^{\dagger}(0) \middle|0\right\rangle\), is then analyzed variationally to obtain the discrete spectrum contributing to these correlation functions [36; 37; 38], with our implementation described in Ref. [39]. The analysis takes the form of solving a generalized eigenvalue problem on each timeslice,
\[\mathbf{C}(t)\,v^{\mathfrak{n}}=\lambda_{\mathfrak{n}}(t)\,\mathbf{C}(t_{0})\,v^{ \mathfrak{n}}\,, \tag{1}\]
where the discrete spectrum \(\{E_{\mathfrak{n}}\}\) is obtained from the time dependence of the eigenvalues. The eigenvectors can be used to construct optimized operators as described below. They also provide helpful qualitative information by indicating which states are produced dominantly by particular operator constructions via overlap factors,
\[Z_{i}^{\mathfrak{n}}=\left\langle\mathfrak{n}\middle|\mathcal{O}_{i}^{\dagger} \middle|0\right\rangle=\sqrt{2E_{\mathfrak{n}}}\,e^{E_{\mathfrak{n}}t_{0}/2}v_ {j}^{\mathfrak{n}*}C_{ji}(t_{0})\,.\]
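As an illustration of Eq. (1), the following minimal sketch solves the generalized eigenvalue problem timeslice by timeslice for a synthetic two-state correlator matrix and reads the energies off the principal correlators. It assumes the standard large-time behavior \(\lambda_{\mathfrak{n}}(t)\propto e^{-E_{\mathfrak{n}}(t-t_{0})}\); the energies and overlaps used to build the toy data are made up for illustration and this is not the analysis code used in this work.

```python
import numpy as np
from scipy.linalg import eigh

def gevp(C, t0):
    """Solve C(t) v = lambda(t) C(t0) v, Eq. (1), for every timeslice t > t0."""
    lams = np.full(C.shape[:2], np.nan)
    for t in range(t0 + 1, C.shape[0]):
        lams[t] = np.sort(eigh(C[t], C[t0], eigvals_only=True))[::-1]
    return lams

def eff_energy(lam, t):
    """a_t E_eff(t) = log(lambda(t)/lambda(t+1)); flat in t at the state energy."""
    return np.log(lam[t] / lam[t + 1])

# toy two-state correlator matrix: C_ij(t) = sum_n Z_i^n Z_j^n exp(-E_n t)
atE = np.array([0.60, 0.72])               # made-up energies in temporal lattice units
Z = np.array([[1.0, 0.3], [0.4, 1.0]])     # made-up overlap factors Z_i^n
ts = np.arange(30)
C = np.einsum('in,jn,tn->tij', Z, Z, np.exp(-np.outer(ts, atE)))

lams = gevp(C, t0=4)
print([round(eff_energy(lams[:, n], 15), 4) for n in range(2)])   # -> [0.6, 0.72]
```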
The construction of single-hadron-like operators, as fermion bilinears using gamma matrices \(\mathbf{\Gamma}\) and up to three gauge-covariant derivatives, is as described in Refs. [39; 40; 41]. These are of the form \(\mathcal{O}^{\dagger}\sim\bar{q}\,\mathbf{\Gamma}\overleftarrow{D}...\overleftarrow{D }\,q\) with a definite continuum \(J\) that is projected into cubic group irreps \(\Lambda\).
The construction of two-hadron-like and three-hadron-like operators is as described in Ref. [35] and Ref. [25] respectively. Both leverage _optimized_ single-hadron operators that have reduced excited state contamination (when compared to using only a single operator). A variationally-optimal single-hadron operator for meson \(M_{1}\) is formed from a linear combination of a basis of single-hadron operators with \(M_{1}\) quantum numbers where the coefficients are given by the eigenvectors from the variational analysis, \(\Omega_{M_{1}}^{\dagger}\sim\sum_{i}v_{i}^{\mathfrak{n}}\mathcal{O}_{i}^{\dagger}\). These are then used in product pairs to form two-hadron operators,
\[\mathcal{O}_{M_{1}M_{2}}^{\dagger}(\vec{p})\sim\sum_{\vec{p}_{1},\vec{p}_{2}} \text{CGs}\,\Omega_{M_{1}}^{\dagger}(\vec{p}_{1})\,\Omega_{M_{2}}^{\dagger}( \vec{p}_{2})\,,\]
for the \(M_{1}M_{2}\) channel with overall momentum \(\vec{p}\) where the sum is over all momenta related by an allowed lattice rotation such that \(\vec{p}=\vec{p}_{1}+\vec{p}_{2}\) and "CGs" represents the necessary lattice Clebsch-Gordan coefficients to project to the appropriate quantum numbers. A recursive approach can be adopted to form three-hadron-like operators from optimized single-hadron and two-hadron operators.
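To make the momentum sum concrete, the sketch below (hypothetical helper names, not code from this work) enumerates the integer-triplet momenta satisfying \(\vec{p}_{1}+\vec{p}_{2}=\vec{p}\) for a given total momentum and groups them by the rotation-invariant magnitudes \((|\vec{p}_{1}|^{2},|\vec{p}_{2}|^{2})\); the lattice Clebsch-Gordan projection itself is not reproduced here.

```python
from itertools import product

def momenta(nmax):
    """Integer-triplet momenta (units of 2*pi/L) with components in [-nmax, nmax]."""
    return list(product(range(-nmax, nmax + 1), repeat=3))

def pair_types(P, nmax=2):
    """Group momentum pairs with p1 + p2 = P by the magnitudes (|p1|^2, |p2|^2)."""
    groups = {}
    for p1 in momenta(nmax):
        p2 = tuple(P[i] - p1[i] for i in range(3))
        if max(abs(c) for c in p2) > nmax:
            continue
        key = (sum(c * c for c in p1), sum(c * c for c in p2))
        groups.setdefault(key, []).append((p1, p2))
    return groups

# total momentum [001]: pairs of type [000][001], [100][-101], [110][-1-11], ...
for key, pairs in sorted(pair_types((0, 0, 1)).items())[:5]:
    print(key, len(pairs))
```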
### Stable hadrons
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \(L/a_{s}\) & \(T/a_{t}\) & \(N_{\rm cfg}\) & \(N_{\rm vec}\) & \(N_{\rm tsrcs}\) & \(L/\text{fm}\) & \(m_{\pi}L\) \\ \hline
16 & 128 & 478 & 64 & 8–16 & 1.9 & 3.8 \\
20 & 256 & 288 & 128 & 4–8 & 2.4 & 4.8 \\
24 & 128 & 553 & 160 & 2–4 & 2.9 & 5.7 \\ \end{tabular}
\end{table}
Table 1: Lattices used: \(N_{\rm cfg}\) is the number of gauge configurations, \(N_{\rm vec}\) is the number of distillation vectors, and \(N_{\rm tsrcs}\) is the number of time sources.

The systems of coupled-channel scattering we will consider feature a number of hadrons which are stable against strong decay on the lattices used in this study. Their energies as a function of momentum are determined using spectra extracted from matrices of correlation functions.2 Figure 1 shows dispersion relations for a selection of the stable hadrons used in this study, along with fits using the relativistic expression,
Footnote 2: The current calculation makes use of a \(20^{3}\times 256\) lattice [42; 43], while earlier calculations with charm quarks [44; 45; 46] used a shorter time-extent, \(20^{3}\times 128\), lattice.
\[\left(a_{t}E\right)^{2}=\left(a_{t}m\right)^{2}+\left|\vec{n}\right|^{2}\left( \frac{2\pi}{\xi L/a_{s}}\right)^{2} \tag{2}\]
from which the rest mass \(a_{t}m\), and the anisotropy \(\xi\), are determined separately for each hadron. The masses resulting from such fits are presented in Table 2 along with relevant kinematic thresholds for isospin-0, \(C=+\) scattering channels.
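A minimal sketch of such a dispersion fit to Eq. (2) is given below; the \((|\vec{n}|^{2},a_{t}E)\) points are synthetic stand-ins loosely modelled on the \(D\) meson and the \(24^{3}\) volume, not the measured energies.

```python
import numpy as np
from scipy.optimize import curve_fit

L_as = 24                                   # spatial extent in lattice units

def disp(n2, atm, xi):
    """Eq. (2): a_t E = sqrt((a_t m)^2 + n^2 (2 pi / (xi L/a_s))^2)."""
    return np.sqrt(atm**2 + n2 * (2 * np.pi / (xi * L_as))**2)

n2 = np.arange(5)                                             # |n|^2 = 0 ... 4
atE = disp(n2, 0.3328, 3.44)                                  # synthetic "measured" energies
atE = atE + np.random.default_rng(1).normal(0, 2e-4, n2.size)

popt, pcov = curve_fit(disp, n2, atE, p0=[0.33, 3.5], sigma=2e-4 * np.ones(n2.size))
print(popt, np.sqrt(np.diag(pcov)))         # fitted a_t m and anisotropy xi, with errors
```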
In the dispersion relation fits, points are observed to be scattered around the mean with a deviation beyond what would be expected from statistical fluctuations alone; comparisons can be seen in Fig. 25 in Appendix A. The most significant deviations are seen for the \(\eta_{c}\), \(\chi_{c0}\), and the \(\chi_{c2}\), with the largest values of \(a_{t}\delta E\approx 0.00030,0.00050,0.00100\) respectively. These are tiny in absolute terms, but relatively large on the scale of the very small statistical uncertainties. We choose to treat these deviations as an additional systematic uncertainty on the energy levels to be added to the statistical uncertainties when used in energy level fits to determine scattering amplitudes. In practice, we include an additional systematic uncertainty on the \(E_{\rm cm}\) energies of \(a_{t}\delta E_{\rm syst.}=0.00050\) when the amplitude we wish to determine has \(J=0,1\). If the amplitude we wish to determine has \(J\geq 2\), we use \(a_{t}\delta E_{\rm syst.}=0.00100\).
Similar effects have been observed in other lattice studies with charm quarks by other groups. There is a consensus that this needs to be accounted for in some way. An alternative used in Refs. [48; 49; 28] is to apply an energy shift configuration by configuration to force these num
\begin{table}
\begin{tabular}{l|l|l}
hadron & \(J^{P(C)}\) & \(a_{t}m\) \\ \hline\hline
\(\pi\) & \(0^{-+}\) & \(0.06906(13)\) \\
\(K\) & \(0^{-}\) & \(0.09698(9)\) \\
\(\eta\) & \(0^{-+}\) & \(0.10364(19)\) \\
\(\eta^{\prime}\) & \(0^{-+}\) & \(0.16410(100)\) \\
\(\sigma\) & \(0^{++}\) & \(0.1316(9)^{\ddagger}\) \\
\(\omega\) & \(1^{--}\) & \(0.15541(29)\) \\
\(\phi\) & \(1^{--}\) & \(0.17949(21)\) \\ \hline
\(D\) & \(0^{-}\) & \(0.33281(9)^{\dagger}\) \\
\(D^{\star}\) & \(1^{-}\) & \(0.35464(14)^{\dagger}\) \\
\(D_{s}\) & \(0^{-}\) & \(0.34424(11)^{\dagger}\) \\
\(D_{s}^{\star}\) & \(1^{-}\) & \(0.36566(14)^{\dagger}\) \\
\(D_{0}^{\star}\) & \(0^{+}\) & \(0.40170(18)^{\dagger}\) \\
\(D_{s0}^{\star}\) & \(0^{+}\) & \(0.4200(5)^{\ddagger}\) \\ \hline
\(\eta_{c}\) & \(0^{-+}\) & \(0.52312(4)^{\dagger}\) \\
\(\psi\) & \(1^{--}\) & \(0.53715(5)^{\dagger}\) \\
\(h_{c}\) & \(1^{+-}\) & \(0.61662(26)^{\dagger}\) \\
\(\chi_{c0}\) & \(0^{++}\) & \(0.60422(25)^{\dagger}\) \\
\(\chi_{c1}\) & \(1^{++}\) & \(0.61488(46)^{\dagger}\) \\
\(\chi_{c2}\) & \(2^{++}\) & \(0.62110(28)^{\dagger}\) \\
\(\eta_{c}^{\prime}\) & \(0^{-+}\) & \(0.64160(55)^{\dagger}\) \\
\(\psi^{\prime}\) & \(1^{--}\) & \(0.64566(111)^{\dagger}\) \\ \hline
\(\Omega\) & \(\frac{3}{2}^{+}\) & \(0.2951(22)\) \\
\end{tabular}
\quad
\begin{tabular}{l|l}
channel & \(a_{t}E_{\rm thr.}\) \\ \hline\hline
\(\eta_{c}\eta\) & \(0.6268(2)\) \\
\(\eta_{c}\pi\pi\) & \(0.6612(2)\) \\
\(\eta_{c}\eta^{\prime}\) & \(0.6872(10)\) \\
\(\psi\omega\) & \(0.6926(3)\) \\
\(\chi_{c0}\eta\) & \(0.7079(3)\) \\
\(\psi\phi\) & \(0.7166(2)\) \\
\(\eta_{c}K\bar{K}\) & \(0.7171(1)\) \\
\(\chi_{c1}\eta\) & \(0.7185(5)\) \\
\(\chi_{c2}\eta\) & \(0.7247(3)\) \\
\(\eta_{c}\pi\pi\pi\) & \(0.7303(3)\) \\
\(\eta_{c}\eta\eta\) & \(0.7304(3)\) \\
\(\psi K\bar{K}\) & \(0.7311(1)\) \\
\(\chi_{c0}\pi\pi\) & \(0.7424(3)\) \\
\end{tabular}
\end{table}
Table 2: Stable hadron masses (left) and relevant hadron-hadron threshold energies (right).
We consider the approach adopted in the current paper to be more conservative, treating the difference as a systematic uncertainty that will be propagated through into scattering amplitude determinations. Further details are given in Appendix A.
The anisotropies, \(\xi=a_{s}/a_{t}\), obtained for different stable hadrons differ somewhat, and we choose to increase the uncertainty on the value obtained for the pion, \(\xi_{\pi}=3.444(6)\)[35], to account for such deviations, using in practice \(\xi=3.444(50)\), which spans the extracted values for \(\eta_{c}\) and \(J/\psi\).3 This uncertainty is propagated through when center-of-momentum frame energies are obtained from computed lattice frame energies.
Footnote 3: This range is almost exactly the same as used in Ref. [25] that was chosen to account for the observed differences in the helicity components of the vector \(\omega\).
### Resonance expectations and partial-waves
The goal of this calculation is to obtain coupled-channel partial wave amplitudes with \(J^{PC}=0^{++}\) and \(2^{++}\) up to around 4100 MeV, \(a_{t}E_{\sf cm}=0.72\) in lattice units. This runs to slightly above the \(D^{*}\bar{D}^{*}\) threshold, and corresponds to an energy region where resonant features have been observed experimentally.
An earlier lattice QCD calculation of the spectrum on the current lattices using _only single-hadron-like operators_[30] found results that suggest narrow resonant states may appear. Updating these calculations using more time sources and a longer time-extent for the \(20^{3}\) lattice, leads to the spectra presented in Fig. 2 in irreps relevant for \(J^{PC}=0^{++},2^{++},\ldots\). The pattern is reminiscent of the \(J=0\), 2, 3 and 4 members of \(q\bar{q}\) quark-model multiplets, \(nL=1P\), \(2P\) and \(1F\), and the overlap of states onto operators subduced from particular \(J^{PC}\)[30] is in agreement with this. Working up to \(a_{t}E_{\sf cm}\approx 0.72\) appears to be sufficient to capture the \(2P\)-like \(\chi_{c0},\chi_{c2}\) states in this energy region.4
Footnote 4: We will reserve comment on the \(J^{PC}=1^{++}\) member for a future study. It will not contribute in any of the irreps used in this work.
As shown in Fig. 2, the expected \(\chi_{cJ}\) states lie above a number of kinematical thresholds, and hence should appear as _resonances_ in meson-meson scattering amplitudes. We label each meson-meson channel according to its total spin \(S\) (combining the spin quantum numbers of the two scattering hadrons), orbital angular momentum \(\ell\), and total angular momentum \(J\), using the standard notation, \({}^{2S+1}\ell_{J}\). Meson-meson partial wave amplitudes grow at threshold according to their orbital angular momenta \(\ell\), via \(k_{i}^{2\ell}\) where \(k_{i}\) is the cm momentum for meson-meson pair \(i\). This is a relevant property since it establishes a hierarchy among partial-waves whereby the lowest \(\ell\) dominate, unless disturbed by some nearby singularity.
Due to the presence of scattering mesons with nonzero spin, for a given \(J^{PC}\), amplitudes with more than one \(\ell\) can contribute. The \(J^{PC}=0^{++}\) amplitudes are relatively simple, consisting of pairs of pseudoscalars and vectors in a relative \(S\)-wave (\({}^{1}\!S_{0}\)). The lowest threshold is \(\eta_{c}\eta\), followed by \(DD\), \(\eta_{c}\eta^{\prime}\), \(D_{s}\bar{D}_{s}\), \(\psi\omega\), and \(D^{*}\bar{D}^{*}\). The first contributions from partial waves with \(\ell>0\) arise from \(\psi\omega\) in \({}^{5}D_{0}\), and \(\chi_{c1}\eta\) in \({}^{3}\!P_{0}\). There can be no contributions from vector-pseudoscalar channels such as \(D\bar{D}^{*}\). The \(J^{PC}=2^{++}\) amplitudes contain pairs of pseudoscalars in \({}^{1}\!D_{2}\) combinations, while vector-vector channels can arise in \(S\)-wave through \({}^{5}\!S_{2}\). Furthermore, vector-pseudoscalar channels such as \(D\bar{D}^{*}\) now contribute, the lowest combination being \({}^{3}\!D_{2}\). We summarise the meson-meson partial wave contributions relevant at low energy for each \(J^{PC}\) considered in this study in Table 3.
### Irreps and operators
In order to constrain the coupled-channel scattering amplitudes for \(J^{PC}=0^{++}\) and \(2^{++}\) we will compute finite-volume spectra in several irreps.
Working at zero overall momentum, the \([000]A_{1}^{+}\) irrep constrains \(0^{++}\) with contaminations from \(4^{++}\) and higher. \(J^{PC}=2^{++}\) information can be obtained from \([000]E^{+}\) and \([000]T_{2}^{+}\), where the second of these also receives contributions from \(3^{++}\). In order to constrain this \(3^{++}\) component, it is advantageous to consider \([000]A_{2}^{+}\) where it is the leading contribution.
When working at nonzero overall momentum, partial waves of both parities appear. For example, in the moving frame \([ijk]A_{1}\) irreps we have contributions from \(0^{++},1^{-+}\) and higher. In the moving frame \([00i]B_{1,2}\) irreps where \(2^{++}\) is present, \(2^{-+},3^{\pm+}\) and higher also appear. In order to determine the \(1^{-+}\), \(2^{-+}\) and \(3^{-+}\) amplitudes, we also consider the rest frame irreps \([000]T_{1}^{-}\), \([000]E^{-}\), \([000]T_{2}^{-}\) and \([000]A_{2}^{-}\).
In summary, our selection of irreps enables us to determine scattering amplitudes for \(J^{PC}=0^{++},1^{-+},2^{\pm+}\) and \(3^{\pm+}\).
Within each irrep we establish a basis of operators, including all single-hadron-like operators with up to 3 derivatives at-rest and up to 2 derivatives in moving frames. These are supplemented with all two and three-hadron operators expected to be relevant in the energy region of interest based on their corresponding non-interacting energy. For \(N_{\rm had.}=2\) or 3 hadrons, this is determined from
\[a_{t}E_{\rm n.i.}=\sum_{a=1}^{N_{\rm had.}}\left(\left(a_{t}m_{a}\right)^{2}+ \left|\vec{n}_{a}\right|^{2}\left(\frac{2\pi}{\xi L/a_{s}}\right)^{2}\right)^ {1/2} \tag{3}\]
where \(n_{a}=(i,j,k)\) is a vector of integers and \(m_{a}\) is the scattering hadron mass. If this 'lattice-frame' energy, when boosted into the cm frame, lies below \(a_{t}E_{\sf cm}=0.743\)
the corresponding operator is included in the basis.5; 6 Full lists of operators used for the presented results can be found in the supplemental material.7 When analysing the correlation matrices, the operator basis is varied to explore the sensitivity to the precise selection, and in this process some of the highest lying operators are discarded as their presence does not affect low-lying levels.
Footnote 5: This upper limit corresponds to the \(\chi_{c0}\pi\pi\) threshold.
Footnote 6: Two exceptions are an \(\eta_{c}[012]\eta[001]\) in \([002]B_{1}\) on the \(L/a_{s}=16\) volume, and very high-lying \(\chi_{c2}\pi\) operators that would be expected to produce levels above the energy limits used in the scattering analyses below.
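To make the use of Eq. 3 concrete, a minimal sketch (not the production code) of the operator-selection criterion is given below; the anisotropy, volume, mass value, and momentum assignment are examples taken from the text and Table 2, and the helper names are ours.

```python
import math

XI = 3.444          # anisotropy xi = a_s/a_t
L_OVER_AS = 24      # spatial extent L/a_s
UNIT_P = 2 * math.pi / (XI * L_OVER_AS)   # one unit of lattice momentum in units of a_t^{-1}
AT_ECM_MAX = 0.743  # cutoff below which an operator is included in the basis

def lattice_frame_energy(at_masses, momenta):
    """a_t E_n.i. of Eq. 3: sum of free single-hadron energies with integer momenta n_a."""
    return sum(math.sqrt(m**2 + sum(n * n for n in nvec) * UNIT_P**2)
               for m, nvec in zip(at_masses, momenta))

def cm_frame_energy(at_masses, momenta):
    """Boost the lattice-frame non-interacting energy to the cm frame using the total momentum."""
    e_lat = lattice_frame_energy(at_masses, momenta)
    p_tot_sq = sum(sum(comps)**2 for comps in zip(*momenta)) * UNIT_P**2
    return math.sqrt(e_lat**2 - p_tot_sq)

# example: a D[100] Dbar[-100] construction in the overall at-rest frame (a_t m_D = 0.33281)
e_cm = cm_frame_energy([0.33281, 0.33281], [(1, 0, 0), (-1, 0, 0)])
print(f"a_t E_cm = {e_cm:.4f} ->", "include" if e_cm < AT_ECM_MAX else "exclude")
```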
Figure 2: Energy levels obtained from diagonalising correlation matrices constructed using _only_\(c\bar{c}\)-like operators in irreps relevant for \(J^{PC}=0^{++},2^{++},\ldots\) with zero overall momentum. The locations of hadron-hadron thresholds are marked. The right panel shows a summary of the observed levels based on the pattern of levels across irreps, and the quark-model–like labeling follows from Ref. [30].
Three-meson operator constructions are only expected to be relevant at relatively high energies, owing to the large light quark mass and prohibition of \(c\bar{c}\) annihilation in this calculation. There are very few relevant three-hadron operators in this energy region, and no four-hadron operators. The lowest three-hadron operators arise from \(\eta_{c}\pi\pi\) combinations where \(\pi\pi\sim\sigma\) and there is relatively high orbital angular momentum between the '\(\sigma\)' and the \(\eta_{c}\) (typically an \(F\)-wave). These operators are constructed as described for the \(\mathbb{R}\mathbb{M}\) operators in Sec. II.C of Ref. [25]. The projection coefficients for the near-threshold \(\sigma\) are obtained from the analysis performed in Ref. [42] and these are combined with the single-hadron \(\eta_{c}\) optimized operator. If there were strong interactions in the \(\eta_{c}\pi\) subsystems, it is possible that these operators alone may not be sufficient.
Further three-hadron contributions arise from \(\chi_{cJ}\pi\pi\sim\chi_{cJ}\sigma\)-like combinations. In \([000]A_{1}^{+}\), one might naively expect a level at the threshold energy \(m_{\chi_{c0}}+2m_{\pi}\). However, we know from Refs. [42; 47] that on these lattices the \(\sigma\) channel has a level below threshold with a large volume dependence owing to a bound-state \(\sigma\) strongly coupled to \(\pi\pi\). We will see that this feature survives the addition of a \(\chi_{c0}\) operator, producing a level below \(m_{\chi_{c0}}+2m_{\pi}\) with an energy that slowly rises with \(L/a_{s}\) (similar to the \(\sigma\) in \(\pi\pi\) scattering in \([000]A_{1}^{+}\)). Further details are given in the next section. A few other three-hadron channels are listed in Table 2. These are not expected to produce levels in the energy region of interest. When determining scattering amplitudes we will not utilize any energy levels found to have large overlaps with three-meson operators.
## III Finite-volume spectra
The operator bases described in the previous section are used to compute a matrix of correlation functions for each irrep and these are then analyzed variationally as discussed above. The resulting spectrum in the \([000]A_{1}^{+}\) irrep on the \(24^{3}\) volume is presented in Fig. 3. The plot also shows histograms of the overlap factors, \(Z_{i}^{\text{n}}\), for each state where these have been normalized such that the largest value for a particular operator, considered across all states, takes value 1. Clear patterns emerge, and in several cases levels are dominated by a single operator construction. These are often close to a non-interacting energy level, as determined by Eq. 3, with a potential explanation of there being a decoupled channel with only weak interactions.
In Figs. 4 and 5 we plot all of the finite-volume energy levels extracted from the variational analysis to be used to constrain scattering amplitudes. The uncertainties shown are estimated using jackknife. In several cases, in particular where there is a significant variation in the extracted energy for different \(t_{0}\) values (in Eq. 1) or timeslice fit ranges and forms, the uncertainties are enlarged to provide a conservative estimate of the energy value. We also vary the operator bases by adding and removing operators when possible to ensure the spectrum is stable with respect to small and reasonable changes.8
Footnote 8: The plots do not show the additional systematic uncertainty \(a_{t}\delta E_{\rm syst.}\), discussed in Section II.1, that is added to every level. This will be shown in later plots as an additional error bar on each point.
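For reference, a single-elimination jackknife error estimate of the kind mentioned above can be sketched as follows; the estimator and the toy data are stand-ins, not the actual correlator analysis.

```python
import numpy as np

def jackknife(samples, estimator=np.mean):
    """Single-elimination jackknife mean and error of an estimator over per-configuration samples."""
    samples = np.asarray(samples)
    n = len(samples)
    # estimator evaluated on each leave-one-out subsample
    theta_i = np.array([estimator(np.delete(samples, i)) for i in range(n)])
    theta = estimator(samples)
    err = np.sqrt((n - 1) / n * np.sum((theta_i - theta_i.mean())**2))
    return theta, err

# toy usage with fake energies on N configurations
rng = np.random.default_rng(0)
fake = 0.70 + 0.002 * rng.standard_normal(400)
print(jackknife(fake))
```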
In Figs. 3, 4 and 5 we choose a presentation scheme where those states having a single dominant operator overlap are colored to indicate which operator is dominant. Black points show levels that are dominated by single-hadron-like operators with \(C=+\), and/or operators constructed from a pair of \(D\)-mesons - as seen in Fig. 3, the mixing between these sectors appears to be large. Levels shown in cyan have dominant overlap with single-hadron-like operators subduced from \(J^{PC}=2^{-+}\) (we will later associate them with a bound \(2^{-+}\) state).
Non-interacting energies are shown by the continuous curves, colored according to the meson-meson combination, and when a non-interacting level is degenerate it is shown by a repeated curve, slightly displaced vertically above. Wide brown bands indicate three-body combinations of \(\eta_{c}\pi\pi\) and \(\chi_{cJ}\pi\pi\) where the \(\pi\pi\) part is taken from the \(\sigma\) channel, similar to the "2+1" non-interacting energies described in Sec. II.C of Ref. [25]. At this light quark mass, the \(\sigma\) appears as a near-threshold bound state that exerts an influence over a relatively wide region of energy, both above and below threshold, owing to its strong coupling to the \(\pi\pi\) channel. The curves are determined from
\[E_{\text{n.i.}}^{(2+1)}=E_{\text{n, cm}}^{\sigma}(\Lambda^{P},L)+\left(m_{3}^{2}+| \vec{p}_{3}|^{2}\right)^{\frac{1}{2}}, \tag{4}\]
where \(m_{3}\) and \(\vec{p}_{3}\) are the mass and momentum of the \(\eta_{c}\) or \(\chi_{cJ}\), and this \(E_{\text{n.i}}^{(2+1)}\) in the lattice frame is then boosted back to the cm-frame.
The (almost) volume-independent levels lying below the lowest two-meson threshold, \(\eta_{c}\eta\), near \(a_{t}E_{\text{cm}}\approx 0.63\) in all irreps where \(0^{++}\) and/or \(2^{++}\) contribute correspond to the stable \(\chi_{c0}(1P)\) and \(\chi_{c2}(1P)\) states. The impact of such bound-states on the scattering amplitudes above threshold is modest and can be modeled as a smooth "background". We will not explicitly include a description of these subthreshold energy levels as part of our amplitude analysis.9
Footnote 9: We will perform a limited analysis to check that they can indeed be neglected in Appendix B.
At the highest energies, the extracted spectra can become rather dense, and levels can overlap at the level of the statistical uncertainty (although they are distinguished by their orthogonal eigenvectors in the variational approach). This high density is to be expected, given the relatively small mass splittings in the charmed meson sector. The fact that vector mesons appear in scattering channels leads to a large number of possible spin-combinations and these can often subduce into irreps in several different
ways for a given momentum combination. In practice, we will make only limited use of the densest parts of the spectrum.
Some indications of the likely resonant content can be read off from the gross structure of the extracted spectra, particularly on the smallest volume. In the \([000]\Lambda^{+}\) irreps which have contributions from \(J=0,2\), there are clear additional levels (beyond the number of expected non-interacting levels) around \(a_{t}E_{\mathsf{cm}}\approx 0.7\), and these have large overlaps onto operators with single-hadron-like and \(D\bar{D}\)-like constructions (see also Fig. 3). This could be interpreted as indicating that something remains of the picture inferred from the spectrum found using only single-hadron-like operators, with these states possibly being \(0^{++}\) and \(2^{++}\) resonances with \(D\bar{D}\) decays. In addition, the \([000]A_{2}^{+}\) irrep shows a pattern of levels that could indicate a single isolated \(3^{++}\) resonance around \(a_{t}E_{\mathsf{cm}}\approx 0.73\).
On the other hand, aside from a low-lying volume-independent level likely interpretable as a \(2^{-+}\) bound state, the \(PC=-+\) irreps feature only levels lying on the non-interacting curves, suggesting the absence of any significant scattering strength at these energies.
In this first calculation, we do not consider the energy region above \(a_{t}E_{\mathsf{cm}}\approx 0.72\), and will not address the possible presence of a \(J^{PC}=4^{++}\) state, nor a second \(J^{PC}=2^{++}\) state that might be a member of the \(q\bar{q}\) quark-model \(1F\) multiplet.
Figure 3: The spectrum and normalised operator overlaps \(Z_{i}^{\mathsf{n}}\) for the \([000]A_{1}^{+}\) irrep on the \(L/a_{s}=24\) volume. The spectrum obtained from the lattice QCD calculation is shown in the center, colored according to the largest operator overlap as described in the text. Solid curves in the center show the non-interacting energies. The magnitudes of operator overlaps \(Z_{i}^{\mathsf{n}}\) for each state, normalised as described in the text, are shown in the dotted boxes.
Figure 4: Finite-volume spectra in irreps with zero overall momentum and positive parity, and irreps with non-zero overall momentum. Points with uncertainties are the energy levels determined from lattice QCD correlation matrices, colored according to the dominant operator overlap as described in the text. Solid thin curves are non-interacting energies, thick light-brown curves are “2+1” non-interacting levels as described in the text, and dashed horizontal lines are hadron-hadron thresholds. A single red star in \([002]B_{1}\) for \(L/a_{s}=16\) indicates an \(\eta_{c}[012]\eta[001]\) operator not included in the basis.
To make more quantitative and robust conclusions about the resonant content, in the next section we use the Luscher approach to relate the finite-volume spectra to infinite-volume scattering amplitudes.
## IV Scattering amplitudes from finite volume spectra
In order to translate the finite-volume spectra into infinite-volume scattering amplitudes we make use of the extensions of Luscher's finite volume formalism to coupled-channel hadron-hadron scattering [23]. In terms of the infinite-volume scattering \(t\)-matrix, the phase-space matrix, \(\mathbf{\rho}\), and a matrix of known kinematic functions \(\mathbf{\mathcal{M}}\), this can be expressed as
\[\det[\mathbf{D}] = 0\,,\] \[\mathbf{D} = \mathbf{1}+i\,\mathbf{\rho}\cdot\mathbf{t}\cdot(\mathbf{1}+i\,\mathbf{\mathcal{M}}). \tag{5}\]
The matrices exist in a space of coupled meson-meson channels and partial-waves [20]. Given some parameterized \(t\)-matrix, the roots of Eq. 5 yield the finite-volume spectrum in a given volume and irrep, \(\{E_{\text{n}}^{\text{par}}\}\). The spectra so obtained can be matched level-by-level with the lattice QCD spectrum, \(\{E_{\text{n}}\}\), and a correlated \(\chi^{2}\) formed, as defined in Eq. 8 of Ref. [22]. Minimization of this \(\chi^{2}\) under variation of the free parameters in \(\mathbf{t}(E_{\text{cm}})\) then gives a lattice QCD constrained scattering amplitude. Finding the roots of Eq. 5 in the case of many coupled channels and/or partial-waves can be efficiently achieved by making use of an eigenvalue decomposition of \(\mathbf{D}\), where the eigenvalues can be separately searched for zeros [51]. The corresponding eigenvectors are also useful, as described below.
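The mechanics of this step can be illustrated schematically: scan \(\det[\mathbf{D}](E_{\sf cm})\) for sign changes, refine each root, and form a correlated \(\chi^{2}\) against the computed levels. The snippet below is only a toy sketch; the stand-in determinant and all numbers are placeholders rather than the kinematic functions \(\boldsymbol{\mathcal{M}}\) of the formalism.

```python
import numpy as np
from scipy.optimize import brentq

def find_levels(det_D, e_min, e_max, n_scan=2000):
    """Find solutions of det[D](E_cm) = 0 by scanning for sign changes and refining with brentq."""
    grid = np.linspace(e_min, e_max, n_scan)
    vals = np.array([det_D(e) for e in grid])
    roots = []
    for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if np.isfinite(fa) and np.isfinite(fb) and fa * fb < 0:
            roots.append(brentq(det_D, a, b))
    return np.array(roots)

def correlated_chi2(e_lattice, cov, e_model):
    """Correlated chi^2 between computed energies and the matched finite-volume solutions."""
    d = np.asarray(e_lattice) - np.asarray(e_model)
    return float(d @ np.linalg.solve(cov, d))

# toy stand-in for det[D](E_cm); in the real analysis this is built from t(E_cm) and M(E_cm, L)
toy_det = lambda e: np.cos(80.0 * e) - 0.4
levels = find_levels(toy_det, 0.60, 0.74)
fake_lattice = levels[:3] + 0.0005          # pretend lattice levels near the first three roots
print(levels, correlated_chi2(fake_lattice, np.diag([1e-6] * 3), levels[:3]))
```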
In our previous applications of this approach to lattice QCD data, matching finite-volume energy levels between the lattice calculation and the parameterization has not been a significant issue. The simplest algorithm is to pair levels by their energy order starting from the lowest first. Another straightforward algorithm is to pair levels working from the smallest energy _differences_ first. A third option, which suffers from combinatoric growth with the number of levels, is to compute all possible pairings and choose the combination that produces the smallest \(\chi^{2}\). The somewhat novel case encountered in this study features a relatively high density of states high in the spectrum, and here the algorithms above prove to be somewhat imperfect, producing ambiguous level matchings and hence a \(\chi^{2}\) that does not vary smoothly over parameter space.
An improved approach makes use of the eigenvector information obtained in the decomposition of \(\mathbf{D}\). For small changes in the scattering amplitude, the eigenvectors at each eigenvalue zero vary relatively slowly and can thus be used to help match the spectra obtained from two evaluations with similar parameter values. One method is to insist that the dot product of the eigenvectors for a given energy level for slightly different parameter values is significantly far from zero. We have found that under a \(\chi^{2}\) minimization procedure, this helps to ensure a smooth evolution of \(\chi^{2}\) value with changing amplitude parameters, and provides well-defined minima even for very dense spectra.10
Footnote 10: An extension of this use of the eigenvectors of \(\mathbf{D}\) helps to identify levels that can be associated with decoupled channels. These typically have overlap only onto meson-meson operators of a single channel.
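A simplified sketch of the eigenvector-assisted matching described above (not the production implementation): each solution of Eq. 5 carries the eigenvector of \(\mathbf{D}\) associated with its vanishing eigenvalue, and solutions from two nearby parameter points are paired greedily by the magnitude of the eigenvector overlaps rather than by energy ordering alone.

```python
import numpy as np

def match_levels(levels_ref, vecs_ref, levels_new, vecs_new):
    """Pair solutions of det[D]=0 from two nearby parameter points using eigenvector overlaps.

    levels_*: arrays of cm energies; vecs_*: corresponding (normalized) eigenvectors of D at
    each zero. Returns (i_ref, j_new) index pairs, greedily taking the largest |v_ref . v_new|
    first so that matching is driven by channel content rather than by energy ordering."""
    overlaps = np.abs(np.array(vecs_ref) @ np.conj(np.array(vecs_new)).T)
    pairs, used_i, used_j = [], set(), set()
    for i, j in sorted(np.ndindex(*overlaps.shape), key=lambda ij: -overlaps[ij]):
        if i not in used_i and j not in used_j:
            pairs.append((i, j))
            used_i.add(i)
            used_j.add(j)
    return pairs

# toy example: two nearly-decoupled channels whose energy ordering swaps between parameter points
vecs_ref = [np.array([1.0, 0.05]), np.array([0.05, 1.0])]
vecs_new = [np.array([0.08, 1.0]), np.array([1.0, 0.08])]
print(match_levels([0.700, 0.701], vecs_ref, [0.7005, 0.7012], vecs_new))
```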
Figure 5: As Figure 4 but for irreps with zero overall momentum and negative parity.
Parameterizations of coupled-channel partial-wave \(t\)-matrices are required that exactly respect unitarity, which was assumed in the derivation of Eq. 5. In this study, we make use of forms that include the flexibility for there to be bound-states and resonances in the \(s\)-channel scattering process, in particular the \(K\)-matrix expressions,
Footnote 11: See Appendix B of Ref. [22] for implementation details. The resulting function has the same logarithms as found from scalar loop integrals often implemented in amplitude modeling and effective field theory approaches, see Appendix B of Ref. [52].
\[K_{ij} = \sum_{p}\frac{g_{i}^{(p)}g_{j}^{(p)}}{m_{p}^{2}-s}+\sum_{a}\gamma _{ij}^{(a)}s^{a}\] \[\left[\mathbf{t}^{-1}\right]_{ij} = (2k_{i})^{-\ell_{i}}[\mathbf{K}^{-1}]_{ij}(2k_{j})^{-\ell_{j}}+\mathbf{I} _{ij}\,, \tag{6}\]
where \(\mathbf{K}\) is a real symmetric matrix for real \(s=E_{\text{cm}}^{2}\), and \(g_{i}^{(p)}\), \(m_{p}\) and \(\gamma_{ij}^{(a)}\) are real parameters. The factors \((2k_{i})^{\ell_{i}}\) implement the behavior at threshold required by angular momentum conservation.
The matrix \(\mathbf{I}\) is diagonal and has imaginary part \(\text{Im}\,I_{ij}=-\rho_{i}=-2k_{i}/\sqrt{s}\), which precisely accounts for \(s\)-channel unitarity in the scattering process. The real part of \(I_{ij}\) can be fixed to zero, which we will sometimes refer to as a "simple" phase space. Alternatively, a dispersion relation can be used to generate a real part from the known imaginary part, which we will refer to as a "Chew-Mandelstam" phase space.12 The resulting integral features a subtraction, and the location of the subtraction point is a free choice, with convenient choices being the kinematic threshold, or one of the pole locations, \(s=m_{p}^{2}\).
These amplitude parameterizations do not directly parameterize physics in the \(t\)- or \(u\)-channels that can generate "left-hand cuts" which might appear in the energy region considered. We will return to a discussion of this point in Section VI.
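To make the construction in Eq. 6 concrete, the following sketch assembles a two-channel \(t\)-matrix from a 'pole plus constant' \(K\)-matrix with threshold barrier factors and the simple phase space (\(\mathrm{Re}\,I=0\)); the Chew-Mandelstam dispersive real part is omitted for brevity, and all masses and parameter values are illustrative rather than fitted.

```python
import numpy as np

def cm_momentum(s, m1, m2):
    """Complex cm momentum k_i for a two-hadron channel with masses m1, m2."""
    return np.sqrt((s - (m1 + m2)**2) * (s - (m1 - m2)**2) + 0j) / (2 * np.sqrt(s + 0j))

def t_matrix(s, masses, ells, g, m_pole, gamma):
    """Eq. 6: K_ij = g_i g_j/(m_pole^2 - s) + gamma_ij, and
    t^{-1} = (2k)^{-l} K^{-1} (2k)^{-l} + I with Im I_ii = -rho_i = -2 k_i/sqrt(s)."""
    g = np.asarray(g, dtype=complex)
    K = np.outer(g, g) / (m_pole**2 - s) + np.asarray(gamma, dtype=complex)
    k = np.array([cm_momentum(s, m1, m2) for m1, m2 in masses])
    barrier = np.diag((2.0 * k) ** (-np.asarray(ells, dtype=float)))  # (2k_i)^{-l_i} factors
    I = np.diag(-1j * 2.0 * k / np.sqrt(s + 0j))                      # simple phase space
    return np.linalg.inv(barrier @ np.linalg.inv(K) @ barrier + I)

# illustrative two-channel S-wave system (thresholds roughly DDbar- and D*D*bar-like, in a_t units)
masses = [(0.333, 0.333), (0.355, 0.355)]
t = t_matrix(s=0.70**2, masses=masses, ells=[0, 0],
             g=[0.3, 0.5], m_pole=0.705, gamma=0.2 * np.eye(2))
print(np.round(np.abs(t), 3))
```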
## V Scattering amplitude determinations
Our approach to determining scattering amplitudes constrained by the spectra presented in Section III will be to proceed systematically, beginning with description of rest-frame irreps receiving contributions from a minimal set of partial waves. Setting the contributions of higher partial waves using rest-frame irreps in which they are leading, we will then be able to analyse moving frame energies with these waves fixed. The workflow is presented in Fig. 6, where each grey box represents a subsection below.
### \(J^{PC}=0^{++}\) below \(\eta_{c}\eta^{\prime}\) and \(D_{s}\bar{D}_{s}\) threshold from the rest-frame \([000]A_{1}^{+}\) irrep
At the lowest energies, \(0^{++}\) is a coupled-channel system of \(S\)-wave closed-charm \(\eta_{c}\eta\) and open-charm \(D\bar{D}\) scattering. In the \([000]A_{1}^{+}\) irrep these are the only kinematically open channels below \(a_{t}E_{\text{cm}}=0.684\), a cutoff selected to lie some way below the \(\eta_{c}\eta^{\prime}\) and \(D_{s}\bar{D}_{s}\) thresholds.
Figure 4 indicates only small departures from the relevant non-interacting energies on all three volumes, with possibly very mild attraction at the \(D\bar{D}\) threshold. Nothing in the spectrum suggests a near-threshold resonance or bound-state.
The presence in the spectrum of an energy level near \(a_{t}E_{\text{cm}}=0.604\) on each volume is explained by the stable ground state \(\chi_{c0}\). Such a deeply bound state will have no direct effect on the scattering amplitudes above threshold, so its presence is not included in amplitude parameterizations.13
Footnote 13: In Appendix B we briefly explore amplitudes in which such a bound-state is explicitly included.
The 10 energy levels in this energy region can be described by a constant \(K\)-matrix implemented with a threshold-subtracted Chew-Mandelstam phase space, with best-fit parameters,
\[\gamma_{\eta_{c}\eta\to\eta_{c}\eta} = (0.34\pm 0.23\pm 0.09) \tag{7}\]

where the first uncertainty is statistical and the second reflects variation of the stable hadron masses and anisotropy within their errors.13 The matrix on the right gives the correlations between parameters. The resulting amplitude is presented in Fig. 7, where it is clear that the system in this energy region is compatible with there being no significant scattering, and there certainly being no near-threshold \(D\bar{D}\) bound-state. The simplicity of the spectrum indicates no need for more elaborate amplitude parameterizations.
Footnote 13: We perform four additional determinations of the amplitudes, two using the hadron masses at their mean \(\pm 1\sigma\) values from Table 2, and two from varying the anisotropy to \(\xi_{-}=3.438\) and \(\xi_{+}=3.450\), the \(\pm 1\sigma\) values determined from the pion.
### \(J^{PC}=0^{++}\) up to and including \(\psi\phi\) threshold from the rest-frame \(A_{1}^{+}\) irrep
Extending description of the \([000]A_{1}^{+}\) irrep to higher energies requires inclusion of the \(\eta_{c}\eta^{\prime}\), \(D_{s}\bar{D}_{s}\) and \(\psi\omega\) channels which appear in \(S\)-wave almost simultaneously.14
Footnote 14: \(\psi\omega\) also produces \({}^{5}\!D_{0}\) and \({}^{5}\!D_{4}\) waves that can contribute in \([000]A_{1}^{+}\) but these are expected to be suppressed at energies close to threshold.
In Fig. 4 we see that levels with large overlap onto \(\eta_{c}\eta^{\prime}\) operators tend to be compatible with the corresponding non-interacting energies, but only within rather large uncertainties across all three volumes.
An "extra" level, beyond the counting expected from non-interacting energies, is observed on each volume slightly above \(\psi\omega\) threshold, at an energy close to that seen in the spectrum obtained using only single-hadron-like operators presented in Fig. 2. As shown in Fig. 3, this level has large overlaps onto both the single-hadron-like operators in the basis and the operator resembling \(D_{[000]}^{*}D_{[000]}^{*}\), motivating the inclusion of the kinematically-closed \(D^{*}\bar{D}^{*}\{^{1}\!S_{0}\}\) channel into our analysis.15
Footnote 15: When including moving frames later, we will consider a more limited energy region below \(D^{*}\bar{D}^{*}\) threshold in Appendix C.2 where the \(D^{*}\bar{D}^{*}\) channel can be neglected.
We proceed by considering a system of coupled \(\eta_{c}\eta\), \(D\bar{D}\), \(D_{s}\bar{D}_{s}\), \(\eta_{c}\eta^{\prime}\), \(\psi\omega\), \(D^{*}\bar{D}^{*}\) and \(\psi\phi\) scattering, where each pair is in \(S\)-wave only, constrained by 45 energy levels (the colored and black levels, excluding the \(\chi_{c0}\) bound state, shown in the \([000]A_{1}^{+}\) panel of Fig. 4). We have included the \(\psi\phi\) channel for which constraint is provided by three levels close to \(\psi\phi\) threshold dominated by a \(\psi\phi\)-like operator construction.
A small complication comes from the presence of a degenerate pair of \(\psi_{[100]}\omega_{[100]}\) levels in the non-interacting limit. In order for there to be two such solutions to the quantization condition, the \(t\)-matrix must feature a \(\psi\omega\)\(D\)-wave as well as \(S\)-wave, although the impact of what will be a very weak amplitude near threshold is just to supply an energy level lying very close to the non-interacting energy. The simplest option, which we will adopt, is to add a \(J^{PC}=4^{++}\) amplitude, \(\psi\omega\{^{5}\!D_{4}\}\to\psi\omega\{^{5}\!D_{4}\}\), parameterized with a \(K\)-matrix constant.
The \(J^{PC}=0^{++}\)\(t\)-matrix is parameterized using a \(K\)-matrix with a single pole and a matrix of constants,
\[K_{ij}=\frac{g_{i}\,g_{j}}{m^{2}-s}+\gamma_{ij}\,, \tag{8}\]
and in practice, since the spectrum lies mainly on the non-interacting energies, many of the free parameters can be fixed to zero. In particular, parameters found to be consistent with zero when allowed to vary are fixed to zero and the minimization re-run. The Chew-Mandelstam phase-space subtracted at the \(K\)-matrix pole location is used. An example result is,
Figure 7: Left panel: \(J^{PC}=0^{++}\) scattering amplitudes corresponding to Eq. 7. Amplitudes are only determined up to the \(\eta_{c}\eta^{\prime}\) threshold indicated as the pink circle on the horizontal axis. Right panel: The finite volume spectrum in \([000]A_{1}^{+}\) from Fig. 4 (points) plotted with the solutions of Eq. 5 with the scattering amplitude as defined by Eq. 7 (orange dashed lines with bands). The effect of the “additional” systematic uncertainty applied before determining the amplitudes as described in Sec. II is shown by the outer grey error bars on each energy level (in this case, for most points it is barely visible).
where all parameters not listed have been fixed to zero. The \(K\)-matrix pole couplings to open-charm channels appear to be significantly non-zero. Figure 8 shows the corresponding amplitudes, where clear peaks are visible in the \(D\bar{D}\) and \(D_{s}\bar{D}_{s}\) amplitudes, along with a rapid turn-on at threshold of amplitudes leading to \(D^{*}\bar{D}^{*}\). Examination of the complex energy-plane singularities of this amplitude, reported on later in the manuscript, will lead us to conclude that these effects are due to a single resonance. We will later show that the main features of this amplitude are robust when we vary the specific parameterization used, and when we add constraints from moving-frame irrep spectra.
The value of \(\chi^{2}/N_{\rm dof}\) for this fit suggests that there is some mild tension between the computed spectrum and this amplitude, but correlations between computed energy levels play a significant role. All levels are described with a maximum deviation of \(1.5\sigma\), and the same amplitude in an _uncorrelated_ fit results in \(\chi^{2}/N_{\rm dof}=\frac{34.0}{45-11}=1.00\). The "global" systematic uncertainty on the input spectrum introduced in Section II.1 influences the \(\chi^{2}\) and the associated parameter errors (removing it yields \(\chi^{2}/N_{\rm dof}=\frac{74.4}{45-11}=2.19\), and errors roughly half as large), but both the parameter central values and the qualitative behavior of the amplitudes remains unchanged. The result presented in Eq. 9 should be viewed as being a conservative estimate of the amplitudes.
The inclusion of the \(\eta_{c}\eta^{\prime}\) channel and the associated levels introduced the level-matching problem described in Section III. The large uncertainties on these levels, coupled with the high density of zeroes of the quantization condition, makes many matching assignments plausible. Fortunately, the noisy levels appear to overlap only with the \(\eta_{c}\eta^{\prime}\) operators, suggesting a decoupling that can be built into the amplitude. When \(D\) in Eq. 5 was eigendecomposed, the zeroes found in the eigenvalue associated with the \(\eta_{c}\eta^{\prime}\{{}^{1}S_{0}\}\) channel could be matched with the levels that have large overlap with the \(\eta_{c}\eta^{\prime}\) operators. The \(\eta_{c}\eta\) and \(\psi\phi\) channels, which also appear to be decoupled, were paired in the same way with the eigendecomposition solutions of the quantization condition. The remaining levels were matched by pairing levels working from the smallest energy difference first.
In order to make use of moving-frame irrep spectra to provide additional constraint on the \(0^{++}\) amplitude, we must first constrain the other \(J^{PC}\) amplitudes which enter into these irreps by considering other rest-frame irreps in which they are leading.
Figure 8: Scattering amplitudes with \(J^{PC}=0^{++}\) determined from the \([000]A_{1}^{+}\) irrep plotted as \(\rho_{i}\rho_{j}|t_{ij}|^{2}\) which is limited to a maximum value of 1 by unitarity. Circles on the horizontal axes indicate kinematic thresholds. The open circles at the bottom show the locations of the energy levels providing constraint on the amplitudes.
### \(J^{PC}=3^{++}\) from rest-frame \(A_{2}^{+}\) irrep
\(3^{++}\) amplitudes appear in several irreps from which we wish to extract \(0^{++}\) or \(2^{++}\), but we can constrain their low-energy behavior using the spectrum in the \([000]A_{2}^{+}\) irrep (Fig. 4) where it appears in relative isolation. In the energy region below \(a_{t}E_{\mathsf{cm}}=0.72\), where we wish to constrain the amplitudes for use in extracting \(0^{++}\) and \(2^{++}\), very few levels are present. The lowest level is dominated by overlap with a \(\psi\omega\) operator, and on the \(L/a_{s}=24\) volume this is located close to its non-interacting energy. At higher energies larger shifts can be seen and a narrow resonance may be present, as anticipated in Fig. 2 where the use of only \(q\bar{q}\)-like operator constructions results in a level around \(a_{t}E_{\mathsf{cm}}\approx 0.725\).
Meson-meson channels contributing to \(3^{++}\) are (in order of threshold opening): \(D\bar{D}^{*}\{{}^{3}\!D_{3}\}\), \(\psi\omega\{{}^{3,5}\!D_{3}\}\), \(D_{s}\bar{D}_{s}^{*}\{{}^{3}\!D_{3}\}\), \(D^{*}\bar{D}^{*}\{{}^{5}\!D_{3}\}\), \(\psi\phi\{{}^{3,5}\!D_{3}\}\), and \(D_{s}^{*}\bar{D}_{s}^{*}\{{}^{5}\!D_{3}\}\). Excluded from this list is \(\eta_{c}\sigma\{{}^{1}\!F_{3}\}\) which we expect to be heavily suppressed by the angular momentum barrier.16
Footnote 16: We included in our basis a three-hadron \(\eta_{c}\pi\pi\sim\eta_{c}\sigma\) operator constructed from an \(\eta_{c}\) and the variational solution in the \(\pi\pi\) system corresponding to the \(\sigma\) which is a near-threshold \(\pi\pi\) bound-state on these lattices [42; 47]. We observe that a level with large overlap onto this operator is consistent with an \(\eta_{c}\) combined with the lowest \(\sigma\) level observed in Ref. [42] with no clear “additional interactions”. This level is located around \(a_{t}E_{\mathsf{cm}}=0.74\) with a relatively large uncertainty.
We consider 16 levels below \(a_{t}E_{\mathsf{cm}}=0.743\) having excluded the single level with large overlap onto the \(\eta_{c}\sigma\) operator. A description using a \(K\)-matrix pole and matrix of constants, with Chew-Mandelstam phase-space subtracted at the pole, is given by,
\[\begin{aligned}
a_{t}m &= (0.7295\pm 0.0017\pm 0.0002)\\
g_{D\bar{D}^{*}\{^{3}\!D_{3}\}} &= (2.51\pm 0.35\pm 0.08)\cdot a_{t}\\
g_{D^{*}\bar{D}^{*}\{^{5}\!D_{3}\}} &= (0.00\pm 1.38\pm 0.12)\cdot a_{t}\\
g_{D_{s}\bar{D}_{s}^{*}\{^{3}\!D_{3}\}} &= (0.00\pm 0.69\pm 0.07)\cdot a_{t}\\
\gamma_{D\bar{D}^{*}\{^{3}\!D_{3}\}\to D\bar{D}^{*}\{^{3}\!D_{3}\}} &= (53\pm 153\pm 40)\cdot a_{t}^{4}\\
\gamma_{D_{s}\bar{D}_{s}^{*}\{^{3}\!D_{3}\}\to D\bar{D}^{*}\{^{3}\!D_{3}\}} &= (-462\pm 122\pm 105)\cdot a_{t}^{4}\\
\gamma_{D_{s}\bar{D}_{s}^{*}\{^{3}\!D_{3}\}\to D_{s}\bar{D}_{s}^{*}\{^{3}\!D_{3}\}} &= (54\pm 184\pm 24)\cdot a_{t}^{4}\\
\gamma_{\psi\omega\{^{3}\!D_{3}\}\to\psi\omega\{^{3}\!D_{3}\}} &= (343\pm 210\pm 55)\cdot a_{t}^{4}\\
\gamma_{\psi\omega\{^{5}\!D_{3}\}\to\psi\omega\{^{5}\!D_{3}\}} &= (-26\pm 40\pm 13)\cdot a_{t}^{4}\\
\gamma_{\psi\phi\{^{3}\!D_{3}\}\to\psi\phi\{^{3}\!D_{3}\}} &= (-19\pm 628\mp 75)\cdot a_{t}^{4}\\
\chi^{2}/N_{\mathrm{dof}} &= \tfrac{8.34}{16-10}=1.39\,,
\end{aligned} \tag{10}\]
where a noticeable feature is a pole with a significant coupling to \(D\bar{D}^{*}\{{}^{3}\!D_{3}\}\). We will refer to this description as the "reference amplitude". The reproduction of the lattice QCD energy levels from the finite volume formalism is shown in Fig. 9, and the amplitudes in Eq. 10 appear in the left panel of Fig. 10.
The amplitude above proves to not be a _unique_ description of the finite volume spectrum, with other parameterizations giving solutions which have a significant \(D\bar{D}^{*},D^{*}\bar{D}^{*}\) cross-term, as shown in the right panel of Fig. 10 and summarised in Table 7 in Appendix E.1. For the purposes of serving as a "background wave" in irreps where we seek \(0^{++}\) and \(2^{++}\) amplitudes, we only require the \(3^{++}\) amplitude below \(a_{t}E_{\mathsf{cm}}\approx 0.72\), and there the various amplitude descriptions all broadly agree. We will use the reference amplitude presented above for this purpose.
Figure 9: As for the right panel of Fig. 7, except in the \([000]A_{2}^{++}\) irrep with the solutions from the amplitude in Eq. 10. Several channels such as \(\eta_{c}\eta\) and \(\eta_{c}\pi\pi\) open below the plotted range.
Figure 10: \(J^{PC}=3^{++}\) scattering amplitudes plotted as \(\rho_{i}\rho_{j}|t_{ij}|^{2}\). Left panel: the reference amplitude of Eq. 10. Right panel: an alternative parameterization featuring a significant \(D\bar{D}^{*}\{^{3}\!D_{3}\}\to D^{*}\bar{D}^{*}\{^{5}\!D_{3}\}\) cross-term. The \(D_{s}\bar{D}_{s}^{*}\{^{3}\!D_{3}\}\), \(\psi\omega\{^{3}\!D_{3}\}\) and \(\psi\phi\{^{3}\!D_{3}\}\) amplitudes satisfy \(\rho_{i}\rho_{j}|t_{ij}|^{2}\ll 0.1\).
### \(J^{PC}=\{1,2,3\}^{-+}\) from rest-frame irreps
In order to use moving-frame irreps to constrain \(0^{++}\) and \(2^{++}\) partial-waves, we must also consider negative parities, and these are most directly extracted from the at-rest irreps presented in Fig. 5. The spectra indicate that the interactions are relatively weak, with the only non-trivial feature being an "extra" level located around \(a_{t}E_{\sf cm}=0.68\) in the \(E^{-}\) and \(T_{2}^{-}\) irreps. The mild volume-dependence and large overlaps of this level with single-hadron-like operators subduced from \(2^{-+}\) suggest that this is a stable \(\eta_{c2}\) state. In Ref. [30], which used only single-hadron-like operators, the pattern of states extracted in this energy region resembled a quark model \(1D\) multiplet, with this state being the \(q\bar{q}(^{1}D_{2})\) member [2]. The state lies below all open-charm decay thresholds, but slightly above \(\eta_{c}\pi\pi\). For the same reasons as in Section V.3, we expect only a very weak coupling to \(\eta_{c}\pi\pi\) and do not include this three-meson channel in our amplitude analysis.
Considering \(\{1,2,3\}^{-+}\), the meson-meson partial-waves that contribute are given in Table 3. We choose to first determine \(J^{PC}=2^{-+}\) and exotic \(3^{-+}\) using energy levels in the \(E^{-}\), \(T_{2}^{-}\) and \(A_{2}^{-}\) irreps. Considering all levels below \(a_{t}E_{\sf cm}=0.73\) except those with dominant overlap with an \(\eta_{c}\pi\pi\) operator (which are decoupled from other operators), we have 41 energies. Using a pole plus constant \(K\)-matrix for \(2^{-+}\) and a matrix of constants for \(3^{-+}\), the following parameters provide a good description of the finite-volume spectra:
\[\begin{aligned}
a_{t}m_{2^{-}} &= (0.67538\pm 0.00063\pm 0.00025)\\
g_{D\bar{D}^{*}\{^{3}\!P_{2}\}} &= (-1.73\pm 0.64\pm 0.25)\\
\gamma_{D\bar{D}^{*}\{^{3}\!P_{2}\}\to D\bar{D}^{*}\{^{3}\!P_{2}\}} &= (43.1\pm 35.6\pm 20.3)\cdot a_{t}^{2}\\
\gamma_{D\bar{D}^{*}\{^{3}\!F_{2}\}\to D\bar{D}^{*}\{^{3}\!F_{2}\}} &= (3676\pm 2118\pm 1640)\cdot a_{t}^{6}\\
\gamma_{D_{s}\bar{D}_{s}^{*}\{^{3}\!P_{2}\}\to D_{s}\bar{D}_{s}^{*}\{^{3}\!P_{2}\}} &= (21.9\pm 27.3\pm 8.8)\cdot a_{t}^{2}\\
\gamma_{D^{*}\bar{D}^{*}\{^{3}\!P_{2}\}\to D^{*}\bar{D}^{*}\{^{3}\!P_{2}\}} &= (-4.9\pm 24.2\pm 62.2)\cdot a_{t}^{2}\\
\gamma_{\psi\omega\{^{3}\!P_{2}\}\to\psi\omega\{^{3}\!P_{2}\}} &= (-6.32\pm 11.4\pm 68.3)\cdot a_{t}^{2}\\
\gamma_{\psi\omega\{^{5}\!P_{2}\}\to\psi\omega\{^{5}\!P_{2}\}} &= (-6.63\pm 12.0\pm 59.7)\cdot a_{t}^{2}\\
\gamma_{\chi_{c2}\eta\{^{5}\!S_{2}\}\to\chi_{c2}\eta\{^{5}\!S_{2}\}} &= (-0.42\pm 0.91\pm 1.56)\\
\gamma_{\eta_{c}\eta\{^{1}\!F_{3}\}\to\eta_{c}\eta\{^{1}\!F_{3}\}} &= (243\pm 295\pm 49)\cdot a_{t}^{6}\\
\gamma_{D\bar{D}^{*}\{^{3}\!F_{3}\}\to D\bar{D}^{*}\{^{3}\!F_{3}\}} &= (-13\pm 1758\pm 460)\cdot a_{t}^{6}\\
\gamma_{\psi\omega\{^{5}\!P_{3}\}\to\psi\omega\{^{5}\!P_{3}\}} &= (-14.4\pm 9.9\pm 51.3)\cdot a_{t}^{2}\\
\chi^{2}/N_{\rm dof} &= \tfrac{24.9}{41-12}=0.86\,,
\end{aligned} \tag{11}\]
The \(K\)-matrix pole in \(2^{-+}\) is allowed a coupling only to the lowest-lying open-charm partial-wave, \(D\bar{D}^{*}\{^{3}P_{2}\}\); all other couplings are fixed to zero.
An independent description of energy levels in the \(T_{1}^{-}\) and \(A_{2}^{-}\) irreps yields \(1^{-+}\) and \(3^{-+}\) amplitudes. While a hybrid meson is expected in \(1^{-+}\), it will lie above the considered energy region, and well above open-charm three-meson thresholds. Using 20 levels below \(a_{t}E_{\sf cm}=0.721\), the small energy shifts can be described by constant \(K\)-matrices,
\[\gamma_{DD^{*}\{^{3}P_{1}\}\to DD^{*}\{^{3}P_{1}\}} = (14.0\pm 7.6\pm 8.1)\cdot a_{t}^{2}\] \[\gamma_{\eta_{c}\eta\{^{1}P_{1}\}\to\eta_{c}\eta\{^{1}P_{1}\}} = (1.73\pm 1.51\pm 0.63)\cdot a_{t}^{2}\] \[\gamma_{\psi\omega\{^{1}P_{1}\}\to\psi\omega\{^{1}P_{1}\}} = (-36.4\pm 22.8\pm 6.14)\cdot a_{t}^{2}\] \[\gamma_{\psi\omega\{^{3}P_{1}\}\to\psi\omega\{^{3}P_{1}\}} = (-1.61\pm 26.64\pm 5.89)\cdot a_{t}^{2}\] \[\gamma_{\psi\omega\{^{5}P_{1}\}\to\psi\omega\{^{5}P_{1}\}} = (23.33\pm 27.14\pm 6.01)\cdot a_{t}^{2}\] \[\gamma_{\chi_{c1}\eta\{^{3}S_{1}\}\to\chi_{c1}\eta\{^{3}S_{1}\}} = (0.65\pm 1.06\pm 0.99)\]
\[\gamma_{\eta_{c}\eta\{^{1}\!F_{3}\}\to\eta_{c}\eta\{^{1}\!F_{3}\}} = (-282\pm 201\pm 61)\cdot a_{t}^{6}\,. \tag{12}\]

### \(J^{PC}=2^{++}\) from rest-frame irreps
Considering all \(S\)-wave channels open below \(a_{t}E_{\text{cm}}=0.717\), and those \(D\)-wave channels opening at lower energies (except for \(\eta_{c}\eta^{\prime}\)), we will describe the \(2^{++}\) sector as a coupled \(\eta_{c}\eta\{{}^{1}D_{2}\}\), \(D\bar{D}\{{}^{1}D_{2}\}\), \(D\bar{D}{{}^{*}\{{}^{3}D_{2}\}}\), \(D_{s}\bar{D}_{s}\{{}^{1}D_{2}\}\), \(\psi\omega\{{}^{5}\!S_{2}\}\), \(D^{*}\bar{D}{{}^{*}\{{}^{5}\!S_{2}\}}\) and \(\psi\phi\{{}^{5}\!S_{2}\}\) system.17
Footnote 17: Similar to section V.2, some \(\psi\omega\)\(D\)-waves are required to produce a sufficient number of solutions. There proves to be insufficient constraint to uniquely determine all amplitudes featuring \(\psi\omega\) in \(D\)-wave. The required number of levels in the considered energy region is obtained from the finite-volume determinant condition provided \({}^{5}\!S_{2}\), \({}^{3}\!D_{3}\) and \({}^{5}\!D_{3}\) waves are included (the latter two from \(3^{++}\)).
A \(K\)-matrix of the form in Eq. 6, which includes the appropriate \(k_{i}^{\ell}\) threshold factors, is capable of describing the energy levels. One suitable example in which the \(K\)-matrix pole has couplings only to open-charm channels, and a Chew-Mandelstam phase-space subtracted at the \(K\)-matrix pole location, is,
\[\begin{aligned}
a_{t}m &= (0.7030\pm 0.0010\pm 0.0002)\\
g_{D\bar{D}^{*}\{^{3}\!D_{2}\}} &= (-30.1\pm 4.5\pm 0.8)\cdot a_{t}\\
g_{D_{s}\bar{D}_{s}\{^{1}\!D_{2}\}} &= (1.53\pm 2.17\pm 0.40)\cdot a_{t}\\
g_{D^{*}\bar{D}^{*}\{^{5}\!S_{2}\}} &= (1.67\pm 0.18\pm 0.13)\cdot a_{t}^{-1}\\
\gamma_{\eta_{c}\eta\{^{1}\!D_{2}\}\to\eta_{c}\eta\{^{1}\!D_{2}\}} &= (20.4\pm 23.9\pm 8.17)\cdot a_{t}^{4}\\
\gamma_{D\bar{D}\{^{1}\!D_{2}\}\to D_{s}\bar{D}_{s}\{^{1}\!D_{2}\}} &= (182\pm 138\pm 18)\cdot a_{t}^{4}\\
\gamma_{\psi\omega\{^{5}\!S_{2}\}\to\psi\omega\{^{5}\!S_{2}\}} &= (-0.884\pm 0.449\pm 0.057)\\
\gamma_{\psi\phi\{^{5}\!S_{2}\}\to\psi\phi\{^{5}\!S_{2}\}} &= (1.61\pm 0.77\pm 0.04)\\
g_{D\bar{D}\{^{1}\!D_{2}\}} &= 10\cdot a_{t}\ \text{(fixed)}\\
\chi^{2}/N_{\text{dof}} &= \tfrac{48.0}{47-8}=1.23\,,
\end{aligned} \tag{13}\]
where the resulting amplitude is plotted in Fig. 13. A clear resonance-like bump is observed in \(D\bar{D}\) and \(D\bar{D}{{}^{*}}\), along with a rapid turn-on of \(D{{}^{*}\bar{D}{{}^{*}}}\) at threshold.
The amplitude presented in Eq. 13 has the unusual feature that the \(K\)-matrix pole coupling to \(D\bar{D}\) is _fixed_ to an apparently arbitrary value. The origin of this is an empirical observation that when describing the finite-volume spectra, there proves to be essentially no sensitivity to the _absolute scale_ of the couplings \(g\), but only to their _ratios_. This is a novel finding, so far unique to this case, but one which seems to have an explanation in terms of there being a \(\chi_{c2}\) resonance having a large coupling to the _kinematically closed_\(S\)-wave \(D{{}^{*}\bar{D}{{}^{*}}}\) channel.
The coupling-ratio phenomenon can be illustrated using a simple two-channel Flatte amplitude specialized to describe a resonance lying above threshold for channel 1, and below threshold for channel 2,18
Footnote 18: For simplicity we will put both channels in \(S\)-wave, although the logic requires only the higher channel to be in \(S\)-wave.
\[t_{ij}(s)=\frac{g_{i}\,g_{j}}{m_{0}^{2}-s-i\sum_{k=1}^{2}g_{k}^{2}\,\rho_{k}( s)}\,, \tag{14}\]
where it is convenient to remove the channel 2 "self-energy" contribution to the resonance mass by defining an \(m\) such that \(m_{0}^{2}=m^{2}-g_{2}^{2}|\rho_{2}(m^{2})|\), so that the Flatte denominator takes the form
\[D(s)=m^{2}-s-ig_{1}^{2}\,\rho_{1}(s)-ig_{2}^{2}\left(\rho_{2}(s)-\rho_{2}(m^{2 })\right).\]
If we restrict to the region around \(s=m^{2}\), taken to be below the threshold for channel 2, we can approximate
\[\begin{array}{rl}D(s\approx m^{2})=\\ \quad-g_{2}^{2}\,\frac{\beta}{m}\left(1+\frac{2m^{2}}{g_{2}^{2}\beta}\right) \left[\sqrt{s}-m+i\big{(}\frac{g_{1}}{g_{2}}\big{)}^{2}\frac{m/\beta}{1+\frac{ 2m^{2}}{g_{2}^{2}\beta}}\rho_{1}(s)\right]\,,\end{array}\]
where \(\beta=\frac{4m_{0}^{2}}{m^{2}}\frac{1}{|\rho_{2}(m^{2})|}\). This indicates an amplitude that depends only on the _ratio_\(g_{1}/g_{2}\) in the limit that \(g_{2}\gg\sqrt{\frac{2m^{2}}{\beta}}\). Some consequences of this property are investigated in Appendices D and G.
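The coupling-ratio property can be checked numerically with the Flatté form of Eq. 14: scaling \(g_{1}\) and \(g_{2}\) together by a large factor at fixed \(g_{1}/g_{2}\) leaves the channel-1 amplitude near \(s=m^{2}\) essentially unchanged. The numbers below are illustrative only.

```python
import numpy as np

def flatte_t11(sqrt_s, m, g1, g2, thr1, thr2):
    """Channel-1 -> channel-1 Flatte amplitude of Eq. 14, with the channel-2 'self-energy'
    at s = m^2 absorbed into the mass so that m sits near the peak location."""
    s = sqrt_s**2
    k = lambda s_, thr: np.sqrt(s_ / 4.0 - (thr / 2.0)**2 + 0j)   # equal-mass cm momentum
    rho = lambda s_, thr: 2.0 * k(s_, thr) / np.sqrt(s_)
    m0_sq = m**2 - g2**2 * abs(rho(m**2, thr2))                   # m0^2 = m^2 - g2^2 |rho_2(m^2)|
    denom = m0_sq - s - 1j * (g1**2 * rho(s, thr1) + g2**2 * rho(s, thr2))
    return g1**2 / denom

m, thr1, thr2 = 0.705, 0.666, 0.724       # resonance between the two thresholds (a_t units)
sqrt_s = 0.707
ratio = 0.08                               # g1/g2 held fixed
for g2 in (2.0, 20.0, 200.0):
    t11 = flatte_t11(sqrt_s, m, ratio * g2, g2, thr1, thr2)
    print(f"g2 = {g2:6.1f}   |t11| = {abs(t11):.4f}")
```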
Interpretations of the amplitude given in Eq. 13 in terms of resonant content and channel couplings will be presented in Section VI by considering the rigorously defined complex-energy pole content of the \(t\)-matrix.
### \(J^{PC}=2^{++}\) from rest and moving-frame irreps
Additional constraint on the \(2^{++}\) amplitude comes from moving-frame irreps. In order to use energy levels in the \([001],[002]B_{1,2}\) irreps to further constrain the \(2^{++}\) amplitude, we supply our previously determined \(2^{-+}\) and \(3^{\pm+}\) amplitudes, fixing them to the central values found. The determinant condition one is working with here is of unprecedented size, featuring seven \(2^{++}\) channels, seven \(2^{-+}\) channels, three \(3^{-+}\) channels and six \(3^{++}\) channels. The techniques presented in Ref. [51] are invaluable in handling such a large dimensional problem.
In practice we choose to reduce the complexity of the minimization problem by adding only the energy levels from the irreps \([001]B_{1}\) and \([002]B_{1}\) on the \(L/a_{s}=20\), 24 volumes, leading to a total of 86 levels to constrain the \(2^{++}\) interactions. We checked that the resulting amplitudes also give a reasonable description of the computed finite-volume spectra in the \([001]B_{2}\) and \([002]B_{2}\) irreps.
A challenge associated with using these 86 levels in a minimization is that there are considerable data correlations between energy levels computed on the same lattice volume. Upon eigendecomposition, the data correlation matrix is found to have a relatively small number of large eigenvalues which are likely to be reliably determined, and many more much smaller eigenvalues that may not be well determined on a limited number of gauge configurations. In an earlier study, various approaches to deal with such correlations were explored, such as uncorrelated fits, fitting to subsets of the data, and removing the smallest eigenvalues by a singular value decomposition (SVD) [45]. Alternative strategies and summaries of the issue can be found in Refs. [53; 54; 55; 56]. Here we opt to remove eigenmodes with the smallest eigenvalues when inverting the covariance matrix, associating the cut with a reduction in the number of degrees of freedom by which we judge the \(\chi^{2}\). We find that retaining all eigenvalues \(\lambda_{i}\) where \(\Lambda=\lambda_{i}/\max(\lambda)>0.02\) results in a reasonable description of the data. This cut leads to removal of 4 and 19 eigenmodes from the \(L/a_{s}=20\) and 24 spectra respectively. A detailed discussion is presented in Appendix F.
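A sketch of the covariance regularization described above: eigenmodes below a relative cutoff \(\Lambda\) are dropped when inverting the matrix used in the correlated \(\chi^{2}\), and the number removed is subtracted from \(N_{\rm dof}\). The normalization conventions of the actual analysis code are not reproduced here.

```python
import numpy as np

def regularized_inverse(cov, rel_cutoff=0.02):
    """Pseudo-inverse of a covariance matrix with small eigenmodes removed.

    Eigenmodes with lambda_i / max(lambda) <= rel_cutoff are dropped from the inverse,
    so their contribution to chi^2 is discarded. Returns the inverse and the number removed."""
    lam, vec = np.linalg.eigh(cov)                  # ascending eigenvalues, orthonormal vectors
    keep = lam / lam.max() > rel_cutoff
    inv = (vec[:, keep] / lam[keep]) @ vec[:, keep].T
    return inv, int((~keep).sum())

def chi2_with_cut(e_data, e_model, cov, rel_cutoff=0.02):
    """Correlated chi^2 using the regularized inverse, plus the d.o.f. reduction to apply."""
    inv, n_removed = regularized_inverse(cov, rel_cutoff)
    d = np.asarray(e_data) - np.asarray(e_model)
    return float(d @ inv @ d), n_removed

# toy usage with a strongly correlated 5x5 covariance
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))
cov = A @ A.T + 1e-4 * np.eye(5)
print(chi2_with_cut(rng.standard_normal(5) * 0.01, np.zeros(5), cov))
```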
Using the same \(J^{PC}=2^{++}\) parameterization as in Eq. 13, with the additional moving-frame energy levels included, we obtain,
\[\begin{aligned}
a_{t}m &= (0.7025\pm 0.0012\pm 0.0007)\\
g_{D\bar{D}^{*}\{^{3}\!D_{2}\}} &= (-37.9\pm 5.0\pm 3.94)\cdot a_{t}\\
g_{D_{s}\bar{D}_{s}\{^{1}\!D_{2}\}} &= (-3.3\pm 4.3\pm 2.5)\cdot a_{t}\\
g_{D^{*}\bar{D}^{*}\{^{5}\!S_{2}\}} &= (1.58\pm 0.15\pm 0.22)\cdot a_{t}^{-1}\\
\gamma_{\eta_{c}\eta\{^{1}\!D_{2}\}\to\eta_{c}\eta\{^{1}\!D_{2}\}} &= (16.3\pm 23.1\pm 7.5)\cdot a_{t}^{4}\\
\gamma_{D\bar{D}\{^{1}\!D_{2}\}\to D_{s}\bar{D}_{s}\{^{1}\!D_{2}\}} &= (-81\pm 129\pm 100)\cdot a_{t}^{4}\\
\gamma_{\psi\omega\{^{5}\!S_{2}\}\to\psi\omega\{^{5}\!S_{2}\}} &= (0.55\pm 0.72\pm 0.81)\\
\gamma_{\psi\phi\{^{5}\!S_{2}\}\to\psi\phi\{^{5}\!S_{2}\}} &= (2.19\pm 0.77\pm 0.11)\\
g_{D\bar{D}\{^{1}\!D_{2}\}} &= 10\cdot a_{t}\ \text{(fixed)}\\
\chi^{2}/N_{\text{dof}} &= \tfrac{62.8}{86-8-23}=1.14\,,
\end{aligned} \tag{15}\]
which agrees within uncertainties with the amplitude determined from the \([000]E^{+}\) and \([000]T_{2}^{+}\) energies
alone.19 This parameterization's description of the rest frame energy levels can be seen in the middle and right panels of Fig. 17.
Footnote 19: The number of degrees of freedom is taken to be \(N_{\text{dof}}=N_{\text{levels}}-N_{\text{pars.}}-N_{\text{reset}}\), where \(N_{\text{levels}}\) is the number of energies, \(N_{\text{pars.}}\) is the number of free parameters, and \(N_{\text{reset}}\) is the number of eigenmodes removed from the covariance matrix by the cut on eigenvalues.
The choice of parameterization form is varied to investigate bias associated with choosing any specific form. We take the form used in Eq. 15 and vary which constants and couplings are free or fixed to zero, vary the Chew-Mandelstam subtraction point, and also replace the Chew-Mandelstam phase space with the simple phase space. In addition, the choice of which \(K\)-matrix pole coupling is fixed is varied, choosing \(g_{D_{s}\bar{D}_{s}\{^{1}\!D_{2}\}}\) or \(g_{D\bar{D}^{*}\{^{3}\!D_{2}\}}\) rather than \(g_{D\bar{D}\{^{1}\!D_{2}\}}\), although this is found to have a negligible effect. We also explore the sensitivity to the data correlation eigenvalue cutoff. Overall, we consider 24 parameterizations that give a reasonable description of the finite-volume spectra and these are summarised in Table 8 in Appendix E.2. The central values of these parameterizations are compared with the reference parameterization from Eq. 15 in Fig. 14, and we find that the central values of the majority of parameterizations fall within the error bands obtained from Eq. 15.20
Footnote 20: One obvious weakness is that there are relatively few parameterizations with couplings between open-charm channels and \(\psi\omega\{^{5}\!S_{2}\}\). Although no good \(\chi^{2}\) minima were found with the \(g_{\psi\omega\{^{5}\!S_{2}\}}\) parameter allowed to vary, many were attempted and all of these appeared to produce a small \(\psi\omega\{^{5}\!S_{2}\}\) amplitude. This parameter was freed in some amplitude determinations of the rest-frame energies, only one of which is given in Eq. 11, and in that case it was found to be consistent with zero. A \(\gamma_{D\bar{D}^{*}\{^{3}\!D_{2}\}\to\psi\omega\{^{5}\!S_{2}\}}\) term was included in one parameterization. Considering the relevant spectra in Fig. 4, there are no clear large shifts involving the \(\psi\omega\) levels, and so perhaps it is a reasonable conclusion that these amplitudes are small.
The amplitudes in Fig. 14 that are not small show very similar features to those in Fig. 13. There is a clear resonance-like bump in \(D\bar{D}\) and \(D\bar{D}^{\ast}\), and a rapid turn-on of \(D^{\ast}\bar{D}^{\ast}\) at threshold. A small number of parameterizations appear to have some large \(D_{s}\bar{D}_{s}\) amplitudes at high energies, although there is relatively little constraint in this region. The other amplitudes, including all closed-charm channels, are consistent with being small. We will explore the singularity content of these amplitudes in Section VI.
Figure 14: As Fig. 13, but for amplitudes with \(J^{PC}=2^{++}\) in Eq. (15) determined from the rest and moving frame irreps. Solid curves show the central values from the parameterization variations summarized in Table 8.
### \(J^{PC}=0^{++}\) below \(\eta_{c}\eta^{\prime}\) and \(D_{s}\bar{D}_{s}\) thresholds including moving frame energies
The region around the \(D\bar{D}\) threshold, previously constrained using only rest-frame irrep energy levels, can be reconsidered including the additional constraint from moving-frame irreps. This analysis further confirms the previous conclusion that there is no near-threshold scalar bound-state in this system. In addition to \([000]A_{1}^{+}\), constraint comes from energy levels in the \([001]A_{1}\), \([111]A_{1}\) and \([002]A_{1}\) irreps, with a total of 43 energy levels below \(\eta_{c}\eta^{\prime}\) threshold. For this selection of irreps, in this energy region, all higher partial waves can be neglected.21
Footnote 21: We choose to exclude energy levels in the \([011]A_{1}\) irrep, which receive contributions from \(2^{-+}\) that may not be negligible due to a \(2^{-+}\) bound state. We later show in Fig. 16 that the levels in this irrep are in fact in good agreement in this energy region.
For these levels, a reasonable description using a constant \(K\)-matrix is found,
\[\begin{aligned}\gamma_{\eta_{c}\eta\to D\bar{D}} &= -0.638\pm 0.157\pm 0.988\\ \gamma_{D\bar{D}\to D\bar{D}} &= -0.172\pm 0.324\pm 2.162\end{aligned}\qquad\begin{bmatrix}1.00&-0.37&0.06\\ &1.00&-0.31\\ &&1.00\end{bmatrix}\tag{16}\]
\[\chi^{2}/N_{\mathrm{dof}}=\tfrac{40.5}{43-3-5}=1.16\,,\]
that is in qualitative agreement with the amplitude found earlier. The description of the finite-volume spectra is shown in Figure 16. This parameterization and 9 other variations22 are plotted in Fig. 15, where we again observe no signal indicating strong interactions near \(D\bar{D}\) threshold.
Footnote 22: Details of the parameterization variations are provided in Appendix E.3.a.
### \(J^{PC}=0^{++}\) up to and including \(\psi\phi\) threshold
Our most highly constrained \(0^{++}\) amplitude comes from simultaneously describing energy levels in the \([000]\,A_{1}^{+}\), \([001]\,A_{1}\), \([111]\,A_{1}\) and \([002]\,A_{1}\) irreps up to \(a_{t}E_{\mathsf{cm}}=0.724\) at rest (just above \(\psi\phi\) threshold) and up to \(a_{t}E_{\mathsf{cm}}=0.69\) in moving frame irreps (just above \(D_{s}\bar{D}_{s}\) threshold). The 90 energy levels are subject to a significant degree of data correlation which we mollify by removing small eigenmodes below a cutoff \(\Lambda=0.02\) as described in Section V.6.23
Footnote 23: This results in the removal of 16 eigenmodes. A range of values are used when we vary the parameterization, including neglecting the correlations entirely.
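As an illustration of this procedure, here is a minimal Python sketch (not the collaboration's fitting code) of a correlated \(\chi^{2}\) in which eigenmodes of the data correlation matrix with eigenvalue below a cutoff are discarded; the toy covariance and residuals below are placeholders.

```python
import numpy as np

def chi2_with_eigenvalue_cutoff(residuals, covariance, cutoff=0.02):
    """Correlated chi^2 in which eigenmodes of the data *correlation* matrix
    with eigenvalue below `cutoff` are discarded (a sketch of the procedure
    described in the text, not the actual analysis code)."""
    sigma = np.sqrt(np.diag(covariance))
    corr = covariance / np.outer(sigma, sigma)      # correlation matrix
    evals, evecs = np.linalg.eigh(corr)
    keep = evals > cutoff                           # modes that are retained
    z = evecs.T @ (residuals / sigma)               # residuals in the eigenbasis
    chi2 = float(np.sum(z[keep] ** 2 / evals[keep]))
    return chi2, int(np.count_nonzero(~keep))

# Toy example with 5 correlated "energy levels":
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
cov = A @ A.T + 0.5 * np.eye(5)                     # some positive-definite covariance
res = 0.1 * np.sqrt(np.diag(cov)) * rng.normal(size=5)
print(chi2_with_eigenvalue_cutoff(res, cov, cutoff=0.02))
```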
An amplitude of the form used in Section V.2 describes the spectra with parameter values,
Figure 15: As Fig. 13, but for coupled-channel \(\eta_{c}\eta\{^{1}S_{0}\}-D\bar{D}\{^{1}S_{0}\}\) amplitudes with \(J^{PC}=0^{++}\) determined from \([000]A_{1}^{+}\) and moving frame energies. The individual curves correspond to the central values of the parameterization variations listed in Table 9. The bands show the extent of the uncertainties from the amplitude in Eq. 16.
where the description of the rest-frame energy levels can be seen in the leftmost panel of Fig. 17. We have fixed the \(J^{PC}=1^{-+},3^{-+}\) amplitudes to the results in Eq. 12, the \(2^{++}\) amplitude to the result in Eq. 15, and the \(3^{++}\) amplitude to the result in Eq. 10.
Exploring a range of \(K\)-matrix parameterizations, we find certain features that must be present to successfully describe the lattice QCD spectra. Coupling a \(K\)-matrix pole to the open-charm channels (\(D\bar{D}\), \(D_{s}\bar{D}_{s}\), \(D^{*}\bar{D}^{*}\)) appears to be required, and these couplings are always significantly non-zero, while the \(K\)-matrix entries corresponding to the \(\psi\omega\) channels are always small and typically consistent with zero.
The results of describing the finite-volume spectra with a range of parameterization choices (listed in Table 10) are presented in Fig. 18, where we see that they are qualitatively similar with a single large enhancement around \(a_{t}E_{\sf cm}\approx 0.705\). The \(D^{*}\bar{D}^{*}\{{}^{1}S_{0}\}\) channel opens rapidly at threshold, a phenomenon we will later associate with a large resonance coupling to \(D^{*}\bar{D}^{*}\{{}^{1}S_{0}\}\).
With a set of well-constrained \(0^{++}\) and \(2^{++}\) scattering amplitudes in hand, in the following sections we will determine their pole singularities and present a corresponding interpretation in terms of resonances.
Figure 16: Energy levels from \([000]A_{1}^{+}\) and moving frame \([ijk]A_{1}\) irreps as in Fig. 4 (points) compared with the spectra from the coupled-channel \(\eta_{c}\eta\{{}^{1}S_{0}\}-D\bar{D}\{{}^{1}S_{0}\}\) amplitude in Eq. 16 using the finite-volume quantization condition Eq. 5 (dashed orange curves and bands). Energies plotted in gray were not used in this amplitude determination, nor were the bound-state levels around \(a_{t}E_{\sf cm}=0.62\) and below corresponding to the stable \(\chi_{c0,2}(1P)\).
Figure 17: As Fig. 16 but for spectra in \([000]A_{1}^{+}\), \([000]E^{+}\) and \([000]T_{2}^{+}\) irreps compared with solutions from the amplitudes in Eqs. 17 (left panel) and 15 (middle and right panels).
## VI Resonance poles
Scattering amplitudes, considered as a function of _complex_ values of the scattering energy squared, can have only certain features due to analyticity. As well as the branch cuts required by unitarity, _pole singularities_ can be present, having an interpretation as the bound-states and resonances of the scattering system.
A new branch cut for each channel, opening at the kinematical threshold, defines a Riemann sheet structure, with the _physical sheet_, where scattering occurs for real energies, having \(\operatorname{Im}k_{i}>0\) for all channels, \(i\). For a given energy, the unphysical sheet reached by moving down through the cut, known as the _proximal sheet_, has \(\operatorname{Im}k_{i}<0\) for all kinematically open channels and \(\operatorname{Im}k_{i}>0\) for all closed channels.
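This sheet labelling can be made concrete with a minimal Python sketch (not part of the analysis machinery) that identifies the proximal-sheet signature at a given real energy for the seven-channel \(0^{++}\) system discussed below; the thresholds are rough numbers built from the stable masses in Table 4, with the \(\eta^{\prime}\), \(\omega\) and \(\phi\) masses guessed since they are not listed there.

```python
# Channel thresholds in MeV, ordered by energy. Values are approximate and only
# for illustration (eta', omega, phi masses are guesses, not lattice results).
thresholds = [
    ("eta_c eta",   2965 + 587),
    ("D Dbar",      2 * 1886),
    ("eta_c eta'",  2965 + 930),
    ("Ds Dsbar",    2 * 1951),
    ("psi omega",   3044 + 880),
    ("D* D*bar",    2 * 2010),
    ("psi phi",     3044 + 1030),
]

def proximal_sheet(E_cm_MeV):
    """'-' (Im k < 0) for each open channel, '+' (Im k > 0) for each closed one."""
    return [(name, "-" if E_cm_MeV > thr else "+") for name, thr in thresholds]

# Just below D*D*bar threshold, where the scalar resonance pole is found below:
for name, sign in proximal_sheet(3995.0):
    print(f"{name:12s} [{sign}]")
# reproduces the (-,-,-,-,-,+,+) pattern used later for the scalar resonance
```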
Close to a pole,
\[t_{ij}(s\approx s_{0})=\frac{c_{i}\,c_{j}}{s_{0}-s}\,, \tag{18}\]
and a nearby pole on the proximal sheet will generate rapid energy dependence on the real energy axis, typically taking the form of a peak, the canonical resonance lineshape. The pole location in the complex energy plane has an interpretation in terms of the resonance mass and width, \(\sqrt{s_{0}}=m\pm\frac{i}{2}\Gamma\), while the factorized pole residues give the channel couplings, \(c_{i}\). Except when they lie close to thresholds, poles on _other_ unphysical sheets typically have only a weak influence on physical scattering.
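To make the connection between a proximal-sheet pole and a real-axis peak concrete, here is a minimal sketch using the form of Eq. 18 with an invented pole position and residues (illustrative numbers only, not the fitted amplitudes):

```python
import numpy as np

# Hypothetical resonance: sqrt(s0) = m - (i/2) Gamma, in GeV (invented values).
m, Gamma = 3.995, 0.067
s0 = (m - 0.5j * Gamma) ** 2
c_i, c_j = 0.5, 0.6                      # factorized residue couplings (arbitrary)

for E in np.linspace(3.85, 4.15, 7):     # real scattering energies, GeV
    t_ij = c_i * c_j / (s0 - E**2)       # pole-dominance form of Eq. (18)
    print(f"E = {E:.3f} GeV   |t_ij|^2 = {abs(t_ij)**2:6.3f}")
# |t_ij|^2 peaks for E near Re(sqrt(s0)) = m, with a width of order Gamma.
```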
The \(K\)-matrix parameterizations we have explored in this paper have good analytic properties, such that they can be continued into the complex energy plane without difficulty. We will explore to what extent resonance pole locations and channel couplings are independent of the details of the specific parameterization chosen. Experience shows that for narrow resonances, where the pole is close to physical scattering (such as the \(\rho\)[57, 58]), very little variation under changes in parameterization is seen, while for broad resonances, lying far into the complex plane (such as the \(\sigma\)[47, 59, 60] or the \(D_{0}^{*}\)[52]), a much more significant scatter over parameterizations can be observed.
While factorized pole-residue couplings, \(c_{i}\), are the most rigorous way to quantify the coupling of a resonance to a channel, it is also common to use partial-widths, \(\Gamma_{i}\), or branching ratios, \(\operatorname{Br}_{i}\), to describe decay rates to _open_ channels. A prescription relating couplings to partial-widths, expected to be reasonable for narrow resonances, has been provided by the PDG [61],
\[\operatorname{Br}_{i}=\frac{\Gamma_{i}}{\Gamma}\,,\qquad\Gamma_{i}=|c_{i}|^{2}\,\frac{\rho_{i}(\operatorname{Re}s_{0})}{\sqrt{\operatorname{Re}s_{0}}}\,. \tag{19}\]
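A minimal sketch of this prescription, assuming the two-body phase space \(\rho_{i}(s)=2k_{i}/\sqrt{s}\) used elsewhere in the text, with purely illustrative masses and coupling (not the fitted values):

```python
import cmath

def k_cm(s, m1, m2):
    """Two-body c.m. momentum for s = E_cm^2."""
    return cmath.sqrt((s - (m1 + m2)**2) * (s - (m1 - m2)**2)) / (2 * cmath.sqrt(s))

def partial_width(c_i, s0, m1, m2):
    """PDG-style partial width from a factorized pole coupling c_i, Eq. (19),
    with rho_i = 2 k_i / sqrt(s) evaluated at Re(s0)."""
    s = s0.real
    rho = 2 * k_cm(s, m1, m2).real / s**0.5
    return abs(c_i)**2 * rho / s**0.5

# Illustrative numbers (GeV): a narrow state near 4 GeV decaying to two ~1.89 GeV mesons.
s0 = (3.995 - 0.5j * 0.067) ** 2
print(f"Gamma_i ~ {1000 * partial_width(0.53, s0, 1.886, 1.886):.0f} MeV")
```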
Figure 18: As Fig. 14 but for \(J^{PC}=0^{++}\) amplitudes with the band showing the amplitude in Eq. 17. Parameterization variations are summarized in Table 10.
### Scalar resonance
For \(J^{PC}=0^{++}\), considering the analysis in section V.8 with variation of parameterizations summarized in appendix E.3, we consistently find a pole on the proximal sheet between \(\psi\omega\) and \(D^{*}\bar{D}^{*}\) thresholds. We denote the relevant Riemann sheet using the notation
\[\mathrm{sign}\big(\mathrm{Im}\,(k_{\eta_{c}\eta},\,k_{D\bar{D}},\,k_{\eta_{c}\eta^{\prime}},\,k_{D_{s}\bar{D}_{s}},\,k_{\psi\omega},\,k_{D^{*}\bar{D}^{*}},\,k_{\psi\phi})\big)=(-,-,-,-,-,+,+)\]
\[=\big(\eta_{c}\eta[-],\,D\bar{D}[-],\,\eta_{c}\eta^{\prime}[-],\,D_{s}\bar{D}_{s}[-],\,\psi\omega[-],\,D^{*}\bar{D}^{*}[+],\,\psi\phi[+]\big).\]
We always order the channels by their threshold energies, so that a proximal sheet can be identified by a sequence of "\(-\)" followed by a sequence of "\(+\)". Thus at these energies \(\big(\eta_{c}\eta[-],\,D\bar{D}[-],\,\eta_{c}\eta^{\prime}[-],\,D_{s}\bar{D}_{s}[-],\,\psi\omega[-],\,D^{*}\bar{D}^{*}[+],\,\psi\phi[+]\big)\) is the proximal sheet. The pole on this sheet found when varying the parameterization is shown in Figure 19. The pole is located at
\[a_{t}\sqrt{s_{0}}= (0.7050\pm 0.0025)-\tfrac{i}{2}(0.0120\pm 0.0070)\] \[\sqrt{s_{0}}\approx 3995\pm 14-\tfrac{i}{2}(67\pm 38)\,\mathrm{MeV}\,,\]
where the quoted uncertainties are conservatively taken as the envelope of the individual uncertainties from each parameterization, and the quoted central values are taken as the centre of the envelope in complex-\(E_{\mathrm{cm}}\).
The pole residue factorizes into channel couplings,
\[\begin{aligned}
a_{t}|c_{\eta_{c}\eta\{^{1}S_{0}\}}| &\approx 0\\
a_{t}|c_{D\bar{D}\{^{1}S_{0}\}}| &= 0.093(28)\\
a_{t}|c_{\eta_{c}\eta^{\prime}\{^{1}S_{0}\}}| &\approx 0\\
a_{t}|c_{D_{s}\bar{D}_{s}\{^{1}S_{0}\}}| &= 0.128(56)\\
a_{t}|c_{\psi\omega\{^{1}S_{0}\}}| &= 0.083(83)\\
a_{t}|c_{D^{*}\bar{D}^{*}\{^{1}S_{0}\}}| &= 0.227(97)\\
a_{t}|c_{\psi\phi\{^{1}S_{0}\}}| &\approx 0\,,
\end{aligned}\tag{20}\]
where the uncertainties quoted again reflect the envelope over all of the individual parameterizations. We find no evidence for significant couplings to channels with a charmonium and light meson.
The corresponding partial widths are,
\[\begin{aligned}
\Gamma(D\bar{D}\{^{1}S_{0}\}) &= 0.0040(23)\,a_{t}^{-1}\approx 23(13)\,\mathrm{MeV}\\
\Gamma(D_{s}\bar{D}_{s}\{^{1}S_{0}\}) &= 0.0049(46)\,a_{t}^{-1}\approx 28(26)\,\mathrm{MeV}\\
\Gamma(\psi\omega\{^{1}S_{0}\}) &= 0.0016(^{+31}_{-16})\,a_{t}^{-1}\approx 9^{+18}_{-9}\,\mathrm{MeV}\,,
\end{aligned}\tag{21}\]
and summing these we obtain a value in good agreement with the total width obtained from the pole location: \(60(34)\) MeV compared with \(67(38)\) MeV from \(2\,|\mathrm{Im}\sqrt{s_{0}}|\).
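This consistency check is simple arithmetic. The temporal lattice spacing is not quoted in this section, but \(a_{t}^{-1}\approx 5667\) MeV is implied by the conversions above (e.g. \(0.7050\,a_{t}^{-1}\approx 3995\) MeV); the sketch below assumes that value.

```python
a_t_inv = 3995.0 / 0.7050           # MeV, inferred from the quoted pole position (~5667 MeV)

partial_widths_at = {               # central values in temporal lattice units, Eq. (21)
    "D Dbar":    0.0040,
    "Ds Dsbar":  0.0049,
    "psi omega": 0.0016,
}
sum_partials = sum(partial_widths_at.values()) * a_t_inv
total_from_pole = 0.0120 * a_t_inv  # 2|Im sqrt(s0)| from the pole position

print(f"sum of partial widths: {sum_partials:.0f} MeV")     # ~60 MeV
print(f"width from pole      : {total_from_pole:.0f} MeV")  # ~68 MeV, quoted as 67(38) MeV
```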
In all cases, the scattering amplitudes contain several additional poles. When a pole has a non-zero imaginary part in \(s\), a complex-conjugate pair of poles \((s_{0},s_{0}^{*})\) must arise on a common Riemann sheet. These are exact complex conjugates and so are easily identifiable. However, one must also consider in which _half_ of the \(s\)-plane a pole lies. Since physical scattering occurs at \(s+i\epsilon\), on the _upper_ half \(s\)-plane of the physical sheet, the nearby part of the closest unphysical sheet (the proximal sheet) is its _lower_ half \(s\)-plane. Other relevant unphysical sheets can be connected to these via their _upper_ half \(s\)-planes, as we shall see below.
Due to the presence of a large number of channels, it is inevitable that additional poles on other Riemann sheets are present in all cases. Many of these can be considered "trivial copies" of the resonance pole identified above, others are far from the region where energy levels are present and so cannot be reliably claimed. Very distant poles, far from any (real-valued) energy levels, typically vary between parameterizations or are sometimes absent entirely, and can thus be considered inessential to describe the physics present.24
Footnote 24: Occasionally distant poles occur on the physical sheet. In the scalar channel, these are typically a GeV or more in their imaginary parts and so are not considered relevant.
One family of poles that can be dismissed as "trivial copies" of the resonance pole are found on the sheets where the sign of \(\mathrm{Im}\,k_{i}\) for a decoupled (or very weakly coupled) channel is flipped. For example, since \(\eta_{c}\eta\) is decoupled, there is no sensitivity to the sign of \(\mathrm{Im}\,k_{\eta_{c}\eta}\). This can be seen from simple Flatté-like amplitudes which have a denominator like
\[D=m^{2}-s-ig_{1}^{2}\rho_{1}-ig_{2}^{2}\rho_{2}\,. \tag{22}\]
It is the zeros of \(D\) that are the poles of the amplitude. If any \(g_{i}\) tends to zero then the dependence on the choice of sheet for channel \(i\) drops out since \(\rho_{i}=2k_{i}/\sqrt{s}\) and a pole will be present for both signs of \(\mathrm{Im}\,k_{i}\). We thus _expect_ there to be trivial copies due to the possible signs of \(\mathrm{Im}\,k_{i}\) for \(\eta_{c}\eta\), \(\eta_{c}\eta^{{}^{\prime}}\), \(\psi\omega\) and \(\psi\phi\), which are typically observed to have zero or small couplings. For the remainder of this subsection, we do not consider these trivial copies, and focus only on the sheets defined by the signs of \(\mathrm{Im}\,k_{i}\) for \(D\bar{D}\), \(D_{s}\bar{D}_{s}\) and \(D^{*}\bar{D}^{*}\).
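This decoupling argument can be checked numerically. Below is a minimal sketch (toy masses and couplings in GeV, not the fitted amplitudes) that locates zeros of the Flatté-like denominator of Eq. 22 on two different sheets using a complex Newton iteration, and shows that as \(g_{2}\to 0\) the poles found for the two signs of \(\mathrm{Im}\,k_{2}\) coincide, i.e. become trivial copies.

```python
import cmath

def k(s, m, sign):
    """Equal-mass channel momentum placed on the sheet where sign(Im k) = sign."""
    kk = cmath.sqrt(s / 4.0 - m * m)
    return kk if (kk.imag >= 0) == (sign > 0) else -kk

def D(s, m0, g1, g2, m1, m2, sheet):
    """Flatte-like denominator of Eq. (22); sheet = (sign Im k1, sign Im k2)."""
    rho1 = 2 * k(s, m1, sheet[0]) / cmath.sqrt(s)
    rho2 = 2 * k(s, m2, sheet[1]) / cmath.sqrt(s)
    return m0 * m0 - s - 1j * g1 * g1 * rho1 - 1j * g2 * g2 * rho2

def find_zero(f, s, steps=60, eps=1e-7):
    """Complex Newton iteration with a numerical derivative."""
    for _ in range(steps):
        df = (f(s + eps) - f(s - eps)) / (2 * eps)
        s = s - f(s) / df
    return s

m0, m1, m2 = 3.99, 1.886, 2.010          # toy "bare" mass and channel masses (GeV)
for g2 in (0.5, 0.1, 1e-4):
    out = []
    for sheet in [(-1, +1), (-1, -1)]:
        s_pole = find_zero(lambda s: D(s, m0, 1.0, g2, m1, m2, sheet), 15.9 - 0.3j)
        E = cmath.sqrt(s_pole)
        out.append(f"Im k2 {'-' if sheet[1] < 0 else '+'}: {E.real:.4f}{E.imag:+.4f}i")
    print(f"g2 = {g2:6.4f}   " + "   ".join(out))
# The two sheets give different poles for sizeable g2 but coincide as g2 -> 0,
# i.e. the second pole becomes a "trivial copy".
```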
Considering the 8 possibilities for \(D\bar{D}\), \(D_{s}\bar{D}_{s}\) and \(D^{*}\bar{D}^{*}\), for any given real scattering energy only a few sheets are relevant. Aside from the physical sheet and the proximal sheet, further "hidden" sheets may also be important. These are sheets that are not continuously connected to the real scattering line away from thresholds. Thus poles on such sheets can only exert their influence close to the relevant threshold where the distance in the complex plane to the physical scattering axis is short.
One relevant sheet for \(0^{++}\) where we find an additional pole is \((D\bar{D}[+],D_{s}\bar{D}_{s}[-],D^{*}\bar{D}^{*}[+])\). This is sometimes referred to as the "4th" sheet, and its upper half \(s\)-plane is continuously connected to the lower half \(s\)-planes of the \((D\bar{D}[-],D_{s}\bar{D}_{s}[+],D^{*}\bar{D}^{*}[+])\) sheet above \(D_{s}\bar{D}_{s}\) threshold, and the \((D\bar{D}[-],D_{s}\bar{D}_{s}[-],D^{*}\bar{D}^{*}[+])\) sheet below \(D_{s}\bar{D}_{s}\) threshold.25 This is conveniently illustrated through a plot of the complex \(k_{D_{s}\bar{D}_{s}}\) plane, which opens out the Riemann surface in \(s\) into a single connected plane for the sheets nearest to \(D_{s}\bar{D}_{s}\) threshold. In Fig. 20 we show the position of this additional pole, along with the pole on the proximal sheet, in both the complex \(k_{D_{s}\bar{D}_{s}}\) and complex \(\sqrt{s}\) planes.26
Footnote 26: In just one parameterization (the \(\Lambda=0.032\) entry in Table 10) this pole is located on the \((D\bar{D}[-],D_{s}\bar{D}_{s}[+],D^{*}\bar{D}^{*}[+])\) sheet instead of the \((D\bar{D}[+],D_{s}\bar{D}_{s}[-],D^{*}\bar{D}^{*}[+])\) sheet.
It is well-known that narrow resonances in coupled-channel systems often produce such a second pole, sometimes called a "mirror" pole.27 Suggestions have been made that further information may be inferred from the arrangement of poles [63; 64], and in that interpretation, the arrangement in Fig. 20 corresponds to an "ordinary" narrow resonance, as opposed to a state present due to strong attraction at threshold.
Footnote 27: In a two-channel Flatté amplitude, the \((-,+)\) (or \((+,-)\)) pole is relevant for resonances coupled to both channels but found below the second threshold, while the \((-,-)\) pole becomes important for narrow resonances coupled to both channels above both thresholds.
Figure 19: The \(J^{PC}=0^{++}\) pole and couplings found on the “proximal” sheet between \(\psi\omega\) and \(D^{*}\bar{D}^{*}\) thresholds. In the left panel grey points indicate the pole position for each successful amplitude parameterization and the black point shows the final quoted pole position and uncertainty as described in the text. In the right panel each histogram bar represents the value of that coupling found in one successful parameterization.
Figure 20: Poles found in the \(J^{PC}=0^{++}\) amplitudes plotted in the complex \(k_{D_{s}\bar{D}_{s}}\) and \(\sqrt{s}\) planes. The poles of the reference parameterization Eq. 17 are identified in black. The resonance pole on the proximal sheet is plotted in blue, and additional poles on other sheets are plotted in red and green. In one parameterization a “mirror” pole is found on the \((-,+,+)\) sheet, in all others it is found on the \((+,-,+)\) sheet.
### Tensor resonance
The amplitudes in Section V.6 describing \(J^{PC}=2^{++}\) are found to consistently feature a pole, shown in Figure 21, located at
\[a_{t}\sqrt{s_{0}}= (0.6990\pm 0.0026)-\tfrac{i}{2}(0.0115\pm 0.0026)\] \[\sqrt{s_{0}}\approx 3961\pm 15-\tfrac{i}{2}(65\pm 15)\,\text{MeV}\,,\]
on the proximal sheet between \(\psi\omega\) and \(D^{*}\bar{D}^{*}\) thresholds, \(\big(\eta_{c}\eta[-],\,D\bar{D}[-],\,D\bar{D}^{*}[-],\,D_{s}\bar{D}_{s}[-],\,\psi\omega[-],\,D^{*}\bar{D}^{*}[+],\,\psi\phi[+]\big)\).
The couplings of this pole are determined to be
\[\begin{aligned}
a_{t}c_{\eta_{c}\eta\{^{1}D_{2}\}} &\approx 0\\
a_{t}c_{D\bar{D}\{^{1}D_{2}\}} &= 0.103(25)\\
a_{t}c_{D\bar{D}^{*}\{^{3}D_{2}\}} &= 0.123(39)\\
a_{t}c_{D_{s}\bar{D}_{s}\{^{1}D_{2}\}} &= 0.032(32)\\
a_{t}c_{\psi\omega\{^{5}S_{2}\}} &\approx 0\\
a_{t}c_{D^{*}\bar{D}^{*}\{^{5}S_{2}\}} &= 0.336(99)\\
a_{t}c_{\psi\phi\{^{5}S_{2}\}} &\approx 0\,.
\end{aligned}\tag{23}\]
As in the scalar case, no significant coupling to closed-charm channels is observed. The relatively large coupling to \(D^{*}\bar{D}^{*}\), with the resonance lying someway below threshold for decay into this channel, explains the rapid turn-on of \(D^{*}\bar{D}^{*}\) at threshold. Note that the peculiar dependence only upon the ratio of \(K\)-matrix couplings \(g_{i}\), and not the absolute scale, discussed in Section V.6, is a property only of the parameterization, and not of the rigorously defined \(t\)-matrix pole couplings \(c_{i}\), which take very similar values regardless of the choice of fixed "\(g\)" coupling.
The corresponding partial widths are,
\[\Gamma(D\bar{D}\{^{1}D_{2}\}) =0.0046(22)\,a_{t}^{-1}\approx 26(12)\,\text{MeV}\] \[\Gamma(D\bar{D}^{*}\{^{3}D_{2}\}) =0.0039(25)\,a_{t}^{-1}\approx 22(14)\,\text{MeV}\] \[\Gamma(D_{s}\bar{D}_{s}\{^{1}D_{2}\}) =0.0003(^{5}_{3})\,a_{t}^{-1}\approx 2^{+3}_{-2}\,\text{MeV}\,, \tag{24}\]
and summing these produces 50(17) MeV, compared with 65(15) MeV obtained from the pole location. The large coupling to the closed \(D^{*}\bar{D}^{*}\) channel is not accounted for in this prescription, which may explain the slight difference.
As was the case in \(0^{++}\), additional poles are present for \(2^{++}\), and they warrant further attention. Given the approximate decoupling observed to closed-charm final states, it is convenient to label sheets considering only \(D\bar{D}\), \(D\bar{D}^{*}\), \(D_{s}\bar{D}_{s}\) and \(D^{*}\bar{D}^{*}\) channels. Additional poles on hidden sheets are present, and are presented in Appendix G. On the _physical sheet_, poles are observed for all parameterizations, and their presence is a concern given that it signals a violation of causality in the amplitude description. These poles are discussed in detail in Appendix G where they are found to be related to the \(D\)-wave barrier factor associated with the \(D\bar{D}^{*}\{^{3}D_{2}\}\) channel, and upon modification of this factor, they disappear without the resonance pole being changed significantly.
Figure 21: The \(J^{PC}=2^{++}\) pole and couplings found on the “proximal” sheet between \(\psi\omega\) and \(D^{*}\bar{D}^{*}\) thresholds for a set of successful amplitude parameterizations. The \(\eta_{c}\eta^{\prime}\) channel is not included in the parameterizations. Couplings to the \(\eta_{c}\eta\), \(\psi\omega\) and \(\psi\phi\) channels are found to be small, but only limited freedom is present in the parameterizations used.
### States in \(J^{PC}=3^{++}\) and \(2^{-+}\)
We determined \(J^{PC}=3^{++}\) amplitudes, primarily to constrain them as "background" waves in our determination of \(2^{++}\). Successful descriptions of the finite-volume spectra include a \(3^{++}\) resonance pole coupled to \(D\bar{D}^{*}\{^{3}\!D_{3}\}\) and \(D^{*}\bar{D}^{*}\{^{5}\!D_{3}\}\). Several caveats apply to this result, as described in section V.3. As shown in Fig. 22, considering multiple parameterizations, a pole is consistently found with
\[a_{t}\sqrt{s_{0}}= (0.7276\pm 0.0025)-\tfrac{i}{2}(0.0098\pm 0.0040)\] \[\sqrt{s_{0}}\approx 4123\pm 14-\tfrac{i}{2}(56\pm 23)\,\mathrm{MeV}\]
and couplings
\[a_{t}c_{D\bar{D}^{*}\{^{3}\!D_{3}\}} =0.148(37)\] \[a_{t}c_{\psi\omega\{^{3}\!D_{3}\}} \approx 0\] \[a_{t}c_{\psi\omega\{^{5}\!D_{3}\}} \approx 0\] \[a_{t}c_{D^{*}\bar{D}^{*}\{^{5}\!D_{3}\}} =0.061(61)\] \[a_{t}c_{D_{s}D_{s}^{*}\{^{5}\!D_{3}\}} \approx 0\] \[a_{t}c_{\psi\phi\{^{5}\!D_{3}\}} \approx 0\,, \tag{25}\]
showing that, again, coupling to closed-charm channels is not significant. The partial widths are
\[\Gamma(D\bar{D}^{*}\{^{3}\!D_{3}\}) =0.0098(50)\,a_{t}^{-1}\approx 55(38)\,\mathrm{MeV}\] \[\Gamma(D^{*}\bar{D}^{*}\{^{5}\!D_{3}\}) =0.0011^{+22}_{-11}\,a_{t}^{-1}\approx 6^{+13}_{-6}\,\mathrm{MeV}\,. \tag{26}\]
In determining \(J^{PC}=2^{-+}\), a stable bound-state pole coupled to \(D\bar{D}^{*}\{^{3}\!P_{2}\}\) was found at \(a_{t}\sqrt{s_{0}}=0.67538(68)\) in the reference parameterization given in Eq. 11. This corresponds to a bound-state \(\eta_{c2}\) pole with \(\sqrt{s_{0}}\approx 3827(4)\) MeV. A coupling \(a_{t}c_{D\bar{D}^{*}\{^{3}\!P_{2}\}}=25(15)i\) was also determined.
### Other possible singularities
The amplitude parameterizations we have used have the advantage of exactly implementing coupled-channel unitarity in the physical \(s\)-channel scattering region where we have constraint from the finite-volume spectrum. They have the cuts implied by \(s\)-channel unitarity, and are flexible enough to describe pole singularities corresponding to resonances, bound states and virtual bound states. What they do not contain is the physics of "left-hand cuts", i.e. the projection into \(s\)-channel partial-waves of scattering processes in the \(t\)- and \(u\)-channels. In many simple cases these cuts appear far from the physical \(s\)-channel region, and are of limited relevance, but in certain circumstances they can enter in a way that may have a significant impact.
The closest such cuts relevant to the current study are due to \(t,u\)-channel pion exchanges, that arise when at least one of the scattering hadrons has nonzero intrinsic spin, leading to the analogue of the "short nucleon cut" [65; 66]. Such cuts open only a few tens of MeV below the physical \(s\)-channel threshold and thus may be of concern. Since the cut will generate an imaginary part in partial-wave amplitudes that is not accounted for in the derivation of the Luscher formalism, dealing with it correctly may require a modification of the finite-volume formalism [67]. A recent example considering the closely related case of doubly-charmed \(I=0\)\(DD^{*}\) scattering can be found in Ref. [68] which discusses the lattice calculation presented in Ref. [69].
We leave the issue of explicitly accounting for "left-hand cuts" as a problem for future studies. Given that no internal inconsistencies have been observed in this calculation, with finite-volume spectra described perfectly well by amplitudes lacking explicit left-hand cut structures, it is possible that this effect is largely negligible. Indeed, if these effects _are_ large, then the issue of these cuts is likely to be of concern in _all_ studies of unstable charmonia.
## VII Interpretation and comparisons
Our key finding in this work is that, for \(m_{\pi}\approx 391\,\mathrm{MeV}\), the \(0^{++}\) and \(2^{++}\) charmonium sectors contain _only_ a single narrow resonance each, lying above the \(D_{s}\bar{D}_{s}\) threshold, but slightly below the \(D^{*}\bar{D}^{*}\) threshold. The scalar resonance has significant couplings to all open-charm decay channels, and the tensor to all open-charm except \(D_{s}\bar{D}_{s}\). Neither resonance has any significant coupling to closed-charm channels. There are also bound states well below threshold corresponding to the \(\chi_{c0}(1P)\) and the \(\chi_{c2}(1P)\). There is no indication of any further states in the energy region considered. In particular, there is no sign of a scalar bound-state lying just below the \(D\bar{D}\) threshold, where no significant attraction is observed. The results also suggest the existence of a narrow \(3^{++}\) resonance and a \(2^{-+}\) bound state.
Throughout the course of this calculation, we considered several \(S\)-wave channels involving a closed-charm meson and a light meson: \(\eta_{c}\eta\{^{1}\!S_{0}\}\), \(\eta_{c}\eta^{\prime}\{^{1}\!S_{0}\}\), \(\psi\omega\{^{1}\!S_{0}\}\), \(\psi\omega\{^{5}\!S_{2}\}\), \(\psi\phi\{^{1}\!S_{0}\}\), \(\psi\phi\{^{5}\!S_{2}\}\), \(\chi_{c1}\eta\{^{3}\!S_{1}\}\), and \(\chi_{c2}\eta\{^{5}\!S_{2}\}\). None were found to have large scattering amplitudes, and
Figure 22: The \(J^{PC}=3^{++}\) pole and couplings found on the “proximal” sheet between \(\psi\phi\) and \(D_{s}^{*}\bar{D}_{s}^{*}\) thresholds for a set of successful amplitude parameterizations.
no near-threshold singularities were identified associated with these channels.
The calculation was performed on three lattice volumes, but only a single lattice spacing, and a single choice of the degenerate light quark, strange quark and charm quark masses, with the light quarks being unphysically heavy. As indicated in Table 4, which shows stable hadron masses, there is evidence that the charm-quark mass, and perhaps the strange-quark mass, may have been tuned to be slightly smaller than their physical values. Any phenomena that are sensitive to the mass difference between the up and down quarks, or QED effects, will not be correctly captured in this calculation. Discretization effects, while likely small for light mesons, can be larger for charmed and charmonium systems. For example, the \(J/\psi\)-\(\eta_{c}\) hyperfine splitting is around 33(1) MeV smaller than observed experimentally, as determined from the values in Table 2 and Ref. [61]. The deliberate removal of \(c\bar{c}\) annihilation likely plays at most a modest role and may contribute to small discrepancies such as the \(\chi_{cJ}(1P)\) mass difference with respect to experiment (and of course to these states being stable in this calculation).
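The rough size of that hyperfine discrepancy can also be read off from the stable masses collected in Table 4 (a back-of-the-envelope check; the quoted 33(1) MeV uses the slightly different values of Table 2):

```python
# M(J/psi) - M(eta_c) hyperfine splitting in MeV, using the Table 4 values.
split_lattice = 3044 - 2965      # this calculation
split_expt    = 3097 - 2984      # experiment
print(split_lattice, split_expt, split_expt - split_lattice)   # 79 113 34
```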
Bearing these caveats in mind, we first summarise the extracted amplitudes and then discuss interpretations, comparing the results to prior lattice QCD calculations, to phenomenological models, and to experimental candidate states.
### State content of amplitudes by \(J^{PC}\)
We now summarize and discuss the spectroscopic content of each \(J^{PC}\) considered in this work. \(J^{PC}=\{1,3\}^{-+}\) amplitudes were also computed and found to be very small in the energy region considered; they are not discussed further. There are also indications of a \(J^{PC}=4^{++}\) state based on Ref. [30] and Fig. 2; however, this would lie at a slightly higher energy than has been considered in this work.
#### v.1.1 \(J^{PC}=0^{++}\)
Lying well below \(D\bar{D}\) threshold, the \(\chi_{c0}(1P)\) state is clearly present and, owing to our deliberate removal of \(c\bar{c}\) annihilation, it is stable. Its presence plays no significant role in the determination of scattering amplitudes at higher energies.
Below the \(D_{s}\bar{D}_{s}\) and \(\psi\omega\) thresholds, _no other poles are found_ in \(0^{++}\), either as bound states or as resonances in \(D\bar{D}\) or \(\eta_{c}\eta\). Distant virtual poles occur well below threshold in some parameterizations, but they have negligible impact on the physical scattering region, and are likely to be artefacts of extrapolating far outside the region of constraint. The small negative energy shifts in \([000]A_{1}^{+}\) relative to non-interacting \(D_{[000]}\bar{D}_{[000]}\) energies are explained in amplitude terms by very mild attraction at threshold, at a level far below that needed for a bound-state to be present.
Around \(D_{s}\bar{D}_{s}\) threshold, a similar but slightly larger negative energy shift is observed, but again description in terms of (coupled-channel) amplitudes indicates insufficient strength to require a nearby pole.28
Footnote 28: In certain extreme cases, where a very limited set of energy levels were used, we were able to produce a virtual bound-state pole. Further details can be found in Appendices 6 and 7.
In summary, our findings suggest no strong features close to \(D\bar{D}\) or \(D_{s}\bar{D}_{s}\) thresholds, with only modest attraction appearing there.
In the energy region above the \(\psi\omega\) threshold near 3900 MeV, more significant departures from the non-interacting energy spectrum are present, which amplitude analysis shows are due to the presence of _a single narrow scalar resonance_. Large couplings to the open \(D\bar{D}\) and \(D_{s}\bar{D}_{s}\) channels are found, along with a large coupling to the kinematically closed \(D^{*}\bar{D}^{*}\) channel. Only upper limits were found for the coupling to the \(\psi\omega\) channel, while no evidence was found for coupling to the \(\eta_{c}\eta\) and \(\eta_{c}\eta^{\prime}\) channels.
#### v.1.2 \(J^{PC}=2^{++}\)
In \(2^{++}\) the \(\chi_{c2}(1P)\) state is well below \(D\bar{D}\) threshold and plays no significant role in the scattering amplitudes at higher energies.
The \(D\)-wave nature of \(D\bar{D}\{^{1}D_{2}\}\) and \(D_{s}\bar{D}_{s}\{^{1}D_{2}\}\) suppresses any near-threshold interaction, with the first significant feature being a peak in the diagonal \(D\bar{D}\{^{1}D_{2}\}\) amplitude followed by a peak roughly 50 MeV higher in energy in the diagonal \(D\bar{D}^{*}\{^{3}D_{2}\}\) amplitude. The off-diagonal \(D\bar{D}\{^{1}D_{2}\}\to D\bar{D}^{*}\{^{3}D_{2}\}\) amplitudes peak roughly in the middle, and these observations likely reflect the different peak-shaping effects of the \(D\)-wave barrier factors for the displaced thresholds.
| Meson mass / MeV | \(\pi\) | \(K\) | \(\eta\) | \(D\) | \(D_{s}\) | \(D^{*}\) | \(\eta_{c}\) | \(J/\psi\) | \(\chi_{c0}\) | \(\chi_{c2}\) |
|---|---|---|---|---|---|---|---|---|---|---|
| this calc. | 391 | 550 | 587 | 1886 | 1951 | 2010 | 2965 | 3044 | 3423(3) | 3519(2) |
| expt. | 140 | 494 | 548 | 1865 | 1969 | 2007 | _2984_ | 3097 | _3415_ | 3556 |

Table 4: Comparing stable meson masses determined on this lattice, with the scale fixed using the physical \(\Omega\)-baryon mass, to their values in experiment (where states with significant decay widths have their masses shown in italics) [61]. (Statistical uncertainties less than 0.5 MeV on lattice masses are not shown.)
As the \(S\)-wave \(D^{*}\bar{D}^{*}\) channel opens, sharp features are observed in all open-charm amplitudes, with the diagonal \(D^{*}\bar{D}^{*}\{^{5}\!S_{2}\}\) amplitude turning on rapidly as was seen for the corresponding wave in the \(0^{++}\) case. Only a very weak coupling to the \(D_{s}\bar{D}_{s}\{^{1}\!D_{2}\}\) channel is observed, but this may reflect, at least in part, the \(D\)-wave barrier suppression, \((k_{D_{s}\bar{D}_{s}}/k_{D\bar{D}})^{2}\sim 0.3\) in the peak region.
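That suppression factor can be reproduced from the stable masses of Table 4, evaluating the channel momenta at the tensor resonance mass (a back-of-the-envelope check, not the analysis code):

```python
from math import sqrt

m_D, m_Ds = 1886.0, 1951.0        # MeV, from Table 4
E_cm = 3961.0                     # tensor resonance mass from the pole position, MeV

def k_cm(E, m):                   # c.m. momentum for an equal-mass two-body channel
    return sqrt(E * E / 4.0 - m * m)

ratio = (k_cm(E_cm, m_Ds) / k_cm(E_cm, m_D)) ** 2
print(f"(k_DsDs / k_DD)^2 = {ratio:.2f}")   # ~0.3
```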
These features are found to be due to _a single narrow resonance_ lying between \(D_{s}\bar{D}_{s}\) threshold and \(D^{*}\bar{D}^{*}\) threshold.
#### v.1.3 \(J^{PC}=3^{++}\)
Our results suggest the existence of an as-yet-unobserved narrow \(J^{PC}=3^{++}\) resonance coupled dominantly to \(D\bar{D}^{*}\{^{3}\!D_{3}\}\), with a possible coupling to \(D^{*}\bar{D}^{*}\{^{5}\!D_{3}\}\) and only small couplings to closed-charm final states.
#### v.1.4 \(J^{PC}=2^{-+}\)
We find a \(J^{PC}=2^{-+}\) bound state \(\eta_{c2}\) around 3830 MeV. In the computed amplitudes its presence is not obviously indicated by any strong scattering behavior above threshold, but it is clearly present as a nearly volume-independent energy level well below threshold.
At the physical light quark mass, it is likely that this state remains below the relevant \(D\bar{D}^{*}\) open-charm threshold, and will only generate a non-zero width through \(c\bar{c}\) annihilation. On these grounds we would expect it to be rather narrow, and it might be observable in radiative transitions.
### Comparisons with Prelovsek et al, Ref. [28]
The most complete previous attempt to study the charmonium scalar and tensor sectors in lattice QCD appears in Ref. [28] where \(D\bar{D}\) and \(D_{s}\bar{D}_{s}\) channels are investigated using light and strange quark masses somewhat lighter than those used in this study (the pion mass is 280 MeV). The lightest channel that can couple, \(\eta_{c}\eta\), is assumed decoupled by fiat and is ignored completely, while \(J/\psi\,\omega\) is investigated but ultimately not included in determinations of scattering amplitudes.
In \(0^{++}\), Ref. [28] claims that _three_ states are required to describe their computed finite-volume spectrum: a stable bound-state lying 4 MeV below \(D\bar{D}\) threshold, an extremely narrow resonance lying less than 1 MeV below \(D_{s}\bar{D}_{s}\) threshold, and a resonance with a width of around 60 MeV lying some way above \(D_{s}\bar{D}_{s}\) threshold, but well below \(D^{*}\bar{D}^{*}\) threshold.
Only limited consideration of the \(2^{++}\) sector is made. A single resonance is claimed, lying some way above \(D_{s}\bar{D}_{s}\) threshold, and only slightly below \(D^{*}\bar{D}^{*}\) threshold, a channel which is not included in the analysis.
The authors compute finite volume spectra in the \([000]\,A_{1}^{+}\), \([100]\,A_{1}\), \([110]\,A_{1}\) and \([100]\,B_{1}\) irreps on two volumes using operator bases that feature single-hadron-like \(c\bar{c}\) operators and \(D\bar{D}\), \(D_{s}\bar{D}_{s}\) meson-meson operators with relevant momenta.29 The lowest energy \(D^{*}\bar{D}^{*}\) operator is included in the rest-frame only. \(J/\psi\,\omega\) operators are included, but the energy levels found to have overlap with them are discarded. No \(\eta_{c}\eta\) operators are included, despite this being nominally the lowest threshold channel in the problem.30
Footnote 29: Unlike in the meson-meson operators used in this paper, Ref. [28] does not make use of optimized single-hadron operator constructions, which may lead to slower relaxation of correlation functions to the relevant energy eigenstates with increasing Euclidean time.
Footnote 30: In our calculation we have observed complete decoupling of the \(\eta_{c}\eta\) from the rest of the scattering problem, and have found that the spectrum outside those levels with overlap onto \(\eta_{c}\eta\) operators remains unchanged if the \(\eta_{c}\eta\) operators are excluded. As such, it may be the case that Ref. [28]’s exclusion of \(\eta_{c}\eta\) operators has not introduced a significant error.
Ref. [28] opts to adjust energies to account for the difference between computed single-hadron energies and those predicted by the relativistic dispersion relation, and these shifts can be of order 10 MeV, reflecting significant discretisation effects warranting further investigation [70]. The authors choose not to associate any systematic error with this process. In contrast, in the current paper, we propagate conservative errors in \(m\) and \(\xi\) coming from the slightly different dispersion relations for different species of single hadron into the \(E_{\rm cm}\) values which go into the Luscher analysis, and we also implement an additional systematic error onto every energy level to reflect the modest observed departures from relativistic dispersion (see Appendix A). Hence, to a certain extent we are placing part of the discretization uncertainty into the amplitude errors, and offering a more conservative estimate of the precision of determination of the scattering process.
The different light and strange quark masses and volumes make a direct comparison of spectra presented in Ref. [28] to those presented in the current paper impossible, but certain key features can be considered. Focussing on the \([000]\,A_{1}^{+}\) spectrum, a difference is immediately apparent, with the energy levels nearest to \(D\bar{D}\) threshold in Ref. [28] being found _significantly below_ threshold, suggesting strong attraction, while in this paper, the corresponding levels lie very close to the threshold.
The large downward shifts of these levels, when analysed using the Luscher approach, lead to the claim of a bound state in this scattering system. Figure 23 shows the \(D\bar{D}\) elastic scattering phase-shift corresponding to low-lying energy levels from Ref. [28], the current paper, and an earlier calculation at two differing light quark masses [27]. The levels below \(D\bar{D}\) threshold in Ref. [28] generate the two red points at negative values of \(k\cot\delta\) requiring a fit curve that crosses \(-\sqrt{-k^{2}}\) below threshold and hence a bound-state. In contrast, the black points from the current calculation indicate weak interaction near threshold and no bound-state.
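The logic of this criterion can be illustrated with a small sketch: below threshold one continues \(k\cot\delta\) in \(k^{2}\) and looks for an intersection with \(-\sqrt{-k^{2}}\). The scattering length and effective range used here are invented for illustration, not fitted to either data set.

```python
import numpy as np

def kcot_delta(k2, a, r):
    """Effective range expansion: k cot(delta) = -1/a + (r/2) k^2."""
    return -1.0 / a + 0.5 * r * k2

def has_near_threshold_bound_state(a, r, k2_min=-0.1):
    """Scan k^2 < 0 (below threshold) for a crossing of k cot(delta)
    with -sqrt(-k^2), the S-wave bound-state condition."""
    k2 = np.linspace(k2_min, 0.0, 2001)
    diff = kcot_delta(k2, a, r) - (-np.sqrt(-k2))
    return bool(np.any(np.sign(diff[:-1]) != np.sign(diff[1:])))

# Purely illustrative parameters (momentum units arbitrary):
print(has_near_threshold_bound_state(a=+5.0, r=1.0))   # strong attraction -> True
print(has_near_threshold_bound_state(a=-0.5, r=1.0))   # weak interaction  -> False
```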
Ref. [28] considers different energy regions separately and determines amplitudes in the coupled-channel region using piecewise-in-energy forms rather than the continuous forms with good analytic properties used in the current paper.31 The pole lying near the \(D\bar{D}\) threshold is not present in the amplitude from which the higher two poles are extracted. A coupled-channel analysis is performed above \(D_{s}\bar{D}_{s}\) threshold, but only a single parameterization is considered that restricts the possible pole content - it can support two poles decoupled from each other, but cannot straightforwardly describe a single resonance coupled to both the \(D\bar{D}\) and \(D_{s}\bar{D}_{s}\) channels.
Footnote 31: In Appendix E. of Ref. [28], a continuous function is shown. However, this is formed by joining the piecewise analyses together using smoothed-step or sigmoid functions. These functions contain essential singularities which make an analytic continuation into the complex energy plane questionable.
The current paper presents a more complete study of the scattering system, considering all possible channels, constrained by an order of magnitude more energy levels, and using a variety of analytically well-behaved amplitudes. We come to a completely different conclusion about the number of poles present, and while the pion mass is different, it would be very surprising if _two_ additional poles move into the studied energy region under a modest change in the light quark mass.
### Interpretation and comparisons to other theoretical work
Our finding of a single relatively narrow resonance in each of \(J^{PC}=0^{++}\) and \(2^{++}\) can be compared to previous model-based predictions of the state content of the energy region around the \(D\bar{D}\) and \(D_{s}\bar{D}_{s}\) thresholds.
Before the observation of the XYZ candidates, a leading picture of the observed states above 3 GeV was in terms of charm-anticharm bound states formed of heavy quarks moving in a static potential. Our results appear to agree with the state counting of this picture, with the single \(\chi_{c0}\) and \(\chi_{c2}\) resonances corresponding to the \(2P\) radial excitations. The large overlap of energy-levels near to the masses of these states with \(c\bar{c}\)-like operators also supports a dominant role for charmonium-like wavefunction components in these states. The fact that our scalar state is slightly heavier than the tensor state32 is in opposition to the prediction of short-distance spin-orbit effects in this picture. This might indicate that the physics of coupling to the open-charm decay channels is stronger than the relativistic corrections to the interquark potential.
Footnote 32: albeit the effect is of limited statistical significance.
Potential models also predict a single nearby \(2^{-+}\) state and a single nearby \(3^{++}\) state, as we found, and our determined masses lie within the spread of predictions made by various model implementations.
In order to include some of the physics of hadron strong decay to pairs of lighter hadrons, potential models are sometimes augmented with an application of first-order perturbation theory in which an operator that produces a quark-antiquark pair is introduced. The form of this operator is an assumption of the approach, and a choice, known as the "\({}^{3}\!P_{0}\) model", which successfully describes some experimental meson decays, has the pair produced with vacuum quantum numbers. The relative strengths of open-charm decays predicted by the \({}^{3}\!P_{0}\) model [71; 72] are not consistently reflected in our extracted pole couplings. The large coupling to \(D^{*}\bar{D}^{*}\) of the tensor resonance relative to the scalar resonance _is_ a feature of the \({}^{3}\!P_{0}\) model, its spin-recoupling factors giving the amplitudes a ratio of 2. However, these factors do not work universally, as the ratio of couplings for \(\chi_{c0}\to D^{*}\bar{D}^{*}\) and \(\chi_{c0}\to D\bar{D}\) is predicted to take value \(1/\sqrt{3}\), which is in poor agreement with our extracted couplings. The smallness of the tensor resonance coupling to \(D_{s}\bar{D}_{s}\) _may_ have an explanation in the \(D\)-wave threshold factor, but a very similar reduction would be expected for \(D\bar{D}^{*}\) decays which is not seen, and the \({}^{3}\!P_{0}\) model does not provide any compensating factor here.

Figure 23: Comparison of the amplitudes extracted in the region near \(D\bar{D}\) threshold from energy levels in this work (with \(m_{\pi}\approx 391\) MeV), and other lattice calculations, Lang et al [27] (\(m_{\pi}\approx 156\) MeV and \(266\) MeV) and Prelovsek et al [28] (\(m_{\pi}\approx 280\) MeV). Presented as \(k\cot\delta\) which has an effective range expansion (upper panel) and \(S\)-wave elastic phase-shift \(\delta\) (lower panel).
The observation of XYZ candidate states spurred theoretical consideration of possible state constructions going beyond just \(c\bar{c}\). In particular, the \(X(3872)\) at \(D\bar{D}^{*}\) threshold and the \(Z_{c}(3900)\) nearby have been interpreted as providing evidence for strong long-distance meson-meson interactions in \(S\)-wave, potentially strong enough to induce binding of molecular-like meson-meson configurations.
Heavy quark spin symmetry applied to the charm quarks suggests similar strong effects in the \(D^{*}\bar{D}^{*}\)\(S\)-wave, and potentially \(0^{++},1^{+-},2^{++}\) partners of the \(1^{++}\)\(X(3872)\)[73; 74]. These states may be bound relative to \(D^{*}\bar{D}^{*}\), but because they lie above \(D\bar{D}\) and \(D\bar{D}^{*}\) thresholds, they may manifest as resonances.33 It is suggested that these molecular states appear _in addition_ to the \(c\bar{c}\) states discussed above (or for the physical eigenstates to be admixtures). The scalar and tensor resonances found in the current calculation do have numerically significant couplings to the kinematically closed \(D^{*}\bar{D}^{*}\) channel, which may imply they have significant \(D^{*}\bar{D}^{*}\) components. However, the state counting suggests that \(D^{*}\bar{D}^{*}\)\(S\)-wave interactions are not strong enough to generate additional states (at a pion mass of 391 MeV).
Footnote 33: The longest-range process of one-pion exchange is not present in elastic \(D\bar{D}\) scattering, and hence a bound-state in \(D\bar{D}\) must be generated by some other process.
An approach to explaining at least some of the XYZs that does not directly connect them to meson-meson thresholds is the suggestion that they contain significant _compact tetraquark_ components. While the dynamics assumed in models to get these states to bind varies [75; 76], inevitably such pictures lead to many states beyond those expected in a \(c\bar{c}\) only picture. Tetraquark states are often proposed to lie within a few tens of MeV of meson-meson thresholds with the same quark content and thus unambiguously demonstrating such components is challenging [77]. The results presented in this paper do not seem to support additional states of tetraquark origin, but a natural criticism would be that the calculation did not include operators resembling compact tetraquark configurations.34 An earlier calculation [50] performed on the smallest volume lattice used here did include a basis of compact tetraquark operators as well as meson-meson operators. This calculation found no difference in the extracted finite volume spectrum when the tetraquark operators were removed, suggesting that tetraquark components may not be important.
Footnote 34: Our meson-meson operators have a spatial structure that is not compact, rather each meson samples the entire volume of the lattice.
### Experimental comparisons
The experimental status of the channels studied in this paper is at present unclear. Peaks are seen in several processes but often \(J^{PC}\) quantum numbers are not known. Nor is it known how peaks in different final states relate to each other.
It is not possible to directly compare the present work to experiment due to the larger light quark mass, the known discretization effects illustrated by the incorrect \(J/\psi-\eta_{c}\) hyperfine splitting, leading to expected differences of a few tens of MeV, and the other systematic uncertainties discussed above. Nevertheless, we can present some discussion assuming plausible extrapolations to the physical light and strange quark masses.
It has been observed in several studies that typically resonance properties vary smoothly with changes in quark mass [45; 58; 52; 60; 78; 79; 80; 81].35 It has proven to be reasonable in many cases to perform extrapolations based upon the idea that the reduced couplings (pole couplings with the angular momentum barrier divided out) are constant with changing quark mass. Predictions have been made using this approach for \(f_{2}\) resonances [26; 42], the \(b_{1}\) resonance [25], \(\rho_{J},\omega_{J}\) resonances [83], and a hybrid \(\pi_{1}\)[26]. Typically these extrapolations assume that we know the physical mass of the resonance from experiment.
Footnote 35: Although there are notable exceptions, for example Ref. [82]
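As an illustration of that logic, the following minimal sketch (invented numbers loosely modelled on the open-charm values above, not an actual extrapolation of our results) divides the barrier factor \(k^{\ell}\) out of a \(D\)-wave pole coupling at the heavier pion mass, holds the reduced coupling fixed, and recomputes the partial width with assumed physical-point masses:

```python
from math import sqrt

def k_cm(E, m1, m2):
    """Two-body c.m. momentum."""
    return sqrt((E*E - (m1 + m2)**2) * (E*E - (m1 - m2)**2)) / (2.0 * E)

def width(c, E, m1, m2):
    """PDG-style partial width: Gamma = |c|^2 * rho / sqrt(s), rho = 2k/sqrt(s)."""
    return abs(c)**2 * 2.0 * k_cm(E, m1, m2) / (E * E)

# Invented example for a D-wave (l = 2) open-charm channel, GeV units throughout.
l = 2
m_heavy, E_heavy, c_heavy = 1.886, 3.961, 0.55   # masses and coupling at the heavier pion mass
m_phys, E_phys = 1.865, 3.930                    # assumed physical-point values (illustrative)

c_reduced = c_heavy / k_cm(E_heavy, m_heavy, m_heavy)**l   # barrier factor divided out
c_phys = c_reduced * k_cm(E_phys, m_phys, m_phys)**l       # coupling rebuilt at the physical point

print(f"Gamma(heavier pion mass) ~ {1000 * width(c_heavy, E_heavy, m_heavy, m_heavy):.0f} MeV")
print(f"Gamma(physical masses)   ~ {1000 * width(c_phys, E_phys, m_phys, m_phys):.0f} MeV")
```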
For light quark resonances decaying to final states featuring a pion, there can be a large increase in phase-space with reduction of light quark mass, and a corresponding rapid growth in the decay width of resonances. However, in the current case, consulting Table 4, we see that even though the light-quark masses are high, because the charm-quark mass is much larger than the light-quark mass, the differences with respect to experiment of the stable hadron masses remain relatively small, and hence we do not expect particularly large changes in the resonance properties.
For the case of the single extracted scalar resonance, we might propose two possible extrapolations: (a) if the resonance mass stays where it is (or decreases slightly), there would be only a modest change in the \(D\bar{D}\) and \(D_{s}\bar{D}_{s}\) phase-spaces, and the state would remain an isolated relatively narrow resonance with decays to \(D\bar{D}\), to \(D_{s}\bar{D}_{s}\) (if this is still an open channel) and possibly to \(J/\psi\,\omega\). (b) If the resonance mass moves up slightly, getting close to or even above the \(D^{*}\bar{D}^{*}\) threshold, the large coupling to that channel in \(S\)-wave _could_ generate a large total width for the state. In either case there will be just a single scalar resonance.
Compared to this result there would appear to be a surfeit of experimental scalar candidate states, as discussed in the introduction. What is not clear is whether the features observed experimentally in differing production processes and final states could actually be due to just a single resonance appearing in the coupled-channel system. Production processes must share the same pole singularities as the scattering \(t\)-matrix, but the real-energy axis lineshape can be sculpted by polynomial energy-dependence from the production factor,36 which, if not accounted for, could lead to slightly differing resonance masses and widths in different processes. It remains to be seen if sufficiently rigorous coupled-channel analysis could resolve the experimental \(\chi_{c0}(3930),\chi_{c0}(3960)\) peaks, the broad \(\chi_{c0}(3860)\) enhancement, and possibly the \(X(3915)\) as ultimately being due to amplitudes featuring only a single scalar resonance pole.
Footnote 36: or even more rapid energy dependence if the Born-term has a nearby singularity.
For the tensor resonance, similar observations can be made. However, the \(D\)-wave nature of the open decay modes means that the state will rapidly become narrow if its mass decreases, while the large \(S\)-wave coupling to \(D^{*}\bar{D}^{*}\) might make it broad should it go up in mass. The results are consistent with there being a single \(\chi_{c2}(3930)\) resonance coupled to \(D\bar{D}\)[5; 6; 84], and the current experimental data is not inconsistent with this having at most a small coupling to \(D_{s}\bar{D}_{s}\)[10].
## VIII Summary
We have presented an investigation of the \(\chi_{c0}\) and \(\chi_{c2}\) channels above \(D\bar{D}\) threshold where resonant effects are observed in experiment. Working in the approximation where charm-quark annihilation is forbidden, for the first time we have been able to consider all of the necessary channels up to \(\psi\phi\) threshold. A summary of our key findings is presented in Figure 24.
Working at \(m_{\pi}\approx 391\) MeV, we find a quite simple picture with a single resonance in both \(J^{PC}=0^{++}\) and \(2^{++}\) strongly coupled to open-charm decay modes. Both resonances are found just below \(D^{*}\bar{D}^{*}\) threshold around 4000 MeV with relatively narrow widths of around 60 MeV, and both have a significant coupling to the kinematically closed \(D^{*}\bar{D}^{*}\) channel in \(S\)-wave. A key difference between the resonances is that the \(D_{s}\bar{D}_{s}\) coupling is very small for the tensor resonance, but for the scalar state it is of approximately equal strength to the coupling to \(D\bar{D}\).
As a by-product of this work, in order to determine "background" partial waves that appear in our lattice QCD calculation, we have found an \(\eta_{c2}\) bound state, and a \(\chi_{c3}\) resonance, both of which have coupling to the \(D\bar{D}^{*}\) channel. Exotic \(J^{PC}=1^{-+}\) and \(3^{-+}\) amplitudes were found to be small below 4100 MeV.
Our results are in disagreement with other theoretical work reporting bound or near-threshold states in \(D\bar{D}\) in \(S\)-wave [85; 86; 87; 88; 9; 15; 76; 89], including a prior lattice QCD calculation [28].
The methods used in this paper may be applied to other sectors featuring scattering of hadrons containing charm quarks. Particularly attractive targets are the near-threshold vector-pseudoscalar enhancements, \(X/\chi_{c1}(3872)\), \(T_{cc}(3875)^{+}\), and \(Z_{c}/T_{\psi 1}^{b}(3900)^{+}\), whose interaction dynamics are likely related to the states observed in the current paper. A more complete calculation in a robust lattice QCD framework of these systems will aid in understanding the inner workings of QCD at these energies.
###### Acknowledgements.
We thank our colleagues within the Hadron Spectrum Collaboration (www.hadspec.org), in particular Raul Briceno, Andrew Jackura and Arkaitz Rodas, and also acknowledge useful discussions with Igor Danilkin, Feng-Kun Guo, Christoph Hanhart, Sasa Prelovsek, Steve Sharpe and Adam Szczepaniak. DJW acknowledges support from a Royal Society University Research Fellowship. DJW & CET acknowledge support from the U.K. Science and Technology Facilities Council (STFC) [grant number ST/T000694/1]. JJD acknowledges support from the U.S. Department of Energy contract DE-SC0018416 at William & Mary, and JJD & RGE from contract DE-AC05-06OR23177, under which Jefferson Science Associates, LLC, manages and operates Jefferson Lab. The software codes Chroma [89], QUDA [90; 91], QUDA-MG [92], QPhiX [93], MG_PROTO [94], QQQPQP [95; 96], and Redstar [97] were used. Some software codes used in this project were developed with support from the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research and Office of Nuclear Physics, Scientific Discovery through Advanced Computing (SciDAC) program; also acknowledged is support from the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. This work used the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk) on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1. Other components were provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1). This work also used the earlier DiRAC Data Analytic system at the University of Cambridge. This equipment was funded by BIS National E-infrastructure capital grant (ST/K001590/1), STFC capital grants ST/H008861/1 and ST/H00887X/1, and STFC DiRAC Operations grant ST/K00333X/1. DiRAC is part of the National E-Infrastructure. This work also used clusters at Jefferson
Laboratory under the USQCD Initiative and the LQCD ARRA project.
Propagators and gauge configurations used in this project were generated using DiRAC facilities, at Jefferson Lab, and on the Wilkes GPU cluster at the University of Cambridge High Performance Computing Service, provided by Dell Inc., NVIDIA and Mellanox, and part funded by STFC with industrial sponsorship from Rolls Royce and Mitsubishi Heavy Industries. Also used was an award of computer time provided by the U.S. Department of Energy INCITE program and supported in part under an ALCC award, and resources at: the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725; the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231; the Texas Advanced Computing Center (TACC) at The University of Texas at Austin; the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant No. ACI-1548562; and part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications.
Figure 24: Results from \(J^{PC}=0^{++}\) (left column) and \(J^{PC}=2^{++}\) (right column). Top panels: scattering \(t\)-matrix elements plotted as \(\rho_{i}\rho_{j}|t_{ij}|^{2}\). Middle: dots show the energy levels used to constrain scattering amplitudes, with further energy levels at lower energies that mostly constrain the \(\eta_{c}\eta\) amplitudes. Bottom panels: resonance pole positions on the “proximal” sheet, the closest unphysical sheet to the real energies at which \(s\)-channel scattering occurs. The most significant partial widths are also indicated, as determined from pole residues using Eq. (19).
## Appendix A Dispersion relations and additional systematic uncertainty
In this appendix we give further details of the dispersion relations, stable hadron masses and the additional systematic uncertainty included in the spectrum. Figure 25 shows the result of determining stable hadron masses from dispersion relation fits to each lattice volume individually.
Figure 26 shows the deviations of the computed energies from the dispersion relation fitted to all three volumes simultaneously. The size of these residuals, particularly for the \(\chi_{cJ}\) states, motivates our addition of a systematic error as described in the main text. Without this additional systematic, the \(\chi^{2}/N_{\rm dof}\) values for descriptions of computed finite-volume spectra using scattering amplitudes in many cases are large. By adding this systematic error, which may reflect discretization effects or some other unaccounted-for systematic, we make the uncertainty on the amplitude descriptions more accurately reflect the uncertainty in the calculation. Additional details in the context of \(DK\) scattering are given in Ref. [45].
In Table 5 we provide details of the dispersion relation fits for stable charmed and charmonium mesons using Eq. 2 that are shown in Fig. 1 and Table 2.
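Eq. 2 itself is not reproduced in this excerpt; as an illustration of such a fit, the sketch below assumes the standard anisotropic-lattice form \((a_{t}E_{\vec{n}})^{2}=(a_{t}m)^{2}+\tfrac{1}{\xi^{2}}\big(\tfrac{2\pi}{L/a_{s}}\big)^{2}|\vec{n}|^{2}\) and determines \(m\) and \(\xi\) from synthetic data generated with invented values.

```python
import numpy as np

def fit_dispersion(n2, atE, L_over_as):
    """Least-squares fit of (a_t E)^2 = (a_t m)^2 + (1/xi^2)(2*pi/(L/a_s))^2 |n|^2,
    returning (a_t m, xi). The fit is linear in (a_t m)^2 and 1/xi^2."""
    x = (2.0 * np.pi / L_over_as) ** 2 * np.asarray(n2, dtype=float)
    y = np.asarray(atE, dtype=float) ** 2
    A = np.column_stack([np.ones_like(x), x])
    (m2, inv_xi2), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sqrt(m2), 1.0 / np.sqrt(inv_xi2)

# Synthetic example: a hadron on an L/a_s = 24 lattice, generated with
# a_t m = 0.333 and xi = 3.45 (both invented), plus a little noise.
rng = np.random.default_rng(1)
n2 = np.array([0, 1, 2, 3, 4, 5, 6])
true_m, true_xi, L = 0.333, 3.45, 24
atE = np.sqrt(true_m**2 + (2 * np.pi / L)**2 * n2 / true_xi**2) + rng.normal(0, 1e-4, n2.size)
print(fit_dispersion(n2, atE, L))   # recovers approximately (0.333, 3.45)
```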
Figure 26: Residuals upon fitting stable hadron energies to the dispersion relation given in Eq. 2 with independent values of \(m\), \(\xi\), for each hadron. Grey bands indicate our selection of modest additional systematic uncertainties added to energy levels for \(J=0,1\) (left) and \(J=2,3\) (right).
Figure 25: Masses of stable hadrons obtained from dispersion relation fits to each lattice volume individually, \(a_{t}m_{L}\), using Eq. 2. Fits include points with momentum \(\left|\vec{n}\right|^{2}\leq 3,4,6\) for \(L/a_{s}=16,20,24\) respectively.
## Appendix B Including the \(\chi_{c0}(1P)\) bound-state in amplitudes
In this appendix we consider the two-channel rest-frame analysis of Section V.1, but additionally include those energy levels lying far below \(\eta_{c}\eta\) threshold which we have identified as being due to the stable \(\chi_{c0}(1P)\) state. The purpose is to show that the amplitude behavior above threshold is unchanged upon this inclusion.
To describe the scattering system including the deep bound-state, a \(K\)-matrix with a pole term and a matrix of constants is used, with a Chew-Mandelstam phase-space subtracted at the \(K\)-matrix pole location. Describing 13 energy levels results in amplitude parameters,
\[a_{t}m = (0.60402\pm 0.00037\pm 0.00004)\] \[a_{t}g_{\eta_{c}\eta} = (0.23\pm 0.09\pm 0.02)\] \[a_{t}g_{D\bar{D}} = (0.23\pm 0.63\pm 0.15)\] \[\gamma_{\eta_{c}\eta\to\eta_{c}\eta} = (0.97\pm 0.53\pm 0.08)\] \[\gamma_{\eta_{c}\eta\to D\bar{D}} = (0.05\pm 1.54\pm 0.31)\] \[\gamma_{D\bar{D}\to D\bar{D}} = (1.11\pm 3.24\pm 0.07) \tag{115}\] \[\chi^{2}/N_{\rm dof} = \tfrac{8.19}{13-6}=1.17\,,\]
and the resulting amplitude is presented in Fig. 27, which is comparable to Fig. 7.
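For orientation, the following sketch evaluates a two-channel \(S\)-wave \(t\)-matrix built from a \(K\)-matrix with a pole term and constants, using the central values of Eq. 115; for brevity a simple \(-i\rho\) phase space stands in for the Chew-Mandelstam function, and the threshold values are illustrative placeholders rather than the lattice ones, so only the qualitative behavior should be compared with Fig. 27.

```python
# Two-channel S-wave t-matrix from K_ij = g_i g_j / (m^2 - s) + gamma_ij,
# with t^{-1} = K^{-1} - i*rho.  A simple phase space replaces the
# Chew-Mandelstam function, and thresholds are placeholders.
# Channel order: (eta_c eta, D Dbar).
import numpy as np

atm = 0.60402
g = np.array([0.23, 0.23])
gam = np.array([[0.97, 0.05],
                [0.05, 1.11]])
thr = np.array([0.62, 0.66])                 # illustrative a_t thresholds

def rho(s, threshold):                       # two-body phase space, equal masses assumed
    kcm = 0.5 * np.sqrt(max(s - threshold**2, 0.0))
    return 2.0 * kcm / np.sqrt(s)

for atE in np.linspace(0.67, 0.70, 4):
    s = atE**2
    K = np.outer(g, g) / (atm**2 - s) + gam
    tmat = np.linalg.inv(np.linalg.inv(K) - 1j * np.diag([rho(s, th) for th in thr]))
    r = np.array([rho(s, th) for th in thr])
    print(atE, np.round(np.outer(r, r) * np.abs(tmat)**2, 3))   # rho_i rho_j |t_ij|^2
```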
We performed a limited exploration of the space of possible parameterization variations, finding amplitudes with \(g_{D\bar{D}}=0\) that described the data as well as the above description. We were not able to find amplitudes with a pole term but with \(g_{\eta_{c}\eta}\) set to zero that described the data well. Including moving-frame energy levels does not change these conclusions.
Figure 27: As Figure 7, but including levels well below \(\eta_{c}\eta\) threshold due to the stable \(\chi_{c0}(1P)\) state. Amplitude as given by Eq. 115.
## Appendix C \(J^{PC}=0^{++}\) amplitude determinations with 3 and 5 channels
In this appendix we investigate scalar amplitudes determined using maximum energies that lie between the two-channel region of sections V.1 & V.2 and the seven-channel region of sections V.2 & V.3. The aim is to inspect the \(D_{s}\bar{D}_{s}\) near-threshold region more closely, and to show that inclusion of the \(D^{*}\bar{D}^{*}\) channel is not essential to extract the resonance pole (although the extra levels and wider energy coverage are helpful).
### \(J^{PC}=0^{++}\) below \(\psi\omega\) threshold
We determine scattering amplitudes in the region where \(\eta_{c}\eta\), \(D\bar{D}\) and \(D_{s}\bar{D}_{s}\) are active, below the \(\psi\omega\) threshold at \(a_{t}E_{\text{cm}}=0.690\). In this case we opt to neglect the \(\eta_{c}\eta^{\prime}\) channel, which is assumed to be decoupled, and exclude those levels dominated by overlap with operators resembling \(\eta_{c}\eta^{\prime}\). One motivation for considering only these levels is that, in the \(S\)-wave amplitude determinations over the larger energy region, the description of this energy region appears to show some tension; see, for example, the left panel of Fig. 17.
We make use of rest-frame irreps (\(L/a_{s}=16,20,24\)) and moving-frame irreps (\(L/a_{s}=20,24\)) with energies up to \(a_{t}E_{\text{cm}}=0.690\). This includes 5 levels dominated by overlap with \(D_{s}\bar{D}_{s}\) operators very close to \(D_{s}\bar{D}_{s}\) threshold from \([000]A_{1}^{+}\) and \([002]A_{1}\). The \([001]A_{1}\) and \([111]A_{1}\) irreps are included but they do not have any levels dominated by \(D_{s}\bar{D}_{s}\) operators in this energy region.
An example amplitude determined from these energies is given by a \(K\)-matrix of constants \(K_{ij}=\gamma_{ij}\). In this case, we subtract the Chew-Mandelstam function at the threshold of each channel. The resulting parameter values are,
\[\gamma_{\eta_{c}\eta\to\eta_{c}\eta} = (1.39\pm 0.58\pm 0.40)\] \[\gamma_{D\bar{D}\to D\bar{D}} = (0.13\pm 0.35\pm 0.22)\] \[\gamma_{D_{s}\bar{D}_{s}\to D_{s}\bar{D}_{s}} = (4.87\pm 3.84\pm 3.28)\] \[\gamma_{\eta_{c}\eta\to D\bar{D}} = (0.57\pm 0.32\pm 0.08)\] \[\gamma_{\eta_{c}\eta\to D_{s}\bar{D}_{s}} = (3.18\pm 1.35\pm 0.96)\] \[\gamma_{D\bar{D}\to D_{s}\bar{D}_{s}} = (-0.87\pm 1.07\pm 0.45)\] \[\chi^{2}/N_{\text{dof}}=\tfrac{53.0}{60-6-8}=1.15\,, \tag{101}\]
where the central value of the \(\gamma_{D_{s}\bar{D}_{s}\to D_{s}\bar{D}_{s}}\) parameter is numerically larger than in the other amplitudes in this work, although with a large uncertainty. The description of the finite-volume energy levels is improved in the region of \(D_{s}\bar{D}_{s}\) threshold, as can be seen in Fig. 28. The amplitudes are plotted in Fig. 29.
With this selection of levels, stronger features can be seen at \(D_{s}\bar{D}_{s}\) threshold than when considering a wider energy range. Amplitudes determined from this limited selection of energies feature poles on unphysical sheets in the region of \(a_{t}\sqrt{s}\approx 0.68\), which may be at complex locations in coupled-channel cases, or on the real axis below \(D_{s}\bar{D}_{s}\) threshold in decoupled cases. In the case presented in Eq. 101, there is a pole on a "hidden" sheet, although within uncertainties the imaginary part is consistent with zero, as summarized in Table 6.
The large uncertainties on the parameters and the plotted amplitudes indicate that the constraint on \(D_{s}\bar{D}_{s}\) is not particularly strong. We have highlighted this because it is the most extreme amplitude behavior near \(D_{s}\bar{D}_{s}\) threshold that the finite-volume spectra can tolerate. However, no poles have been found _close to_ \(D_{s}\bar{D}_{s}\) threshold in this work, in contrast to Ref. [28] which finds one within a few MeV. The energy shifts seen in the current calculation are smaller and the corresponding interactions are weak. Adding further levels at higher energies pushes the solution towards a smaller \(D_{s}\bar{D}_{s}\) amplitude just above threshold; the pole then moves away and the coupling weakens. The poles in Table 6 thus appear to be a property of describing energy levels in too small a region, and are not a good overall reflection of the findings in this work.
### Determining \(J^{PC}=0^{++}\) up to \(D^{*}\bar{D}^{*}\) threshold only
The \(0^{++}\) resonance identified in sections V.2 & V.3 lies below \(D^{*}\bar{D}^{*}\) threshold and so, in principle, we should be able to extract it without considering the \(D^{*}\bar{D}^{*}\) and \(\psi\phi\) channels, although the strong overlap with \(D^{*}\bar{D}^{*}\) operators of several states below threshold (as seen in Fig. 3) suggests there is merit in including the kinematically closed \(D^{*}\bar{D}^{*}\) channel.
In this section we consider a coupled \(\eta_{c}\eta-D\bar{D}-\eta_{c}\eta^{\prime}-D_{s}\bar{D}_{s}-\psi\omega\) scattering system below \(D^{*}\bar{D}^{*}\) threshold, taking the opportunity to test some of the properties of the amplitudes at lower energies. As with the main analysis, we begin by using only at-rest energies before making use of both rest-frame and moving-frame energies together.
#### C.2.1 Rest-frame energies only
Working below \(a_{t}E_{\sf cm}=0.709\), there are 30 levels over the three volumes (including three \(\eta_{c}\eta^{\prime}\) levels) from \([000]A_{1}^{+}\). Following the methods outlined in the main text, we make use of a \(K\)-matrix with a pole and constants, \(K_{ij}=\frac{g_{i}g_{j}}{m^{2}-s}+\gamma_{ij}\), and a Chew-Mandelstam phase space subtracted at the \(K\)-matrix pole. One representative result is,
Figure 28: As Figure 7, except solutions from the three-channel amplitudes in Eq. 17 determined from \([000]\,A_{1}^{+}\) and moving frames \(A_{1}\) irreps are shown as the orange curves.
Figure 29: Three-channel \(J^{PC}=0^{++}\) scattering amplitudes in Eq. 17 determined from \([000]A_{1}^{+}\) and moving frame \(A_{1}\) irreps.
\[a_{t}m = (0.70398\pm 0.00128\pm 0.00035)\] \[a_{t}g_{D\bar{D}} = (0.0867\pm 0.0155\pm 0.00143)\] \[a_{t}g_{D_{s}\bar{D}_{s}} = (0.1281\pm 0.0222\pm 0.00393)\] \[\gamma_{\eta_{c}\eta\to\eta_{c}\eta} = (0.07\pm 0.098\pm 0.048)\] \[\gamma_{D\bar{D}\to\psi\omega} = (2.04\pm 0.620\pm 0.208)\] \[\gamma_{\eta_{c}\eta^{\prime}\to\eta_{c}\eta^{\prime}} = (3.17\pm 1.49\pm 0.66)\] \[\chi^{2}/N_{\rm dof}=\tfrac{41.9}{30-6}=1.75\,,\]
where, as previously, all parameters not listed are set equal to zero. This amplitude is plotted in the top panel of Fig. 30.
The limited set of energy levels provides relatively little constraint for the \(\psi\omega\) channel, with only three levels dominated by \(\psi\omega\) operators being present very close to \(\psi\omega\) threshold. However, in this case, the \(\psi\omega\) amplitude has an interesting shape: it produces a dip around the pole position and rises relatively sharply from threshold, resulting in a shallow peak slightly _below_ the resonance mass. The uncertainties are large and this feature does not survive the addition of more energies, but it does show how small-strength features in very weakly coupled channels do not always resemble the dominant resonance.
Adding more free parameters can produce a lower \(\chi^{2}/N_{\rm dof}\), in particular when allowing non-zero \(\gamma_{D\bar{D}\to D\bar{D}}\) and \(\gamma_{D_{s}\bar{D}_{s}\to D_{s}\bar{D}_{s}}\). However, these also result in large uncertainties on the determined amplitudes. Freedom in the \(\gamma_{D\bar{D}\to\psi\omega}\) parameter can be interchanged with \(\gamma_{\psi\omega\to\psi\omega}\) or \(g_{\psi\omega}\) with little effect on the amplitudes within uncertainties, and only small changes in the \(\chi^{2}/N_{\rm dof}\).
#### C.2.2 Including moving-frame energies
Using rest-frame energies up to \(a_{t}E_{\sf cm}=0.709\) together with the same selection of moving-frame energies up to \(a_{t}E_{\sf cm}=0.690\) that was previously used in section C.1 results in 75 levels to constrain the \(S\)-wave amplitudes. We fix the \(J^{PC}=2^{++}\) result to the reference amplitude given in Eq. 15, and for simplicity we fix all other contributing partial waves to zero. We also fix the strength of the decoupled \(\eta_{c}\eta^{\prime}\) channel via \(\gamma_{\eta_{c}\eta^{\prime}\to\eta_{c}\eta^{\prime}}=3\), consistent with the determination in Eqs. 17 and 2. An example amplitude determined from these levels is,
\[a_{t}m = (0.7037\pm 0.0013\pm 0.0007)\] \[a_{t}g_{D\bar{D}} = (0.081\pm 0.016\pm 0.001)\] \[a_{t}g_{D_{s}\bar{D}_{s}} = (0.133\pm 0.022\pm 0.009)\] \[\gamma_{\eta_{c}\eta\to\eta_{c}\eta} = (0.02\pm 0.08\pm 0.05)\] \[\gamma_{D\bar{D}\to D\bar{D}} = (-0.53\pm 0.21\pm 0.11)\] \[\gamma_{D_{s}\bar{D}_{s}\to D_{s}\bar{D}_{s}} = (0.16\pm 1.02\pm 0.34)\] \[\gamma_{\psi\omega\to\psi\omega} = (1.02\pm 2.01\pm 0.26)\] \[\gamma_{\eta_{c}\eta^{\prime}\to\eta_{c}\eta^{\prime}} = 3\text{ (fixed)}\] \[\chi^{2}/N_{\rm dof}=\tfrac{89.9}{75-7-11}=1.58\,.\]
In this case, the second uncertainties are obtained by varying the scattering hadron masses to their upper values in Table 2 (\(m_{i}\to m_{i}+\delta m_{i}\)), but only half a sigma in the negative direction (\(m_{i}\to m_{i}-\delta m_{i}/2\)). When varying the masses to their lower values (\(m_{i}\to m_{i}-\delta m_{i}\)), we find qualitatively slightly different solutions which are discussed below. This amplitude is plotted in the central panel of Fig. 30.
In this reduced energy region below \(D^{*}\bar{D}^{*}\) threshold, when performing amplitude determinations at the extreme lower end of the ranges of the hadron masses (\(m_{i}\to m_{i}-\delta m_{i}\) in Table 2), we observe a second class of solution for the \(D_{s}\bar{D}_{s}\) amplitude, shown in the bottom panel of Fig. 30. We see that this solution has a significantly stronger turn-on of the \(D_{s}\bar{D}_{s}\to D_{s}\bar{D}_{s}\) amplitude at threshold, and an atypical behavior at higher energies where a strong peak appears in \(D\bar{D}\to D\bar{D}\).
The parameters corresponding to this solution using the rather extreme mass values \(m_{i}-\delta m_{i}\) are,
Figure 30: A selection of scattering amplitudes determined below \(a_{t}E_{\rm{cm}}=0.709\) (just below \(D^{*}\bar{D}^{*}\) threshold) in \(J^{PC}=0^{++}\). Peaks are seen across all parameterizations in \(D\bar{D}\) that correspond to a resonance pole consistent with that given in the main text, strongly coupled to \(D\bar{D}\) and \(D_{s}\bar{D}_{s}\). The top panel shows amplitudes obtained from 30 rest frame energies. The middle panel shows amplitudes obtained from 75 energies including moving frames. The bottom panel highlights a slightly different amplitude found with hadron masses set to \(1\sigma\) below their best-fit values.
\[a_{t}m = (0.7069\pm 0.0010)\] \[a_{t}g_{D\bar{D}} = (0.102\pm 0.019)\] \[a_{t}g_{D_{s}\bar{D}_{s}} = (0.065\pm 0.057)\] \[\gamma_{\eta_{c}\eta\to\eta_{c}\eta} = (-0.04\pm 0.10)\] \[\gamma_{D\bar{D}\to D\bar{D}} = (-0.39\pm 0.23)\] \[\gamma_{D_{s}\bar{D}_{s}\to D_{s}\bar{D}_{s}} = (\phantom{-}2.87\pm 1.02)\] \[\gamma_{\psi\omega\to\psi\omega} = (-0.22\pm 1.19)\] \[\gamma_{\eta_{c}\eta^{\prime}\to\eta_{c}\eta^{\prime}} = 3\ (\mathrm{fixed})\] \[\chi^{2}/N_{\mathrm{dof}}=\tfrac{91.9}{75-7-11}=1.61\,,\]
and this amplitude has a virtual bound-state pole strongly coupled to the \(D_{s}\bar{D}_{s}\) channel at \(a_{t}\sqrt{s_{0}}=0.6656\pm 0.0157\) with \(a_{t}|c_{D_{s}\bar{D}_{s}}|=0.413\pm 0.056\). We highlight this result in part because it somewhat resembles the solution found in Ref. [28], but only qualitatively, as the pole is roughly 125 MeV below \(D_{s}\bar{D}_{s}\) threshold while Ref. [28] reports a pole within a few MeV of the \(D_{s}\bar{D}_{s}\) threshold.
We consider this solution to be disfavored as it only appears when an extreme choice is made for all scattering hadron masses, but even here a scalar resonance appears, in reasonable agreement with our other determinations, with a pole position \(a_{t}\sqrt{s_{0}}=0.7068(9)\pm\tfrac{i}{2}0.0059(26)\) on the proximal sheet. Large couplings to \(D\bar{D}\) and \(D_{s}\bar{D}_{s}\) and small couplings to the \(\eta_{c}\eta\) and \(\psi\omega\) channels are found.
It should also be noted that while the central values look quite different, the uncertainty bands are largely consistent across sections C.2.1 and C.2.2, and also with the main results given above, as can be seen in Fig. 30.
## Appendix D A \(J^{PC}=2^{++}\) toy-model study
The purpose of this appendix is to show that the couplings to the kinematically closed \(D^{*}\bar{D}^{*}\) channel can be determined reliably from the volume-dependence of energy levels. We illustrate the sensitivity using a simplified two-channel system with a resonance coupling to an open \(D\)-wave channel (which we call \(D\bar{D}\)) and a closed \(S\)-wave channel (\(D^{*}\bar{D}^{*}\)). For an approximately fixed resonance mass and width, we show that the spectra are sensitive to the value of the coupling to \(D^{*}\bar{D}^{*}\). This toy-model also contains a further example of the \(K\)-matrix pole "coupling-ratio phenomenon" described in Section V.5.
We utilize a two-channel version of the Flatté amplitude, Eq. 14, where the lower channel is \(D\bar{D}\big\{^{1}\!D_{2}\big\}\) (with \(D\)-wave suppression close to threshold) and the higher channel is \(D^{*}\bar{D}^{*}\big\{^{5}\!S_{2}\big\}\) (an \(S\)-wave channel that can open rapidly). The pole parameter is set to \(a_{t}m=0.7\); however, we subtract the real correction from \(-ig_{D^{*}\bar{D}^{*}}^{2}\rho_{D^{*}\bar{D}^{*}}(m^{2})\), as described after Eq. 14, so that the pole parameter \(m\) retains its meaning. We initially fix \(a_{t}g_{D^{*}\bar{D}^{*}}=1.6\), which is a representative value giving \(D^{*}\bar{D}^{*}\big\{^{5}\!S_{2}\big\}\) amplitudes similar to those found throughout this work. The \(g_{D\bar{D}}\) coupling is then chosen so that a \(t\)-matrix pole width \(a_{t}\Gamma=-2\,\text{Im}\,a_{t}\sqrt{s_{0}}=0.0116\) is obtained (corresponding to \(\approx 66\) MeV). We then reduce \(g_{D^{*}\bar{D}^{*}}\) and adjust \(g_{D\bar{D}}\) in order to maintain an approximately constant \(t\)-matrix pole position \(a_{t}\sqrt{s_{0}}=0.697\pm\frac{i}{2}0.0116\).
In Fig. 31 we show the amplitudes and the finite volume spectra resulting from this procedure. Below \(a_{t}E_{\text{cm}}=0.7\), on the lower half of the resonance hump, we see almost no variation as these parameters are changed. Similarly, the finite volume spectra in this energy region show little sensitivity.
On the other hand _above_\(a_{t}E_{\text{cm}}=0.7\), significant differences are observed. An avoided level crossing occurs in every irrep around the position of the lowest \(D^{*}\bar{D}^{*}\) non-interacting energy with departures proportional to the size of \(g_{D^{*}\bar{D}^{*}}\). These deviations are significantly larger than typical uncertainties in the computed spectrum and so it is plausible that the coupling to this channel can be well-determined.
We also observe the beginning of the onset of the coupling-ratio phenomenon in this toy model given that there is only a relatively small difference between the amplitudes with \(a_{t}g_{D^{*}\bar{D}^{*}}=0.8\) and \(a_{t}g_{D^{*}\bar{D}^{*}}=1.6\), which correspond to very similar coupling ratios, \(g_{D^{*}\bar{D}^{*}}/g_{D\bar{D}}=1.05,0.096\) respectively.
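A minimal numerical sketch of such a toy amplitude is given below; the channel masses are illustrative placeholders rather than the lattice values, and the \(D\bar{D}\) coupling is simply tuned by hand to give a narrow peak, so the sketch reproduces the qualitative behavior of Fig. 31 rather than its precise numbers.

```python
# Two-channel toy amplitude: channel 1 = "DDbar" {1D2} (l = 2, open),
# channel 2 = "D*Dbar*" {5S2} (l = 0, closed), with denominator
#   D(s) = m^2 - s - i g1^2 (2 k1)^5 / sqrt(s) - i g2^2 (2 k2) / sqrt(s),
# and the real part of the closed-channel term subtracted at s = m^2.
# Masses are placeholders and g1 is tuned by hand to give a narrow peak.
import numpy as np

atm, g1, g2 = 0.700, 2.9, 1.6
M1, M2 = 0.33, 0.36                      # placeholder a_t masses (thresholds 0.66, 0.72)

def k(s, M):                             # break-up momentum, +i|k| below threshold
    return np.sqrt(complex(s / 4.0 - M * M))

def closed_term(s):
    return -1j * g2**2 * (2 * k(s, M2)) / np.sqrt(s)

def denom(s):
    open_term = -1j * g1**2 * (2 * k(s, M1))**5 / np.sqrt(s)
    # subtract the real correction at s = m^2 so that m keeps its meaning
    return atm**2 - s + open_term + closed_term(s) - closed_term(atm**2).real

for atE in np.linspace(0.68, 0.71, 7):
    s = atE**2
    t11 = g1**2 * (2 * k(s, M1))**4 / denom(s)     # (2k1)^2 barrier factor on each leg
    rho1 = (2 * k(s, M1) / np.sqrt(s)).real
    print(f"a_t E = {atE:.3f}   rho1^2 |t11|^2 = {rho1**2 * abs(t11)**2:.3f}")
```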
Figure 31: Toy model amplitudes consisting of \(D\bar{D}\{^{1}\!D_{2}\}\) and \(D^{*}\bar{D}^{*}\{^{5}\!S_{2}\}\) as described in the text. Upper panel: Scattering amplitudes plotted as \(\rho_{i}\rho_{j}|t_{ij}|^{2}\) for each of the parameters given in the top right. Circles on the horizontal axis indicate threshold energies. Lower panel: The solid green and blue curves show non-interacting energies corresponding to \(D\bar{D}\) and \(D^{*}\bar{D}^{*}\) respectively. Degeneracies are not indicated since only a single level is expected for each hadron-hadron pair when only a single combination is present in the Lüscher determinant condition Eq. 5. The dashed horizontal lines indicate kinematic thresholds. The grey band and horizontal solid grey line indicate the mass and width of the resonance pole. The dotted horizontal line indicates both the position of the mass parameter \(a_{t}m\), and the centre of the peak seen in the \(D\bar{D}\{^{1}\!D_{2}\}\) amplitudes. The red and grey spectrum of curves show the finite volume spectra obtained from the Lüscher determinant condition Eq. 5 corresponding to the same-colored amplitudes from the upper panel.
## Appendix E Summary of amplitude parameterizations
In this appendix we summarize the parameterizations of the \(J^{PC}=3^{++}\), \(2^{++}\) and \(0^{++}\) amplitudes.
### Summary of \(J^{PC}=3^{++}\) parameterizations
In Table 7 we summarize 10 \(J^{PC}=3^{++}\) parameterization variations, describing 16 energy levels in the \([000]A_{2}^{+}\) irrep as introduced in section V.3. All eigenvalues of the data correlation matrix were above the cutoff of \(\Lambda=0.02\). The amplitudes are shown in Fig. 10.
### Summary of \(J^{PC}=2^{++}\) parameterizations
In Table 8 we summarize 24 \(J^{PC}=2^{++}\) parameterization variations obtained including energies from moving frame irreps. Many more parameterization forms were attempted but only those with good \(\chi^{2}\) minima are retained. Four additional parameterizations were obtained with the same free parameters as the reference amplitude, two with the scattering hadron masses set to their central values \(\pm 1\sigma\), as given in Table 2, and two with the anisotropy set to its upper and lower values \(3.444\pm 0.006\) as determined from the pion. By default we apply a cutoff on data correlation eigenvalues of \(\Lambda=0.02\) as mentioned in the text, with further details given in Appendix F.
\begin{table}
\begin{tabular}{c c|c|c} amplitude parameters & \(\chi^{2}/N_{\rm dof}\) & other details \\ \hline \(m\), \(g_{D\bar{D}}\), \(g_{D^{*}}\), \(g_{D_{s}D_{s}}\), \(g_{D^{*}D^{*}}\), \(\gamma_{\eta_{c}\eta\to\eta_{c}\eta}\), \(\gamma_{\psi\omega\to\psi\omega}\), \(\gamma_{\psi\phi\to\psi\phi}\) & \\ & & \(\frac{65.3}{86-7-23}=1.17\) & \\ \(\&\)\(g_{D\bar{D}}\) (freed) & & \(\frac{65.1}{86-7-23}=1.16\) & \(g_{D\bar{D}^{*}}=-40\) (fixed) \\ \(\&\)\(\tilde{\chi}\) & \(\gamma_{D\bar{D}\to D_{s}D_{s}}\) & & \(\frac{62.8}{86-823}=1.14\) & reference amp Eq. 15 \\ \(\&\)\(\tilde{\chi}\) & \(\gamma_{D\bar{D}\to D\bar{D}^{*}\to D\bar{D}^{*}}\), \(\gamma_{D_{s}D_{s}\to D_{s}D_{s}}\), \(\gamma_{D^{*}D^{*}\to D^{*}}\) & \(\frac{64.0}{86-11-23}=1.23\) \\ \(\&\)\(\tilde{\chi}\) & \(\gamma_{D\bar{D}\to D_{s}D_{s}}\), \(\gamma_{D\bar{D}\to D\bar{D}^{*}}\), \(\gamma_{D\bar{D}^{*}\to D_{s}D_{s}}\), \(\gamma_{D_{s}D_{s}\to D_{s}D_{s}}\) & \(\frac{56.9}{86-11-23}=1.09\) \\ \(\&\)\(\tilde{\chi}\) & \(\gamma_{D\bar{D}^{*}\to D\bar{D}^{*}}\), \(\gamma_{D^{*}\bar{D}^{*}\to D^{*}\bar{D}^{*}}\) & \(\frac{65.3}{86-10-23}=1.23\) \\ \(\&\)\(\tilde{\chi}\) & \(\gamma_{D\bar{D}^{*}\to D\bar{D}^{*}}\), \(\gamma_{D^{*}\bar{D}^{*}\to D^{*}\bar{D}^{*}}\) & \(\frac{60.0}{86-823}=1.09\) \\ \(\&\)\(\tilde{\chi}\) & \(\gamma_{D^{*}D^{*}\to D^{*}D^{*}}\) & & \(\frac{65.3}{86-823}=1.19\) \\ \(\&\)\(\tilde{\chi}\) & \(\gamma_{D\bar{D}^{*}\to\psi\omega}\) & & \(\frac{65.2}{86-7-23}=1.16\) & \(\gamma_{\psi\omega\to\psi\omega}=0\) (fixed) \\ \hline & & & \(\frac{99.1}{86-7-12}=1.48\) & \(\Lambda=0.01\) \\ & & & \(\frac{74.6}{86-7-19}=1.24\) & \(\Lambda=0.016\) \\ & & & \(\frac{54.4}{86-7-28}=1.07\) & \(\Lambda=0.024\) \\ & & & \(\frac{45.3}{86-7-35}=1.03\) & \(\Lambda=0.032\) \\ & & & \(\frac{36.8}{86-7-40}=0.94\) & \(\Lambda=0.040\) \\ \(\&\)\(\tilde{\chi}\) & \(\gamma_{D\bar{D}\to D_{s}D_{s}}\) & & \(\frac{63.3}{86-8}=0.81\) & uncorrelated \\ \hline & & & & simple phase space \\ \(\&\)\(\tilde{\chi}\) & \(\gamma_{\psi\omega\{^{3}D_{2}\}\to\psi\omega\{^{3}D_{2}\}}\) & & \(\frac{66.9}{86-823}=1.22\) & \\ \(\&\)\(\tilde{\chi}\) & \(\gamma_{D\bar{D}\to D_{s}D_{s}}+\gamma_{D_{s}D_{s}\to D^{*}D^{*}}\) & & \(\frac{70.9}{86-923}=1.31\) & \\ \(\&\)\(\tilde{\chi}\) & \(\gamma_{D\bar{D}\to D_{s}D_{s}}+\gamma_{D_{s}D_{s}\to D^{*}D^{*}}\) & & \(\frac{68.1}{86-923}=1.26\) & \(g_{D\bar{D}}=20\) \\ \(\&\)\(\tilde{\chi}\) & \(\gamma_{D\bar{D}\to D_{s}D_{s}}+\gamma_{D\bar{D}^{*}\to D\bar{D}^{*}}+\gamma_{D^{*}D ^{*}D^{*}}\) & & \(\frac{69.4}{86-10-23}=1.31\) & \\ \hline & & & & simple phase space \\ & & & \(\frac{106.6}{86-7-12}=1.59\) & \(\Lambda=0.010\) \\ & & & \(\frac{78.8}{86-7-19}=1.31\) & \(\Lambda=0.016\) \\ & & & \(\frac{69.5}{86-7-23}=1.24\) & \(\Lambda=0.020\) \\ & & & \(\frac{57.0}{86-7-28}=1.12\) & \(\Lambda=0.024\) \\ \end{tabular}
\end{table}
Table 8: \(J^{PC}=2^{++}\) parameterization variations. Parameters not listed are fixed to zero. A Chew-Mandelstam phase space with a \(K\)-matrix pole subtraction point is used unless otherwise stated. A cutoff on data correlation eigenvalues of \(\Lambda=0.02\) is used unless otherwise stated. We fix \(g_{D\bar{D}}=10\cdot a_{t}\) unless indicated otherwise.
### Summary of \(J^{PC}=0^{++}\) parameterizations
#### E.3.1 Coupled \(\eta_{c}\eta\) and \(D\bar{D}\) scattering below \(\eta_{c}\eta^{\prime}\) and \(D_{s}\bar{D}_{s}\) thresholds
This section gives further details on the amplitude variations used in Section V.7, where \(K\)-matrices are determined using energies from \([000]A_{1}^{+}\), \([001]A_{1}\), \([111]A_{1}\) and \([002]A_{1}\), resulting in 43 levels. In two of the fits a reduced selection of energies is used, removing \([002]A_{1}\) levels and resulting in 31 levels. A data correlation eigenvalue cutoff of \(\Lambda=0.02\) is used, resulting in 5 resets for the 43-level selection, and 3 resets for the 31-level selection. Using two or three constant parameter terms \(\gamma_{ij}\) in the \(K\)-matrix results in 10 parameterizations in total, as summarized in Table 9.
| \(\gamma_{\eta_{c}\eta\to\eta_{c}\eta}\) | \(\gamma_{\eta_{c}\eta\to D\bar{D}}\) | \(\gamma_{D\bar{D}\to D\bar{D}}\) | \(\chi^{2}/N_{\rm dof}\) |
| --- | --- | --- | --- |
| **with Chew-Mandelstam phase space** | | | |
| 0.37(16) | -0.65(18) | 0.06(49) | \(\tfrac{28.4}{31-3-3}=1.14\) (\(*\)) |
| 0.37(15) | -0.64(16) | 0.15(34) | \(\tfrac{40.5}{43-3-5}=1.16\) |
| – | -0.45(14) | 0.14(32) | \(\tfrac{47.6}{43-2-5}=1.32\) |
| 0.40(12) | – | -0.39(24) | \(\tfrac{48.6}{43-2-5}=1.35\) |
| 0.37(14) | -0.61(15) | – | \(\tfrac{40.8}{43-2-5}=1.13\) |
| **with simple phase space** | | | |
| 0.36(11) | -0.51(15) | -0.02(40) | \(\tfrac{45.0}{31-3-3}=1.80\) (\(*\)) |
| 0.36(14) | -0.63(15) | 0.14(31) | \(\tfrac{40.4}{43-3-5}=1.15\) |
| – | -0.46(14) | 0.13(31) | \(\tfrac{47.4}{43-2-5}=1.32\) |
| 0.40(12) | – | -0.39(25) | \(\tfrac{48.5}{43-2-5}=1.35\) |
| 0.36(14) | -0.61(15) | – | \(\tfrac{40.6}{43-2-5}=1.13\) |
Table 9: Parameterization variations for two-channel \(\eta_{c}\eta-D\bar{D}\) \(S\)-wave scattering amplitudes including moving frame energies. The first row in each block, indicated by (\(*\)), uses only 31 levels, excluding levels from \([002]A_{1}\). ‘–’ indicates that a parameter is fixed to zero. The number of degrees of freedom is taken to be \(N_{\rm dof}=N_{\rm levels}-N_{\rm pars.}-N_{\rm reset}\) using a data-correlation eigenvalue cutoff of \(\Lambda=0.02\) as discussed in Appendix F.
#### E.3.2 Coupled-channel scattering up to \(\psi\phi\) threshold at rest and \(a_{t}E_{\text{cm}}=0.69\) in moving frames
In Table 10 we summarize \(J^{PC}=0^{++}\) parameterization variations working up to \(\psi\phi\) threshold while including moving-frame information. One example with parameter values and correlations is given in Eq. 17. These amplitudes are plotted in Fig. 18, and are used when determining \(t\)-matrix poles in Section VI.1.
\begin{table}
\begin{tabular}{l|c|c} amplitude parameters & \(\chi^{2}/N_{\text{dof}}\) & other details \\ \hline \(m\), \(g_{DD}\), \(g_{D_{s}D_{s}}\), \(g_{D^{*}D^{*}}\), \(\gamma_{n_{\eta}\eta\to\eta_{c}\eta}\), \(\gamma_{\psi\phi\to\psi\phi}\) & \\ & \(\&\,g_{\phi\omega}\), \(\gamma_{DD\to D_{s}D_{s}}\), \(\gamma_{\eta_{c}\eta^{\prime}\to\eta_{c}\eta^{\prime}}\), \(\gamma_{\psi\omega}{}_{\{}^{5}D_{4}\}\) & \(\frac{91.0}{50-10-16}=1.42\) & reference amp \\ & & \(\frac{91.6}{90-7-16}=1.37\) & \\ & \(\&\,g_{\phi\omega}\), \(\gamma_{DD\to D_{s}D_{s}}\) & \(\frac{91.5}{90-8-16}=1.39\) & \\ & \(\&\,g_{\phi\omega}\), \(\gamma_{DD\to D_{s}D_{s}}\), \(\gamma_{\psi\omega}{}_{\{}^{5}D_{4}\}\) & \(\frac{91.2}{90-916}=1.40\) & \\ & \(\&\,\gamma_{DD\to DD}\), \(\gamma_{DD\to D_{s}D_{s}}\), \(g_{\psi\omega}\) & \(\frac{87.1}{90-916}=1.34\) & \\ & \(\&\,\gamma_{DD\to DD}\), \(\gamma_{DD\to D_{s}D_{s}}\), \(\gamma_{\psi\omega\to\psi\omega}\), \(\gamma_{\psi\omega\to D^{*}D^{*}}\) & \(\frac{94.7}{50-10-16}=1.48\) & \\ & \(\&\,\gamma_{DD\to DD}\), \(\gamma_{DD\to D_{s}D_{s}}\), \(\gamma_{\psi\omega\to\psi\omega}\) & \(\frac{94.7}{50-9-16}=1.46\) & \\ & \(\&\,\gamma_{D_{s}D_{s}\to D_{s}D_{s}}\), \(\gamma_{\psi\omega\to\psi\omega}\) & \(\frac{93.6}{90-916}=1.44\) & \\ & \(\&\,\gamma_{D^{*}D^{*}\to D^{*}}\), \(\gamma_{\psi\omega\to\psi\omega}\) & \(\frac{92.8}{90-916}=1.43\) & \\ & \(\&\,\gamma_{DD\to D_{s}D_{s}}\), \(\gamma_{\psi\omega\to\psi\omega}\), \(\gamma_{\eta_{c}\eta^{\prime}\to\eta_{c}\eta^{\prime}}\) & \(\frac{95.0}{90-916}=1.46\) & \\ & \(\&\,\gamma_{\psi\omega\to\psi\omega}\) & \(\frac{92.9}{50-7-16}=1.39\) & simple phase space \\ & \(\&\,\gamma_{\psi\omega\to\psi\omega}\), \(\gamma_{DD\to D_{s}D_{s}}\) & \(\frac{90.6}{90-8-16}=1.37\) & simple phase space \\ \hline & \(\,\,\gamma_{DD\to D_{s}D_{s}}\), \(g_{\psi\omega}\) & \(\frac{143.0}{50-8-6}=1.88\) & \(\Lambda=0.01\)\((*)\) \\ & \(\,\,\gamma_{DD\to D_{s}D_{s}}\), \(g_{\phi\omega}\) & \(\frac{59.4}{90-8-35}=1.26\) & \(\Lambda=0.04\)\((*)\) \\ \hline & \(\,\,\gamma_{DD\to D_{s}D_{s}}\), \(\gamma_{\psi\omega\to\psi\omega}\) & \(\frac{143.4}{90-8-6}=1.89\) & \(\Lambda=0.01\)\((*)\) \\ & \(\,\,\gamma_{DD\to D_{s}D_{s}}\), \(\gamma_{\psi\omega\to\omega}\) & \(\frac{115.3}{50-8-11}=1.62\) & \(\Lambda=0.016\) \\ & \(\,\,\gamma_{DD\to D_{s}D_{s}}\), \(\gamma_{\psi\omega\to\psi\omega}\) & \(\frac{82.2}{50-5-20}=1.33\) & \(\Lambda=0.024\) \\ & \(\,\,\gamma_{DD\to D_{s}D_{s}}\), \(\gamma_{\psi\omega\to\psi\omega}\) & \(\frac{69.8}{90-8-28}=1.29\) & \(\Lambda=0.032\) \\ & \(\,\,\gamma_{DD\to D_{s}D_{s}}\), \(\gamma_{\psi\omega\to\psi\omega}\) & \(\frac{59.4}{50-8-35}=1.26\) & \(\Lambda=0.04\)\((*)\) \\ & \(\,\,\gamma_{DD\to D_{s}D_{s}}\), \(\gamma_{\psi\omega\to\psi\omega}\) & \(\frac{71.0}{50-8}=0.87\) & uncorrelated \\ \end{tabular}
\end{table}
Table 10: \(J^{PC}=0^{++}\) parameterization variations. Parameters not listed are fixed to zero. A Chew-Mandelstam phase space with a \(K\)-matrix pole subtraction point is used unless otherwise stated. A data-correlation eigenvalue cutoff of \(\Lambda=0.02\) is used unless otherwise stated. If \(\gamma_{\psi\omega\{^{5}D_{4}\}\to\psi\omega\{^{5}D_{4}\}}\) is not listed, it is fixed to 300. If \(\gamma_{\eta_{c}\eta^{\prime}\to\eta_{c}\eta^{\prime}}\) is not listed, it is fixed to 3. All meson-meson channels are \({}^{1}S_{0}\) unless otherwise stated. Amplitudes marked \((*)\) are provided for comparison with Table 8, and they are not included on the plots or used in the analysis.
## Appendix F Data covariance eigenvalue cutoff
Relatively large data correlations between the energy levels on each lattice volume are found in this work. For small selections of energy levels, such as those obtained using only rest-frame energies, this does not present a problem, but for larger selections of energy levels, such as those using moving frames, inverting the data covariance for use in a correlated \(\chi^{2}\) produces an object of questionable validity. Given the use of ensembles of typically \(\sim 500\) gauge configurations, one should not expect to be able to reliably determine all components of data covariance matrices of increasingly high rank. This issue is most relevant for the amplitude determinations in Sections V.8 and V.9.
In Fig. 32, we show the eigenvalues \(\lambda_{i}\) of the data correlation matrices, normalized to the largest eigenvalue \(\lambda_{1}\), for the two largest sets of spectra relevant to scalar and tensor scattering, including moving-frame energies. A steep dropoff in value is observed for the smallest eigenvalues, and we infer that this is associated with these modes being poorly determined. We choose to place a cut on the allowed values when performing fits, removing the eigenvectors associated with the cut eigenvalues from the matrix inverse. Our default choice is to retain only those modes with \(\Lambda=\lambda/\lambda_{1}>0.02\). We have explored a range of values of this cut between \(\Lambda=0.01\) and \(0.04\), and have reported the modest sensitivity to this choice in earlier appendices.
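As an illustration of this procedure, the sketch below diagonalizes a synthetic correlation matrix with strong neighbouring-level correlations, discards the modes with \(\lambda/\lambda_{1}<\Lambda\), and builds the corresponding pseudo-inverse; the matrix and the level count are invented for illustration only.

```python
# Eigenvalue cutoff for inverting a data correlation matrix: modes with
# lambda/lambda_1 < Lambda are discarded ("reset") and the inverse is built
# from the retained modes only.  The matrix here is a synthetic stand-in.
import numpy as np

N = 40                                                   # number of energy levels (illustrative)
corr = 0.9 ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))

lam, vec = np.linalg.eigh(corr)
lam, vec = lam[::-1], vec[:, ::-1]                       # decreasing order
Lambda = 0.02
keep = lam / lam[0] > Lambda
corr_inv = (vec[:, keep] / lam[keep]) @ vec[:, keep].T   # pseudo-inverse from kept modes

print("modes kept:", int(keep.sum()), " resets:", int(N - keep.sum()))
# in a fit, N_dof is then taken as N_levels - N_params - (number of resets)
```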
We have also explored other approaches, such as artificially setting the correlations to zero, "shrinkage" which interpolates between fully correlated and uncorrelated [98, 99], and using "eigenvalue limits" rather than hard cutoffs [55]. The outcomes are broadly similar as can be inferred from Figs. 14 & 18, which include amplitudes determined using several different values of \(\Lambda\) and one where the correlations are set to zero, as shown in Tables 10 and 8 respectively. Provided the smallest eigenmodes or the most extreme correlations are tamed using one of these methods, the results are in good agreement. We consider \(\Lambda=0.02\) to be a conservative choice.
Figure 32: The eigenvalues, \(\lambda_{i}\), of the data correlation matrix normalized to the largest eigenvalue \(\lambda_{1}\), ordered in decreasing magnitude. We observe a steep falloff above \(i\approx 30\) where the directions in the eigenspace are unlikely to be reliably determined. The number of resets on the plot indicates the number of modes that are discarded on the \(20^{3}+24^{3}\) volumes, with cuts of \(\Lambda=0.04\) (light grey), \(0.02\) (dark grey) and \(0.01\) (black). The “\(0^{++}\)” and “\(2^{++}\)” refer to the correlation matrix used in the largest amplitude determinations of these \(J^{PC}\) from Sections V.8 and V.9 respectively.
## Appendix G Additional scattering amplitude poles in \(J^{PC}=2^{++}\)
The \(J^{PC}=2^{++}\) amplitudes determined in Section V.6 feature a single narrow resonance pole that is systematically present across many parameterizations, but in addition other poles can be present which vary in location and which do not have obvious interpretations. We explore these in this Appendix. In particular we investigate the origin of the closest of these additional poles using simplified elastic and two-channel systems that capture the main features of the amplitudes used in this work. We explore the dependence of these poles on the \(g_{D\bar{D}^{*}}\) parameter and propose an alternative parameterization where the additional poles do not arise. Ultimately we find that the narrow resonance pole on the proximal \((D\bar{D}[-],D\bar{D}^{*}[-],D_{s}\bar{D}_{s}[-],D^{*}\bar{D}^{*}[+])\) sheet is the only nearby pole singularity necessary to describe the finite-volume spectra.
Figure 33 shows the \(t\)-matrix poles found for a range of parameterizations, where the nearby pole on the proximal sheet (in red) is observed to show very little variation over parameterization. In section VI.1 we discuss "mirror" poles in the context of the scalar amplitudes, and these poles on "hidden" sheets are to be expected with a large number of Riemann sheets and several decoupled hadron-hadron channels in each case. Many of these poles can be ignored due to their distance on the Riemann surface from physical scattering. In Figure 33 the green and blue points show such "mirror" poles, and we observe that they show a greater scatter over parameterization variation than the pole on the proximal sheet.
Figure 33 also shows, in grey, poles found _on the physical sheet_. Such poles indicate a breakdown of causality, but depending upon how close they are to real scattering energies, the pathology may not be of any practical relevance. The origin of these poles can be traced back to the presence of the \(k_{i}^{-\ell_{i}}\) barrier factor in Eq. 6 for \(D\)-wave channels. Such barriers are necessary to promote the expected behavior of amplitudes at threshold 37, but unless some other part of the amplitude suppresses their effect at higher-energies, they can give rise to unwanted energy dependence.
Footnote 37: Which matches the behavior of the Lüscher Zeta functions at threshold.
The amplitudes presented in Section V.5 feature a large contribution to the denominator of the \(t\)-matrix from the \(D\bar{D}^{*}\) channel,
\[\sim g_{D\bar{D}^{*}\{^{3}D_{2}\}}^{2}(2k_{D\bar{D}^{*}})^{4}\rho_{D\bar{D}^{*}}\,,\]
where values of \(g_{D\bar{D}^{*}\{^{3}D_{2}\}}\) are found between \(-30\)\(a_{t}\) and \(-40\)\(a_{t}\). The dominance of this term over others in the denominator offers an explanation of the presence of physical sheet poles. A simple way to see this is by plotting the positions of the poles of the amplitudes in the complex-\(k_{D\bar{D}^{*}}\) plane, as is done in Fig. 34. Using the reference parameterization in Eq. 15, the positions of the complex-conjugate pair (in \(s\)) of poles due to the resonance are plotted in blue. The physical sheet poles are shown in red, and a virtual bound state pole that also arises is shown in green.
Figure 33: Poles of the \(J^{PC}=2^{++}\) amplitudes plotted in the complex \(k_{D^{*}D^{*}}\) and \(\sqrt{s}\) planes. The resonance pole on the \((-,-,-,+)\) sheet is shown in red. A second nearby pole observed on the \((+,+,+,-)\) sheet is shown in green. Several other more distant poles are also present as described in the text. In particular, there are virtual bound state poles on several sheets below \(D\bar{D}\) threshold. For each pole the reference parameterization is highlighted in black.
These poles can be compared to those present in a simple toy amplitude featuring a single elastic \(D\bar{D}^{*}\{^{3}\!D_{2}\}\) amplitude constructed from just a \(K\)-matrix pole, \(K=g_{D\bar{D}^{*}}^{2}/(m_{0}^{2}-s)\). In this case, the \(t\)-matrix has a denominator \(D=m_{0}^{2}-s-ig_{D\bar{D}^{*}}^{2}(2k_{D\bar{D}^{*}})^{5}/\sqrt{s}\), and with \(g_{D\bar{D}^{*}\{^{3}\!D_{2}\}}=-40\,a_{t}\) and \(a_{t}m_{0}=0.71\), numerically the final term dominates the behavior. In this case the denominator equals zero at five points shown in orange in Fig. 34, which lie very close to the roots of \(10^{-6}-i(a_{t}k_{D\bar{D}^{*}})^{5}\), shown in grey. This "roots of unity"-like phenomenon is unavoidable with a large coupling and the \(D\)-wave threshold factor in the denominator.38
Footnote 38: It is straightforward to observe similar solutions for \(\ell>0\) in a very simple amplitude such as a scattering length approximation, \(k^{2\ell+1}\cot\delta_{\ell}=1/a_{\ell}\).
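The five approximate root locations referred to above can be reproduced directly; the short sketch below finds the roots of \(10^{-6}-i(a_{t}k)^{5}=0\) and confirms that they lie on the circle \(|a_{t}k|=10^{-6/5}\).

```python
# The five "roots of unity"-like solutions of 1e-6 - i*(a_t k)^5 = 0:
# they lie on the circle |a_t k| = 10^(-6/5), with phases spaced by 2*pi/5.
import numpy as np

coeffs = [-1j, 0, 0, 0, 0, 1e-6]          # -i k^5 + 1e-6, in decreasing powers of k
for r in sorted(np.roots(coeffs), key=np.angle):
    print(f"a_t k = {r.real:+.6f} {r.imag:+.6f} i   |a_t k| = {abs(r):.6f}")
print("expected modulus:", 10 ** (-6 / 5))
```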
In simple coupled \(D\bar{D}^{*}\{^{3}\!D_{2}\}-D^{*}\bar{D}^{*}\{^{5}\!S_{2}\}\) systems, an additional term \(-ig_{D^{*}D^{*}}^{2}(2k_{D^{*}\bar{D}^{*}})/\sqrt{s}\) arises in the denominator. Adding this results in a very close agreement with the solutions obtained from the amplitudes determined from lattice QCD energies, as shown by the pale blue-green points in Fig. 34. Fig. 34 shows only sheets where \(\operatorname{Im}k_{D^{*}\bar{D}^{*}}>0\), but poles are also present on sheets with \(\operatorname{Im}k_{D^{*}\bar{D}^{*}}<0\). In this highly simplified two-channel system, an additional group of five poles arises that are approximately the complex-conjugates in \(k_{D\bar{D}^{*}}\) of those in Fig. 34.
The sensitivity of the additional poles to the value of \(g_{D\bar{D}^{*}}\) can be explored. By default we have fixed \(g_{D\bar{D}}=10\,a_{t}\), but owing to the coupling-ratio phenomenon, there is very little sensitivity to this choice. We may now consider in addition fixing \(g_{D\bar{D}^{*}}\) to a range of values, and redetermine the remaining parameters by \(\chi^{2}\) minimization for each choice. For simplicity we do this using only the energies from the \([000]\,E^{+}\) and \([000]\,T_{2}^{+}\) irreps. In Fig. 35, the result
Figure 34: The “roots of unity”-like phenomena present in the \(2^{++}\) amplitudes. The grey circles show the solutions of \(10^{-6}-i(a_{t}k_{D\bar{D}^{*}})^{5}=0\), an arbitrary but simple choice whose zeros approximate the positions of the observed poles, and the large dashed circle shows \(|a_{t}k_{D\bar{D}^{*}}|=10^{-6/5}\). The orange circles are the solutions of \(m_{0}^{2}-s-ig_{D\bar{D}^{*}}^{2}(2k_{D\bar{D}^{*}})^{5}/\sqrt{s}=0\). The pale blue-green circles are the solutions of \(m_{0}^{2}-s-ig_{D\bar{D}^{*}}^{2}(2k_{D\bar{D}^{*}})^{5}/\sqrt{s}-ig_{D^{*}\bar{D}^{*}}^{2}(2k_{D^{*}\bar{D}^{*}})/\sqrt{s}=0\), which closely mimic the observed behavior of the amplitudes determined above. The points with error bars are the relevant poles of the reference parameterization. The open circles on axes are hadron-hadron thresholds.
of this procedure is shown. Four clusters of poles are plotted: the resonance pole on the proximal sheet \((-,-,-,+)\) is shown below the real \(\sqrt{s}\) axis, and three clusters of spurious poles on the \((+,+,+,\pm)\) sheets are plotted above the axis. The closest of these on the \((+,+,+,+)\) sheet is the equivalent of the physical sheet pole shown in red in Fig. 34, and the nearby pole on the \((+,+,+,-)\) sheet is the "mirror" obtained by switching to the unphysical \(D^{*}\bar{D}^{*}\) sheet. These closest two poles move away from the constrained energy region as \(g_{D\bar{D}^{*}}\) is reduced. The resonance pole position on the proximal sheet is very well determined and relatively insensitive to the precise value of \(g_{D\bar{D}^{*}}\). The \(\chi^{2}\) is also shown and has a smooth dependence on \(g_{D\bar{D}^{*}}\). We see that the energy levels clearly do favor a large value of \(g_{D\bar{D}^{*}}\) and the proximity of the physical sheet pole can be associated with this.
Since the narrow resonance we are interested in lies some way above \(D\bar{D}^{*}\) threshold, we might anticipate that its properties would not be overly sensitive to the barrier behavior at threshold. This can be explored by crudely replacing the \(k_{D\bar{D}^{*}}^{5}\) factor with a lower power in the parameterization. This introduces a mismatch with the behavior of the Lüscher Zeta functions, but in practice this only has a significant effect for energies very close to or more severely _below_ threshold. By restricting to consideration of the \([000]\,E^{+}\) and \([000]\,T_{2}^{+}\) spectra, which have no energy levels at or below threshold that have significant overlap onto \(D\bar{D}^{*}\)-like operators, we anticipate that we do not introduce a serious error into the analysis.
Using a similar parameterization to the reference parameterization determined from \([000]\,E^{+}\) and \([000]\,T_{2}^{+}\) energies, as given in Eq. 13, we artificially modify the \(\ell_{i}\) for \(D\bar{D}^{*}\{{}^{3}\!D_{2}\}\) terms in Eq. 6 to take value 1 rather than 2. This leads to a term in the denominator with only three powers of momentum rather than five. Describing 47 energy levels results in an amplitude,
Figure 35: Left: Pole positions in \(\sqrt{s}\) from the \(2^{++}\) amplitudes as a function of \(g_{D\bar{D}^{*}}\) colored according to the values shown in the right figure. Four clusters of poles are shown. The tight cluster below the real axis is the resonance pole on the proximal sheet. The other poles are strongly dependent on the value of \(g_{D\bar{D}^{*}}\). Right: the \(\chi^{2}\) value computed from the \([000]\,E^{+}\) and \([000]\,T_{2}^{+}\) irreps at the same values of \(g_{D\bar{D}^{*}}\).
\[a_{t}m = (0.7037\pm 0.0011\pm 0.0001)\] \[g_{D\bar{D}^{*}\{^{3}D_{2}\}} = (-4.39\pm 0.70\pm 0.17)\] \[g_{D_{s}\bar{D}_{s}\{^{1}D_{2}\}} = (-0.32\pm 3.49\pm 0.97)\cdot a_{t}\] \[g_{D^{*}\bar{D}^{*}\{^{5}S_{2}\}} = (1.74\pm 0.22\pm 0.13)\cdot a_{t}^{-1}\] \[g_{\psi\omega\{^{5}S_{2}\}} = (0.00\pm 0.22\pm 0.06)\cdot a_{t}^{-1}\] \[\gamma_{\eta_{c}\eta\{^{1}D_{2}\}\to\eta_{c}\eta\{^{1}D_{2}\}} = (22.0\pm 23.9\pm 7.83)\cdot a_{t}^{4}\] \[\gamma_{D\bar{D}\{^{1}D_{2}\}\to D_{s}\bar{D}_{s}\{^{1}D_{2}\}} = (163\pm 189\pm 44)\cdot a_{t}^{4}\] \[\gamma_{\psi\omega\{^{5}S_{2}\}\to\psi\omega\{^{5}S_{2}\}} = (-0.88\pm 0.45\pm 0.05)\] \[\gamma_{\psi\omega\{^{3}D_{2}\}\to\psi\omega\{^{3}D_{2}\}} = (561\pm 513\pm 132)\cdot a_{t}^{4}\] \[\gamma_{\psi\phi\{^{5}S_{2}\}\to\psi\phi\{^{5}S_{2}\}} = (1.33\pm 0.78\pm 0.04)\] \[g_{D\bar{D}\{^{1}D_{2}\}} = 10\cdot a_{t}\ \mbox{(fixed)}\] \[\chi^{2}/N_{\rm dof}=\frac{49.1}{47-10}=1.33\,. \tag{116}\]
This amplitude and its resonance pole position are shown in Fig. 36, alongside those of other \(2^{++}\) amplitudes given in Eq. 13 and Eq. 15.
The nearby resonance pole on the proximal sheet appears at \(a_{t}\sqrt{s_{0}}=(0.7016\pm 0.0013)-\frac{i}{2}(0.013\pm 0.003)\), which, as anticipated, is essentially the same location as when the correct \(D\)-wave barrier behavior was present for \(D\bar{D}^{*}\). On the other hand, physical sheet poles are found in completely different locations for this amplitude, with the nearest being at \(a_{t}\sqrt{s_{0}}=(0.651\pm 0.010)+\frac{i}{2}(0.169\pm 0.052)\), which is very far from physical scattering.
In summary we conclude that the physical sheet poles found in the amplitudes presented in the main text are an artifact of the \(D\)-wave barrier factors for the \(D\bar{D}^{*}\) channel, while the narrow resonance pole, which is the dominant feature of the \(2^{++}\) amplitude, is a robust result.
Figure 36: The amplitude and resonance pole position from the amplitude in Eq. (116) using the modified threshold factors compared with other amplitudes obtained in this work. In the left and upper panels we show the diagonal \(D\bar{D}\), \(D\bar{D}^{\star}\) and \(D^{\star}\bar{D}^{\star}\) amplitudes compared with the amplitudes given in Eq. (13) using only rest-frame energies (dotted curves) and Eq. (15) including also moving frame energies (pink curves). In the lower right panel, a comparison of the resonance pole positions is shown, including the value obtained considering all parameterization variations in Table 8.
2309.15409 | The Sierpiński Domination Number | Let $G$ and $H$ be graphs and let $f \colon V(G)\rightarrow V(H)$ be a
function. The Sierpi\'{n}ski product of $G$ and $H$ with respect to $f$,
denoted by $G \otimes _f H$, is defined as the graph on the vertex set
$V(G)\times V(H)$, consisting of $|V(G)|$ copies of $H$; for every edge $gg'$
of $G$ there is an edge between copies $gH$ and $g'H$ of $H$ associated with
the vertices $g$ and $g'$ of $G$, respectively, of the form
$(g,f(g'))(g',f(g))$. In this paper, we define the Sierpi\'{n}ski domination
number as the minimum of $\gamma(G\otimes _f H)$ over all functions $f \colon
V(G)\rightarrow V(H)$. The upper Sierpi\'{n}ski domination number is defined
analogously as the corresponding maximum. After establishing general upper and
lower bounds, we determine the upper Sierpi\'{n}ski domination number of the
Sierpi\'{n}ski product of two cycles, and determine the lower Sierpi\'{n}ski
domination number of the Sierpi\'{n}ski product of two cycles in half of the
cases and in the other half cases restrict it to two values. | Michael A. Henning, Sandi Klavžar, Elżbieta Kleszcz, Monika Pilśniak | 2023-09-27T05:21:18Z | http://arxiv.org/abs/2309.15409v1 | # The Sierpinski Domination Number
###### Abstract
Let \(G\) and \(H\) be graphs and let \(f\colon V(G)\to V(H)\) be a function. The Sierpinski product of \(G\) and \(H\) with respect to \(f\), denoted by \(G\otimes_{f}H\), is defined as the graph on the vertex set \(V(G)\times V(H)\), consisting of \(|V(G)|\) copies of \(H\); for every edge \(gg^{\prime}\) of \(G\) there is an edge between copies \(gH\) and \(g^{\prime}H\) of \(H\) associated with the vertices \(g\) and \(g^{\prime}\) of \(G\), respectively, of the form \((g,f(g^{\prime}))(g^{\prime},f(g))\). In this paper, we define the Sierpinski domination number as the minimum of \(\gamma(G\otimes_{f}H)\) over all functions \(f\colon V(G)\to V(H)\). The upper Sierpinski domination number is defined analogously as the corresponding maximum. After establishing general upper and lower bounds, we determine the upper Sierpinski domination number of the Sierpinski product of two cycles, and determine the lower Sierpinski domination number of the Sierpinski product of two cycles in half of the cases and in the other half cases restrict it to two values.
**Keywords:** Sierpinski graph; Sierpinski product; domination number; Sierpinski domination number
**AMS subject classification:** 05C69, 05C76
## 1 Introduction
Sierpinski graphs represent a very interesting and widely studied family of graphs. They were introduced in 1997 in the paper [15], where the primary motivation for their introduction was the intrinsic link to the Tower of Hanoi problem; for the latter problem see the book [12]. Intensive research of Sierpinski graphs led to the review article [11], in which the state of the art up to 2017 is summarized and a unified approach to Sierpinski-type graph families is also proposed. Later research on Sierpinski graphs includes [2, 3, 6, 19, 23].
Sierpinski graphs have a fractal structure, the basic graphs of which are complete graphs. In 2011, Gravier, Kovse, and Parreau [7] introduced a generalization in such a way that any graph can act as a fundamental graph, and called the resulting graphs generalized Sierpinski graphs. We refer to the papers [1, 4, 5, 13, 14, 16, 17, 20, 21, 22, 24] for investigations of generalized Sierpinski graphs in the last few years.
An interesting generalization of Sierpinski graphs in the other direction has recently been proposed by Kovic, Pisanski, Zemljic, and Zitnik in [18]. Namely, in the spirit of classical graph products, where the vertex set of a product graph is the Cartesian product of the vertex sets of the factors, they introduced the Sierpinski product of graphs as follows. Let \(G\) and \(H\) be graphs and let \(f\colon V(G)\to V(H)\) be an arbitrary function. The _Sierpinski product of graphs \(G\) and \(H\) with respect to \(f\)_, denoted by \(G\otimes_{f}H\), is defined as the graph on the vertex set \(V(G)\times V(H)\) with edges of two types:
* _type-\(1\) edge_: \((g,h)(g,h^{\prime})\) is an edge of \(G\otimes_{f}H\) for every vertex \(g\in V(G)\) and every edge \(hh^{\prime}\in E(H)\),
* _type-\(2\) edge_: \((g,f(g^{\prime}))(g^{\prime},f(g))\) is an edge of \(G\otimes_{f}H\) for every edge \(gg^{\prime}\in E(G)\).
We observe that the edges of type-\(1\) induce \(n(G)=|V(G)|\) copies of the graph \(H\) in the Sierpinski product \(G\otimes_{f}H\). For each vertex \(g\in V(G)\), we let \(gH\) be the copy of \(H\) corresponding to the vertex \(g\). A type-\(2\) edge joins vertices from different copies of \(H\) in \(G\otimes_{f}H\), and is called a _connecting edge_ of \(G\otimes_{f}H\). A vertex incident with a connecting edge is called a _connecting vertex_. We observe that two different copies of \(H\) in \(G\otimes_{f}H\) are joined by at most one edge. A copy of the graph \(H\) corresponding to a vertex of the graph \(G\) in the Sierpinski product \(G\otimes_{f}H\) is called an _\(H\)-layer_.
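As an illustration of this construction, the following sketch builds \(G\otimes_{f}H\) with the networkx library; the helper name `sierpinski_product` and the dictionary representation of \(f\) are our own choices for the example.

```python
# Sketch of the Sierpinski product construction (type-1 and type-2 edges).
import networkx as nx

def sierpinski_product(G, H, f):
    """Return the Sierpinski product of G and H, with f given as a dict V(G) -> V(H)."""
    P = nx.Graph()
    P.add_nodes_from((g, h) for g in G for h in H)
    # type-1 edges: a copy of H inside every layer gH
    P.add_edges_from(((g, h), (g, hp)) for g in G for h, hp in H.edges())
    # type-2 (connecting) edges: one edge for every edge gg' of G
    P.add_edges_from(((g, f[gp]), (gp, f[g])) for g, gp in G.edges())
    return P

# example: C_4 and C_3 with the constant function f = 0
G, H = nx.cycle_graph(4), nx.cycle_graph(3)
f = {g: 0 for g in G}
P = sierpinski_product(G, H, f)
print(P.number_of_nodes(), P.number_of_edges())   # 12 vertices and 12 + 4 edges
```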
Let \(G\) and \(H\) be graphs and let \(H^{G}\) be the family of functions from \(V(G)\) to \(V(H)\). We introduce new types of domination: the _Sierpinski domination number_, denoted by \(\gamma_{S}(G,H)\), is defined as the minimum over all functions \(f\in H^{G}\) of the domination number of the Sierpinski product with respect to \(f\), and the _upper Sierpinski domination number_, denoted by \(\Gamma_{S}(G,H)\), is defined as the corresponding maximum. That is,
\[\gamma_{\mathrm{S}}(G,H)\coloneqq\min_{f\in H^{G}}\{\gamma(G\otimes_{f}H)\}\]
and
\[\Gamma_{\mathrm{S}}(G,H)\coloneqq\max_{f\in H^{G}}\{\gamma(G\otimes_{f}H)\}\,.\]
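For very small graphs both quantities can be computed by brute force, as in the following sketch, which reuses the `sierpinski_product` helper from the previous code block and enumerates all functions \(f\in H^{G}\) together with all candidate dominating sets.

```python
# Brute force for tiny graphs: gamma(G x_f H) over all functions f, giving the
# Sierpinski domination number and its upper variant.
# Requires the sierpinski_product helper from the sketch above.
import itertools
import networkx as nx

def domination_number(X):
    nodes = list(X)
    for size in range(1, len(nodes) + 1):
        for S in itertools.combinations(nodes, size):
            if set().union(*(set(X[v]) | {v} for v in S)) == set(nodes):
                return size
    return len(nodes)

def sierpinski_domination_numbers(G, H):
    values = []
    for choice in itertools.product(H, repeat=G.number_of_nodes()):
        f = dict(zip(G, choice))
        values.append(domination_number(sierpinski_product(G, H, f)))
    return min(values), max(values)          # (gamma_S, Gamma_S)

# for C_3 and C_3 both values equal 3: one vertex per layer is forced
print(sierpinski_domination_numbers(nx.cycle_graph(3), nx.cycle_graph(3)))
```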
In this paper, we initiate the study of Sierpinski domination in graphs. In Section 1.1 we present the graph theory notation and terminology we follow. In Section 2 we discuss general lower and upper bounds on the (upper) Sierpinski domination number. Our main contribution in this introductory paper is to determine the upper Sierpinski domination number of the Sierpinski product of two cycles, and to determine the lower Sierpinski domination number of the Sierpinski product of two cycles in half of the cases, and in the other half of the cases to restrict it to two values.
### 1.1 Notation and terminology
We generally follow the graph theory notation and terminology in the books [8, 9, 10] on domination in graphs. Specifically, let \(G\) be a graph with vertex set \(V(G)\) and edge set \(E(G)\), and of order \(n(G)=|V(G)|\) and size \(m(G)=|E(G)|\). For a subset \(S\) of vertices of a graph \(G\), we denote by \(G-S\) the graph obtained from \(G\) by deleting the vertices in \(S\) and all edges incident with vertices in \(S\). If \(S=\{v\}\), then we simply write \(G-v\) rather than \(G-\{v\}\). The subgraph induced by the set \(S\) is denoted by \(G[S]\). We denote the path, cycle and complete graph on \(n\) vertices by \(P_{n}\), \(C_{n}\), and \(K_{n}\), respectively. For \(k\geq 1\) an integer, we use the notation \([k]=\{1,\ldots,k\}\) and \([k]_{0}=\{0,1,\ldots,k\}\). We generally label vertices of the considered graphs by elements of \([n]\). In this case, the mod function over the set \([n]\) is to be understood in a natural way; more formally, we apply the following operation for \(t\geq 1\): \(t\bmod^{*}n=(t-1)\bmod n+1\).
A vertex _dominates_ itself and its neighbors, where two vertices are neighbors in a graph if they are adjacent. A _dominating set_ of a graph \(G\) is a set \(S\) of vertices of \(G\) such that every vertex in \(G\) is dominated by a vertex in \(S\). The _domination number_, \(\gamma(G)\), of \(G\) is the minimum cardinality of a dominating set of \(G\). A dominating set of cardinality \(\gamma(G)\) is called a \(\gamma\)_-set of \(G\)_. A thorough treatise on dominating sets can be found in [8, 9].
If \(S\) is a set of vertices in a graph \(G\), then we will use the notation \(G|S\) to denote that the vertices in the set \(S\) are assumed to be dominated, and hence \(\gamma(G|S)\) is the minimum number of vertices in the graph \(G\) needed to dominate \(V(G)\setminus S\). We note that a vertex in \(S\) may still be a member of such a minimum dominating set, even though we do not need to dominate the vertices in \(S\) themselves. If \(S=\{x\}\), then we simply denote \(G|S\) by \(G|x\) rather than \(G|\{x\}\).
## 2 General lower and upper bounds
We present in this section general lower and upper bounds on the (upper) Sierpinski domination number.
**Theorem 2.1**.: _If \(G\) and \(H\) are graphs, then_
\[n(G)\gamma(H)-m(G)\leq\gamma_{\mathrm{S}}(G,H)\leq\Gamma_{\mathrm{S}}(G,H) \leq n(G)\gamma(H)\,.\]
Proof.: Let \(G\otimes_{f}H\) be an arbitrary Sierpinski product of graphs \(G\) and \(H\) and let \(X\) be a \(\gamma\)-set of \(G\otimes_{f}H\). Assuming for a moment that all the connecting edges are removed from
\(G\otimes_{f}H\), we obtain \(n(G)\) disjoint copies of \(H\) for which we clearly need \(n(G)\gamma(H)\) vertices in a minimum dominating set. Consider now an arbitrary connecting edge \(e=(g,f(g^{\prime}))(g^{\prime},f(g))\) of \(G\otimes_{f}H\). If no end-vertex of \(e\) lies in \(X\), then clearly \(\gamma(G\otimes_{f}H-e)=\gamma(G\otimes_{f}H)\). Similarly, if both end-vertices of \(e\) lie in \(X\), then \(\gamma(G\otimes_{f}H-e)=\gamma(G\otimes_{f}H)\). Hence the only situation in which \(e\) has an effect on \(\gamma(G\otimes_{f}H)\) is when \((g,f(g^{\prime}))\in X\) and \((g^{\prime},f(g))\notin X\) (or the other way around). But in this case, since \((g,f(g^{\prime}))\) dominates the single vertex \((g^{\prime},f(g))\) of \(g^{\prime}H\) across \(e\), the presence of the edge \(e\) can reduce the domination number by at most \(1\). That is, each connecting edge can drop the domination number of \(G\otimes_{f}H\) by at most \(1\), which proves the left inequality. The other two inequalities are clear.
To show that the lower bound of Theorem 2.1 is achieved, we show later in Theorem 3.8 that for \(n\geq 3\) and \(k\geq 1\), if we take \(G=C_{n}\) and \(H=C_{3k+1}\) where \(n\equiv 0\,(\mathrm{mod}\;4)\), then \(\gamma_{\mathrm{S}}(G,H)=kn=n(G)\gamma(H)-m(G)\). The upper bound of Theorem 2.1 is obtained, for example, for the Sierpinski product of two complete graphs. More generally, to achieve equality in the upper bound of Theorem 2.1 we require the graph \(H\) to have the following property.
**Theorem 2.2**.: _The equality in \(\Gamma_{\mathrm{S}}(G,H)\leq n(G)\gamma(H)\) is achieved if and only if there exists a vertex \(x\in V(H)\) such that \(\gamma(H|x)=\gamma(H)\)._
Proof.: Suppose that \(H\) has a vertex \(x\) that satisfies \(\gamma(H|x)=\gamma(H)\). In this case, we consider the Sierpinski product \(G\otimes_{f}H\) with the function \(f\colon V(G)\to V(H)\) defined by \(f(v)=x\) for every vertex \(v\in V(G)\). Consequently each connecting edge in the product is of the form \((g,x)(g^{\prime},x)\). Thus, if \(X\) is a \(\gamma\)-set of \(G\otimes_{f}H\), then \(|X\cap V(gH)|=\gamma(H)\) because the only vertex of \(gH\) that can be dominated from outside \(gH\) is \((g,x)\), and we have assumed that \(\gamma(H|x)=\gamma(H)\). Therefore, \(\Gamma_{S}(G,H)=n(G)\gamma(H)\).
For the other implication suppose that \(\gamma(H|x)<\gamma(H)\) for every vertex \(x\in V(H)\), and let \(f\colon V(G)\to V(H)\) be arbitrary. Pick an edge \(g_{1}g_{2}\in E(G)\). Since \(\gamma(H|f(g_{2}))\leq\gamma(H)-1\), the layer \(g_{1}H\) admits a dominating set of cardinality at most \(\gamma(H)\) that contains the connecting vertex \((g_{1},f(g_{2}))\); this vertex also dominates \((g_{2},f(g_{1}))\) across the connecting edge, so the layer \(g_{2}H\) can be completed using \(\gamma(H|f(g_{1}))\leq\gamma(H)-1\) further vertices. Dominating every other layer with \(\gamma(H)\) vertices yields a dominating set \(D\) of \(G\otimes_{f}H\) with \(|D\cap V(G\otimes_{f}H[V(g_{1}H)\cup V(g_{2}H)])|\leq 2\gamma(H)-1\). Consequently, \(\Gamma_{S}(G,H)<n(G)\gamma(H)\).
To conclude this section we describe large classes of graphs for which the second and the third inequality of Theorem 2.1 both hold with equality.
**Proposition 2.3**.: _If \(G\) and \(H\) are graphs such that \(\Delta(G)<n(H)\) and \(\gamma(H)=1\), then \(\Gamma_{\mathrm{S}}(G,H)=\gamma_{\mathrm{S}}(G,H)=n(G)\)._
Proof.: Let \(G\) and \(H\) be graphs such that \(\Delta(G)<n(H)\) and \(\gamma(H)=1\). By Theorem 2.1, \(\gamma_{\mathrm{S}}(G,H)\leq\Gamma_{\mathrm{S}}(G,H)\leq n(G)\); indeed, every Sierpinski product of \(G\) and \(H\) can be dominated by placing one dominating vertex of each \(H\)-layer into the dominating set. It remains to show that the inequality \(\gamma_{\mathrm{S}}(G,H)\geq n(G)\) also holds. Suppose to the contrary that \(\gamma_{\mathrm{S}}(G,H)\leq n(G)-1\), let \(f\) be a function that attains this minimum, and let \(D\) be a \(\gamma\)-set of \(G\otimes_{f}H\). Then there is an \(H\)-layer of \(G\otimes_{f}H\), denote it by \(H^{\prime}\), such that \(D\cap V(H^{\prime})=\emptyset\). However, at most \(\Delta(G)\) connecting edges are incident with vertices of \(H^{\prime}\), and since \(\Delta(G)<n(H)\), not all vertices of \(H^{\prime}\) can be dominated by vertices from the neighboring layers, a contradiction. Hence \(D\cap V(H^{\prime\prime})\neq\emptyset\) for every \(H^{\prime\prime}\)-layer of \(G\otimes_{f}H\), so \(\gamma_{\mathrm{S}}(G,H)\geq n(G)\) and the result follows.
## The Sierpinski domination number of cycles
Let us first recall the domination number of a path and a cycle.
**Observation 3.1**.: _For \(n\geq 3\), \(\gamma(P_{n})=\gamma(C_{n})=\left\lceil\frac{n}{3}\right\rceil\)._
In this section, we shall prove the following results.
**Theorem 3.2**.: _For \(n\geq 3\), \(k\geq 1\), and \(p\in[2]_{0}\),_
\[\gamma_{\mathrm{S}}(C_{n},C_{3k+p})\in\left\{\begin{array}{ll}\{kn\};&p=0\mbox {,}\\ \{kn,kn+1\};&p=1\mbox{,}\\ \{kn+\left\lfloor\frac{n}{2}\right\rfloor,kn+\left\lfloor\frac{n}{2}\right \rfloor+1\};&p=2\mbox{.}\end{array}\right.\]
_Moreover, if \(n\equiv 0\mod 4\), then \(\gamma_{\mathrm{S}}(C_{n},C_{3k+1})=kn\) and \(\gamma_{\mathrm{S}}(C_{n},C_{3k+2})=kn+\left\lfloor\frac{n}{2}\right\rfloor\)._
**Theorem 3.3**.: _For \(n\geq 3\), \(k\geq 1\), and \(p\in[2]_{0}\),_
\[\Gamma_{\mathrm{S}}(C_{n},C_{3k+p})=\left\{\begin{array}{ll}kn;&p=0\mbox{,} \\ kn+\left\lceil\frac{n}{3}\right\rceil\mbox{;}&p=1\mbox{,}\\ (k+1)n;&p=2\mbox{.}\end{array}\right.\]
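Before turning to the proofs, it may help to instantiate the two statements in the smallest case \(n=4\), \(k=1\); this is just arithmetic with the formulas above, not an additional claim. Since \(4\equiv 0\,(\mathrm{mod}\ 4)\), the values of \(\gamma_{\mathrm{S}}\) are determined exactly:

\[\gamma_{\mathrm{S}}(C_{4},C_{3})=\Gamma_{\mathrm{S}}(C_{4},C_{3})=4,\qquad\gamma_{\mathrm{S}}(C_{4},C_{4})=4,\ \ \Gamma_{\mathrm{S}}(C_{4},C_{4})=4+\left\lceil\tfrac{4}{3}\right\rceil=6,\qquad\gamma_{\mathrm{S}}(C_{4},C_{5})=4+\left\lfloor\tfrac{4}{2}\right\rfloor=6,\ \ \Gamma_{\mathrm{S}}(C_{4},C_{5})=(1+1)\cdot 4=8.\]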
In order to prove Theorems 3.2 and 3.3, we consider three cases, depending on the value of \(p\).
### The cycle \(C_{n}\) and cycles \(C_{3k+1}\)
To determine \(\Gamma_{S}(C_{n},C_{3k+1})\), we prove a slightly more general result. For this purpose, we define a class of graphs \(\mathcal{H}_{k}\) as follows.
**Definition 3.4**.: _For \(k\geq 1\), let \(\mathcal{H}_{k}\) be the class of all graphs \(H\) that have the following properties._
* \(\gamma(H)=k+1\) _and_ \(\gamma(H-v)=k\) _for every vertex_ \(v\in V(H)\)_._
* _If_ \(x,y\in V(H)\)_, then there exists a_ \(\gamma\)_-set of_ \(H\) _that contains_ \(x\) _and_ \(y\)_, where_ \(x=y\) _is allowed._
We show, for example, that for every \(k\geq 1\), the cycle \(C_{3k+1}\) belongs to the class \(\mathcal{H}_{k}\).
**Proposition 3.5**.: _For \(k\geq 1\), the class \(\mathcal{H}_{k}\) of graphs contains the cycle \(C_{3k+1}\)._
Proof.: For \(k\geq 1\), let \(H\cong C_{3k+1}\). Since \(\gamma(C_{n})=\gamma(P_{n})=\left\lceil n/3\right\rceil\), we have \(\gamma(H)=\lceil(3k+1)/3\rceil=k+1\) and \(\gamma(H-v)=\gamma(P_{3k})=k\) for every vertex \(v\in V(H)\), and so property (a) in Definition 3.4 holds. To prove that property (b) in Definition 3.4 holds, let \(x,y\in V(H)\). Since \(H\) is vertex-transitive, every specified vertex belongs to some \(\gamma\)-set of \(H\). In particular, if \(x=y\), then property (b) is immediate. Hence, we may assume that \(x\neq y\). Let \(H\) be the cycle
\(v_{1}v_{2}\ldots v_{3k+1}v_{1}\), where renaming vertices if necessary, we may assume that \(x=v_{1}\). Let \(y=v_{i}\), and so \(i\in[3k+1]\setminus\{1\}\).
Let \(H^{\prime}=H-N[\{x,y\}]\), that is, \(H^{\prime}\) is obtained from \(H\) by removing \(x\) and \(y\), and removing all neighbors of \(x\) and \(y\). If \(H^{\prime}\) is connected, then \(H^{\prime}\) is a path \(P_{3(k-2)+j}\) for some \(j\) where \(j\in[3]\). In this case, \(\gamma(H^{\prime})=k-1\). If \(H^{\prime}\) is disconnected, then \(H^{\prime}\) is the disjoint union of two paths \(P_{k_{1}}\) and \(P_{k_{2}}\), where \(k_{1}+k_{2}=3(k-2)+1\). Thus renaming \(k_{1}\) and \(k_{2}\) if necessary, we may assume that either \(k_{1}=3j_{1}\) and \(k_{2}=3j_{2}+1\) where \(j_{1}\geq 1\), \(j_{2}\geq 0\), and \(j_{1}+j_{2}=k-2\) or \(k_{1}=3j_{1}+2\) and \(k_{2}=3j_{2}+2\) where \(j_{1},j_{2}\geq 0\) and \(j_{1}+j_{2}=k-3\). In both cases, \(\gamma(H^{\prime})=\lceil k_{1}/3\rceil+\lceil k_{2}/3\rceil=k-1\). Letting \(D^{\prime}\) be a \(\gamma\)-set of \(H^{\prime}\), the set \(D=D^{\prime}\cup\{x,y\}\) is a dominating set of \(H\) of cardinality \(k+1=\gamma(H)\), implying that \(D\) is a \(\gamma\)-set of \(H\) that contains both \(x\) and \(y\). Hence, property (b) holds.
For \(n\geq 3\) an integer, a _circulant graph_\(C_{n}\langle L\rangle\) with a given list \(L\subseteq\{1,\ldots,\lfloor\frac{1}{2}n\rfloor\}\) is a graph on \(n\) vertices in which the \(i\)th vertex is adjacent to the \((i+j)\)th and \((i-j)\)th vertices for each \(j\) in the list \(L\) and where addition is taken modulo \(n\). For example, for \(n=3k+1\) where \(k\geq 1\) and \(L=\{1\}\), the circulant graph \(C_{n}\langle L\rangle\) is the cycle \(C_{3k+1}\), which, by Proposition 3.5, belongs to the class \(\mathcal{H}_{k}\). More generally, for \(n=k(2p+1)+1\) where \(k\geq 1\), \(p\geq 1\), and \(L=[p]\), the circulant graph \(C_{n}\langle L\rangle\) belongs to the class \(\mathcal{H}_{k}\). We omit the relatively straightforward proof. These examples of circulant graphs serve to illustrate that for each \(k\geq 1\), one can construct infinitely many graphs in the class \(\mathcal{H}_{k}\). We determine next the upper Sierpinski domination number \(\Gamma_{S}(C_{n},H)\) of a cycle \(C_{n}\) and a graph \(H\) in the family \(\mathcal{H}_{k}\).
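To make the omitted verification concrete, here is one small instance checked directly (our own check, not part of the cited proof): take \(k=1\) and \(p=2\), so \(n=6\) and \(L=[2]=\{1,2\}\). In \(C_{6}\langle 1,2\rangle\) every closed neighborhood contains exactly five of the six vertices, so no single vertex dominates and
\[\gamma(C_{6}\langle 1,2\rangle)=2=k+1,\qquad\gamma(C_{6}\langle 1,2\rangle-v)=1=k\ \text{ for every vertex }v,\]
since after deleting \(v\) the vertex antipodal to \(v\) dominates all remaining vertices. Moreover, for any two vertices \(x\) and \(y\) (allowing \(x=y\), in which case we pair \(x\) with its antipodal vertex), the set \(\{x,y\}\) is a dominating set of size \(2\) containing them, so property (b) holds and \(C_{6}\langle 1,2\rangle\in\mathcal{H}_{1}\).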
**Theorem 3.6**.: _For \(n\geq 3\) and \(k\geq 1\), if \(H\in\mathcal{H}_{k}\), then_
\[\Gamma_{\mathrm{S}}(C_{n},H)=kn+\left\lceil\frac{n}{3}\right\rceil\,.\]
Proof.: For \(n\geq 3\) and \(k\geq 1\), let \(G\cong C_{n}\) and let \(H\in\mathcal{H}_{k}\). Let \(G\) be the cycle given by \(g_{1}g_{2}\ldots g_{n}g_{1}\). In what follows, we adopt the following notation. For each \(i\in[n]\), we denote the copy \(g_{i}H\) of \(H\) corresponding to the vertex \(g_{i}\) simply by \(H_{i}\). We proceed further with two claims. The first claim establishes a lower bound on \(\Gamma_{\mathrm{S}}(C_{n},H)\), and the second claim establishes an upper bound on \(\Gamma_{\mathrm{S}}(C_{n},H)\).
**Claim 1**.: \(\Gamma_{\mathrm{S}}(C_{n},H)\geq kn+\left\lceil\frac{n}{3}\right\rceil.\)__
Proof.: Let \(f\colon V(G)\to V(H)\) be a constant function, that is, we select \(h\in V(H)\) and for every vertex \(g\in V(G)\), we set \(f(g)=h\). Let \(D_{G}\) be a \(\gamma\)-set of \(G\). Thus, \(|D_{G}|=\gamma(C_{n})=\lceil n/3\rceil\). By property (b) in Definition 3.4, for every vertex \(g\in V(G)\), there exists a \(\gamma\)-set of \(gH\) that contains the vertex \((g,f(g))=(g,h)\). If \(g\in D_{G}\), let \(D_{g}\) be a \(\gamma\)-set of \(gH\) that contains the vertex \((g,f(g))=(g,h)\), and so \(|D_{g}|=\gamma(H)=k+1\). If \(g\in V(G)\setminus D_{G}\), let \(D_{g}\) be a \(\gamma\)-set of \(gH-(g,f(g))=gH-(g,h)\), and so in this case, by property (a), \(|D_{g}|=\gamma(H-h)=\gamma(H)-1=k\). Let
\[D=\bigcup_{g\in V(G)}D_{g}.\]
The set \(D\) is a dominating set of \(G\otimes_{f}H\), and so
\[\gamma(G\otimes_{f}H)\leq|D|=\gamma(G)(k+1)+(n-\gamma(G))k=kn+\gamma(G)=kn+ \left\lceil\frac{n}{3}\right\rceil\,. \tag{1}\]
For the fixed vertex \(h\) chosen earlier, we note that the set of vertices \((g,h)\) for all \(g\in V(G)\) induces a subgraph of \(G\otimes_{f}H\) that is isomorphic to \(G\cong C_{n}\). We denote this copy of \(G\) by \(Gh\). Among all \(\gamma\)-sets of \(G\otimes_{f}H\), let \(D^{*}\) be chosen to contain as many vertices of \(Gh\) as possible. Let \(D^{*}_{g}=D^{*}\cap V(gH)\) for every \(g\in V(G)\). Further let \(D^{*}_{G}=\{(g,h)\in D^{*}\colon g\in V(G)\}\), that is, \(D^{*}_{G}\) is the restriction of \(D^{*}\) to the copy of \(G\). If a vertex \((g,h)\notin D^{*}_{G}\) and \((g,h)\) is not dominated by \(D^{*}_{G}\), then \(D^{*}_{g}\) is a \(\gamma\)-set of \(gH\) by the minimality of the set \(D^{*}\). However, in this case, we could replace the set \(D^{*}_{g}\) by a \(\gamma\)-set of \(gH\) that contains the vertex \((g,h)\) to produce a new \(\gamma\)-set of \(G\otimes_{f}H\) that contains more vertices from the copy of \(G\) than does \(D^{*}\), a contradiction. Hence, the set \(D^{*}_{G}\) is a dominating set in the copy of \(G\), and so \(|D^{*}_{G}|\geq\gamma(G)\). By the minimality of the set \(D^{*}\) and by property (a) in Definition 3.4, for each vertex \(g\in V(G)\), we have \(|D^{*}_{g}|=\gamma(H)=k+1\) if the vertex \((g,h)\in D^{*}_{G}\) and \(|D^{*}_{g}|=\gamma(H-h)=k\) if the vertex \((g,h)\notin D^{*}_{G}\). Therefore,
\[\gamma(G\otimes_{f}H)=|D^{*}|=|D^{*}_{G}|(k+1)+(n-|D^{*}_{G}|)k=kn+|D^{*}_{G}| \geq kn+\gamma(G)=kn+\left\lceil\frac{n}{3}\right\rceil\,. \tag{2}\]
By inequalities (1) and (2), we have
\[\gamma(G\otimes_{f}H)=kn+\left\lceil\frac{n}{3}\right\rceil\,. \tag{3}\]
By equation (3), we have \(\Gamma_{\mathrm{S}}(C_{n},H)\geq\gamma(G\otimes_{f}H)=kn+\left\lceil n/3 \right\rceil\,\). This completes the proof of Claim 1. (\(\Box\))
**Claim 2**.: \(\Gamma_{\mathrm{S}}(C_{n},H)\leq kn+\left\lceil\frac{n}{3}\right\rceil\,\).
Proof.: Let \(f\colon V(G)\to V(H)\) be an arbitrary function. Let \(H_{i}\) be the \(i\)th copy of \(H\) corresponding to the vertex \(g_{i}\) of \(G\) for all \(i\in[n]\). Let \(D\) be the dominating set of \(G\otimes_{f}H\) constructed as follows. Let \(x_{i}y_{i+1}\) be the connecting edge from \(H_{i}\) to \(H_{i+1}\) for all \(i\in[n]\), where addition is taken modulo \(n\). Thus, the vertex \(x_{i}\in V(H_{i})\) is adjacent to the vertex \(y_{i+1}\in V(H_{i+1})\) in the graph \(G\otimes_{f}H\), that is, \(x_{i}=(g_{i},f(g_{i+1}))\) and \(y_{i+1}=(g_{i+1},f(g_{i}))\). We note that possibly \(x_{i}=y_{i}\). By property (b) in Definition 3.4, there exists a \(\gamma\)-set of \(H_{i}\) that contains both \(x_{i}\) and \(y_{i}\). For \(i\in[n]\), we define the sets \(D_{i,1}\), \(D_{i,2}\), and \(D_{i,3}\) as follows. Let \(D_{i,1}\) be a \(\gamma\)-set of \(H_{i}-x_{i}\). Let \(D_{i,2}\) be a \(\gamma\)-set of \(H_{i}\) that contains both \(x_{i}\) and \(y_{i}\). Let \(D_{i,3}\) be a \(\gamma\)-set of \(H_{i}-y_{i}\). We note that \(|D_{i,1}|=|D_{i,3}|=k\) and \(|D_{i,2}|=k+1\). For \(i\in[n]\), we define the set \(D_{i}\) as follows.
\[D_{i}=\left\{\begin{array}{ll}D_{i,1};&i\equiv 1\,(\mathrm{mod}\ 3)\ \mathrm{and}\ i\neq n,\\ D_{i,2};&i\equiv 2\,(\mathrm{mod}\ 3)\ \mathrm{or}\ i\equiv 1\,( \mathrm{mod}\ 3)\ \mathrm{and}\ i=n,\\ D_{i,3};&i\equiv 0\,(\mathrm{mod}\ 3).\end{array}\right.\]
For example, the set \(D_{1}\) dominates all vertices of \(H_{1}-x_{1}\). The set \(D_{2}\) contains the vertex \(y_{2}\), which is adjacent to the vertex \(x_{1}\) of \(H_{1}\), and contains the vertex \(x_{2}\), which is adjacent to the vertex \(y_{3}\) of \(H_{3}\), implying that \(D_{2}\) dominates the vertex \(x_{1}\) of \(H_{1}\), all vertices of \(H_{2}\), and the vertex \(y_{3}\) of \(H_{3}\). The set \(D_{3}\) dominates all vertices of \(H_{3}-y_{3}\). Thus, \(D_{1}\cup D_{2}\cup D_{3}\) dominates all vertices in \(V(H_{1})\cup V(H_{2})\cup V(H_{3})\) in the Sierpinski product \(G\otimes_{f}H\). Moreover,
\(|D_{1}|+|D_{2}|+|D_{3}|=k+(k+1)+k=3k+1\). More generally, the set \(D_{3j-2}\cup D_{3j-1}\cup D_{3j}\) dominates all vertices in \(V(H_{3j-2})\cup V(H_{3j-1})\cup V(H_{3j})\) in the Sierpinski product \(G\otimes_{f}H\) for all \(j\in\{1,\ldots,\lfloor n/3\rfloor\}\). Moreover, \(|D_{3j-2}|+|D_{3j-1}|+|D_{3j}|=k+(k+1)+k=3k+1\). If \(n\equiv 1\,(\text{mod }3)\), then the set \(D_{n}\) is a \(\gamma\)-set of \(H_{n}\), and in this case \(|D_{n}|=k+1\). If \(n\equiv 2\,(\text{mod }3)\), then the set \(D_{n-1}\cup D_{n}\) dominates all vertices in \(V(H_{n-1})\cup V(H_{n})\), and in this case \(|D_{n-1}|+|D_{n}|=k+(k+1)=2k+1\). The set
\[D=\bigcup_{i=1}^{n}D_{i}\]
is therefore a dominating set of \(G\otimes_{f}H\), implying that
\[\gamma(G\otimes_{f}H)\leq|D|=\sum_{i=1}^{n}|D_{i}|=kn+\left\lceil\frac{n}{3} \right\rceil\,.\]
This completes the proof of Claim 2. \({}^{(\Box)}\)
The proof of Theorem 3.6 follows as an immediate consequence of Claims 1 and 2.
As a consequence of Proposition 3.5, we have the following special case of Theorem 3.6.
**Corollary 3.7**.: _For \(n\geq 3\) and \(k\geq 1\),_
\[\Gamma_{\mathrm{S}}(C_{n},C_{3k+1})=kn+\left\lceil\frac{n}{3}\right\rceil\,.\]
We consider next the Sierpinski domination number of \(C_{n}\) and \(C_{3k+1}\), and show that \(\gamma_{\mathrm{S}}(C_{n},C_{3k+1})=kn\) if \(n\equiv 0\,(\text{mod }4)\) and \(\gamma_{\mathrm{S}}(C_{n},C_{3k+1})\in\{kn,kn+1\}\), otherwise.
**Theorem 3.8**.: _For \(n\geq 3\) and \(k\geq 1\),_
\[\gamma_{\mathrm{S}}(C_{n},C_{3k+1})\in\{kn,kn+1\}.\]
_Moreover, if \(n\equiv 0\mod 4\), then \(\gamma_{\mathrm{S}}(C_{n},C_{3k+1})=kn\)._
Proof.: For \(n\geq 3\) and \(k\geq 1\), let \(G=C_{n}\) and let \(H=C_{3k+1}\). Let \(G\) be the cycle given by \(g_{1}g_{2}\ldots g_{n}g_{1}\). We adopt our notation employed in our earlier proofs. For notational convenience, we let \(V(H)=\{1,2,\ldots,3k+1\}\) where vertices \(i\) and \(i+1\) are consecutive on the cycle \(H\) for all \(i\in[3k+1]\) (and where addition is taken modulo \(3k+1\), and so vertex \(1\) and vertex \(3k+1\) are adjacent).
As before, we denote the copy \(g_{i}H\) of \(H\) corresponding to the vertex \(g_{i}\) simply by \(H_{i}\) for each \(i\in[n]\). Thus, \(H_{i}=C_{3k+1}\) is the cycle \((g_{i},1),(g_{i},2),\ldots,(g_{i},3k+1),(g_{i},1)\) for all \(i\in[n]\). Recall that we denote the connecting edge from \(H_{i}\) to \(H_{i+1}\) by \(x_{i}y_{i+1}\) for all \(i\in[n]\), where \(x_{i}\in V(H_{i})\), \(y_{i+1}\in V(H_{i+1})\), and addition is taken modulo \(n\). Thus, \(y_{i}=(g_{i},f(g_{i-1}))\) and \(x_{i}=(g_{i},f(g_{i+1}))\) for all \(i\in[n]\).
By Proposition 3.5, the graph \(H\) belongs to the class \(\mathcal{H}_{k}\). Thus, \(\gamma(H)=k+1\) and \(\gamma(H-v)=k\) for every vertex \(v\in V(H)\). Furthermore, if \(x,y\in V(H)\) where \(x=y\) is allowed, then there exists a \(\gamma\)-set of \(H\) that contains \(x\) and \(y\).
By the elementary lower bound on the Sierpinski domination number given in Theorem 2.1, \(\gamma_{\mathrm{S}}(G,H)\geq n(G)\gamma(H)-m(G)=kn\), noting that here \(n(G)=m(G)=n\) and \(\gamma(H)=k+1\). It follows that \(\gamma_{\mathrm{S}}(C_{n},H)\geq kn\).
To complete the proof we are going to prove that
\[\gamma_{\mathrm{S}}(C_{n},H)\leq kn+\left\lceil\frac{n}{4}\right\rceil-\left \lfloor\frac{n}{4}\right\rfloor\,.\]
Let \(f\colon V(G)\to V(H)\) be the function defined by
\[f(g_{i})=\left\{\begin{array}{ll}1;&i\bmod 4\in\{1,2\},\\ 3;&\text{otherwise}.\end{array}\right.\]
for all \(i\in[n]\) where addition is taken modulo \(n\). Adopting our earlier notation, recall that \(y_{i}=(g_{i},f(g_{i-1}))\) and \(x_{i}=(g_{i},f(g_{i+1}))\) for all \(i\in[n]\). Let \(n=4\ell+j\) where \(j\in[3]_{0}=\{0,1,2,3\}\). We note that \(f(g_{4i-3})=f(g_{4i-2})=1\) and \(f(g_{4i-1})=f(g_{4i})=3\) for all \(i\in[\ell]\). Let \(D_{i}\) be the unique \(\gamma\)-set of \(H_{i}-y_{i}\cong P_{3k}\) which consists of all vertices at distance \(2\) modulo \(3\) from \(y_{i}\) in the graph \(H_{i}\) for all \(i\in[n]\), and let
\[D=\bigcup_{i=1}^{n}D_{i}.\]
We note that \(|D_{i}|=k\) for all \(i\in[n]\), and so \(|D|=kn\). For all \(i\in\{2,3,\ldots,\ell-1\}\), the following four properties hold.
* \(y_{4i-3}=(g_{4i-3},3)\) and \(x_{4i-3}=(g_{4i-3},1)\).
* \(y_{4i-2}=(g_{4i-2},1)\) and \(x_{4i-2}=(g_{4i-2},3)\).
* \(y_{4i-1}=(g_{4i-1},1)\) and \(x_{4i-1}=(g_{4i-1},3)\).
* \(y_{4i}=(g_{4i},3)\) and \(x_{4i}=(g_{4i},1)\).
Hence for all \(i\in\{2,3,\ldots,\ell-1\}\), the vertices \(x_{i}\) and \(y_{i}\) are at distance \(2\) in \(H_{i}\), implying that \(x_{i}\in D_{i}\). We consider four cases to determine which properties hold for the boundary conditions (that is for \(i\in\{1,\ell\}\)) and finally to set the upper bound on the domination number in each case.
_Case 1. \(n\equiv 0\,(\bmod 4)\), that is \(n=4\ell\)._
In this case, properties P1 and P4 also hold for \(i=1\) and \(i=\ell\), respectively. Thus, \(y_{1}=(g_{1},3)\) and \(x_{1}=(g_{1},1)\), and \(y_{4\ell}=(g_{4\ell},3)\) and \(x_{4\ell}=(g_{4\ell},1)\), implying that \(x_{1},x_{4\ell}\in D\). The set \(D\) is therefore a dominating set of \(G\otimes_{f}H\), and so \(\gamma(G\otimes_{f}H)\leq|D|=kn=kn+\lceil n/4\rceil-\lfloor n/4\rfloor\).
_Case 2. \(n\equiv 1\,(\bmod 4)\), that is \(n=4\ell+1\)._
In this case, \(y_{1}=x_{1}=(g_{1},1)\), and \(y_{4\ell+1}=(g_{4\ell+1},3)\) and \(x_{4\ell+1}=(g_{4\ell+1},1)\). In particular, property P4 also holds for \(i=\ell\), and so \(x_{4\ell+1}\in D\). The set \(D\cup\{x_{1}\}\) is therefore a dominating set of \(G\otimes_{f}H\), and so \(\gamma(G\otimes_{f}H)\leq|D|+1=kn+1=kn+\lceil n/4\rceil-\lfloor n/4\rfloor\).
_Case 3. \(n\equiv 2\,(\bmod 4)\), that is \(n=4\ell+2\)._
In this case, \(y_{1}=x_{1}=(g_{1},1)\), and \(y_{4\ell+2}=x_{4\ell+2}=(g_{4\ell+2},1)\). We note that neither \(x_{1}\) nor \(x_{4\ell+2}\) belong to the set \(D\). The set \(D\cup\{x_{1}\}\) is a dominating set of \(G\otimes_{f}H\), and so \(\gamma(G\otimes_{f}H)\leq|D|+1=kn+1=kn+\lceil n/4\rceil-\lfloor n/4\rfloor\).
_Case 4. \(n\equiv 3\,(\mathrm{mod}\ 4)\), that is \(n=4\ell+3\)._
In this case, \(y_{1}=(g_{1},3)\) and \(x_{1}=(g_{1},1)\), and \(y_{4\ell+3}=x_{4\ell+3}=(g_{4\ell+3},1)\). In particular, property P1 also holds for \(i=1\), and so \(x_{1}\in D\). However, \(x_{4\ell+3}\notin D\). The set \(D\cup\{x_{4\ell+3}\}\) is therefore a dominating set of \(G\otimes_{f}H\), and so \(\gamma(G\otimes_{f}H)\leq|D|+1=kn+1=kn+\lceil n/4\rceil-\lfloor n/4\rfloor\).
In all four cases, \(\gamma(G\otimes_{f}H)\leq kn+\lceil n/4\rceil-\lfloor n/4\rfloor\). Combined with the lower bound \(\gamma_{\mathrm{S}}(C_{n},H)\geq kn\) established above, this shows that \(\gamma_{\mathrm{S}}(C_{n},C_{3k+1})\in\{kn,kn+1\}\), and that \(\gamma_{\mathrm{S}}(C_{n},C_{3k+1})=kn\) whenever \(n\equiv 0\,(\mathrm{mod}\ 4)\), which completes the proof.
### The cycle \(C_{n}\) and cycles \(C_{3k+2}\)
In this section, we determine the Sierpinski domination number \(\gamma_{S}(C_{n},C_{3k+2})\) and the upper Sierpinski domination number \(\Gamma_{S}(C_{n},C_{3k+2})\).
**Theorem 3.9**.: _For \(n\geq 3\) and \(k\geq 1\), we have \(\Gamma_{\mathrm{S}}(C_{n},C_{3k+2})=(k+1)n\)._
Proof.: For \(n\geq 3\) and \(k\geq 1\), let \(G\cong C_{n}\) and let \(H\cong C_{3k+2}\). Let \(f\colon V(G)\to V(H)\) be a constant function, that is, we select \(h\in V(H)\) and for every vertex \(g\in V(G)\), we set \(f(g)=h\). For each vertex \(g\in V(G)\), let \(H_{g}\) denote the copy of \(H\) associated with the vertex \(g\). Let \(D\) be a dominating set of \(G\otimes_{f}H\), and let \(D_{g}=D\cap V(H_{g})\), and so \(D_{g}\) is the restriction of \(D\) to the copy \(H_{g}\) of \(H\). If the vertex \((g,h)\) does not belong to \(D_{g}\), then \(D_{g}\) dominates all vertices on the path \(H_{g}-(g,h)\cong P_{3k+1}\), and so \(|D_{g}|\geq\gamma(P_{3k+1})=k+1\). If the vertex \((g,h)\) does belong to \(D_{g}\), then \(D_{g}\) dominates all vertices on the cycle \(H_{g}\cong C_{3k+2}\), and so \(|D_{g}|\geq\gamma(C_{3k+2})=k+1\). In both cases, \(|D_{g}|\geq k+1\). Therefore,
\[\gamma(G\otimes_{f}H)=|D|=\sum_{g\in V(G)}|D_{g}|\geq(k+1)n,\]
implying that \(\Gamma_{\mathrm{S}}(C_{n},C_{3k+2})\geq(k+1)n\). By the upper bound in Theorem 2.1, we have \(\Gamma_{\mathrm{S}}(G,H)\leq n(G)\gamma(H)=(k+1)n\), noting that in this case \(\gamma(H)=\gamma(C_{3k+2})=k+1\). Consequently, \(\Gamma_{\mathrm{S}}(C_{n},C_{3k+2})=(k+1)n\).
**Theorem 3.10**.: _For \(n\geq 3\) and \(k\geq 1\),_
\[\gamma_{\mathrm{S}}(C_{n},C_{3k+2})\in\{kn+\left\lfloor\frac{n}{2}\right\rfloor,kn+\left\lfloor\frac{n}{2}\right\rfloor+1\}.\]
_Moreover, if \(n\equiv 0\mod 4\), then \(\gamma_{\mathrm{S}}(C_{n},C_{3k+2})=kn+\left\lfloor\frac{n}{2}\right\rfloor\)._
Proof.: For \(n\geq 3\) and \(k\geq 1\), let \(G\cong C_{n}\) and let \(H\cong C_{3k+2}\). We adopt our notation employed in our earlier proofs. Thus, the cycle \(G\) is given by \(g_{1}g_{2}\dots g_{n}g_{1}\), and \(V(H)=\{1,2,\dots,3k+2\}\) where vertices \(i\) and \(i+1\) are consecutive on the cycle \(H\) for all \(i\in[3k+2]\) (and where addition is taken modulo \(3k+2\), and so vertex \(1\) and vertex \(3k+2\) are adjacent). As before, we denote the copy \(g_{i}H\) of \(H\) corresponding to the vertex \(g_{i}\) simply by \(H_{i}\) for each \(i\in[n]\). Thus, \(H_{i}=C_{3k+2}\) is the cycle \((g_{i},1),(g_{i},2),\dots,(g_{i},3k+2),(g_{i},1)\) for all \(i\in[n]\).
We adopt our notation from the proof of Theorem 3.6. Thus, we denote the connecting edge from \(H_{i}\) to \(H_{i+1}\) by \(x_{i}y_{i+1}\) for all \(i\in[n]\), where \(x_{i}\in V(H_{i})\), \(y_{i+1}\in V(H_{i+1})\), and addition is taken modulo \(n\). Thus, \(y_{i}=(g_{i},f(g_{i-1}))\) and \(x_{i}=(g_{i},f(g_{i+1}))\) for all \(i\in[n]\).
We proceed further with two claims. The first claim establishes a lower bound on \(\gamma_{\mathrm{S}}(G,H)\), and the second claim an upper bound on \(\gamma_{\mathrm{S}}(G,H)\). Combining these two bounds yields the desired result in the statement of the theorem.
**Claim 3**.: \(\gamma_{\mathrm{S}}(C_{n},H)\geq kn+\left\lfloor\frac{n}{2}\right\rfloor\)_._
Proof.: Let \(f\colon V(G)\to V(H)\) be an arbitrary function. We show that
\[\gamma(G\otimes_{f}H)\geq kn+\left\lfloor\frac{n}{2}\right\rfloor. \tag{4}\]
Let \(D\) be a \(\gamma\)-set of \(G\otimes_{f}H\), and let \(D_{i}=D\cap V(H_{i})\) for \(i\in[n]\). If the vertex \(x_{i}\) is not dominated by \(D_{i}\), then either \(x_{i}\neq y_{i}\), in which case \(x_{i}\) is dominated by the vertex \(y_{i+1}\in D\), or \(x_{i}=y_{i}\), in which case \(x_{i}\) is dominated by the vertex \(x_{i-1}\in D\) or the vertex \(y_{i+1}\in D\). Analogously, if the vertex \(y_{i}\) is not dominated by \(D_{i}\), then either \(x_{i}\neq y_{i}\), in which case \(y_{i}\) is dominated by the vertex \(x_{i-1}\in D\), or \(x_{i}=y_{i}\), in which case \(y_{i}\) is dominated by the vertex \(x_{i-1}\in D\) or the vertex \(y_{i+1}\in D\). If a vertex is not dominated by \(D_{i}\), then such a vertex is \(x_{i}\) or \(y_{i}\), and we say that such a vertex is dominated from outside \(H_{i}\).
Similarly as before, we proceed with a claim that delivers properties of sets \(D_{i}\) leading to the desired lower bound on the Sierpinski domination number.
**Claim 3.1**.: _The following properties hold in the graph \(H_{i}\)._
1. _If_ \(d(x_{i},y_{i})\equiv 1\,(\mathrm{mod}\ 3)\)_, then_ \(|D_{i}|=k\)_. Further, both_ \(x_{i}\) _and_ \(y_{i}\) _are dominated from outside_ \(H_{i}\)_._
2. _If_ \(d(x_{i},y_{i})\not\equiv 1\,(\mathrm{mod}\ 3)\)_, then_ \(|D_{i}|=k+1\)_._
Proof.: Suppose that \(D_{i}\) contains a vertex \(w_{i}\) that dominates \(x_{i}\). Possibly, \(w_{i}=x_{i}\). In order to dominate the \(3(k-1)+2\) vertices in \(H_{i}\) not dominated by \(w_{i}\), at least \(k\) additional vertices are needed even if the vertex \(y_{i}\) is dominated outside the cycle \(H_{i}\). Thus in this case, \(|D_{i}|\geq k+1\), implying by the minimality of the set \(D\) that \(|D_{i}|=k+1\). Analogously, if \(D_{i}\) contains a vertex that dominates \(y_{i}\), then \(|D_{i}|=k+1\). Hence, if \(x_{i}\) or \(y_{i}\) (or both \(x_{i}\) and \(y_{i}\)) are dominated by \(D_{i}\), then \(|D_{i}|=k+1\).
Suppose that neither \(x_{i}\) nor \(y_{i}\) is dominated by \(D_{i}\), implying that both \(x_{i}\) and \(y_{i}\) are dominated from outside the cycle \(H_{i}\). Thus, \(D_{i}\) is a dominating set of \(H_{i}^{\prime}=H_{i}-x_{i}-y_{i}\). If \(x_{i}=y_{i}\), then \(H_{i}^{\prime}=P_{3k+1}\), and by the minimality of \(D\) we have \(|D_{i}|=\gamma(P_{3k+1})=k+1\). Hence, we may assume that \(x_{i}\neq y_{i}\). If \(x_{i}\) and \(y_{i}\) are adjacent, then \(H_{i}^{\prime}=P_{3k}\), and by the minimality of \(D\) we have \(|D_{i}|=\gamma(P_{3k})=k\). Suppose that \(x_{i}\) and \(y_{i}\) are not adjacent, and so \(H_{i}^{\prime}\) is the disjoint union of two paths \(P_{k_{1}}\) and \(P_{k_{2}}\), where \(k_{1}+k_{2}=3k\). If \(k_{1}=3j_{1}+1\) and \(k_{2}=3j_{2}+2\) (or if \(k_{1}=3j_{1}+2\) and \(k_{2}=3j_{2}+1\)) for some integers \(j_{1}\) and \(j_{2}\) where \(j_{1}+j_{2}=k-1\), then \(|D_{i}|=\lceil k_{1}/3\rceil+\lceil k_{2}/3\rceil=(j_{1}+1)+(j_{2}+1)=k+1\). If \(k_{1}=3j_{1}\) and \(k_{2}=3j_{2}\) where \(j_{1}+j_{2}=k\), then \(|D_{i}|=\lceil k_{1}/3\rceil+\lceil k_{2}/3\rceil=j_{1}+j_{2}=k\). Hence if neither \(x_{i}\) nor \(y_{i}\) is dominated by \(D_{i}\), then either \(d(x_{i},y_{i})\equiv 1\,(\mathrm{mod}\ 3)\), in which case \(|D_{i}|=k\), or \(d(x_{i},y_{i})\not\equiv 1\,(\mathrm{mod}\ 3)\), in which case \(|D_{i}|=k+1\). This proves properties (a) and (b) of the claim. (\(\Box\))
By Claim 3.1, if \(|D_{i}|=k\) for some \(i\in[n]\), then \(|D_{i-1}|=|D_{i+1}|=k+1\) where addition is taken modulo \(n\). Furthermore in this case when \(|D_{i}|=k\), the vertices \(x_{i}\) and \(y_{i}\) are distinct
and are both dominated from outside \(H_{i}\), implying that \(y_{i+1}\in D_{i+1}\) and \(x_{i-1}\in D_{i-1}\). This implies that if \(n\) is even, then \(|D|\geq kn+n/2\), and if \(n\) is odd, then \(|D|\geq kn+(n+1)/2\). This proves inequality (4).
**Claim 4**.: \(\gamma_{\mathrm{S}}(C_{n},H)\leq kn+\left\lfloor\frac{n}{2}\right\rfloor+\left \lceil\frac{n}{4}\right\rceil-\left\lfloor\frac{n}{4}\right\rfloor\)_._
Proof.: Let \(f\colon V(G)\to V(H)\) be the function defined by
\[f(g_{i})=\left\{\begin{array}{ll}1;&i\equiv 1\,(\mathrm{mod}\ 4),\\ 2;&i\equiv 2\,(\mathrm{mod}\ 4),\\ 3;&\mathrm{otherwise}.\end{array}\right.\]
for all \(i\in[n]\) where addition is taken modulo \(n\). Adopting our earlier notation, recall that \(y_{i}=(g_{i},f(g_{i-1}))\) and \(x_{i}=(g_{i},f(g_{i+1}))\) for all \(i\in[n]\). Let \(n=4\ell+j\) where \(j\in[3]_{0}=\{0,1,2,3\}\). We note that \(f(g_{4i-3})=1\), \(f(g_{4i-2})=2\), and \(f(g_{4i-1})=f(g_{4i})=3\) for all \(i\in[\ell]\).
_Case 1._\(n\equiv 0\,(\mathrm{mod}\ 4)\)_._
Thus, \(n=4\ell\). We note that \(y_{4i-3}=(g_{4i-3},3)\) and \(x_{4i-3}=(g_{4i-3},2)\) for all \(i\in[\ell]\), and so in the graph \(H_{4i-3}\) the vertices \(x_{4i-3}\) and \(y_{4i-3}\) are at distance \(1\). Moreover, \(y_{4i-1}=(g_{4i-1},2)\) and \(x_{4i-1}=(g_{4i-1},3)\) for all \(i\in[\ell]\), and so in the graph \(H_{4i-1}\) the vertices \(x_{4i-1}\) and \(y_{4i-1}\) are at distance \(1\). This implies that \(H_{4i-j}-\{x_{4i-j},y_{4i-j}\}\cong P_{3k}\) for \(j\in\{1,3\}\). Let \(D_{4i-j}\) be a \(\gamma\)-set of \(H_{4i-j}-\{x_{4i-j},y_{4i-j}\}\) for \(j\in\{1,3\}\), and so \(|D_{4i-j}|=k\).
We also note that \(y_{4i-2}=(g_{4i-2},1)\) and \(x_{4i-2}=(g_{4i-2},3)\) for all \(i\in[\ell]\), and so in the graph \(H_{4i-2}\) the vertices \(x_{4i-2}\) and \(y_{4i-2}\) are at distance \(2\). Moreover, \(y_{4i}=(g_{4i},3)\) and \(x_{4i}=(g_{4i},1)\) for all \(i\in[\ell]\), and so in the graph \(H_{4i}\) the vertices \(x_{4i}\) and \(y_{4i}\) are at distance \(2\). This implies that \(H_{4i-j}-N[\{x_{4i-j},y_{4i-j}\}]\cong P_{3(k-1)}\) for \(j\in\{0,2\}\). Let \(D_{4i-j}\) be a \(\gamma\)-set of \(H_{4i-j}\) that contains both vertices \(x_{4i-j}\) and \(y_{4i-j}\) for \(j\in\{0,2\}\), and so \(|D_{4i-j}|=k+1\). The set
\[D=\bigcup_{i=1}^{4\ell}D_{i}\]
is a dominating set of \(G\otimes_{f}H\), and so \(\gamma(G\otimes_{f}H)\leq|D|=4k\ell+2\ell=kn+n/2\).
_Case 2._\(n\equiv 2\,(\mathrm{mod}\ 4)\)_._
Thus, \(n=4\ell+2\) and in this case, \(f(g_{4\ell+1})=1\) and \(f(g_{4\ell+2})=2\). We note that in the graph \(H_{4\ell+1}\), the vertices \(x_{4\ell+1}\) and \(y_{4\ell+1}\) are at distance \(1\) and in the graph \(H_{4\ell+2}\) we have \(x_{4\ell+2}=y_{4\ell+2}\). For \(i\in[4\ell]\), we define the set \(D_{i}\) exactly as in the previous case. Further, let \(D_{4\ell+1}\) be a \(\gamma\)-set of \(H_{4\ell+1}-\{x_{4\ell+1},y_{4\ell+1}\}\cong P_{3k}\), and let \(D_{4\ell+2}\) be a \(\gamma\)-set of \(H_{4\ell+2}\) containing \(x_{4\ell+2}\). We note that \(|D_{4\ell+1}|=k\) and \(|D_{4\ell+2}|=k+1\). The set
\[D=\bigcup_{i=1}^{4\ell+2}D_{i}\]
is a dominating set of \(G\otimes_{f}H\), and so \(\gamma(G\otimes_{f}H)\leq|D|=4k\ell+2k+2\ell+1=kn+n/2\).
_Case 3._\(n\equiv 1\,(\mathrm{mod}\ 4)\)_._
Thus, \(n=4\ell+1\), and in this case, \(f(g_{4\ell+1})=1\). Thus, \(y_{4\ell+1}=(g_{4\ell+1},3)\) and \(x_{4\ell+1}=(g_{4\ell+1},1)\), and so in the graph \(H_{4\ell+1}\), the vertices \(x_{4\ell+1}\) and \(y_{4\ell+1}\) are at distance \(2\). For \(i\in[4\ell]\), we define the set \(D_{i}\) exactly as in the previous cases. Further, let \(D_{4\ell+1}\) be a \(\gamma\)-set of \(H_{4\ell+1}\) that contains the vertex \(x_{4\ell+1}\). We note that \(|D_{4\ell+1}|=k+1\). The set
\[D=\bigcup_{i=1}^{4\ell+1}D_{i}\]
is a dominating set of \(G\otimes_{f}H\), and so \(\gamma(G\otimes_{f}H)\leq|D|=4k\ell+k+2\ell+1=kn+(n+1)/2\).
_Case 4._\(n\equiv 3\,(\mathrm{mod}\ 4)\).
Thus, \(n=4\ell+3\), and in this case, \(f(g_{4\ell+1})=1\), \(f(g_{4\ell+2})=2\), and \(f(g_{4\ell+3})=3\). In particular, \(y_{4\ell+3}=(g_{4\ell+3},2)\) and \(x_{4\ell+3}=(g_{4\ell+3},1)\), and so in the graph \(H_{4\ell+3}\), the vertices \(x_{4\ell+3}\) and \(y_{4\ell+3}\) are at distance \(1\). For \(i\in[4\ell+2]\), we define the set \(D_{i}\) exactly as in Case 2. Further, let \(D_{4\ell+3}\) be a \(\gamma\)-set of \(H_{4\ell+3}\) containing the vertex \(x_{4\ell+3}\). We note that \(|D_{4\ell+3}|=k+1\). The set
\[D=\bigcup_{i=1}^{4\ell+3}D_{i}\]
is a dominating set of \(G\otimes_{f}H\), and so \(\gamma(G\otimes_{f}H)\leq|D|=4k\ell+3k+2\ell+2=kn+(n+1)/2\). The desired result of the claim now follows from the four cases above. (\({}^{\Box}\))
The proof of Theorem 3.10 follows as an immediate consequence of Claim 3 and Claim 4.
### The cycle \(C_{n}\) and cycles \(C_{3k}\)
In this section, we determine the Sierpinski domination number \(\gamma_{S}(C_{n},C_{3k})\) and the upper Sierpinski domination number \(\Gamma_{S}(C_{n},C_{3k})\).
**Theorem 3.11**.: _For \(n\geq 3\) and \(k\geq 1\),_
\[\gamma_{\mathrm{S}}(C_{n},C_{3k})=\Gamma_{\mathrm{S}}(C_{n},C_{3k})=kn.\]
Proof.: We adopt our notation from the earlier sections. Let \(G\cong C_{n}\) be the cycle \(g_{1}g_{2}\ldots g_{n}g_{1}\), and let \(H_{i}\) be the \(i\)th copy of \(C_{3k}\) corresponding to the vertex \(g_{i}\) of \(G\) for \(i\in[n]\). As before, we denote the connecting edge from \(H_{i}\) to \(H_{i+1}\) by \(x_{i}y_{i+1}\) for all \(i\in[n]\).
Let \(f\colon V(G)\to V(H)\) be an arbitrary function. Let \(D\) be a \(\gamma\)-set of \(G\otimes_{f}H\), and let \(D_{i}=D\cap V(H_{i})\) for \(i\in[n]\). We show that \(|D_{i}|=k\) for all \(i\in[n]\). If both vertices \(x_{i}\) and \(y_{i}\) are dominated by \(D_{i}\), then \(D_{i}\) is a \(\gamma\)-set of \(H_{i}\cong C_{3k}\), and so \(|D_{i}|=k\). If exactly one of \(x_{i}\) and \(y_{i}\) is dominated by \(D_{i}\), say \(x_{i}\), then by the minimality of the set \(D\), the set \(D_{i}\) is a \(\gamma\)-set of \(H_{i}-y_{i}\cong P_{3k-1}\), and so \(|D_{i}|=k\). Hence, we may assume that neither \(x_{i}\) nor \(y_{i}\) is dominated by \(D_{i}\), for otherwise, \(|D_{i}|=k\) and the desired bound follows.
With our assumption that neither \(x_{i}\) nor \(y_{i}\) is dominated by \(D_{i}\), the set \(D_{i}\) is a \(\gamma\)-set of \(H^{\prime}_{i}=H_{i}-\{x_{i},y_{i}\}\). If \(x_{i}=y_{i}\), then \(H^{\prime}_{i}=P_{3k-1}\), and by the minimality of \(D\) we have \(|D_{i}|=\gamma(P_{3k-1})=k\). Hence, we may assume that \(x_{i}\neq y_{i}\). If \(x_{i}\) and \(y_{i}\) are adjacent, then
\(H^{\prime}_{i}=P_{3k-2}\), and by the minimality of \(D\) we have \(|D_{i}|=\gamma(P_{3k-2})=k\). Suppose that \(x_{i}\) and \(y_{i}\) are not adjacent, and so \(H^{\prime}_{i}\) is the disjoint union of two paths \(P_{k_{1}}\) and \(P_{k_{2}}\), where \(k_{1}+k_{2}=3k-2\). If \(k_{1}=3j_{1}+1\) and \(k_{2}=3j_{2}\) for some integers \(j_{1}\) and \(j_{2}\) where \(j_{1}+j_{2}=k-1\), then \(|D_{i}|=\lceil k_{1}/3\rceil+\lceil k_{2}/3\rceil=(j_{1}+1)+j_{2}=k\). Analogously, if \(k_{1}=3j_{1}\) and \(k_{2}=3j_{2}+1\), then \(|D_{i}|=k\). If \(k_{1}=3j_{1}+2\) and \(k_{2}=3j_{2}+2\) for some integers \(j_{1}\) and \(j_{2}\) where \(j_{1}+j_{2}=k-2\), then \(|D_{i}|=\lceil k_{1}/3\rceil+\lceil k_{2}/3\rceil=(j_{1}+1)+(j_{2}+1)=k\). In all cases, \(|D_{i}|=k\), implying that
\[\gamma(G\otimes_{f}H)=|D|=\sum_{i=1}^{n}|D_{i}|=kn.\]
Since \(f\colon V(G)\to V(H)\) was chosen as an arbitrary function, and \(D\) as an arbitrary \(\gamma\)-set of \(G\otimes_{f}H\), we deduce that \(\gamma_{\mathrm{S}}(C_{n},C_{3k})=\Gamma_{\mathrm{S}}(C_{n},C_{3k})=\gamma(G \otimes_{f}H)=kn\).
## Concluding remarks
It seems to us that in the vast majority of cases where the lower Sierpinski domination number of the Sierpinski product of two cycles is determined only up to two possible values, the larger of the two is the correct value. However, the following example, which surprised us, demonstrates that there are also cases where the exact value is the smaller of the two possible values.
Let \(G\cong C_{18}\) with \(V(G)=[18]\) and let \(H\cong C_{7}\) with \(V(H)=[7]\) and let the function \(f:V(G)\to V(H)\) be defined as follows:
\[f(1)=f(4)=f(5)=f(18)=4,\] \[f(2)=f(3)=f(6)=f(7)=2,\] \[f(8)=f(9)=7,\] \[f(10)=f(11)=5,\] \[f(12)=f(13)=3,\] \[f(14)=f(15)=1,\] \[f(16)=f(17)=6.\]
Then Theorem 3.2 asserts that \(\gamma(G\otimes_{f}H)\in\{36,37\}\) and it is straightforward to check that the exact value is \(\gamma(G\otimes_{f}H)=36\).
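To spell out the arithmetic behind this example: here \(H\cong C_{7}=C_{3k+1}\) with \(k=2\) and \(n=18\), so the two candidate values in Theorem 3.2 are \(kn=36\) and \(kn+1=37\), and the function \(f\) above certifies that
\[\gamma_{\mathrm{S}}(C_{18},C_{7})\leq\gamma(G\otimes_{f}H)=36=kn\leq\gamma_{\mathrm{S}}(C_{18},C_{7}),\]
so for this pair the smaller of the two candidate values is attained.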
## Acknowledgements
We would like to thank the reviewer very much for careful reading of the paper and in particular for pointing to a subtle technical detail that led to the reformulation of one of the main theorems.
This research was carried out during the sabbatical visit of the first author to the University of Ljubljana, and he thanks them for their kindness in hosting him. The first author also acknowledges that his research visit was supported in part by the University of Johannesburg and the South African National Research Foundation. S.K. acknowledges the financial
support from the Slovenian Research Agency (research core funding No. P1-0297 and projects J1-2452, N1-0285).
| 2309.04504 | Compositional Learning of Visually-Grounded Concepts Using Reinforcement | Children can rapidly generalize compositionally-constructed rules to unseen test sets. On the other hand, deep reinforcement learning (RL) agents need to be trained over millions of episodes, and their ability to generalize to unseen combinations remains unclear. Hence, we investigate the compositional abilities of RL agents, using the task of navigating to specified color-shape targets in synthetic 3D environments. First, we show that when RL agents are naively trained to navigate to target color-shape combinations, they implicitly learn to decompose the combinations, allowing them to (re-)compose these and succeed at held-out test combinations ("compositional learning"). Second, when agents are pretrained to learn invariant shape and color concepts ("concept learning"), the number of episodes subsequently needed for compositional learning decreased by 20 times. Furthermore, only agents trained on both concept and compositional learning could solve a more complex, out-of-distribution environment in zero-shot fashion. Finally, we verified that only text encoders pretrained on image-text datasets (e.g. CLIP) reduced the number of training episodes needed for our agents to demonstrate compositional learning, and also generalized to 5 unseen colors in zero-shot fashion. Overall, our results are the first to demonstrate that RL agents can be trained to implicitly learn concepts and compositionality, to solve more complex environments in zero-shot fashion. | Zijun Lin, Haidi Azaman, M Ganesh Kumar, Cheston Tan | 2023-09-08T07:26:49Z | http://arxiv.org/abs/2309.04504v2 | # Compositional Learning of Visually-Grounded Concepts Using Reinforcement
###### Abstract
Deep reinforcement learning agents need to be trained over millions of episodes to decently solve navigation tasks grounded to instructions. Furthermore, their ability to generalize to novel combinations of instructions is unclear. Interestingly, however, children can decompose language-based instructions and navigate to the referred object, even if they have not seen the combination of queries before. Hence, we created three 3D environments to investigate how deep RL agents learn and compose color-shape based combinatorial instructions to solve novel combinations in a spatial navigation task1. First, we explore if agents can perform compositional learning, and whether they can leverage frozen text encoders (e.g. CLIP, BERT) to learn word combinations in fewer episodes. Next, we demonstrate that when agents are pretrained on the shape or color concepts separately, they show a 20\(\times\) decrease in training episodes needed to solve unseen combinations of instructions. Lastly, we show that agents pretrained on concept and compositional learning achieve significantly higher reward when evaluated zero-shot on novel color-shape1-shape2 visual object combinations. Overall, our results highlight the foundations needed to increase an agent's proficiency in composing word groups through reinforcement learning and its ability for zero-shot generalization to new combinations.
Footnote 1: [https://github.com/haidiazaman/RL-concept-learning-project](https://github.com/haidiazaman/RL-concept-learning-project) contains the environments and codes
## Introduction
Hierarchical reinforcement learning agents can learn several action policies and compose them together to solve complex tasks [1]. However, model-free end-to-end algorithms are inefficient learners, requiring millions of training episodes for convergence and incurring a high number of error trials in the initial stages of learning [11, 12], making them undeployable in the real world [1]. Robotics has long approached the combined vision-language-action representation space using symbolic methods, but work to integrate vision, language and action spaces using neural networks for compositional learning has only recently started [13, 14].
Additionally, children are sometimes treated as benchmarks for general intelligence [15, 16, 17]. As part of their cognitive development, children learn individual concepts [18] or schemas [19, 20] by associating visual and verbal cues, and interacting with their environment [14, 15]. Learning concepts in the vision-language-action space subsequently facilitates rapid learning on higher order compositional tasks [1, 16, 15].
Hence, we developed a 3D environment so that multimodal RL agents can learn to associate color- and shape-based visual features with language-based instructions by navigating to the correctly referenced object for a reward. The agent learns a set of combined color-shape instructions, and is evaluated on an unseen compositional test set to determine its ability to decompose novel instructions and combine previously learned features in zero-shot fashion. Besides training agents from scratch, we trained agents with frozen text encoders (CLIP, BERT) to determine if semantic knowledge learned from static datasets reduces the training episodes needed for instruction acquisition to solve the train and test combinatorial tasks. Lastly, we pretrained agents to learn color and shape concepts separately to demonstrate a 20\(\times\) reduction in the number of episodes needed to solve the unseen color-shape task, and the zero-shot ability to solve a novel color-shape1-shape2 task. We conclude by showing that the type of pretraining used to ground agents in the vision-language-action space influences the rate of combinatorial learning.
To summarize, our contributions are as follows:
* We demonstrated the capability of RL agents to decompose concepts and generalize to unseen color-shape combinations, with certain pretrained language encoders improving the learning performance.
* We grounded RL agents on shape and color word groups separately to demonstrate the rapid learning performance on the color-shape compositional learning task. Results show 100\(\times\) and 20\(\times\) improvement in train and test combinations respectively.
* We show for the first time the advantages of pretraining on concept learning tasks in order to solve increasingly complex compositional learning tasks through zero-shot navigation test experiments.
## Related Work
### Using the environment to ground visuo-language learning
Pedagogical approaches for children such as Montessori and Reggio emphasize scaffolding children's development using the environment. Specifically, the environment should be safe for free exploration and contain sufficient resources for children to engage in the various stages of play, e.g. solitary, parallel, cooperative, etc. [12], while grounding learning experiences to visual and language features.
Similarly, there has been an increase in virtual learning environments to emulate the scaffolding conditions for artificial cognitive development.
The simplest form of environment is a two-dimensional continuous arena with boundaries, where the agent has to learn state-specific navigation policies to successfully reach targets. The agent perceives its location using place cells, and the target to navigate to is given by simple instructions such as a one-hot-encoded sensory cue [13, 1]. Due to the simplified task and model requirements, it is easy to understand the learned policies, though they are hard to transfer to natural conditions.
A more complex but naturalistic environment is a three-dimensional arena where the agent perceives environmental features using RGB visual inputs, and instructions are represented by either one-hot vectors [14] or English sentences [15]. The goal is to navigate to the instructed target. Recent work, called XL, allowed the generation of millions of learning environments by changing three variables: the physical 3D space, the game rule specified using natural language, and the number of co-players [16]. Training RL agents on vastly diverse tasks led to impressive generalization behaviour on held-out tasks, requiring them to navigate, use tools, and cooperate or compete depending on the instruction to maximize total rewards.
Alternatively, agents can learn to move objects to satisfy the language-based instruction [17]. Here, the action space is much larger than in a navigation task, as the agent has to learn to manipulate objects to satisfy the corresponding language query. Hence, the visual features are simplified to keep learning tractable.
To our knowledge, there is only one environment that was developed to emulate the learning environment for children [20], though its ability to scaffold concept learning using reinforcement learning has not been tested yet.
### Models for compositional learning
Compositional learning is the ability to decompose complex information into its basic elements or concepts, and subsequently combine these concepts to solve novel, unseen combinations in zero-shot fashion or with few examples [13, 14, 15, 16, 17, 18, 19].
Compositional learning has mostly been used to improve object detection, where vision-based models learn object-attribute pairs in a training set and compose the learned invariances to generalize to an unseen test set [15, 16, 17, 18]. Alternatively, the loss function can be augmented to encourage deep networks to decompose information into generalizable features [21, 19]. More recently, models can accurately identify or parse relevant objects from images using bounding boxes [16] or object segmentation masks [11] to solve new task requirements.
Symbolically parsing sentences into individual words and part-of-speech tagging help to decompose a sentence and clarify its semantics. Deep networks such as BERT represent concepts with similar or opposite semantics through their embeddings [15, 16], which can likewise be leveraged to solve new task requirements [14].
There have been numerous works on learning atomic actions and composing them to rapidly solve new tasks [17]. Some such works are termed hierarchical reinforcement learning, where models learn various policies at different levels of control for efficient policy composition [13], such as for tool use [16] or sequential navigation to goals [15].
Multi-modal models learn to align visual inputs to language concepts [15, 17, 18, 19] to solve a multitude of compositional reasoning tasks [16] such as Visual-Question Answering (VQA) [18], Referring Expressions [13], or augmenting images using instructions [15].
Yet, there are only a handful of works that align compositional learning across the vision, language and action spaces. A notable example includes training vision-language based reinforcement learning agents on millions of varied toy environments to develop a generally capable agent that can solve tasks based on the instruction given [16].
Nevertheless, how these reinforcement learning models ground vision-language-action representations for compositional learning, what the individual concepts are, and how these concepts are recomposed to solve novel combinations remains elusive.
## Color & Shape Learning Environment
The following section describes the learning environment used to ground the vision-language reinforcement learning agent on geometric Shapes (S) and Colors (C).
### Environmental design
The 3D environments were developed to learn two key concepts: Shape (S) and Color (C). The environments contain objects made up of five distinct shapes, which are capsule, cube, cylinder, prism, and sphere, paired with five different colors: red, green, blue, yellow, and black. Hence, each object can be described using the shape and color, for example "red sphere", "blue capsule", or "yellow prism". The environment engine utilizes the available shapes and colors to
generate episode variations featuring different object combinations in a random manner to train the agent.
A target object will be randomly spawned at one of four predetermined locations within a rectangular room. The overall layout of the environment remains constant, as depicted in Figure 1. The room includes fixed visual cues namely a door, window, shelf and reference man.
A Unity-based camera models the agent's first person point of view by capturing the environmental dynamics in terms of RGB images. These serve as the visual input for the RL agent. The environment also produces the label of the target object such as "red cube" or "blue sphere". The instruction will be passed to the RL agent as language input.
**Task.** The primary objective of our RL agent is to navigate to the target object described by the language-based instruction. We devised three different environments to investigate the agent's ability to decompose instructions and learn foundational concepts. The key details of the three environments are:
1. C+S ("Color AND Shape"): The task is to compositionally learn two-word concepts. There is one target, described by its color and shape attributes, together with three non-target objects. The instruction includes both the color and shape words as input.
2. C+S+S ("Color Shape Shape"): The task is to compositionally learn three-word concepts. There is one target consisting of two shapes and three non-targets consisting of pairs of shapes. Both objects in a pair are of the same color. The instruction includes all three words (one color and two shape words) as input.
3. C/S ("Color" OR "Shape"): The task is single-word concept learning. There is one target and three non-target objects. Only one word, either a color or a shape, is given as the instruction.
Figure 1 shows examples of the C+S (top row) and C+S+S (bottom row) environments. For all three environments, one target and three non-target objects or object pairs are randomly spawned at the four predefined positions in the environment.
Moreover, to evaluate the agent's performance, we also created train and test combinatorial instructions for the three environments. The train-test splits for environments C/S, C+S and C+S+S are detailed in the Supplementary material. The train-test split allows us to examine whether agents can learn to decompose the instructions on which they were trained and combine the Color and Shape concepts to solve unseen combinations in zero-shot fashion during testing.
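To make the split concrete, the sketch below shows one way such a train-test partition of the 25 color-shape pairs (for C+S) can be constructed. The actual held-out pairs used in the paper are listed in its Supplementary material, so the particular "diagonal" choice here is only illustrative.

```python
# Illustrative sketch only: the paper's actual held-out pairs are in its
# supplementary material; here we simply hold out one combination per color
# so that every color and shape still appears during training.
from itertools import product

COLORS = ["red", "green", "blue", "yellow", "black"]
SHAPES = ["capsule", "cube", "cylinder", "prism", "sphere"]

all_pairs = [f"{c} {s}" for c, s in product(COLORS, SHAPES)]   # 25 pairs
test_pairs = [f"{c} {s}" for c, s in zip(COLORS, SHAPES)]      # 5 held-out pairs
train_pairs = [p for p in all_pairs if p not in test_pairs]    # 20 training pairs

assert len(train_pairs) == 20 and len(test_pairs) == 5
# Each color and each shape still occurs in train_pairs, so the split only
# withholds the *combination*, never an individual concept.
```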
### Evaluation Metrics
Throughout each episode, the agent's actions incur specific rewards and penalties. A successful navigation to the target object yields a reward of +10, while collisions with non-target objects or walls incur penalties of -3 and -1, respectively. Additionally, the agent receives a penalty of -10 upon reaching the maximum allowed limit of 500 steps. To ascertain whether the agent has successfully learned a task, we establish a Performance Criterion of +9: the agent is deemed to have learned the task when it achieves an average episode reward \(\geq\) 9 over 100 training episodes.
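The reward scheme and success criterion can be summarized in a few lines. The helper and event names below are our own (the actual Unity/agent interface differs), but the numerical values follow the description above.

```python
# Minimal sketch of the reward scheme and performance criterion described above.
MAX_STEPS = 500

def step_reward(event: str) -> float:
    return {
        "reached_target":  +10.0,  # navigated to the instructed object
        "hit_nontarget":    -3.0,  # collided with a distractor object
        "hit_wall":         -1.0,  # collided with a wall
        "timeout":         -10.0,  # exceeded the 500-step budget
    }.get(event, 0.0)

def reached_criterion(episode_rewards, window=100, threshold=9.0) -> bool:
    """Performance criterion: mean episode reward >= 9 over the last 100 episodes."""
    if len(episode_rewards) < window:
        return False
    recent = episode_rewards[-window:]
    return sum(recent) / window >= threshold
```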
## Grounded Concept Learning Agent
### One-Hot Agent Architecture
As illustrated in Figure 2, the One-hot agent architecture is modified from [10]; it takes in visual input (a 3\(\times\)128\(\times\)128 tensor of RGB pixel values) and language input (one-hot vector embeddings). The RGB pixel values are passed into the vision module, which contains three convolutional layers, and the output is flattened into a 3136-dimensional embedding (a 64\(\times\)7\(\times\)7 tensor).
Simultaneously, the language module takes in two one-hot vector embeddings of the instruction, representing the Color and Shape attributes respectively. For example, "red cube sphere" is represented by a one-hot color vector ([1,0,0,0,0]) and a two-hot shape vector ([0,1,0,0,1]). The two vectors are fully connected to a 128-unit linear embedding layer.
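Concretely, this instruction encoding can be sketched as follows (our own helper, following the "red cube sphere" example above):

```python
# Sketch of the One-hot agent's instruction encoding.
import torch

COLORS = ["red", "green", "blue", "yellow", "black"]
SHAPES = ["capsule", "cube", "cylinder", "prism", "sphere"]

def encode_instruction(words):
    color_vec = torch.zeros(len(COLORS))
    shape_vec = torch.zeros(len(SHAPES))
    for w in words:
        if w in COLORS:
            color_vec[COLORS.index(w)] = 1.0
        elif w in SHAPES:
            shape_vec[SHAPES.index(w)] = 1.0  # becomes two-hot when two shapes are named
    return color_vec, shape_vec

# "red cube sphere" -> color [1,0,0,0,0], shape [0,1,0,0,1]
print(encode_instruction(["red", "cube", "sphere"]))
```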
The 3136-dimensional vector from the vision module and the 128-dimensional vector from the language module are concatenated and fed into a 256-dimensional linear mixing layer.
A Long Short-Term Memory (LSTM) takes the 256-dimensional embedding as input. Its hidden state output \(s_{t}\) is linked to both the action predictor (actor) and the value estimator (critic). The action predictor maps the LSTM's hidden state tensor to a probability distribution \(\pi(a_{t}|s_{t})\) over
Figure 1: Two example environments. Left column shows the top-view of the environments. Right column shows the first-person view (128x128) of the RL agent. Top and bottom rows show the C+S environment (target instruction is “red sphere”) and the C+S+S environment (target instruction is “red sphere cylinder”) respectively. Notice how the sizes and locations of the objects differ (compare top and bottom).
four possible actions, i.e., move forward, move backward, turn left and turn right. Meanwhile, the value estimator computes a scalar approximation of the agent's state-value function \(V(s_{t})\).
The agent is trained using the advantage actor-critic (A2C) algorithm [10]. We utilize the RMSProp optimizer with a consistent learning rate of \(2.5\times 10^{-4}\) across all experiments to optimize the training process.
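A compact PyTorch sketch of this architecture is given below. The tensor sizes follow the text (3\(\times\)128\(\times\)128 input, 3136-dimensional visual embedding, 128-dimensional language embedding, 256-dimensional mixing layer, four actions, RMSProp with learning rate \(2.5\times 10^{-4}\)); the convolution kernel sizes, strides, and the LSTM hidden size are our own assumptions, since they are not specified here.

```python
# Sketch of the One-hot agent; kernel sizes/strides and LSTM width are assumed.
import torch
import torch.nn as nn

class OneHotAgent(nn.Module):
    def __init__(self, n_colors=5, n_shapes=5, n_actions=4, hidden=256):
        super().__init__()
        self.vision = nn.Sequential(                      # 3x128x128 -> 64x7x7
            nn.Conv2d(3, 32, 8, stride=4, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2), nn.ReLU(),
            nn.Flatten(),                                 # -> 3136
        )
        self.language = nn.Linear(n_colors + n_shapes, 128)
        self.mix = nn.Linear(3136 + 128, hidden)
        self.lstm = nn.LSTMCell(hidden, hidden)
        self.actor = nn.Linear(hidden, n_actions)         # policy logits
        self.critic = nn.Linear(hidden, 1)                # state-value estimate

    def forward(self, image, color_vec, shape_vec, lstm_state=None):
        v = self.vision(image)
        l = torch.relu(self.language(torch.cat([color_vec, shape_vec], dim=-1)))
        x = torch.relu(self.mix(torch.cat([v, l], dim=-1)))
        h, c = self.lstm(x, lstm_state)
        policy = torch.distributions.Categorical(logits=self.actor(h))
        return policy, self.critic(h), (h, c)

agent = OneHotAgent()
optimizer = torch.optim.RMSprop(agent.parameters(), lr=2.5e-4)  # as stated in the text
```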
### Text Encoders
To study the influence of different text encoders on learning word combinations, we introduce three variants: Vanilla, CLIP, and BERT, whose architectures are depicted in Figure 2. The output dimensions of the three text encoders are standardized to 128 for fair comparison. Instructions are represented as text strings and passed to the language modules. Importantly, the other parts of the agent are unchanged to remain consistent with the One-hot agent.
**Vanilla Text Encoder.** The text instruction first undergoes tokenization using the original CLIP tokenizer, with a maximum tokenized tensor length of 77 [1]. The tokenized tensor is then passed through a single token embedding layer, retrieving 512-dimensional word embeddings for each token. Subsequently, an average pooling operation computes the mean value of each 512-dimensional word embedding, yielding a 77-dimensional tensor. This tensor is passed to the 128-dimensional linear layer. Importantly, the parameters of the Vanilla text encoder are initialized randomly and updated during training.
**BERT Text Encoder.** The text instruction undergoes tokenization using the BERT tokenizer before being fed into the pre-trained BERT text encoder [1]. The output from the BERT text encoder is a 768-dimensional vector, which is then passed to the 128-dimensional word embedding linear layer to obtain the final language embedding. Here, the weights of the BERT text encoder are kept frozen while only the weights of the 128-dimensional embedding layer are updated during training.
**CLIP Text Encoder.** Similar to the Vanilla text encoder, the text instruction is first tokenized using the CLIP tokenizer and then passed into the pre-trained CLIP text encoder. The resulting 512-dimensional output from the CLIP text encoder is then passed to the 128-dimensional word embedding linear layer. As with BERT, the parameters of the CLIP text encoder remain fixed while only the weights of the 128-dimensional embedding layer are updated during training.
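The shared pattern behind the BERT and CLIP variants is a frozen, pretrained text encoder followed by a small trainable 128-dimensional projection (the only language weights updated during RL). The snippet below illustrates this with a Hugging Face BERT model; the CLIP variant is analogous (CLIP tokenizer and frozen CLIP text encoder with a 512-dimensional output). The specific checkpoint name and the use of the [CLS] token embedding are our assumptions.

```python
# Sketch of a frozen pretrained text encoder with a trainable 128-d projection.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class FrozenTextEncoder(nn.Module):
    def __init__(self, name="bert-base-uncased", out_dim=128):  # checkpoint assumed
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(name)
        self.encoder = AutoModel.from_pretrained(name)
        for p in self.encoder.parameters():       # freeze the pretrained weights
            p.requires_grad = False
        self.proj = nn.Linear(self.encoder.config.hidden_size, out_dim)  # trainable

    def forward(self, instructions):
        tokens = self.tokenizer(instructions, return_tensors="pt", padding=True)
        with torch.no_grad():
            hidden = self.encoder(**tokens).last_hidden_state[:, 0]  # [CLS], 768-d
        return self.proj(hidden)                  # 128-d language embedding

embed = FrozenTextEncoder()(["red sphere", "blue capsule"])  # shape (2, 128)
```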
## Results
In the first experiment, we train the four text-encoder variants of the reinforcement learning agent (Fig. 2) to navigate to the target object described by the color AND shape attributes (Fig. 1, top row). We evaluate their color-shape compositional learning capabilities based on the number of episodes needed to reach the performance criterion on the 20 training combinations and five unseen test combinations (refer to the Supplementary material).
In the second experiment, we demonstrate that if agents are pretrained to learn the color and shape concepts separately, the number of episodes needed for color-shape compositional learning is significantly reduced. We further compare compositional learning with and without concept learning in terms of zero-shot navigation in the novel color-shape1-shape2 environment.
### Experiment 1: Compositional reinforcement learning with various text encoders
Figure 2: Agent architecture. The language module of the One-hot encoder agent is demarcated with dotted lines, and the language modules of the agents with **A**) Vanilla text encoder, **B**) BERT text encoder, and **C**) CLIP text encoder are shown respectively. The BERT and CLIP text encoders are both pretrained and frozen. Red arrows or boxes represent trainable weights; black arrows or boxes represent frozen weights.

Visually grounded agents can understand single-feature instructions [13]. How navigation agents learn and compose multiple attributes is unclear. Hence, experiment 1 expands on single-attribute navigation to the combination of two attributes, Color \(+\) Shape (C\(+\)S).
In this experiment, the four agent variants are trained on 20 C\(+\)S instructions and tested on 5 unseen or held-out C\(+\)S pairs. For example, the agent is trained on the instructions and visual targets "black cube" and "red sphere". After 100 training episodes, it is tested on its ability to compose the concepts of "black" and "sphere" to accurately navigate to the visual target "black sphere" when given the held-out test combination instruction.
Table 1 shows the number of training episodes needed for agents to achieve the performance criterion on both the train and test combinations. To clarify, agents continue to train on the 20 training combinations, and we record when they achieve the performance criterion on the unseen test combinations. The intent is to understand if and when the RL agent learns to decompose the instructions given during training, learn each word group, and recompose the word groups to solve unseen color-shape test combinations. All the trainable parameters were randomly initialized, and each agent variant was trained for 2 iterations (\(N=2\)) on the C\(+\)S environment to obtain the mean number of episodes needed to achieve the performance criterion.
The agent with the One-hot text encoder requires approximately 72,000 and 97,000 episodes to achieve performance criterion for the 20 training and 5 unseen test combinations respectively. This result highlights that an agent can learn to decompose the compositional instructions and recompose the individual concepts to generalize to unseen test combinations.
The Vanilla text encoder, trained from scratch, required almost twice as many training episodes as the One-hot encoder to achieve the performance criterion on the training combinations. This is expected since the Vanilla encoder has to learn the semantic differences between the color and shape attributes from scratch. However, this text encoder used CLIP's tokenizer, which encoded the various colors and shapes into distinct tokens, making it easier for the Vanilla text encoder to disentangle the embeddings even before training (refer to the Supplementary material).
Interestingly, although the agent with the BERT text encoder was trained for a maximum of 200,000 episodes, it only achieved an average reward of approximately 8.5 on the testing combinations, failing to reach the performance criterion on the unseen test combinations. Figure 3 shows that after training for 50,000 episodes, BERT's word embeddings of the 20 train and 5 test instructions are highly overlapped, suggesting that the pretrained BERT encoder struggles to distinguish between the color and shape concepts. A potential explanation is that BERT is trained only on text data and is not grounded to visual attributes, making it difficult to disambiguate visual concepts like color and shape.
The CLIP text encoder achieved performance criterion on both the train and unseen test combinations 1.3 times faster than the One-hot text encoder agent. The expedited learning implies that CLIP's prelearned word embedding is useful to increase the training efficiency for a reinforcement learning agent. Figure 3 shows that CLIP encodes both the train and test instructions systematically where Principal Component 1 and 2 clearly explains the color and shape dimensions with minimal overlap between the 25 instructions. Such organization of color and shape concepts could be because CLIP's language comprehension is grounded to visual attributes due to its cross-modal training regime on various image-caption datasets. This result demonstrates that it is possible to use
\begin{table}
\begin{tabular}{|c|r|r|} \hline & \multicolumn{2}{c|}{**Training Episodes**} \\ \hline
**Text** & **Train** & **Test** \\
**Encoder** & **combinations** & **combinations** \\ \hline
**One-hot** & \(72.4\pm 1.6\) & \(97.1\pm 2.4\) \\ \hline
**Vanilla** & \(124.9\pm 11.2\) & \(187\pm 18.9\) \\ \hline
**BERT** & \(106.2\pm 10.1\) & \(\geq 200\) \\ \hline
**CLIP** & \(58.6\pm 5.5\) & \(74.6\pm 8.0\) \\ \hline \end{tabular}
\end{table}
Table 1: Init \(\rightarrow\) C\(+\)S: Learning to decompose instructions for compositional performance. Values in the table indicate the mean of episodes (in **thousands**) needed to achieve an average episode reward \(\geq\) 9 (performance criterion) over 100 episodes with standard deviation for two iterations in training and testing environments. Lower values indicate faster learning performance.
Figure 3: Word embeddings of the agent with BERT (top) and CLIP (bottom) text encoders after training in the C\(+\)S environment for 50,000 episodes. Filled icons represent training set examples, while unfilled icons with magenta labels represent testing set examples.
vision-language grounded models to improve the training efficiency of multi-modal reinforcement learning agents, especially for compositional learning.
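As a hedged illustration of the Figure 3 analysis (not the authors' code), the 25 instructions can be embedded with a frozen CLIP text encoder and projected onto two principal components; the exact color and shape vocabularies below are assumptions beyond the examples named in the text:

```python
# Illustrative sketch (assumed vocabularies): embed the color-shape instructions
# with a frozen CLIP text encoder and project them onto two principal components.
import itertools, torch
from sklearn.decomposition import PCA
from transformers import CLIPTokenizer, CLIPTextModel

colors  = ["black", "red", "blue", "green", "yellow"]          # assumption
shapes  = ["cube", "sphere", "prism", "capsule", "cylinder"]   # assumption
prompts = [f"{c} {s}" for c, s in itertools.product(colors, shapes)]  # 25 instructions

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
encoder   = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32").eval()

with torch.no_grad():
    tokens     = tokenizer(prompts, padding=True, return_tensors="pt")
    embeddings = encoder(**tokens).pooler_output               # (25, hidden_dim)

projected = PCA(n_components=2).fit_transform(embeddings.numpy())
for prompt, (pc1, pc2) in zip(prompts, projected):
    print(f"{prompt:20s} PC1={pc1:+.2f} PC2={pc2:+.2f}")
```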
### Experiment 2: Concept learning speeds up Compositional learning
We have demonstrated that RL agents are capable of learning the Color and Shape word groups by decomposing composed instructions and subsequently recomposing them to solve unseen combinations of word groups. This investigation is extended by showcasing that prior learning of the individual Color and Shape word concepts (Concept learning) can serve as a schema to enhance the agents' proficiency in solving compositional instructions (Compositional learning). For this set of experiments, we focus only on the One-hot text encoder agent.
Table 2 compares the number of training episodes needed to achieve performance criterion on the 20 train and five unseen test combinations in the C\(+\)S environment for \(N\)=3 iterations. Notably, two types of training regimes are compared. The first row indicates randomly initialized One-hot text encoder agents being directly trained on the C\(+\)S environment as part of compositional learning. The second row describes randomly initialized One-hot text encoder agents that are pretrained on the C\(/\)S environment for 200,000 episodes to learn the Color and Shape individually as part of concept learning. Subsequently, they are trained on the C\(+\)S environment for compositional learning.
Agents pretrained on C\(/\)S for concept learning achieve performance criterion on the train and unseen test combinations 100\(\times\) and 20\(\times\) faster than the agents trained only on compositional learning, respectively. Through concept learning, agents learn shape and color invariances in C\(/\)S, allowing the RL agents to quickly learn to encode and compose combined instructions in C\(+\)S environment. These results demonstrate the training speed-up when learning individual concepts before learning higher order tasks such as composing concepts.
To further demonstrate the benefit of concept learning, we evaluated the One-hot text encoder agents on their zero-shot navigation capabilities on two environments and instruction types, C\(+\)S and C\(+\)S+S. Table 3 summarises two sets of results.
In the first set, we evaluated randomly initialised agents that were not trained on any environment (nil) and agents trained on C\(/\)S on familiar and unseen color-shape combinations in the C\(+\)S training and testing environments, respectively. As expected, randomly initialised agents perform poorly on both familiar and unseen combinations, achieving large negative reward values. Conversely, agents trained on C\(/\)S show considerably strong zero-shot performance and even come close to the performance criterion of \(+\)9 for familiar combinations. These results further support the importance of concept learning, where agents can make reasonable inferences about the color-shape target even though they were not trained to compose instructions.
Up to now, agents have only been trained and evaluated on color-shape instruction combinations. However, in the real-world setting, instructions are highly compositional, extending beyond two-word color-shape attributes. Furthermore, real-world objects can be thought of as two or more geometric shapes composed together, such as a hammer being composed of a cylinder and a cuboid.
We evaluated agents trained either on concept learning (C\(/\)S), compositional learning (C\(+\)S), or both (C\(/\)S \(\rightarrow\) C\(+\)S) on the C\(+\)S\(+\)S environment to determine if these agents could generalize in zero-shot to a completely novel category of instructions and visual objects found in C\(+\)S\(+\)S. To solve the task in this environment, the RL agents have to compose a three-word instruction, one color and two shapes, which makes the task more complex and difficult than C\(+\)S. Since there is some overlap between the train combinations in C\(+\)S and the combinations in C\(+\)S\(+\)S, we call these overlapping objects and instructions familiar combinations, while combinations that do not overlap with the train combinations are called unseen combinations, making them truly novel composed objects and instructions. There are 40 and 10 color-shape1-shape2 familiar and unseen combinations, respectively.
Table 3 shows that the agent trained only on C\(/\)S performs slightly worse on the familiar C\(+\)S\(+\)S combinations but better on the unseen C\(+\)S\(+\)S combinations compared to the agent trained only on C\(+\)S. The best-performing agent is the one that was trained on C\(/\)S first and then C\(+\)S, achieving a
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Training** & **Train** & **Test** \\
**environment** & **combinations** & **combinations** \\ \hline
**C\(+\)S** & \(67.4\pm 7.2\) & \(94.8\pm 3.7\) \\ \hline
**C\(/\)S \(\rightarrow\) C\(+\)S** & \(0.6\pm 0.1\) & \(5.5\pm 2.9\) \\ \hline \end{tabular}
\end{table}
Table 2: Mean and standard deviation of the number of episodes (in **thousands**) required to achieve an average episode reward \(\geq\) 9 (performance criterion) over 100 episodes over three iterations. These experiments were ran in C\(+\)S training and testing environments, where agents are trained from scratch, versus after pretraining on C\(/\)S. Lower values indicate faster learning performance.
\begin{table}
\begin{tabular}{|c|c|c|} \hline & \multicolumn{2}{c|}{**Zero-shot Evaluation**} \\ \hline
**Training** & **Familiar** & **Unseen** \\
**environment** & **C\(+\)S combo** & **C\(+\)S combo** \\ \hline
**nil** & \(-29.15\pm 3.07\) & \(-36.48\pm 3.60\) \\ \hline
**C\(/\)S** & \(8.37\pm 0.64\) & \(6.74\pm 1.64\) \\ \hline \hline
**Training** & **Familiar** & **Unseen** \\
**environment** & **C\(+\)S\(+\)S combo** & **C\(+\)S\(+\)S combo** \\ \hline
**nil** & \(-24.42\pm 1.29\) & \(-23.42\pm 2.57\) \\ \hline
**C\(/\)S** & \(1.19\pm 1.24\) & \(-4.02\pm 2.44\) \\ \hline
**C\(+\)S** & \(2.84\pm 0.92\) & \(-5.10\pm 2.59\) \\ \hline
**C\(/\)S \(\rightarrow\) C\(+\)S** & \(5.49\pm 0.26\) & \(5.55\pm 0.39\) \\ \hline \end{tabular}
\end{table}
Table 3: This table summarises the different zero-shot experiments. The values are the mean and standard deviation of the cumulative rewards of the agents over 100 episodes in the training and testing environments, across three iterations. Higher reward values indicate better performance.
reward of approximately 5.5 on both the familiar and unseen combinations in zero-shot.
Figure 4 demonstrates the difference in embedding the 40 familiar and 10 unseen combinations by the LSTM layer. When an agent does not learn individual concepts and directly learns to compose (C\(+\)S), the 50 novel instructions are mapped onto just two principal components which can explain 90% of the variance in the LSTM activity. PC1 might be sufficient to separate color information but PC2 struggles to encode the two different shape dimensions. This results in a highly overlapped embedding, making it difficult for the agent to disambiguate instructions and solve the new C\(+\)S\(+\)S combinations.
Conversely, when an agent is first trained on the color and shape concepts individually and subsequently learns to compose them, the agent maps the 50 different C\(+\)S\(+\)S combinations onto a much higher-dimensional space, requiring at least six principal components to explain 90% of the variance in the LSTM.
Still, there is some organization of the 50 novel combinations in the higher-dimensional embedding space. For instance, the upright triangle symbols representing "Color prism sphere" combinations are found mostly in the bottom-right subspace, and the plus symbols representing "Color capsule cylinder" are found in the top-left subspace. With this organization, agents can leverage the clustering of similar combinations to achieve the zero-shot navigation results seen in Table 3.
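A minimal sketch of the dimensionality analysis quoted above (assuming the averaged LSTM activations are available as a matrix; not the authors' code) counts the principal components needed to reach 90% explained variance:

```python
# Illustrative sketch: given average LSTM activations for the 50 C+S+S
# instructions, count how many principal components explain 90% of the variance,
# as reported for Figure 4.
import numpy as np
from sklearn.decomposition import PCA

def components_for_variance(lstm_activity, threshold=0.90):
    """lstm_activity: (n_instructions, hidden_dim) array of averaged LSTM states."""
    pca = PCA().fit(lstm_activity)
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cumulative, threshold) + 1)

# e.g. ~2 components for the C+S-only agent vs. >=6 after C/S -> C+S training
```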
These results highlight the importance of pairing concept learning with compositional learning to maximize the zero-shot capabilities of reinforcement learning agents, even in a completely novel environment.
## Limitations and Future work
The 3D environments developed for this study utilised relatively basic geometric shapes such as "capsule" and "prism". This raises questions about generalization and zero-shot navigation on real-world tasks involving more realistic objects. Additionally, without any obstacles included in the room, the agents' navigation task is fairly simple. As such, an agent trained in our environments may struggle to generalize to other scenarios which require the ability to navigate more complex routes.
In the concept learning experiment, we only compared the performance of the One-hot agent across different training and testing environments. As such, we did not provide insights into whether pretrained and frozen visual encoders can show similar speed-ups in solving compositional learning tasks. Furthermore, the language instructions for our experiments are composed of only two or three words. We did not experiment with more complex instructions such as "find any object yellow". These limitations will be explored as future work.
## Conclusion
In this paper, we demonstrate that reinforcement learning agents are able to ground instructions to visual attributes. Specifically, we explore if agents can learn to decompose instructions during training and are able to recompose different word groups together to solve unseen object combinations. Furthermore, we demonstrate that concept learning of individual word groups accelerates compositional learning of combined word groups. Moreover, we evaluate the zero-shot capabilities of agents on complex compositional learning tasks after learning concepts individually.
This study has potential applications in various domains, including autonomous navigation and human-robot interactions that require language and visuo-spatial reasoning skills. As we continue to bridge the gap between language and vision, our work on compositional learning in multi-modal reinforcement learning agents lays the groundwork for applications that require dynamic interaction between artificial agents and their environments.
Figure 4: Embedding of average LSTM activity for 50 instructions from C\(+\)S\(+\)S after training only on C\(+\)S (Top), compared to pretraining on C\(/\)S and subsequently training on C\(+\)S (Bottom). |
2302.14465 | Video Quality Assessment with Texture Information Fusion for Streaming
Applications | The rise in video streaming applications has increased the demand for video
quality assessment (VQA). In 2016, Netflix introduced Video Multi-Method
Assessment Fusion (VMAF), a full reference VQA metric that strongly correlates
with perceptual quality, but its computation is time-intensive. We propose a
Discrete Cosine Transform (DCT)-energy-based VQA with texture information
fusion (VQ-TIF) model for video streaming applications that determines the
visual quality of the reconstructed video compared to the original video.
VQ-TIF extracts Structural Similarity (SSIM) and spatiotemporal features of the
frames from the original and reconstructed videos and fuses them using a long
short-term memory (LSTM)-based model to estimate the visual quality.
Experimental results show that VQ-TIF estimates the visual quality with a
Pearson Correlation Coefficient (PCC) of 0.96 and a Mean Absolute Error (MAE)
of 2.71, on average, compared to the ground truth VMAF scores. Additionally,
VQ-TIF estimates the visual quality at a rate of 9.14 times faster than the
state-of-the-art VMAF implementation, along with an 89.44 % reduction in energy
consumption, assuming an Ultra HD (2160p) display resolution. | Vignesh V Menon, Prajit T Rajendran, Reza Farahani, Klaus Schoeffmann, Christian Timmerer | 2023-02-28T10:14:28Z | http://arxiv.org/abs/2302.14465v2 | # Video Quality Assessment
###### Abstract
The rise of video streaming applications has increased the demand for _Video Quality Assessment_ (VQA). In 2016, Netflix introduced VMAF, a full reference VQA metric that strongly correlates with perceptual quality, but its computation is time-intensive. This paper proposes a _Discrete Cosine Transform_ (DCT)-energy-based VQA with texture information fusion (VQ-TIF) model for video streaming applications that predicts VMAF for the reconstructed video compared to the original video. VQ-TIF extracts Structural Similarity (SSIM) and spatio-temporal features of the frames from the original and reconstructed videos and fuses them using a _Long Short-Term Memory_ (LSTM)-based model to estimate VMAF. Experimental results show that VQ-TIF estimates VMAF with a _Pearson Correlation Coefficient_ (PCC) of 0.96 and a _Mean Absolute Error_ (MAE) of 2.71, on average, compared to the ground truth VMAF scores. Additionally, VQ-TIF estimates VMAF at a rate 9.14 times faster than the state-of-the-art VMAF implementation, with an 89.44% reduction in energy consumption, assuming an Ultra HD (2160p) display resolution.
Vignesh V Menon\({}^{1}\) Prajit T Rajendran\({}^{2}\) Reza Farahani\({}^{1}\) Klaus Schoeffmann\({}^{1}\) Christian Timmerer\({}^{1}\)
\({}^{1}\) Christian Doppler Laboratory ATHENA, Institute of Information Technology (ITEC), Alpen-Adria-Universität Klagenfurt, Austria
\({}^{2}\) CEA, List, F-91120 Palaiseau, Université Paris-Saclay, France

Keywords: Video quality assessment, VMAF, SSIM, texture information.
## 1 Introduction
_Motivation:_ With the ever-increasing demand for high-definition video streaming services, the need for _Video Quality Assessment_ (VQA) is growing rapidly. It plays an essential role in video processing from capturing to rendering, including compression, transmission, restoration, and display [1]. With all the available encoding options and trade-offs to consider in _HTTP Adaptive Streaming_ (HAS) [2], having a lightweight and reliable VQA method is crucial. According to the degree of information available about the reference video signal, VQA is classified into _full reference_ (FR), _reduced reference_ (RR), and _no reference_ (NR) methods. NR-VQA methods are 'blind', meaning the original video content is not used for the quality assessment, which leads to unreliable VQA [1]. On the other hand, since RR-VQA methods _(i)_ use less overhead data compared to FR-based VQA approaches and _(ii)_ are more reliable than NR-based VQA methods, they are employed in real-time scenarios [3].
_Peak Signal to Noise Ratio (PSNR)_ continues to be the predominant industry benchmark for standardizing video codecs. PSNR is an effective method for generating a numeric value that compares an original input file and a coded output file. The limitations of PSNR are _(i)_ its failure to account for the temporal nature of compression artifacts and _(ii)_ lack of correlation between PSNR improvements and subjective quality, particularly in the presence of camera noise [4, 5]. _Structural Similarity (SSIM)_ is a still image quality metric introduced in 2004 [6] which considers image degradation as perceived change in structural information while also incorporating critical perceptual phenomena, including both luminance masking and contrast masking terms. _Video Multi-Method Assessment Fusion (VMAF)_ was explicitly formulated by Netflix to correlate strongly with subjective Mean Opinion Scores (MOSs). Using machine learning techniques, a large sample of MOSs was used as ground truth to train a quality estimation model. Among the state-of-the-art VQA metrics, VMAF achieves the highest correlation with Difference Mean Opinion Score (DMOS); however, its computation time is very high compared to PSNR and SSIM metrics [7].
_Target:_ This paper targets a fast estimation of VMAF to facilitate low-latency VQA in video streaming applications. The expected computation time should be comparable with PSNR and SSIM, with the highest possible accuracy compared to the ground truth VMAF score.
_Contributions:_ The paper proposes a fast machine learning (ML)-based VQA model using texture information fusion (VQ-TIF), which can be implemented in real time to determine VMAF. The VQ-TIF model determines VMAF by _Discrete Cosine Transform_ (DCT)-energy-based texture information fusion and SSIM between the original and the reconstructed video. The model extracts video complexity features like brightness and spatio-temporal texture information from the luma channels of the videos. The extracted features are fused using a _Long Short-Term Memory_ (LSTM)-based model to determine the VMAF score. VQ-TIF-based VMAF can be determined at a rate 9.14 times faster than the state-of-the-art VMAF evaluation, with a _Pearson Correlation Coefficient_ (PCC) of 0.96 with respect to the ground truth VMAF values and an 89.44% reduction in energy consumption, assuming an Ultra HD (2160p) display.
## 2 Video Quality Metrics
_Peak Signal to Noise Ratio (PSNR)_ is a conventional quality metric for signals and is expressed in decibels. PSNR is derived from the mean square error (MSE) or its square root (RMSE). The formula used is \(PSNR=20\log_{10}\frac{Max}{RMSE}\), where the error is computed over all the pixels of the video with respect to a reference video. SSIM computes a score for each pixel using a window of neighboring pixels [9]. These scores can then be averaged to produce a global score for the entire image with respect to a reference image. The original metric [6] produces scores ranging between zero and one; however, it is commonly expressed on a non-linear decibel scale, _i.e._, \(-10\log_{10}(1-SSIM)\).
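For reference, a minimal sketch of the two conversions quoted above (assuming 8-bit content, so \(Max=255\)):

```python
# Illustrative sketch of the two conversions: PSNR from RMSE for 8-bit video,
# and the common decibel mapping of an SSIM score in [0, 1].
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, max_value: float = 255.0) -> float:
    rmse = np.sqrt(np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2))
    return float("inf") if rmse == 0 else 20.0 * np.log10(max_value / rmse)

def ssim_db(ssim_score: float) -> float:
    # SSIM expressed on the non-linear decibel scale mentioned in the text
    return -10.0 * np.log10(1.0 - ssim_score)
```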
VMAF is a full-reference, perceptual video quality metric that aims to approximate human perception of video quality. This metric is focused on quality degradation due to compression and rescaling. VMAF estimates the perceived quality score by computing scores from multiple quality assessment algorithms and fusing them by a support vector machine (SVM). Currently, three image fidelity metrics and one temporal signal have been chosen as features to the SVM: _(i)_ Anti-noise SNR (AN-SNR), _(ii)_ Detail Loss Measure (DLM), _(iii)_ Visual Information Fidelity (VIF), and _(iv)_ Mean Co-Located Pixel Difference (MCPD). An essential feature is the MCPD of a frame to the previous frame (_i.e._, the temporal component). PSNR and SSIM metrics do not consider temporal information [10]. A VMAF score is simpler to understand because it operates in a linear range of 0 to 100, whereas PSNR is logarithmic. It considers scaling and compression artifacts and has a model trained for mobile video consumption [11].
Pearson Correlation of PSNR, SSIM, and VMAF quality metrics is analyzed for five hundred video sequences of the Video Complexity Dataset (VCD) [8] encoded at Ultra High Definition (2160p) resolution with the x264 AVC encoder - using _ultrafast_ preset and CRF rate control, as shown in Table 1. CRF values ranging between 1 and 51 are used in the analysis. The correlation of the VMAF score with PSNR and SSIM scores are 0.83 and 0.88, respectively. The correlation can also be observed graphically in Fig. 1, which shows rate-distortion (RD) curves of selected video sequences based on their spatio-temporal complexity, where the distortion is measured using PSNR, SSIM, and VMAF.
## 3 VQ-TIF Model
The architecture of the proposed VQ-TIF-based VMAF estimation is illustrated in Fig. 2. Since the correlation between SSIM and VMAF is very high (_cf._ Table 1), and the computation time of SSIM is significantly lower than that of VMAF, SSIM is used as a feature to compute VMAF. The input video segment is divided into multiple chunks. The first phase of the model is the frame-wise _texture information extraction_ for each chunk (explained in Section 3.1) of the original and reconstructed video segment, together with the SSIM calculation. The second phase of the model is the _texture information fusion_, where the features and the computed SSIM are fused using an LSTM-based model to determine the VMAF score for each chunk (refer to Section 3.2). The VMAF scores obtained for the chunks are averaged to obtain the VMAF of the reconstructed video segment.
### Texture Information Extraction
An intuitive feature extraction method would be utilizing Convolutional Neural Networks (CNNs) [12]. However, such
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline & PSNR & SSIM & VMAF \\ \hline PSNR & 1.00 & 0.70 & 0.83 \\ SSIM & 0.70 & 1.00 & 0.88 \\ VMAF & 0.83 & 0.88 & 1.00 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Pearson Correlation of VQA metrics.
Figure 1: Rate Distortion (RD) curves of selected segments of different spatiotemporal complexities – _Beauty_s000_ (\(E=\)59.90, \(h\)=17.49, \(L\)=89.25), _Basketball_s000_ (\(E=\)15.30, \(h\)=12.59, \(L\)=108.18), _Characters_s000_ (\(E=\)45.42, \(h\)=36.88, \(L\)=134.56), and _Runners_s000_ (\(E=\)105.85, \(h\)=22.48, \(L\)=126.60) – selected from the VCD dataset [8]. Distortion is represented by (a) PSNR, (b) SSIM, and (c) VMAF. The segments are encoded with the x264 AVC encoder using _ultrafast_ preset and CRF ratecontrol.
models have several inherent disadvantages, such as higher training time, inference time, and storage requirements, which are impractical in streaming applications. Although CNN-based approaches could result in rich features, simpler models that yield a significant prediction performance are more suitable for video streaming. The popular state-of-the-art video complexity features are Spatial Information (SI) and Temporal Information (TI) [13]. However, the correlation of SI and TI features with encoding output features such as bitrate, encoding time, _etc._ is very low, which is insufficient for encoding parameter prediction in streaming applications [14].
In this paper, three DCT-energy-based features, the average luma texture energy \(E\), the average gradient of the luma texture energy \(h\), and the average luminescence \(L\), which are extracted using the VCA1 open-source video complexity analyzer [14], are used as the texture information measures [15, 16].
Footnote 1: [https://vca.itec.aau.at](https://vca.itec.aau.at), last access: Feb 20, 2023.
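As a rough, simplified sketch of such DCT-energy texture measures (the exact VCA definitions in [14, 15, 16] use specific block sizes and coefficient weights; the block size and weighting below are assumptions):

```python
# Illustrative sketch only: E is approximated as the mean absolute AC energy of
# block-wise 2D DCTs of the luma plane, h as the mean absolute difference of the
# block energies between consecutive frames, and L as the mean luma value.
import numpy as np
from scipy.fft import dctn

def block_dct_energy(luma: np.ndarray, block: int = 32) -> np.ndarray:
    height, width = luma.shape
    energies = []
    for y in range(0, height - block + 1, block):
        for x in range(0, width - block + 1, block):
            coeffs = dctn(luma[y:y + block, x:x + block].astype(np.float64), norm="ortho")
            coeffs[0, 0] = 0.0                       # drop the DC term
            energies.append(np.abs(coeffs).sum())
    return np.array(energies)

def texture_features(prev_luma: np.ndarray, luma: np.ndarray):
    e_blocks = block_dct_energy(luma)
    E = e_blocks.mean()                              # average luma texture energy
    h = np.abs(e_blocks - block_dct_energy(prev_luma)).mean()  # temporal gradient
    L = luma.mean()                                  # average brightness
    return E, h, L
```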
### Texture Information Fusion
The texture information fusion step of VQ-TIF is accomplished using the following steps:
_Spatial pooling_: The video segments are divided into \(T\) chunks with a fixed number of frames (_i.e._, \(f_{c}\)) in each chunk. The averages of the \(E\), \(h\), and \(L\) features of each frame in the chunk are computed to obtain the spatially pooled representation of the chunk, expressed as: \(X=\{x_{1},x_{2},..,x_{f_{c}}\}\), and \(\hat{X}=\{\hat{x}_{1},\hat{x}_{2},..,\hat{x}_{f_{c}}\}\), where, \(x_{i}\) and \(\hat{x}_{i}\) is the feature set of every \(i^{th}\) frame of the original and reconstructed video chunks, respectively.
\[x_{i}=[E_{i},h_{i},L_{i}],\hat{x}_{i}=[\hat{E}_{i},\hat{h}_{i},\hat{L}_{i}] \quad\forall i\in[1,f_{c}] \tag{1}\]
_Residual computation_: Residual features are formed by subtracting the original video texture information features from the reconstructed video features. This difference is known as the error or residual feature, which is expressed as \(r_{E_{i}}=E_{i}-\hat{E}_{i}\), \(r_{h_{i}}=h_{i}-\hat{h}_{i}\), \(r_{L_{i}}=L_{i}-\hat{L}_{i}\), respectively, where \(i\in[1,f_{c}]\). The residual features usually have low information entropy since the original and reconstructed video frames are similar.
_Fusion_: The fusion of the texture information features is established using a _Long Short-Term Memory_ (LSTM). LSTM is selected as a model for processing sequential data, making it suitable for combining information for temporally adjacent frames in a video. An advantage of LSTM models is that they can better handle long-term dependencies in long sequences. The videos are divided into several chunks, each of which consists of a fixed number of frames. Each chunk's feature averages are considered separate data points in the model training process. Thus, the input data consist of the residuals of the spatially pooled luma texture information features extracted per frame of the video chunk. Additionally frame-wise SSIM values denoted by \(S=\{s_{1},s_{2},..,s_{f_{c}}\}\) are appended to the residual features. The prediction model is a function of the residual features and the SSIM values of the frames in a chunk, as shown in Eq. 2. This approach can fuse feature information from temporally adjacent frames to estimate VMAF.
\[\tilde{x_{i}}=[r_{i}|s_{i}]^{T}\quad i\in[1,f_{c}] \tag{2}\]
where \(r_{i}=[r_{E_{i}},r_{h_{i}},r_{L_{i}}]\). Estimated VMAF per chunk \(\hat{v}\) can be presented as: \(\hat{v}=f(\tilde{x})\). The VMAF of the reconstructed video segment is the average of the \(\hat{v}\) values estimated for every chunk.
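A minimal Keras sketch of this fusion stage is given below; the layer sizes are illustrative assumptions and not the values used in the paper, while the input shape follows \(f_{c}=8\) frames per chunk and four per-frame inputs (\(r_{E}\), \(r_{h}\), \(r_{L}\), SSIM):

```python
# Illustrative sketch of the LSTM-based fusion model (hyperparameters assumed).
import tensorflow as tf

f_c, n_features = 8, 4                               # frames per chunk, inputs per frame
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(f_c, n_features)),  # temporal fusion across the chunk
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                        # per-chunk VMAF estimate v_hat
])
model.compile(optimizer="adam", loss="mae")
# Segment-level VMAF is the mean of the per-chunk predictions:
# vmaf_segment = model.predict(chunks).mean()
```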
## 4 Evaluation
### Test Methodology
In this paper, 75% of the five hundred UHD video sequences (segments) of the Video Complexity Dataset (VCD) [8] are used as the training dataset, 5% are set as the validation set and the remaining 20% as the test dataset. All experiments are run on a computer with Intel i7-11370H processor and 16GB RAM. The videos are encoded using the x264 AVC encoder with constant rate factor (CRF) values between 1 and 51 to induce different quality distortions, and the corresponding VMAF is evaluated. A chunk comprises eight frames, _i.e._, \(f_{c}=8\). Thus, a video segment is divided into 15 chunks. The original and reconstructed video segments' luma texture features are extracted with the VCA v2.0 open-source video complexity analyzer running with eight CPU threads. The original and reconstructed video feature extraction process is
Figure 2: VMAF estimation for a video chunk using VQ-TIF model envisioned in this paper.
implemented concurrently, with 4 CPU threads for each process. Keras [17] is used as the machine learning framework to implement the model.
The importance of the features input to the LSTM-based fusion model is analyzed using the univariate approach: for each feature, all the other features are kept constant and the MAE is computed; this is subtracted from the MAE of the model with all features intact, which gives a measure of the decrease in accuracy (_i.e._, increase in error) when that feature is removed from the model [18]. Subsequently, the absolute value of the decrease in accuracy is computed and normalized to obtain the importance score, where higher absolute scores indicate more critical features. Pearson's correlation coefficient (PCC) and Mean Absolute Error (MAE) scores are analyzed between the VQ-TIF-based VMAF scores and the ground truth VMAF quality scores2. Furthermore, \(\tau_{T}\) and \(J_{T}\), _i.e._, the total time taken and energy consumed to compute the quality metrics, are evaluated. The energy consumption is measured using _codecarbon_3.
Footnote 2: [https://github.com/Netflix/vmaf](https://github.com/Netflix/vmaf), last access: Feb 20, 2023
Footnote 3: [https://codecarbon.io/](https://codecarbon.io/), last access: Feb 20, 2023
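One possible reading of this univariate analysis is sketched below (not the authors' code): the feature under test is frozen at its mean value, the MAE is recomputed, and the absolute increase over the full-feature MAE is normalized into an importance score.

```python
# Illustrative sketch of an ablation-style feature importance; variable names
# and the "freeze at the mean" choice are assumptions.
import numpy as np

def feature_importance(model, X, y_true, feature_names):
    base_mae = np.mean(np.abs(model.predict(X).ravel() - y_true))
    deltas = []
    for i, _ in enumerate(feature_names):
        X_frozen = X.copy()
        X_frozen[..., i] = X[..., i].mean()          # hold feature i constant
        mae_i = np.mean(np.abs(model.predict(X_frozen).ravel() - y_true))
        deltas.append(abs(mae_i - base_mae))         # decrease in accuracy
    deltas = np.array(deltas)
    return dict(zip(feature_names, deltas / deltas.sum()))  # normalized scores

# e.g. feature_importance(model, X_val, vmaf_val, ["r_E", "r_h", "r_L", "SSIM"])
```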
### Experimental Results
_Relevance of features:_ The importance of the features is visualized in Fig. 3(a). It is observed that the SSIM feature contributes the most to the VQ-TIF-based VMAF estimation, followed by the \(r_{E}\), \(r_{h}\), and \(r_{L}\) features.
_Accuracy:_ Fig. 3(b) shows the scatterplot of the VQ-TIF-based VMAF scores against the ground truth VMAF scores. A strong correlation between the scores is observed. Moreover, the PCC and MAE of the VQ-TIF-based scores with respect to the ground truth VMAF scores are evaluated. The average PCC of the VQ-TIF-based scores to the ground truth VMAF score on the evaluation dataset is 0.96, while the MAE is 2.71. In current industry practice, the _Just Noticeable Difference_ (JND) between quality levels is 6 VMAF points4. In this light, VQ-TIF-based VMAF does not yield any significant error.
Footnote 4: [https://streaminglearningcenter.com/codecs/finding-the-just-noticeable-difference-with-netflix-vmaf.html](https://streaminglearningcenter.com/codecs/finding-the-just-noticeable-difference-with-netflix-vmaf.html), last access: Feb 20, 2023.
_Processing time and energy:_ The computation time and energy consumed for texture information extraction in VQ-TIF are observed as 0.67 s (_i.e._, 179.10 fps) and 17.18 \(\mu\)J, respectively (_cf._ Fig. 3(c) and 3(d)). The time taken and energy consumed for the SSIM computation are 0.85 s and 22.01 \(\mu\)J, respectively. The time taken and the energy consumed for the texture information fusion are 0.07 s and 1.83 \(\mu\)J, respectively. Hence, the total processing time \(\tau_{T}\) is 1.59 s (_i.e._, 75.47 fps), while the processing time of the state-of-the-art VMAF computation is 14.52 s (_i.e._, 8.26 fps). The computation speed of VQ-TIF is thus 9.14 times higher than that of the state-of-the-art VMAF evaluation. Furthermore, in terms of the total energy consumed, VQ-TIF consumes 89.44% less energy than the state-of-the-art VMAF implementation.
## 5 Conclusions
This paper proposed VQ-TIF, a fast and accurate reduced-reference video quality assessment (RR-VQA) method based on texture information fusion. VQ-TIF includes DCT-energy-based video complexity feature extraction, where features representing the luma texture and temporal activity are extracted from the original and the reconstructed video segments. The extracted texture information is fused using an LSTM-based model to determine the VQ-TIF-based VMAF. It is observed that VQ-TIF-based VMAF is determined at a speed 9.14 times faster than the state-of-the-art VMAF implementation for Ultra HD (2160p) videos, consuming 89.44% less energy. At the same time, VQ-TIF-based VMAF scores yield a PCC of 0.96 and an MAE of 2.71 compared to the ground truth VMAF.
## 6 Acknowledgment
The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association is gratefully acknowledged. Christian Doppler Laboratory ATHENA: [https://athena.itec.aau.at/](https://athena.itec.aau.at/).
2309.06313 | Semantic and Articulated Pedestrian Sensing Onboard a Moving Vehicle | It is difficult to perform 3D reconstruction from on-vehicle gathered video
due to the large forward motion of the vehicle. Even object detection and human
sensing models perform significantly worse on onboard videos when compared to
standard benchmarks because objects often appear far away from the camera
compared to the standard object detection benchmarks, image quality is often
decreased by motion blur and occlusions occur often. This has led to the
popularisation of traffic data-specific benchmarks. Recently Light Detection
And Ranging (LiDAR) sensors have become popular to directly estimate depths
without the need to perform 3D reconstructions. However, LiDAR-based methods
still lack in articulated human detection at a distance when compared to
image-based methods. We hypothesize that benchmarks targeted at articulated
human sensing from LiDAR data could bring about increased research in human
sensing and prediction in traffic and could lead to improved traffic safety for
pedestrians. | Maria Priisalu | 2023-09-12T15:24:26Z | http://arxiv.org/abs/2309.06313v1 | # Semantic and Articulated Pedestrian Sensing Onboard a Moving Vehicle
###### Abstract
It is difficult to perform 3D reconstruction from on-vehicle gathered video due to the large forward motion of the vehicle. Even object detection and human sensing models perform significantly worse on onboard videos when compared to standard benchmarks because objects often appear far away from the camera compared to the standard object detection benchmarks, image quality is often decreased by motion blur and occlusions occur often. This has led to the popularisation of traffic data-specific benchmarks. Recently Light Detection And Ranging (LiDAR) sensors have become popular to directly estimate depths without the need to perform 3D reconstructions. However, LiDAR-based methods still lack in articulated human detection at a distance when compared to image-based methods. We hypothesize that benchmarks targeted at articulated human sensing from LiDAR data could bring about increased research in human sensing and prediction in traffic and could lead to improved traffic safety for pedestrians.
Keywords: Pedestrian Detection, Autonomous Vehicles
## 1 Introduction
Autonomous vehicle (AV) research is gaining momentum [1, 2, 3, 4] in modeling vehicle-to-vehicle interactions, but pedestrian-vehicle motion planning models [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46] could be improved by articulated human motion modelling. Pedestrians, in contrast to vehicles, provide strong visual cues of their intent, as well as of their current and future motion, through their articulated pose [47, 48, 49]. Human motion is predictable up to one second with around one centimeter average per-joint error when observing articulated motion [50]. The motion information present in the pedestrian pose is unused in most AV motion planning models [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46], as well as in AV model testing. Progress in articulated pedestrian modeling is slowed down by the lack of data due to the difficulty of recovering articulated pedestrian poses in real traffic scenarios. The importance of preserving the relationship between pedestrian motion and scene semantics for pedestrian motion perception is shown in Fig. 1. The lack of data has led to the development of AV scene understanding models [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46] that are oblivious to pedestrian poses and other visual cues (such as facial expressions, etc.), thus simply omitting available motion cues. Further, AV testing is not yet utilizing realistic articulated pedestrian models and instead
tests AVs' interactions with heuristic pedestrian motions [51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77]. Since AVs are not evaluated in interactions with real humans at scale, the possible safety issues in pedestrian detection, tracking and forecasting are relatively unknown.
We argue that articulated semantically grounded pedestrian sensing and modeling is currently an underdeveloped research field due to a lack of Ground Truth (GT) data. Supervised articulated human sensing models[78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89] are often evaluated on clean benchmarks[90, 91, 92, 93, 94, 95] where humans are clearly and often fully visible, close to the camera and captured in good lighting conditions. This leads to methods that fail at a distance as well as in the presence of motion blur or poor lighting and occlusions. Unsupervised [96, 97, 98, 99, 100, 101, 102, 103, 104] and weakly supervised[105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115] training have become popular to overcome the lack of difficult and varied GT data. These models could however be improved with combined temporal and traffic-centered semantic modeling to obtain human 3D pose tracking at scale from a moving vehicle.
A ground truth dataset of articulated human motion in 3D would allow one to evaluate the discrepancy between the true and estimated scale and depth, as well as the robustness to occlusions and motion blur, in human pose detection and forecasting. In parallel to this work, an approximated dataset of articulated humans in the wild has been released[116], but the dataset still exhibits humans that are close to the camera with little camera motion when compared to images from traffic, and it lacks annotations in the presence of large occlusions. Even though [116] is a step in the right direction, it does not express the full complexity of the problem of articulated pedestrian motion estimation from onboard vehicles.
Existing monocular absolute scale depth estimators generalize poorly on previously unseen scenes[117, 118]. The same may be expected of the partially supervised and unsupervised 3D human sensing models[96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 114, 115], and this is likely to also affect the estimated limb lengths of the pedestrian. Correctly estimated limb lengths however allow for a precise estimation of the pedestrian's travelling speed. Note that a moving camera requires a robust and temporally smooth pedestrian sensor and motion model to deal with possible image blur,
Figure 1: By semantically modeling articulated pedestrians, an AV (in orange in the left figure) can foresee that pedestrian 1 will continue moving in the same direction, eventually being occluded by the tree (see right figure), that pedestrian 2 may choose to cross when standing next to a crosswalk (see right figure), and that the third pedestrian will continue to cross once visible. Modeling articulated pedestrians will also make it easier for the AV to differentiate between the second and third pedestrians as their paths cross (see figure on right), as a sudden change in direction is unlikely on a crossroad and given the pedestrians' articulated pose.
occlusions and to avoid confusion between the motion of the pedestrian and the camera. Robust and complete pedestrian motion sensing and prediction may directly reduce the number of lethal collisions with AVs.
Pedestrian trajectory forecasting is hard because pedestrians appear to move stochastically when compared to the more regular motion of cars, in particular when pedestrians are modelled by their bounding (3D) boxes [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46]. In general, pedestrian motion prediction is hard as the goal of the pedestrians and the reason for a particular speed are unknown even if articulated motion is available. But pedestrians plan their motion in the scene depending on the geometry of the semantics surrounding them; for example, pedestrians may cross the road to avoid staying on a pavement that is very shallow and is next to a densely trafficked road [48; 49]. Further, pedestrian dynamics depend on the particular pedestrian's physique [50]. A complete pedestrian forecasting model should therefore be semantically aware as well as articulated. Currently, to our knowledge, only [119] presents an articulated semantically reasoning pedestrian forecasting model. A key difficulty in training articulated and semantically reasoning pedestrian models lies in the lack of data as mentioned before, but also in the lack of varied data. Pedestrians often act monotonously in traffic [120; 121; 122] as complex behaviors occur seldom in traffic, and existing datasets often do not express the full variability in human dynamics and appearances. To be utilized on-board in real time, further research is necessary into robust real-time articulated semantically reasoning pedestrian motion models.
Autonomous vehicles typically have a number of sensors that all together generate large amounts of data (possibly up to the order of Tb per minute), so the data must be filtered for salient objects. By filtering the data we run the risk of missing something important, like a partially occluded pedestrian. Therefore, how to best represent a traffic scene for autonomous driving is still an open research topic [123; 124; 125; 126; 127]. Within motion planning, High Definition (HD) maps, containing scene details in a compact representation [128], and Bird's Eye
Figure 2: The modular reconstruction process: First the data capturing the vehicle’s trajectory is estimated from GPS coordinates or accelerometer data, this is then used to initialize camera matrices in the 3D reconstruction of the scene. The frames of the binocular video are semantically segmented, semantic segmentation is used to remove moving objects (vehicles and people), and the background objects are 3D reconstructed. The semantic segmentation (or images) is then used to find the bounding boxes (BBox) of pedestrians. Then pose estimation is performed followed by filtering to disallow physically unplausible poses. Note that from semantic segmentation 3D BBox of cars can be estimated.
View (BEV) images, that is, top-view images of the scene, are common because they allow 2D vision models to easily be utilized on traffic data[124]. Both HD and BEV are compressed scene representations that do not in general allow for sensor data augmentations. In this work we opt for a semantically labeled 3D reconstruction with articulated pedestrians because this allows for detailed modeling of pedestrians, the evaluation of physical distances between objects, and data augmentation for a number of sensors (such as camera and LiDAR). This is an uncompressed scene representation that allows for data augmentation and testing, but it is difficult to recover from onboard binocular video alone, as will be detailed. Human sensing is performed on images [129; 140; 141] because this is a more mature research field than human sensing from other sensor data such as LiDAR scans [142; 143; 144; 145; 146].
Recovering a semantic 3D model of the scene with articulated pedestrians can be done modularly as shown in Fig. 2 by estimating the recording device's motion, semantically segmenting the scene, and 3D reconstructing the scene. Adding articulated pedestrians to the 3D scene reconstruction requires detecting the pedestrians in the scene, estimating the pose of the pedestrians in 2D, estimating the pose of the pedestrians in 3D, and filtering any physically unrealistic poses. A number of estimation errors can occur along the way, making such data gathering hard. We hypothesize that articulated human sensing, tracking and prediction could be improved by combining the three tasks, as is done for vehicles in [146; 147; 148; 149; 150; 151]. After the development of the presented results pose tracking has been posed as the problem of tracking the pose of one or more pedestrians [152; 153; 154].
Note that even though human motion can be captured with a Motion Capture (MoCap) system, or recently even from selected images [116], it is not trivial to set up large-scale experiments to gather traffic datasets that contain a large variety of possible scene geometries, semantics, and GT poses. This is because MoCap data gathering requires intervening with the scene, and existing human pose sensing methods from images cannot yet capture the poses of all humans in the images [116]. Further, most MoCap methods cannot be utilized accurately outdoors with large occlusions. More research is needed in human motion capture in traffic. Markerless human pose detection results often look impressive [155], but don't often present any results for humans who are far away in the presence of motion blur, which is the case in traffic data. Human detection at a distance in the presence of motion blur is still challenging, let alone human pose detection. Other sensors can be used to remedy motion blur and aid in human detection[156] and articulated human sensing, for example [157] perform an initial step to utilize LiDAR and images to detect distant humans in real traffic data.
Figure 3: A sub-sampled sequence of frames from the Cityscapes dataset, Aachen.
## 2 Scene reconstruction
We use the Cityscapes dataset[158] that consists of binocular video sequences, with a length of 30 frames at 17 frames per second, gathered from calibrated cameras placed on the front screen of a vehicle. Sample images are shown in Fig. 3. The data gathering vehicle's position can be estimated from the provided GPS coordinates or accelerometer data. Disparity maps are provided for each frame, and a GT semantic segmentation is provided of the leftmost camera's image at the 20th frame. The images contain some blur because they are captured from behind the windscreen as the vehicle moves. Image blur, the fast camera motion in the forward direction (most 3D reconstruction methods are fragile to this) and independently moving objects make 3D reconstruction of the sequences hard. The inherent difficulties in 3D reconstructing onboard videos have led to the increased popularity of LiDAR for depth estimation.
### Initial Camera Positions
Assuming that the cameras cannot move within the rig or the car we can estimate the cameras' motion as the vehicle's motion. The vehicle's motion can be estimated from the Global Positioning System (GPS) or the accelerometer data. The GPS
Figure 4: Visualizations of 3D pointclouds from using the vehicle’s GPS coordinates or accelerometer readings to estimate camera position with per-frame disparity maps. The GPS is noisier than the accelerometer resulting in a noisier pointcloud.
coordinates contain jumps as seen in Fig. 4 where the cameras' estimated position and each frame's disparity map are used to create pointclouds for each frame that are then aggregated. It can be seen that in the GPS-based vehicle trajectory, the vehicle's rotation oscillated from frame to frame causing the 3d point clouds of different frames to diverge, while the accelerometer data results in a smoother pointcloud. This suggests that the accelerometer-based vehicle trajectory is a better initialization for the camera matrices in a 3D reconstruction system.
### 3D Reconstruction
Multiple 3D reconstruction methods were tested, but only COLMAP[159] converged on a large number of the available sequences. It should be noted that all libraries were tested on the same three sequences, all containing some moving pedestrians and vehicles and strong forward motion, as this is typical for the Cityscapes dataset. The following libraries were tested with the following results:

* _Open Structure from Motion Library_ (OpenSFM)[160] - A Structure from Motion (SFM) system, that is, an incremental 3D reconstruction system. Fails to reconstruct the Cityscapes scenes, likely because the change in camera rotation is too small between frames.
* _Bundler_[161] - Also an SFM system. Finds <10 matches and fails, again likely because the images are blurry and the rotational difference between the initial camera views is too small.
* _OpenCV Structure from Motion library_[162] - An SFM library that uses DAISY features[163]. The result on 30 frames contains relatively few points without a clear structure. See Fig. 5.
* _VisualSFM_[164] - A parallelized SFM pipeline with Bundler. Only a thresholded number of large-scale features are matched across images. This unfortunately fails, possibly because of image blur or the lack of distinct large-scale structures in the images. The method is unable to find enough SIFT feature points, likely because the images are blurry, and finds no verified matches between two stereo images. Finally, VisualSFM cannot handle forward motion, not finding a good initial pair of images with enough matches.
* _ORBSLAM_[165] - An ORB-feature[166] (a fast feature descriptor combining gradient and binary features) based Simultaneous Localization And Mapping (SLAM) system. Finds too few keypoints, likely due to blur and the depth threshold, resulting in a too sparse reconstruction.
* _COLMAP_[167, 168] - An incremental SFM and Multi-view stereo (MVS) system. Extracts SIFT[169] features that are exhaustively matched (other matching methods are also available) across all images. Converges for 150 scenes on the training and validation set and 150 scenes on the test set. See Fig. 6.

For further details on the different systems see Table 1 and the Appendix.
A number of the reconstruction methods fail to find reliable matches across images, likely because of the motion blur and poor quality of the images as the cameras are mounted behind the windscreen of the vehicle. Secondly, the majority of visual 3D reconstruction methods fail at reconstructing in the presence of large forward motion of the camera in particular at fast speeds (i.e. the speed of a car) in the presence of a large number of objects at a large distance to the camera. COLMAP [167, 168] differs mostly from the other methods by the fact that it is modeled for camera views with at times large overlaps; by its outlier
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline \multirow{2}{*}{Method} & Image & \multirow{2}{*}{Matching} & \multirow{2}{*}{First view} & Method for & \multirow{2}{*}{Bundle Adjustment} \\ & features & & algorithm & selection & \\ \hline OpenSFM & HAFP+HOG & Exhaustive & First frames & largest overlap & \\ & Fast approx. & NN & \textgreater{}30\% outliers & with pointcloud & \\ Bundler & SIFT & Exhaustive & large difference & largest overlap & \\ & approx NN & in rotation & with pointcloud & SPA \\ OpenCv & DAISY & Exhaustive NN & & & inexact \\ VisualSFM & SIFT GPU & Preemptive & thresholded no. & largest overlap & \\ & matching & of large features with pointcloud & BA & \\ ORBSLAM ORB & Stereo matching & first frames & next frame & Levenberg \\ & closer than 40b & first frames & next frame & Marquardt \\ COLMAP & SIFT & Exhaustive NN & Algorithm of & high inlier ratio & PCG \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview of the algorithms used by the different SFM and SLAM libraries. See the Appendix for more details.
Figure 5: 3D pointcloud reconstructed by the OpenCV library. There are too few 3D points in the pointcloud to detect what has been reconstructed.
Figure 6: COLMAP’s sparse 3D reconstruction of the scene depicted in Fig. 7 and Fig. 4. The red rectangular pyramids depict the different camera positions, showing correctly that the vehicle traveled on a curved road.
robust triangulation, probabilistic new view selection and iteratively applied final Bundle Adjustment (BA) alternated with filtering and triangulation. It should, however, be noted that COLMAP is not applicable in real time (recently a real-time adaptation[159] has become available), and it could not reconstruct all of the Cityscapes scenes (3475 in the training set and 1525 in the test set). The majority of 3D reconstruction methods are not well-fitted to reconstruct images captured from a moving vehicle. This has led to the increased popularity of LiDAR sensors, as they can directly measure the distance to objects, which is particularly useful in the presence of moving objects (pedestrians, cars, bikers, etc.) when only a few images may be available of the object at a particular location.
### Filtering out Non-stationary Objects in the 3D Reconstructions
Moving objects such as cars and pedestrians need to be removed for SFM; to this end, the Gated Recurrent Flow Propagation (GRFP) net[170], a video segmentation network that utilizes optical flow to stabilize semantic segmentation in video data, is used to segment the frames of the Cityscapes sequences.
COLMAP is adapted by adding semantic segmentation as an additional channel (in addition to the 3 RGB channels) describing points during SFM. The camera matrices are initialized based on the accelerometer data. A subsampled sequence of frames from the left camera can be seen in Fig. 3, and the semantic segmentation of the last frame in the left and the right images can be seen in Fig. 7. The resulting sparse reconstruction can be seen in Fig. 6 and the dense reconstruction in Fig. 8. The semantic segmentation of the reconstructed 2D points is then transferred to the 3D pointcloud as detailed in the supplementary material of [171]. Note that moving objects are filtered out only during sparse reconstruction; the dense reconstruction is instead filtered for objects with dynamic object labels during voxelisation.
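A simplified sketch of this 2D-to-3D label transfer (the actual procedure is described in the supplementary material of [171]; the majority vote below is an assumption):

```python
# Illustrative sketch: each reconstructed 3D point is projected into every frame,
# the per-frame semantic labels are collected, and the most frequent label wins.
import numpy as np

def label_pointcloud(points_world, cameras, label_maps, n_classes):
    votes = np.zeros((len(points_world), n_classes), dtype=np.int64)
    for (K, R, t), labels in zip(cameras, label_maps):   # one (pose, label map) per frame
        cam = (points_world - t) @ R                     # world -> camera frame
        in_front = cam[:, 2] > 0
        uv = cam[in_front] @ K.T
        u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
        v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
        h, w = labels.shape
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)  # projections inside the image
        idx = np.nonzero(in_front)[0][valid]
        votes[idx, labels[v[valid], u[valid]]] += 1
    return votes.argmax(axis=1)                          # majority label per 3D point
```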
A number of reconstructions are shown in Fig. 9. Some reconstructions, such as Tübingen, Ulm and Weimar, correctly recover the structure of the road, as also seen in Fig. 10. In general, the reconstruction deteriorates further away from the camera. This can be seen in the reconstruction of Tübingen in Fig. 9, where some of the road (in purple) is misaligned with the rest of the reconstruction and is tilted downwards. This is expected as objects further away from the camera are harder to recognize and estimate the distance to. The reconstructions elongate objects, as can be seen in the reconstructions of Tübingen, Ulm and Weimar in Fig. 9. COLMAP is, however, not always successful: when the change in viewpoint between frames is small, the resulting reconstruction ends up being flat, as in Bremen in Fig. 9, or almost flat, as seen in the top view of Darmstadt in Fig. 9.
To directly label a pointcloud, experiments were conducted with the popular pointcloud segmentation network _Pointnet++_[172]. Cityscapes has no GT segmented pointclouds, so a model that was finetuned on CARLA[173]-generated pointclouds was tested, but this resulted in confused labels. Finetuning Pointnet++[172] on the synthetic CARLA dataset (from [171]) resulted in a low
Figure 8: COLMAP’s dense 3D reconstruction of the scene depicted in Fig. 7 and Fig. 4. The dense reconstruction is noisy but the scene is recognizeable.
Figure 7: Points belonging to moving objects cannot be used in the SFM and must be filtered out. In the top row, the semantic segmentation of the left and the right camera images is shown for one frame, and in the bottom row, the points used in the sparse reconstruction of COLMAP are shown. In red, points that are included in the SFM are shown. In blue, points that are omitted in the SFM (as they belong to the semantic classes of pedestrians and vehicles) are shown.
Figure 9: Dense reconstructions. _First row:_ The Bremen sequence’s first frame (to the left) and a flat reconstruction (to the right) labeled with semantic segmentation labels (top) and RGB (bottom)._Second row:_ The Darmstadt’s reconstruction appears fine from the front view (middle) but is flat and curved when viewed from the top (left)._Third row:_\(\mathrm{Tübingen}\) results in a correct reconstruction of the street close to the camera (middle), but an incorrect estimation of the street topology due to uphill view (right)._Fourth row:_\(\mathrm{Ulm}\) is reconstructed correctly with a patch of grass separating the road and the sidewalk as seen in front (middle) and top view(to the right)._Fifth row:_ Correctly reconstructed street shape as seen in front (middle) and top view (to the right).
mean average class accuracy of 0.62, with per-class results shown in Table 2. The classes that occur seldom get low accuracy, so objects such as traffic signs almost always get incorrectly labelled. It is also worth noting that, strangely enough, points belonging to walls are correctly marked in only half of the occurrences. In general, the results suggest that the Pointnet++ results are not on par with labelling 3D reconstructions according to projections of 2D semantic maps. It is possible that more recent methods[174, 175, 176, 177, 178, 179, 180, 181] could improve the results.
## 3 Pedestrian sensing
Detecting humans is hard because they are relatively small in traffic images, they vary in physique and visual qualities depending on the human's pose and clothing, and they change their positions from frame to frame. The fact that most popular
| **Class** | vegetation | building | road | sidewalk | fence | static | wall | pole | sign |
|---|---|---|---|---|---|---|---|---|---|
| **Frequency** % | 37.62 | 18.50 | 17.87 | 17.55 | 3.00 | 2.81 | 1.71 | 0.83 | 0.09 |
| **Accuracy** | 0.93 | 0.86 | 0.88 | 0.67 | 0.84 | 0.51 | **0.50** | 0.88 | 0.71 |

Table 2: Frequency of the semantic classes on the CARLA dataset and accuracy of _Pointnet++_ for the different semantic classes. The semantic classes are in order of decreasing frequency. Objects of class wall obtain the lowest accuracy.
Figure 10: Additional dense reconstructions from Bremen showing that noise levels vary but the street shape is often successfully reconstructed.
object detectors are biased to detect close-up objects centered in an image makes them ill-fitted to traffic data, because in traffic humans often appear at a distance from the camera. We compared the ability of object detection, segmentation, and human pose estimation networks to detect pedestrians on the Cityscapes dataset[158] by comparing the detected pedestrians' BBox overlap with BBoxes generated from GT segmentations. The tested methods are
* _DilationalNet-10_[182] - A popular semantic segmentation network with dilated convolutions for a larger receptive field.
* _GRFP_ - A temporally smoothed video segmentation network showing temporally smooth results on the Cityscapes dataset[158].
* _FRCNN_ - A popular object detection network with high throughput and good performance on benchmarks.
* _OpenPose_ - A popular multi-human 2D pose estimation network whose runtime scales well with an increasing number of visible humans.
In Table 3 it can be seen that _FRCNN_ produced the smallest false positive (FP) and false negative (FN) average BBox area, but has the second highest true positive (TP) Intersection over Union (IoU). Because the areas of the BBoxes vary, we present both the FP, FN, and TP counts and areas (normalized with
Figure 11: Segmentation, BBoxes and 2D joint position estimates of _OpenPose_ with _Dilational_, _GRFP_ and _FRCNN_. _Dilational_ net and _GRFP_ manage to separate different pedestrians who are visually close by but also introduce false positives. _GRFP_ produces cleaner BBoxes than _Dilational_.
respect to the total GT BBox areas), to observe how many individuals are detected versus how much of the visual area is covered by the pedestrians. _FRCNN_ is accurate in detecting large BBoxes, and it detects on average larger BBoxes than _GRFP_, as seen in Fig. 12 _Left_. _GRFP_ on the other hand is better at capturing distant pedestrians but also produces a large number of FPs. Based on this, _FRCNN_ is the most suitable pedestrian detector, as it is the most accurate in detecting pedestrians close to the vehicle; these pedestrians have the highest risk of being run over if undetected.
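For reference, the matching of detected BBoxes to the GT BBoxes derived from the segmentation masks can be sketched as below. The greedy matching strategy and the 0.5 IoU threshold are assumptions made for illustration and not necessarily the exact protocol used here.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def count_tp_fp_fn(detections, gt_boxes, iou_thr=0.5):
    """Greedily match pedestrian detections to GT BBoxes and count
    true positives, false positives and false negatives."""
    unmatched_gt = list(range(len(gt_boxes)))
    tp = fp = 0
    for det in detections:
        ious = [iou(det, gt_boxes[g]) for g in unmatched_gt]
        if ious and max(ious) >= iou_thr:
            unmatched_gt.pop(int(np.argmax(ious)))
            tp += 1
        else:
            fp += 1
    return tp, fp, len(unmatched_gt)  # the remaining GT boxes are false negatives
```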
_OpenPose_ is used to estimate the articulated human 2D pose on the BBoxes found by _Dilational_, _GRFP_ and _FRCNN_ and on the whole image. We introduce the Mean Per Joint Distance to Segmentation (MPJDS) metric, which is the average distance from an estimated 2D joint position to a pedestrian or biker segmentation mask. The MPJDS is an approximate measure of how accurately
| Model | True positives | Average TP IoU | False positives | Average FP area | False negatives | Average FN area |
|---|---|---|---|---|---|---|
| **Dilational** | 2887 | 0.699 | 7,516 | 0.008 | 16,664 | 0.016 |
| **GRFP** | **3588** | **0.707** | 14,317 | 0.01 | **15,980** | 0.015 |
| **FRCNN** | 2952 | 0.706 | 578 | **0.001** | 16,522 | **0.009** |
| **OpenPose** | 165 | 0.682 | **343** | 0.003 | 18,997 | 0.017 |

Table 3: The number of true positive, false negative and false positive BBoxes on the training and validation sets of Cityscapes for the 20th frame. The two strongest contenders for pedestrian detection are _GRFP_ and _FRCNN_. _GRFP_ produces the largest number and area of true positives, and _FRCNN_ produces the smallest number and area of false positives and negatives.
Figure 12: _Left:_ The average relative BBox areas of the different pedestrian detection methods. FRCNN detects on average the largest BBoxes and OpenPose the smallest. _Right:_ The average distance from estimated joint positions to human mask (from GT segmentation). GRFP’s human BBoxes result in the lowest distance from estimated joint positions to human mask and OpenPose in the largest.
_OpenPose_ can estimate the pose of a pedestrian present in the BBoxes found by the different models; results are shown in Fig. 12 _Right_. _GRFP_ results in the smallest error, likely because _GRFP_ detects smaller BBoxes than _FRCNN_, resulting in smaller absolute errors. _OpenPose_ applied on the whole image detects pedestrians that appear to be far away from the camera, but fails to estimate their pose, resulting in large joint errors for small BBoxes. Even though _OpenPose_ has presented impressive results, it fails to detect multiple pedestrians in traffic scenarios without a separate pedestrian detector.
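The MPJDS itself is straightforward to evaluate; a minimal sketch is given below, assuming the mask is a boolean array so that a distance transform gives, for every pixel, the distance to the nearest mask pixel (joints falling inside the mask then contribute zero).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mpjds(joints_uv, person_mask):
    """Mean Per Joint Distance to Segmentation (MPJDS).

    joints_uv:   (J, 2) array of estimated 2D joint positions (u, v) in pixels
    person_mask: (H, W) boolean array, True on pedestrian/biker pixels
    """
    # distance from every pixel to the nearest mask pixel (zero inside the mask)
    dist_to_mask = distance_transform_edt(~person_mask)
    H, W = person_mask.shape
    dists = []
    for u, v in joints_uv:
        r = int(np.clip(round(v), 0, H - 1))
        c = int(np.clip(round(u), 0, W - 1))
        dists.append(dist_to_mask[r, c])
    return float(np.mean(dists))
```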
To study the accuracy of _OpenPose_ on BBoxes that truly contain a pedestrian, we keep only the BBoxes that have at least 50% cross-over with the GT BBoxes; results are shown in Table 4. By _cross-over_ we mean the percentage of the estimated BBox that is intersected by the GT BBox. If an estimated BBox intersects several GT BBoxes, then only the highest cross-over is recorded. The MPJDS of _OpenPose_ applied on the GT BBoxes in Table 4 is much larger than that of the other two methods, because the GT contains pedestrians who are hard to spot in the images (distant or occluded, as suggested by the BBox sizes in Fig. 14). These pedestrians go unnoticed by _Dilational_ and _FRCNN_. Even though _FRCNN_ has a lower cross-over percentage than _Dilational_, it obtains the lowest MPJDS
| **Model** | **MPJDS** | **MPJDS norm.** | **Number of pedestrians** | **Number of bikers** | **Crossover pedestrians** | **Crossover bikers** |
|---|---|---|---|---|---|---|
| **GT** | 32.50 | 0.99 | **1,803** | **17,415** | **1.0** | **1.0** |
| **Dilational** | 27.17 | 0.87 | 850 | 4,572 | 0.94 | 0.91 |
| **FRCNN** | **10.74** | **0.38** | 392 | 2,888 | 0.86 | 0.85 |

Table 4: The FRCNN detects fewer pedestrians and bikers than Dilational but results in a lower MPJDS, suggesting that FRCNN detects pedestrians that are clearer.
Figure 13: _Left:_ _FRCNN_ detects only large BBoxes. _Dilational_ can detect smaller BBoxes, but many GT BBoxes are undetected by both methods. _Right:_ The BBoxes found by _FRCNN_ are in general representative of the GT BBox sizes, while _Dilational_ underestimates BBox sizes.
suggesting that _FRCNN_ detects the most clearly visible pedestrians. In Table 4 it can be seen that _FRCNN_ detects fewer pedestrians and bikers than _Dilational_, but results in a much lower MPJDS. The MPJDS of _FRCNN_ is low even though the cross-over is lower than for the other models. This is likely because _FRCNN_ finds pedestrians that are closer to the camera and thus clearer, omitting smaller pedestrians that are captured by _Dilational_, as seen in Fig. 13 _Left_.
_FRCNN_ correctly estimates the GT BBox sizes, as seen in Fig. 13 _Right_, but _Dilational_ underestimates BBox sizes, showing the pedestrians only partially, and therefore has a higher MPJDS than _FRCNN_ even for large GT BBoxes, as seen in Fig. 14. _Dilational_ net can detect smaller pedestrians because, unlike _FRCNN_, it has been trained on the Cityscapes dataset. It is possible that _FRCNN_ cannot detect small pedestrians because it has been trained with larger anchor sizes than the visible pedestrians. An example showing close-up occluded pedestrians, comparing _GRFP_, _Dilational_ net, _FRCNN_ and GT, can be seen in Fig. 11.
The _Dilational_ net has trouble differentiating between the labels: "pedestrian", "biker", and "bike" as seen in Fig. 11. Therefore _Dilational_ net BBoxes are fitted
| **Model** | **GT** | **GT Filtered** | **Dilational** | **GRFP** |
|---|---|---|---|---|
| **True Positives** | **15,934** | 8,986 | 3,316 | 3,643 |
| **False Positives** | 0 | 0 | **235** | 1,038 |

Table 5: By introducing smallest-size constraints on the BBoxes, the number of false positives can be reduced significantly.
Figure 14: _Left:_ The MPJDS is plotted against the BBox area for the BBoxes found by the different methods. _FRCNN_ finds larger BBoxes and results in lower MPJDS for these larger BBoxes than for the BBoxes found by _Dilational_ net, suggesting that FRCNN finds easier to detect pedestrians. _Right:_ Histogram of MPJDS distribution of the _FRCNN_ detections shows that most errors are small. There appear to be no outliers with large MPJDS error.
with skeletons after allowing biker and pedestrian labels to be interpreted as the same label. Also, bike labels are allowed to be interpreted as human if they are in connection to rider or pedestrian labels. In Fig. 11 it can also be seen that only a small change in the placement of the BBox around a pedestrian results in variations in the estimate of the pose, showing that _OpenPose_ is not robust to errors in pedestrian BBox placement.
Further, on crowded images _FRCNN_ has superior performance because it can separate between pedestrians, as seen in Fig. 22, and _GRFP_ is superior in distant pedestrian detection, as seen in Fig. 23. In the crowded scene the pose estimator gets confused by the BBoxes because for some pedestrians only a single body part is visible, and it is hard for the pose estimator to recognize that such a body part belongs to a human rather than being just a blurry image of one. Videos of sample pose estimations can be found at [https://youtu.be/qpxpdtHbbGA](https://youtu.be/qpxpdtHbbGA), where it can be seen that the pose estimations are not temporally smooth for any of the proposed methods. To avoid false detections of the _Dilational_ net and _GRFP_ we remove any BBoxes that are smaller than 7 pixels in width and 25 pixels in height. This results in a decrease in the number of false positives for _GRFP_ and _Dilational_, as seen in Table 5.
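The size filter itself is a one-liner; a minimal sketch with the width and height limits quoted above (BBoxes given as (x1, y1, x2, y2)):

```python
MIN_W, MIN_H = 7, 25  # minimum BBox width and height in pixels

def filter_small_boxes(boxes):
    """Drop BBoxes narrower than MIN_W or shorter than MIN_H to suppress false positives."""
    return [b for b in boxes if (b[2] - b[0]) >= MIN_W and (b[3] - b[1]) >= MIN_H]
```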
### Reconstructing Pedestrians
Triangulation can be used to reconstruct the human 3D poses from the detected 2D poses using the dataset's disparity maps. This however results in a noisy
Figure 15: A scene’s semantic segmentation, disparity map, triangulation of the frame from the disparity map and the triangulated human pose.
pose estimate, as seen in Fig. 15. Since stereo triangulation results in a noisy 3D reconstruction, the triangulated 2D joint positions can receive incorrect depth, resulting in implausible 3D poses. Often a body joint receives the depth of the background, resulting in an elongated limb, as seen in Fig. 16. To correct such errors we apply a threshold to limb lengths, proportioned according to the hip length or backbone length of the pedestrian. This is not robust because the hip and backbone lengths are estimated according to a standard skeleton from Human3.6M[90]. Alternatively, the limb lengths can be estimated from an average skeleton scaled to the height of the person. The height of the person can be roughly approximated from the bounding box height, with the downside that BBox height is pose-dependent.
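A minimal sketch of the joint back-projection and the limb-length check is given below. The focal length, baseline, principal point and the per-limb length ratios are placeholders, and the actual correction also relies on the Human3.6M standard skeleton as described above.

```python
import numpy as np

def backproject_joint(u, v, disparity, f, B, cx, cy):
    """Back-project a 2D joint (u, v) to 3D camera coordinates from stereo disparity.
    f: focal length [px], B: stereo baseline [m], (cx, cy): principal point [px]."""
    Z = f * B / max(disparity, 1e-6)   # small disparities give large, noisy depths
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

def clamp_limb_lengths(joints3d, bones, max_ratio, hip_length):
    """Shrink bones that are implausibly long relative to the pedestrian's hip length.
    bones: list of (parent, child) joint indices; max_ratio[(p, c)]: allowed length / hip_length."""
    joints3d = joints3d.copy()
    for p, c in bones:
        limb = joints3d[c] - joints3d[p]
        length = np.linalg.norm(limb)
        allowed = max_ratio[(p, c)] * hip_length
        if length > allowed:           # e.g. a head joint that received background depth
            joints3d[c] = joints3d[p] + limb / length * allowed
    return joints3d
```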
The corrected skeleton may still correspond to a physically implausible pose. To correct this, the nearest-neighbour plausible pose is found from Human3.6M[90]. To find an outlier-robust estimate of the nearest neighbour, a thresholded loss is applied. Procrustes analysis is used to find the optimal alignment between the skeletons. The final corrected poses, with scaling according to the hip or backbone, are shown in Fig. 16.
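This correction step can be sketched as follows, assuming the Human3.6M pose library is given as an array of candidate skeletons; the per-joint threshold value is a placeholder.

```python
import numpy as np

def procrustes_align(source, target):
    """Similarity transform (rotation, uniform scale, translation) that best aligns
    source (J, 3) onto target (J, 3) in the least-squares sense (Umeyama/Kabsch)."""
    mu_s, mu_t = source.mean(0), target.mean(0)
    S, T = source - mu_s, target - mu_t
    U, sig, Vt = np.linalg.svd(T.T @ S)        # cross-covariance between target and source
    d = np.ones(3)
    if np.linalg.det(U @ Vt) < 0:              # avoid reflections
        d[-1] = -1.0
    R = (U * d) @ Vt
    scale = (sig * d).sum() / (S ** 2).sum()
    return lambda X: scale * (X - mu_s) @ R.T + mu_t

def nearest_plausible_pose(noisy_pose, pose_library, thr=0.2):
    """Return the library pose that, after Procrustes alignment, is closest to the
    noisy triangulated pose under a thresholded (outlier-robust) per-joint loss."""
    best, best_cost = None, np.inf
    for candidate in pose_library:             # candidate: (J, 3) plausible skeleton
        align = procrustes_align(candidate, noisy_pose)
        per_joint = np.linalg.norm(align(candidate) - noisy_pose, axis=1)
        cost = np.minimum(per_joint, thr).sum()
        if cost < best_cost:
            best, best_cost = align(candidate), cost
    return best
```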
It is clear that the scaling and rotation of the resulting 3D pose are imperfect. When triangulating the pose for each frame jitter can be expected between frames due to noise. Therefore a monocular single-person 3D pose estimator _Deep Multitask Architecture for Fully Automatic 2D and 3D Human Sensing_ (DMHS)[184] is tested as well.
_DMHS_ is applied to the GT and _FRCNN_ BBoxes, see Fig. 17 _left_ and _right_ respectively. At times _FRCNN_ provides too small a BBox; by enlarging the boundary (by 10%) the results improve, see Fig. 17 _right_. The pose detector fails when multiple people are present in the BBox, or when the pedestrians are poorly visible.
Figure 16: _To the left:_ Incorrectly triangulated head position; the full image is in Fig. 15. All axes are in meters. Procrustes-corrected skeletons with (a) scaling according to backbone length and (b) scaling according to hipbone length. Neither of the scalings gives the desired result.
Even though _FRCNN_ detects close-up pedestrians, very few BBoxes are clearly visible and thus few obtain accurate pose estimations.
### Reconstructing Vehicles
_FRCNN_ has trouble separating multiple instances of cars when cars are parked in a row, as seen in Fig. 18, due to the large visual overlap. During triangulation, for simplicity, we model cars found by the detection model by fixed-size 3D BBoxes. As a result, during 3D triangulation multiple vehicles that are visible in one 2D BBox get replaced by one 3D BBox with an average disparity for all of the cars visible in the 2D BBox. This results in an incorrect 3D reconstruction of the scene. To improve on this, the Path Aggregation Network for Instance Segmentation (PANNET)[185], an _FRCNN_-architecture-based instance segmentation network, is utilized instead. A sample segmentation showing the correct instance segmentation of _PANNET_ is shown in Fig. 19.
Figure 17: To the _left_, _DMHS_ estimates the pose of a pedestrian fairly accurately when the pedestrian is clearly visible. The 3D pose (scale in cm), 2D pose and body part segmentation are shown respectively. To the _right_, BBox enlargement improves the pose estimation when some limbs are not visible.
Figure 18: To the _left_ the Cityscapes disparity map and to the _right_ the COLMAP depth estimate of an _FRCNN_ BBox of a car; both clearly contain multiple cars.
## 4 Developments in the field
The presented results were developed from 2016 to 2018. In parallel to our work, [186, 187] noted that detecting objects at a distance is hard, and in particular human detection at a distance or in the presence of occlusions has gained popularity[188, 189, 190, 191, 192, 193, 194, 195]. Further, it has been noted by [196] that a number of human detection models do not generalize across datasets. Optimal alignment of BBoxes has also been studied in [197]. The run-time versus accuracy trade-off of object detection methods intended to be utilized on AVs is studied in [198].
More compact representations of scenes are often utilized in AV planning containing either rasterized graphs with local context[199] or BEV representations[124]. This is suitable as planning must occur fast, but we still believe that articulated human motion ought to be included in the representation. The advantage of utilizing 3D pointclouds and images is that the 3D reconstructed scenes can easily be utilized to train AVs on augmented data[119, 171, 200].
Human and object detection in traffic from alternative sensors such as radar[201], LiDAR[143, 202, 203, 204, 205, 206] and event cameras[207] has been studied as a way to boost object and human detection performance. Methods to improve low-quality image data by reducing motion blur[208], increasing image quality in low-light or low-resolution images[209], performing detection on RAW images[210], or object in-painting to recover from occlusions[211] could possibly greatly improve human sensing in real traffic data.
Further, LiDAR sensors have gained popularity to avoid the difficult task of estimating the depths of moving objects. Unfortunately, sensing of articulated humans in LiDAR[212] has not yet caught up with the methods developed to sense humans in videos. There exist methods that combine LiDAR and RGB fusion[213, 214, 215, 216] for pedestrian detection and trajectory forecasting; the same could be done for human pose forecasting. Human pose estimation has developed greatly, with more models that fit meshes to human bodies to densely estimate human pose and semantic mask[217, 218, 219, 220, 221], methods that reason about the physics of the estimated pose[222, 220, 221], as well as methods that utilize temporal constraints[223, 224, 225]. Still, the majority of articulated human sensing methods are developed on visuals where humans are centered in the images[217, 220, 221, 222, 184], leaving a gap to traffic data where most
Figure 19: PANNET correctly separates different parked cars even in the presence of occlusions.
humans are relatively far away. Human pose estimation and forecasting are more frequently being combined with segmentation[226, 227, 228], tracking[226, 229, 230], gait recognition[231] and camera pose estimation[232, 233]. Human motion can be very informative in traffic, and pedestrian behavior modeling can even be used to detect vehicles in blind spots in traffic[234]. To maximally utilize the available information in human motion, methods that are robust to variations in human physique and behavior need to be developed, but this is hard due to the relative lack of data.
Vehicle orientation and shape estimation techniques have been developed, amongst others, for images[235] and for LiDAR[236]. A method that jointly performs semantic segmentation and 3D reconstruction could benefit both tasks[237].
Improved depth estimation of pedestrians and vehicles through a LiDAR-like data representation, Pseudo-LiDAR, is studied in [238, 239]. Finally, 3D reconstruction methods have developed greatly, with more learning integrated into the 3D reconstruction pipeline[240, 241, 242, 243, 244], from learned monocular depthmaps[245], to learned 3D reconstruction features[246], to 3D matching[247] and the visually pleasing NeRF-based methods[248, 249]. Semantic segmentation, tracking, and object detection methods are also becoming less supervised, utilizing learned matching and language-model-based labels[250, 251, 252, 253, 254, 255, 256, 257]. Combining different visual tasks like object detection, semantic segmentation and tracking with 3D modeling has seen success in [144, 258]. This is quite natural because, as seen in Fig. 18, the two tasks are closely intertwined and information sharing may help in both directions.
Traffic datasets that are focused on pedestrians have become more abundant [259, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270], but only datasets with estimated articulated pose labels for pedestrians exist [157]. Even though progress has been made on marker-less human motion capture [116], the methods need to be made robust for multiple humans at a distance and in the presence of occlusions. In parallel to our work, a study[271] of occlusion rates in pedestrian bounding boxes on the Cityscapes dataset was performed. We note that [271] may be treated as complementary to the work presented here, which focuses on the task of 3D human pose reconstruction rather than just bounding box occlusions.
## 5 Conclusion
None of the discussed methods for 3D reconstruction of human pose are robust enough to be utilized to forecast human motion for assisted driving. This is because there is a gap between the performance of human sensing methods on the datasets used in standard benchmarks and their performance on real traffic data, suggesting that benchmarks of human motion sensing are not representative of utilization in traffic. Instead, traffic-based articulated 3D human sensing benchmarks should be developed. Available 3D human pose datasets in the wild [116] still lack distant pedestrians under poor lighting conditions, or provide only approximate human poses[157]. To make articulated human sensing robust, temporal smoothness, consistent use of an individual's estimated limb lengths,
foreseeing typical motions given the human's environment, and understanding of the physical constraints of the human body should be solved simultaneously, as the problems share information. So far a number of methods have solved some of these subproblems, but a unifying method is still to be developed. As a result of the lack of a robust articulated human sensing method, a large number of existing autonomous vehicle planning models[5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36] treat pedestrians by their bounding boxes, thus omitting the motion cues available in human pose and therefore ignoring available future motion cues. If robust and complete articulated human sensing methods are developed, then complete human forecasting methods may be developed and utilized in the planning stages of AVs.
2309.09020 | A Modelling study of hole transport in GaN/AlGaN superlattices | The transport of holes through p-doped wurtzite bulk GaN and AlGaN is poor so
transport of holes through GaN/AlGaN superlattices has been proposed and
investigated theoretically and experimentally with experimental results showing
poor transport. The reason for this poor performance is not fully understood.
In this paper, the transport of holes in GaN/AlGaN wurtzite crystal
superlattices is investigated through theoretical modeling, examining the role
of the composition of the Al$_x$Ga$_{1-x}$N barrier regions and the thickness
of the GaN quantum wells and the AlGaN barriers in determining the position and
width of the heavy hole miniband. To consider the transport of the holes in the
miniband we examine the effective mass of the miniband and possible scattering
mechanisms. In particular, ionized impurity(II) scattering from ionized
acceptors in the barrier regions is investigated as it is deemed to be the
dominating scattering mechanism degrading hole transport. The energy position
of the miniband relative to the ionized impurities and the wavefunction overlap
with the ionized acceptors in the barrier regions is investigated to minimize
II scattering. Some designs to optimize hole transport through wurtzite p-doped
GaN/AlGaN superlattices to minimize II scattering are proposed. | Mengxun Bai, Judy Rorison | 2023-09-16T15:23:31Z | http://arxiv.org/abs/2309.09020v1 | # A Modelling study of hole transport in GaN/AlGaN superlattices
###### Abstract
The transport of holes through p-doped wurtzite bulk GaN and AlGaN is poor so transport of holes through GaN/AlGaN superlattices has been proposed and investigated theoretically and experimentally with experimental results showing poor transport. The reason for this poor performance is not fully understood. In this paper, the transport of holes in GaN/AlGaN wurtzite crystal superlattices is investigated through theoretical modeling, examining the role of the composition of the Al\({}_{x}\)Ga\({}_{1-x}\)N barrier regions and the thickness of the GaN quantum wells and the AlGaN barriers in determining the position and width of the heavy hole miniband. To consider the transport of the holes in the miniband we examine the effective mass of the miniband and possible scattering mechanisms. In particular, ionized impurity (II) scattering from ionized acceptors in the barrier regions is investigated as it is deemed to be the dominating scattering mechanism degrading hole transport. The energy position of the miniband relative to the ionized impurities and the wavefunction overlap with the ionized acceptors in the barrier regions is investigated to minimize II scattering. Some designs to optimize hole transport through wurtzite p-doped GaN/AlGaN superlattices to minimize II scattering are proposed.
## Introduction
The transport of electrons and holes in GaAs/AlGaAs superlattices has been extensively studied using various device structures, such as resonant tunneling diodes, superlattice infrared photodetectors, and quantum cascade lasers[1][2][3]. These devices utilize the minibands to control the transport of electrons and holes and exhibit unique electronic properties. Similarly, it should be possible to grow GaN/AlGaN superlattices with unique electronic and optical properties that can be tuned by adjusting the thickness and composition of the individual layers, similar to the GaAs/AlGaAs system. In contrast to GaAs and AlGaAs, GaN and AlGaN have a wurtzite crystal structure resulting in a different band structure, and also have other physical properties, such as a wide bandgap, making these materials particularly suitable for high-power and high-frequency applications[4][5][6]. In common with most wide-bandgap semiconductors, the acceptor binding energy is very large (>100 meV), making the activation of p-doping difficult and resulting in high p-resistivity. The idea of using superlattices to free the holes and exploit transport in a miniband in the direction perpendicular to the superlattice period could aid devices which require electrons and holes in an active region, such as LEDs or lasers, or electronic devices which require hole transport, such as PMOS[7, 8, 9, 10, 11, 12]. This concept was patented by one of the authors for use in an LED/laser design[7] with a priority date of 1996. However, grown and fabricated GaN/AlGaN superlattices were found not to exhibit good perpendicular hole transport[7][8]. The aim of this study is to investigate why this is so. GaN/AlGaN superlattices have high internal electric fields arising from their wurtzite crystal structure (piezoelectric fields) and from spontaneous polarization at their interfaces, which is different from the zinc-blende GaAs/AlGaAs system, and they also have very deep ionized acceptor levels. In this study we investigate how the miniband can be tuned by varying the barrier composition (low Al barrier content reduces the piezoelectric field) and the well and barrier thicknesses, and we investigate the miniband position and Fermi level relative to the acceptor levels.
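As a rough illustration of how the miniband position and width depend on the well and barrier thicknesses, a simple Kronig-Penney estimate can be sketched as below. This is a strong simplification of the model used here: it assumes a single parabolic heavy-hole effective mass, identical in well and barrier, and neglects the polarization fields and strain that are central to the wurtzite problem; the numerical values of the mass and valence-band offset are placeholders.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
M0   = 9.1093837015e-31  # kg
EV   = 1.602176634e-19   # J

m_hh = 1.8 * M0          # placeholder heavy-hole effective mass
V0   = 0.3 * EV          # placeholder valence-band offset of the AlGaN barrier
L_qw = 2.0e-9            # GaN well width
L_qb = 2.0e-9            # AlGaN barrier width

def kp_rhs(E):
    """Right-hand side of the Kronig-Penney relation cos(q d) = f(E) for a hole
    energy 0 < E < V0 measured from the well band edge; |f| <= 1 marks a miniband."""
    k = np.sqrt(2 * m_hh * E) / HBAR
    kappa = np.sqrt(2 * m_hh * (V0 - E)) / HBAR
    return (np.cos(k * L_qw) * np.cosh(kappa * L_qb)
            + (kappa**2 - k**2) / (2 * k * kappa)
            * np.sin(k * L_qw) * np.sinh(kappa * L_qb))

energies = np.linspace(1e-3, 0.999, 4000) * V0
allowed = np.abs(np.array([kp_rhs(E) for E in energies])) <= 1.0
idx = np.where(allowed)[0]
if idx.size:
    runs = np.split(idx, np.where(np.diff(idx) > 1)[0] + 1)
    band = energies[runs[0]] / EV          # lowest heavy-hole miniband
    print(f"lowest miniband: {band.min()*1e3:.1f}-{band.max()*1e3:.1f} meV,"
          f" width {(band.max()-band.min())*1e3:.1f} meV")
```

Scanning \(L_{QW}\) and \(L_{QB}\) in such a sketch reproduces the expected trends: thinner barriers widen the miniband, while thicker wells push it down in energy.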
This work was motivated by an investigation into GaN/AlGaN superlattices by Duboz (2014)[9], who examined these effects and concluded that vertical hole transport will not be good through the superlattices. He restricted his investigation to equally sized Quantum Well (QW) and Quantum Barrier (QB) layers, so we have examined varying the QW and QB thicknesses (\(L_{QW}\) and \(L_{QB}\)) independently, with the aim of creating a superlattice with a wide energy band and a large hole concentration in the superlattice. We then re-examined the role of II scattering in the superlattice with the aim of examining how different designs impact upon this effect. The tantalising promise of reduced resistivity and improved vertical hole transport in (Al)GaN/AlGaN superlattices is worth further study. We agree with Duboz that \(L_{QW}\) and \(L_{QB}\) larger than 8 monolayers (MLs) each, corresponding to roughly 4 nm thickness, will result in multi-quantum well behaviour rather than superlattice behaviour, which will not benefit vertical transport. Also, \(L_{QW}\) and \(L_{QB}\) less than 4 monolayers (MLs) may result in an alloy rather than a superlattice, so the focus of this paper will be on dimensions between these limits. Other considerations involve maintaining a continuous miniband through the structure for the applied field rather than breaking up into Wannier-Stark ladders in which transport would be
2309.07019 | Polaron spectroscopy of interacting Fermi systems: insights from exact
diagonalization | Immersing a mobile impurity into a many-body quantum system represents a
theoretically intriguing and experimentally effective way of probing its
properties. In this work, we study the polaron spectral function in various
environments, within the framework of Fermi-Hubbard models. Inspired by
possible realizations in cold atoms and semiconductor heterostructures, we
consider different configurations for the background Fermi gas, including
charge density waves, multiple Fermi seas and pair superfluids. While our
calculations are performed using an exact-diagonalization approach, hence
limiting our analysis to systems of few interacting Fermi particles, we
identify robust spectral features supported by theoretical results. Our work
provides a benchmark for computations based on mean-field approaches and reveal
surprising features of polaron spectra, inspiring new theoretical
investigations. | Ivan Amelio, Nathan Goldman | 2023-09-13T15:21:57Z | http://arxiv.org/abs/2309.07019v2 | **Polaron spectroscopy of interacting Fermi systems:**
## Abstract
**Immersing a mobile impurity into a many-body quantum system represents a theoretically intriguing and experimentally effective way of probing its properties. In this work, we use exact diagonalization to compute the polaron spectral function in various instances of interacting Fermi settings, within the framework of Fermi-Hubbard models. Inspired by possible realizations in cold atoms and semiconductor heterostructures, we consider different configurations for the background Fermi gas, including charge density waves, multiple Fermi seas and pair superfluids. Our results offer a benchmark for computations based on mean-field approaches and reveal surprising features of polaron spectra, inspiring new theoretical investigations.**
###### Contents
* 1 Introduction
* 2 Polaron spectra from Exact Diagonalization
* 2.1 General framework
* 2.2 Benchmarking Fermi polarons
* 3 Charge density waves
* 4 Impurity interacting with two Fermi seas
* 5 Fermi superfluids
* 5.1 Spin-balanced fermion populations
* 5.2 Spin-unbalanced fermion populations
* 6 Conclusion
Introduction
A mobile impurity immersed in a many-body background represents a paradigmatic setting of many-body physics. Historically, Landau and Pekar were the first to discuss the renormalized mobility of an electron interacting with the phonons of a polar crystal, and they coined the term _polaron_ for the emerging quasi-particle [1].
Following the achievements in the precise preparation and control of ultracold atomic mixtures, there has been a renewed interest in studying polarons at a fundamental level [2, 3, 4, 5]. Indeed, immersing a single mobile impurity in a non-interacting or weakly-interacting background already represents a fully fledged many-body problem. So far, ultracold atom experiments have been able to probe the attractive and repulsive branches of standard, weakly interacting Fermi [6] or Bose [7, 8] polarons, using radio-frequency spectroscopy. Such measurements are complicated by the metastability of atomic gases under three-body recombination and by the challenge of independently tuning the interactions within the background and those involving the impurity, so that polaron studies have yet to be extended to the case of strongly correlated backgrounds. It is still worth mentioning recent measurements of the so-called "magnetic polaron" in doped Hubbard anti-ferromagnets [9], of Nagaoka polarons arising from kinetic magnetism in triangular lattices [10, 11], and of the dressed, quantum-statistics-dependent interactions between polarons [12].
In parallel, the last years have witnessed the thriving of 2D materials, such as graphene and transition metal dichalchogenide (TMD) semiconductors [13]. Contrary to cold atoms, in TMD heterostructures experimentalists have access to few observables only, and optical polaron spectroscopy [14], where resonantly injected excitons are used to probe the state of the TMD material, is now daily used. More precisely, this method relies on the exciton being dressed by the electronic excitations, with the exciton-electron scattering properties being determined by the binding energy of the trion, which is the bound state of an exciton with an electron. Remarkably, TMD bilayers hosting moire potentials have emerged as an ideal platform to simulate Fermi Hubbard and extended Fermi-Hubbard physics [15, 16]. Transport or optical signatures of several exotic phases of matter have already been reported, including Wigner crystals [17, 18], charge density waves [19, 20], excitonic insulators [21, 22, 23], superconductivity [24, 25] and anomalous quantum Hall states [26]. However, the very large trion binding energy with respect to the moire-electronic scales may have limited the number of features observable in polaron spectra, as we will further illustrate in this work.
Given the hardness of the many-body problem, polarons in strongly interacting many-body backgrounds have been theoretically considered in a few special settings only or under very rough approximations. We mention attempts in the context of fractional Chern insulators [27, 28], Fermi superfluids along the BEC-BCS crossover [29, 30], excitonic insulators [31], Mott and charge transfer insulators [32], kinetic magnetism [33, 34], Holstein polarons in Luttinger liquids [35], together with a few related works on the dressing of optical excitations in Mott insulators [36, 37] and fractional quantum Hall systems [38]. Diffusion Quantum Monte Carlo can be used to study Bose [39] and Fermi polarons [40] in the intermediate correlation regime, but does not allow for the determination of the full spectral information.
In this work, we use exact diagonalization (ED) to tackle this challenging problem, exploring various configurations of the interacting Fermi gas. Specifically, we compute zero-momentum polaron spectra in lattice systems of a few interacting fermions, both in the spin-less and in the spinful case, with different classes of repulsive or attractive interactions. The main drawback of ED is that this method is limited to very small system sizes. In spite of these finite-size effects, our results are qualitatively consistent when comparing between different lattices or, when possible, to other methods (e.g. using the Chevy ansatz within a mean-field theory for the many-fermion background). In this sense, we believe that our ED approach
provides solid results, which allow us to benchmark approximate methods and also provide novel insights.
This manuscript is structured as follows: in Section 2, we illustrate the ED method and benchmark it with the non-interacting Fermi polaron. In Section 3, we discuss polaron spectra in the presence of strong charge density wave correlations for a system of spinless fermions with long range repulsion. In Section 4, we tackle the spinful case with a spin-dependent impurity-fermion interaction, and we study the fate of two-body and three-body absorption lines in the presence of contact repulsion interactions between the fermions. In Section 5, we consider polaron spectroscopy of fermionic superfluids, in the attractive Fermi-Hubbard model, both for balanced and nearly-balanced populations of the two species. Finally, we draw our conclusions and outline future directions in Section 6.
## 2 Polaron spectra from Exact Diagonalization
### General framework
This work explores the problem of an impurity immersed in a system of spin \(1/2\) fermions, described by a Hamiltonian of the form
\[H=H_{ff}+H_{I}+H_{fI}. \tag{1}\]
Here \(H_{ff}\) denotes the interacting fermionic background, described by a generalized Fermi-Hubbard Hamiltonian
\[H_{ff}=-t_{f}\sum_{(i,j),\sigma}c^{\dagger}_{i\sigma}c_{j\sigma}+\frac{1}{2} \sum_{i,j,\sigma,\sigma^{\prime}}V_{ij}c^{\dagger}_{i\sigma}c^{\dagger}_{j \sigma^{\prime}}c_{j\sigma^{\prime}}c_{i\sigma}, \tag{2}\]
where \(c^{\dagger}_{j\sigma}\) denotes the creation operator of a fermion of spin \(\sigma\in\{\uparrow,\downarrow\}\) at lattice site \(j=(x,y)\), \(t_{f}\) is the hopping constant and \(V_{ij}\) denotes the matrix elements of a general (possibly long range) interaction between the fermions.
The kinetic energy for free impurities is \(H_{I}=-t_{I}\sum_{(i,j)}a^{\dagger}_{i}a_{j}\), where \(a^{\dagger}_{j}\) denotes the creation operator of the impurity and \(t_{I}\) its hopping rate. In this work, we will limit ourselves to contact impurity-fermion interactions
\[H_{fl}=\sum_{j,\sigma}U_{\sigma}c^{\dagger}_{j\sigma}c_{j\sigma}a^{\dagger}_{j }a_{j}. \tag{3}\]
Notice that the coupling constant \(U_{\sigma}\) depends on the spin of the fermion. This is a natural choice both in atomic systems, where Feshbach resonances are spin-dependent [41], and in solid-state spectroscopy, where the trion binding energy depends on the polarization of the probe excitons [42, 43, 44] (e.g. in molybdenum-based TMD monolayers, spin-triplet trions are typically unbound).
Describing a Fermi polaron in a non-interacting Fermi sea is already a fully fledged many-body problem, which cannot be solved exactly [2, 3, 4]. Here we commit ourselves to the study of models of interacting fermions, such that accessing the ground-state \(|f_{0}\rangle\) of \(H_{ff}\) already represents a formidable task. We address this problem using ED, which consists in expressing the Hamiltonian as a sparse matrix in the real space basis [45, 46, 47, 48], and in using the Lanczos method [49] to determine the ground state.
ED gives also access to the ground-state \(|\Psi\rangle\) of the full impurity-fermion Hamiltonian \(H\). However, the most natural observable in both ultracold atom [50] and solid-state experiments [14] is provided by the spectral function of the impurity. Our main goal is thus to compute polaron spectra, which experimentally corresponds to the following protocol: first,
the many-fermion system is prepared in its ground state in the absence of the impurity, or the fermion-impurity interaction is switched off by preparing the impurity in some hyperfine level; then, the impurity is resonantly injected, or its internal state is resonantly flipped to a hyperfine level characterized by a sizable fermion-impurity interaction; the final observable is the frequency-resolved absorption curve of the resonant excitation. Mathematically, the polaron spectral function is defined as
\[A(\omega)=-2\text{Im}\langle\Psi_{0}|\frac{1}{\omega-H+E_{0}}|\Psi_{0}\rangle, \tag{4}\]
where \(|\Psi_{0}\rangle\) is the ground state of the fermion-impurity decoupled Hamiltonian \(H_{ff}+H_{I}\), with energy \(E_{0}\), while \(H\) is the full Hamiltonian with finite fermion-impurity coupling. Since generally \(|\Psi_{0}\rangle=a_{k=0}^{\dagger}|f_{0}\rangle\), one can make a link with optical spectroscopy in TMDs, where an exciton is injected with very small momentum. The spectral function is also related via Fourier transform to the overlap \(S(t)=\langle\Psi_{0}|e^{-i(H-E_{0})t}|\Psi_{0}\rangle\) following a quench of the impurity-fermion interaction. Notice that the spectral function (4) satisfies the normalization \(\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,A(\omega)=1\). In plotting the spectra, the lines are artificially broadened by replacing \(\omega\to\omega+i\gamma\).
A crucial technical remark concerns the fact that, even though the full spectrum of \(H\) cannot be obtained for large sparse matrices, the spectral function can still be reliably and efficiently obtained [46, 49, 51]. The trick consists in constructing the Krylov space of dimension \(M\) by applying \(M\) times the full Hamiltonian \(H\) to the decoupled ground state \(|\Psi_{0}\rangle\). The Hamiltonian in this space is represented by a tridiagonal matrix, for which the resolvent of the first entry can be conveniently computed by recursion and expressed as a continued fraction. It can be proven that this approach captures exactly the first \(2M+1\) moments of the spectral function.
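A compact sketch of this continued-fraction evaluation is given below; it assumes the Hamiltonian is available as a (scipy) sparse matrix and \(|\Psi_{0}\rangle\) as a dense vector, and it omits re-orthogonalization, which can matter for large \(M\).

```python
import numpy as np

def lanczos_coefficients(H, psi0, M=200):
    """M Lanczos steps starting from |psi0>; returns the tridiagonal coefficients
    alpha (diagonal) and beta (off-diagonal). H is a sparse Hermitian matrix."""
    alpha, beta = [], []
    q_prev = np.zeros_like(psi0, dtype=complex)
    q = psi0.astype(complex) / np.linalg.norm(psi0)
    b = 0.0
    for _ in range(M):
        w = H @ q - b * q_prev
        a = np.vdot(q, w).real
        w = w - a * q
        b = np.linalg.norm(w)
        alpha.append(a); beta.append(b)
        if b < 1e-12:                 # invariant subspace exhausted
            break
        q_prev, q = q, w / b
    return np.array(alpha), np.array(beta)

def polaron_spectrum(H, psi0, E0, omegas, gamma=0.1, M=200):
    """A(w) = -2 Im <psi0|(w + E0 + i*gamma - H)^(-1)|psi0>, evaluated from the
    Lanczos continued fraction (exact for the first 2M+1 spectral moments)."""
    alpha, beta = lanczos_coefficients(H, psi0, M)
    A = np.empty(len(omegas))
    for i, w in enumerate(omegas):
        z, g = w + E0 + 1j * gamma, 0.0 + 0.0j
        for a, b in zip(alpha[::-1], beta[::-1]):   # continued fraction, bottom up
            g = 1.0 / (z - a - b ** 2 * g)
        A[i] = -2.0 * g.imag
    return A
```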
In this work, we restrict ourselves to nearest-neighbour hopping without complex phases and we consider two-dimensional square or triangular lattices. We will be limited to very small system sizes such that finite size effects represent a major concern; however, in the following, we will show and argue that one can still extract solid and useful qualitative insights using this approach. In our philosophy, this method naturally needs to be complemented with other approaches, such as Chevy ansatz computations [52, 53]. For instance, Section 5 focuses on a polaron in a Fermi superfluid, which is a good example of how ED can confirm a highly non-trivial feature previously reported using mean-field calculations complemented by the Chevy-ansatz [30, 31].
An insidious consequence of finite size effects is that the ground-states \(|\Psi_{0}\rangle\) and/or \(|\Psi\rangle\) may not be invariant under the action of the spatial symmetries of the model. For instance, the ground state of the non-interacting system with 2 identical fermions (i.e. \(N_{\uparrow}=2,N_{\downarrow}=0,V_{ij}=0\)) consists of a superposition of states in the form \(c_{k=0\uparrow}^{\dagger}c_{k\neq 0\uparrow}^{\dagger}|0\rangle\), which is not a zero momentum eigenstate of the total momentum. In the following, we restrict ourselves to the sector of zero total momentum and impose that \(|\Psi_{0}\rangle=a_{k=0}^{\dagger}|f_{0}\rangle\), i.e. that the fermion and impurity momentum before quenching the impurity-fermion interaction are separately zero: this imposes a strong constraint on the number of particles that one can fit into the system. In the case of crystalline phases, one should also respect the commensurability of the crystal with respect to the size of the system.
### Benchmarking Fermi polarons
In this subsection, we consider the simplest scenario of an impurity immersed in a non-interacting Fermi sea of spin-polarized fermions (\(N_{\downarrow}=0\) for definiteness).
The fermion-impurity binding energy in the vacuum \(E_{B}>0\) is related to the coupling \(U_{\uparrow}\) appearing in \(H_{f1}\) by the Lippmann-Schwinger equation [54]
\[\frac{1}{U_{\uparrow}}=\frac{1}{L_{x}L_{y}}\sum_{k}\frac{1}{-E_{B}-\epsilon_{k}^{f}-\epsilon_{k}^{I}}, \tag{5}\]
with \(L_{x},L_{y}\) the number of lattice sites in the two independent spatial directions and \(\epsilon_{k}^{f},\epsilon_{k}^{I}\) denoting the free fermion and impurity dispersions, respectively. In analogy to the formula \(E_{F}=\frac{2\pi\hbar^{2}}{m}n\) holding in continuous infinite systems, we define the Fermi energy as \(E_{F}=4\pi t_{f}n\), where \(n=N_{\uparrow}/A\) is the density 1. Here we have defined the area of the system \(A\), where, assuming unit distance between adjacent lattice sites, \(A=L_{x}L_{y}\) for a square lattice and \(A=\frac{\sqrt{3}}{2}L_{x}L_{y}\) for triangular lattices.
Footnote 1: On a lattice, the effective mass \(m_{f}^{*}\) can be defined from the curvature of the dispersion at small momentum, yielding \(\frac{\hbar^{2}}{2m_{f}^{*}}=t_{f}\).
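For reference, inverting Eq. (5) to obtain the bare coupling from a target binding energy amounts to a single sum over the discrete Brillouin zone; a minimal sketch for the square lattice is given below, with the tight-binding dispersions measured from their band minima (an assumption consistent with \(E_{B}\) being measured from the two-particle continuum threshold).

```python
import numpy as np

def coupling_from_binding_energy(E_B, t_f, t_I, Lx, Ly):
    """Solve 1/U = (1/(Lx*Ly)) * sum_k 1/(-E_B - eps_f(k) - eps_I(k)) for U
    on an Lx x Ly square lattice with nearest-neighbour hopping."""
    kx = 2 * np.pi * np.arange(Lx) / Lx
    ky = 2 * np.pi * np.arange(Ly) / Ly
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    eps_f = 2 * t_f * (2 - np.cos(KX) - np.cos(KY))   # fermion dispersion, >= 0
    eps_I = 2 * t_I * (2 - np.cos(KX) - np.cos(KY))   # impurity dispersion, >= 0
    inv_U = np.mean(1.0 / (-E_B - eps_f - eps_I))
    return 1.0 / inv_U                                # negative, i.e. attractive

# e.g. U = coupling_from_binding_energy(E_B=2.2, t_f=1.0, t_I=1.0, Lx=5, Ly=5)
```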
Rescaling the energies by \(E_{F}\), and as far as properties such as the positions of the repulsive and attractive polaron peaks or the oscillator strength transfer are concerned, we find that calculations performed using different sizes and lattices yield consistent results, also compatible with Chevy ansatz computations [52, 53, 55, 56]. We note that other properties, such as details of the molecule-hole continuum, the broadening of the repulsive polaron or some very weak high-energy peaks, were found to strongly depend on the size of the system, clearly showing signatures of the modes discretization. Two examples are shown in Fig. 1, where we compare Fermi polaron spectral function for a \(5\times 5\) square lattice with \(N_{\uparrow}=3\) fermions (left panel) to the case of a \(4\times 4\) triangular lattice with \(N_{\uparrow}=4\) (right panel). The red lines mark the vacuum impurity-fermion binding energy \(-E_{B}\), while the magenta AP and RP labels indicate the attractive and repulsive polaron branches, respectively. In these plots we used a small linewidth \(\gamma=0.1E_{F}\) and a logarithmic color scale to highlight the fine structure of the molecule-hole continuum, which depends on the details of the computation.
## 3 Charge density waves
In this Section, we will be interested in the physics of fermions with repulsive finite-range interactions. For definiteness, we will consider spin-polarized systems with Coulomb repulsion \(V_{ij}=\frac{V_{0}}{|\mathbf{r}_{i}-\mathbf{r}_{j}|}\), where \(V_{0}\) is the coupling constant and \(\mathbf{r}_{i}\) denotes the position of the \(i\)-th lattice
Figure 1: Polaron spectra on top of a spin-polarized non-interacting Fermi sea. Two ED calculations are compared, on a \(5\times 5\) square lattice with \(N_{\uparrow}=3\) fermions (left panel) and on a \(4\times 4\) triangular lattice with \(N_{\uparrow}=4\) (right panel). The red line represents \(-E_{B}\), while AP and RP indicate the attractive and repulsive polaron branches, respectively. The spectral function intensity is color-coded in a logarithmic scale.
site. Given the underlying lattice structure and the finite filling factors used in the following, we expect our results to qualitatively hold for generic finite-range repulsive potentials.
Polaron spectroscopy has already been used in experiments to detect Wigner crystals in TMD monolayers [17] and bilayers [18] without a sizable moire potential. The main signature of the presence of the Wigner crystal is a weak umklapp line, which departs from the repulsive polaron peak with a splitting that scales with doping2 like \(\propto\frac{\hbar^{2}}{2m_{x}a_{w}^{2}}\), with \(m_{x}\) the mass of the exciton and \(a_{M}\) the lattice constant of the Wigner crystal. These experiments can be modeled assuming that the Wigner crystal is a static external potential that scatters the probe exciton.
Footnote 2: In borrowing from semiconductor physics the term doping, we mean the density of active electrons in conduction bands, which in TMDs can be controlled via electrical gates. In our lattice model, it is just the density of fermions.
While Wigner crystals are formed in continuum systems and spontaneously break a continuous translational symmetry, our ED approach is limited to finite-size lattice models, and we will refer to states with strong crystalline fluctuations as "charge density waves". Because of the finite size, the discrete translational symmetry cannot be spontaneously broken and one needs to inspect the density-density correlator \(\langle f_{0}|n_{i}n_{j}|f_{0}\rangle\), with \(n_{i}=c_{i\uparrow}^{\dagger}c_{i\uparrow}\), in order to monitor the crossover from a Fermi liquid to a charge density wave.
Impurities in lattice systems in the presence of strong repulsion have been considered as well in the TMD literature. For instance, the umklapp peak was used to reveal the onset of charge incompressibility of correlated electrons in moire bilayers [57], even without the breaking of the discrete translational symmetry. The optical signatures of generalized trions in a moire system displaying charge density waves at fractional filling were very recently reported in [58].
We computed the polaron spectrum for a \(6\times 6\) square lattice with \(N_{\uparrow}=4\) fermions, a number commensurate with the formation of a crystal of lattice constant \(a_{CDW}=3a_{0}\). We fix the strength of fermion-impurity interactions according to \(E_{B}/E_{F}\simeq 2.2\) and scan a broad range of values of \(V_{0}\); as it is usually done for Wigner crystals, we express the strength of the interactions as \(K=\frac{V_{0}}{a_{CDW}E_{F}}\), i.e. the ratio of the typical interaction to kinetic energy. The results are shown in Fig. 2. In panel (a) the polaron spectrum is depicted as a function of \(K\). The color mesh is in linear scale, while in panel (b) we plot exactly the same data in log scale. In panels (c) and (d), computed along the vertical dotted slices of panel (a), we display the density-density correlator \(\langle f_{0}|n_{(0,0)}n_{(x,y)}|f_{0}\rangle\) at small and large \(K\), respectively. These two plots illustrate the building up of crystalline correlations along the crossover from Fermi liquid to charge density wave.
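Since \(n_{i}n_{j}\) is diagonal in the occupation-number basis, the correlator of panels (c) and (d) is cheap to evaluate once the ground state is known. A minimal sketch for spinless fermions, assuming the common ED convention in which each basis state is encoded as a bitstring of site occupations:

```python
import numpy as np

def density_density(ground_state, basis_states, n_sites):
    """<n_i n_j> from an ED ground state.

    ground_state: (dim,) array of amplitudes
    basis_states: (dim,) integers whose bits encode the site occupations
    Returns an (n_sites, n_sites) array of correlations."""
    corr = np.zeros((n_sites, n_sites))
    for amp, state in zip(ground_state, basis_states):
        p = abs(amp) ** 2
        occ = np.array([(state >> s) & 1 for s in range(n_sites)], dtype=float)
        corr += p * np.outer(occ, occ)   # n_i n_j is diagonal in this basis
    return corr
```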
Let us now focus on the features visible in panels (a) and (b). To start with, the attractive polaron (AP) experiences a redshift for increasing \(K\). We attribute this feature to the renormalization of the fermion mass, resulting in a larger binding energy; in particular, in the limit \(V_{0}\rightarrow\infty\) all the fermions are perfectly correlated, so the impurity binds to an object of effective mass \(N_{\uparrow}m_{f}^{*}\), where \(m_{f}^{*}=\frac{\hbar^{2}}{2t_{f}}\) is the bare effective mass of a fermion. Then, we have highlighted by the pink dashed line the umklapp peak [17], well defined at large \(V_{0}\). This can be thought of as a scattering state of the impurity with the fermionic crystal and occurs at an energy essentially determined by the folding of the free impurity dispersion in the first Brillouin zone of the crystal, i.e. in a square lattice at wavevector \(2\pi/a_{CDW}\). We indeed verified that this peak does not shift with \(t_{f}\) or \(V_{0}\) (not shown).
Finally, the most intriguing and original feature is the avoided crossing occurring for comparable values of the binding energy \(E_{B}\) and the effective interaction strength \(V_{0}/a_{CDW}\). Here we speculate on the origin of this effect, which could be further studied and confirmed in the future using the Chevy-ansatz. In a Hartree-Fock picture, the formation of a charge density wave is described in terms of Bloch waves on top of the Hartree-Fock potential, with the same periodicity as the charge density wave and determined self-consistently by filling the lowest
Bloch band. When \(V_{0}\) is increased, the gap between the lowest and second band scales approximately linearly; moreover, the low-lying bands also become flatter and flatter. It seems then likely that the repulsive polaron (RP) hybridizes with the state made of the bound impurity plus an excitation of the neutral mode corresponding to the interband excitation of one Hartree-Fock quasi-particle; the dashed-dotted line in panel (b) is a tentative guide to the eye for the energy of this candidate state, which blueshifts linearly with \(V_{0}\). Notice that a similar feature was observed for a homobilayer model with local repulsion, in a ladder computation using the electron Green's function computed in DMFT, and was attributed to doublons [32]. An even more characteristic feature is observed on a triangular lattice, as we show in Fig. 3.a. Understanding this peculiar behavior is also the subject of ongoing efforts.
A piece of evidence that these results are not just an artifact of finite size effects comes from the study of one-dimensional chains, for which it is possible to reach a reasonably large number of sites \(L=22\) with \(N_{\uparrow}=11\) fermions. In \(1\)D, we define the Fermi energy as \(E_{F}=2\pi t_{f}n^{2}\). As
Figure 2: Polaron spectroscopy of the crossover from non-interacting Fermi sea to charge density wave, for fixed \(E_{B}/E_{F}\simeq 2.2\). (a) Polaron spectrum for increasing values of the interactions. Horizontal pink dashes indicate the position of the very weak umklapp peak (visible only in panel (b)). The vertical orange and red dots correspond to panels (c) and (d), respectively. (b) Same as (a) but in logarithmic color scale, in order to highlight the weaker features. The black dash-dots are a guide to the eye for the line that blueshifts with \(V_{0}\) and generates the avoided crossing with the repulsive polaron branch RP; AP indicates the attractive polaron. Panels (c) and (d) report the density-density correlator \(\langle f_{0}|n_{(0,0)}n_{(x,y)}|f_{0}\rangle\) at small and large \(V_{0}\), respectively. The lattice is a \(6\times 6\) square with periodic boundary conditions.
one can see in Fig. 3.b, the avoided crossing is also found in such a long chain. Interestingly, no umklapp peak is found in this 1D case.
To conclude this Section, we comment on the relation of our results with existing experiments in TMDs. The avoided crossing, in particular, has not been observed in the charge density waves and Wigner crystals reported so far in the literature. This is due to the fact that the trion Bohr radius is much smaller than a moire cell, or in other words, the trion binding energy is large compared to the electronic many-body gap. The avoided crossing occurs when the two energies are comparable, and this could be made possible by engineering trions with a smaller binding energy in heterostructures with strong moire potential. Such trions may be accessible in multilayer systems, with the excitons and the fermions mainly localized in two different layers.
## 4 Impurity interacting with two Fermi seas
While polaron spectra of molybdenum-based TMDs are essentially consistent with the Fermi polaron picture [14, 59], experiments in WSe\({}_{2}\) monolayers display a much richer structure [42, 60]. At small electron doping, two attractive polaron lines are visible together with the repulsive polaron; at high densities, instead, all the oscillator strength is taken by a low energy line, which redshifts with increasing density.
This difference originates from the fact that two identical electrons and a hole will generally not bind into a trion, as a consequence of the anti-symmetrization of the wavefunction [42, 43, 44]. In MoS\({}_{2}\) and MoSe\({}_{2}\) monolayers exciton formation involves the lowest conduction bands of quantum numbers \((K,\uparrow)\) and \((-K,\downarrow)\), so that right circularly polarized radiation can excite only the \((K,\uparrow)\) transition and the only visible trion will be the one formed from this exciton and an electron in \((-K,\downarrow)\). In WSe\({}_{2}\) monolayers, instead, the bright conduction bands \((K,\uparrow)\) and \((-K,\downarrow)\) lie higher in energy than the dark \((K,\downarrow)\) and \((-K,\uparrow)\) lower conduction bands. This means that the dark conduction bands get doped, while the electron forming an exciton comes from the bright conduction bands and can form a trion with either the \((K,\downarrow)\) and \((-K,\uparrow)\) electron, which is distinguishable by valley or spin.
This argument explains qualitatively the presence of two attractive polaron resonances.
Figure 3: Polaron spectrum for increasing values of Coulomb repulsion on a triangular lattice (left) and on a chain (right), respectively. Pink dashes indicate the position of the umklapp peak (not visible on this scale), which is absent in the 1D system.
Some theoretical understanding of the crossover to a single line red-shifting with doping has been provided in [60, 61] based on variational wavefunction calculations. The first step is to realize that the attractive polaron is not really associated with a trion (exciton bound to an electron), but rather with a tetron, i.e. an exciton bound to an electron-pair formed out of the conduction band Fermi sea. At small doping, the phase space of the hole is limited to very small momenta and the tetron is basically a trion very loosely bound to a hole. In practice, to locate the two attractive polaron lines at small doping, one can compute the binding energy of the singlet and triplet trion. However, when the exciton can bind to two Fermi seas and at large doping, a so-called "hexciton" can be formed from the exciton and a particle-hole pair out of each of the two Fermi seas. The high doping condition ensures that the particle-hole pair is small and akin to a neutral object; otherwise, if the holes were highly delocalized, the exciton could not bind to two charged electrons.
This rich physics can, in principle, also be accessed in ultracold atom setups with three species [62, 63]; however, to our knowledge, no experiment has ever investigated the spectral properties of a polaron in two Fermi seas to this date.
Here, we apply our ED method to an impurity immersed in a background with two species of fermions. For the sake of dealing with only one parameter to tune the strength of repulsion, we consider the case of contact repulsive interactions with coupling \(V_{ij}=V_{\uparrow\downarrow}\delta_{ij}\). We have checked that one obtains similar results in the presence of Coulomb terms, both inter- and intra-spin (contact interactions are only effective for fermions of different spin, due to the Pauli principle). To observe two attractive polaron peaks, we allow for spin imbalance of the contact fermion-impurity attraction \(U_{\uparrow}\!\neq\!U_{\downarrow}\). In order to plot our results, we define the mean \(U=\frac{U_{\uparrow}+U_{\downarrow}}{2}\) and the binding energy scale \(E_{\text{B}}\) from the energy of the molecule formed by the impurity and a fermion with coupling \(U\). Since we cannot continuously change the density, we change the ratio \(E_{F}/E_{\text{B}}\) by varying \(U\).
Exact diagonalization results are plotted in Fig. 4 for a system of \(N_{\uparrow}=N_{\downarrow}=4\) fermions in a \(4\times 4\) triangular lattice (plots for the square lattice are very similar). We choose a small spin asymmetry of \(\frac{U_{\uparrow}-U_{\downarrow}}{U}=0.3\) for the fermion-impurity interaction and we increase the fermion-fermion contact repulsion \(V_{\uparrow\downarrow}\) through panels a,b,c. Apart from the repulsive polaron peak, one can spot two attractive polaron resonances, denoted respectively \(\text{AP}_{\uparrow}\) and \(\text{AP}_{\downarrow}\) in panel (a), which are associated with spin up and spin down two-body bound states and which are predominant at small \(E_F/E_B\); and a lower energy resonance \(\text{AP}_{\uparrow\downarrow}\), corresponding to the three-body bound state of the impurity with both the spin up and spin down particles. The identification of these lines is supported by the calculation of the two- and three-body binding energies in vacuum, which recover the many-body lines in the limit of small \(E_F/E_B\) (not shown).
Figure 4: Polaron spectra for a spinful Fermi system with a slightly spin-asymmetric binding energy to the impurity. On the \(x\)-axis \(E_B\) is decreased and panels a), b) and c) correspond to increasing contact repulsive interactions, \(V_{\uparrow\downarrow}=0,2E_{F},4E_{F}\) respectively. The central panel is particularly reminiscent of spectral measurements in \(\text{WSe}_{2}\) monolayers. The main lines are labelled in panel (a): RP denotes the repulsive branch, \(\text{AP}_{\uparrow}\) (\(\text{AP}_{\downarrow}\)) is the attractive polaron associated with the 2-body bound state between the impurity and the spin up (down) fermion, while \(\text{AP}_{\uparrow\downarrow}\) comes from the 3-body bound state.
In the absence of the repulsive interactions between the fermions, \(V_{\uparrow\downarrow}=0\), all these three lines are simultaneously visible, with a slow oscillator strength transfer to the three body resonance for increasing \(E_{F}/E_{B}\). We explain this effect by noting that, for the three-body line, the oscillator strength should scale with the square of the density, while for the usual two-body attractive polarons it scales linearly in the density. In other words, at large \(E_{B}\) the three-body state has a very small radius and its wavefunction is very different from the ground-state without impurity.
When the repulsive interaction \(V_{\uparrow\downarrow}\) is turned on, there seems to be an avoided crossing between the lowest two-body line \(\text{AP}_{\uparrow}\) and the three-body line \(\text{AP}_{\uparrow\downarrow}\). This comes from the fact that, at first order in perturbation theory, the three-body state blueshifts linearly with \(V_{\uparrow\downarrow}\), since the spin up and down fermions are bound together by the interaction mediated via the impurity. Interestingly, panel (b) is reminiscent of the experimental data from WSe\({}_{2}\) monolayers [60]. The splitting of the avoided crossing decreases with \(V_{\uparrow\downarrow}\), see panel (c). Notice that, since the vertical axis is in units of \(E_{B}\), the redshift of the lines with \(E_{F}/E_{B}\) is due to the well-known shift of the attractive polaron with density, which is linear for small \(E_{F}/E_{B}\) [64].
Our method is somewhat complementary to the variational computation of [60, 61], which is effectively a few-body computation, where the many-body background is traced out and enters via an effective static screening potential. Also, the variational method is well suited to compute the ground-state energy and oscillator strength, but it is difficult to get information about the full spectrum using that method. This makes our ED approach particularly interesting for studying the crossover between the two-body and three-body lines as a function of repulsive interactions.
## 5 Fermi superfluids
In this Section, we will be dealing with polaron spectroscopy of Fermi superfluids, with the pairing of spin up and down fermions. The fermionic Hamiltonian we consider is essentially an attractive Hubbard model, with contact inter-spin interactions \(V_{ij}=\delta_{ij}V_{\uparrow\downarrow}\), where \(V_{\uparrow\downarrow}<0\) is related to the binding energy \(E_{\text{pair}}\) of a fermionic pair in vacuum via the Lippmann-Schwinger equation
\[\frac{1}{V_{\uparrow\downarrow}}=\frac{1}{L_{x}L_{y}}\sum_{k}\frac{1}{-E_{ \text{pair}}-2e_{k}^{f}}. \tag{6}\]
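As an illustration, the coupling reproducing a prescribed pair binding energy can be obtained by evaluating the lattice sum in (6) directly. The short sketch below is ours, not part of the original text: it assumes a nearest-neighbour tight-binding dispersion \(e_{k}^{f}\) measured from the band bottom on an \(L_{x}\times L_{y}\) square lattice (for a triangular lattice the dispersion line would change accordingly), and all function and parameter names are our own.

```python
import numpy as np

def v_from_epair(E_pair, Lx=5, Ly=5, t=1.0):
    """Contact attraction V reproducing the pair binding energy E_pair via Eq. (6).
    Assumes e_k = 2t(2 - cos kx - cos ky), i.e. a square-lattice tight-binding band
    measured from its bottom (an assumption, not fixed by the text)."""
    kx = 2 * np.pi * np.arange(Lx) / Lx
    ky = 2 * np.pi * np.arange(Ly) / Ly
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    e_k = 2 * t * (2 - np.cos(KX) - np.cos(KY))
    lhs = np.sum(1.0 / (-E_pair - 2.0 * e_k)) / (Lx * Ly)
    return 1.0 / lhs  # negative for E_pair > 0, i.e. an attractive coupling
```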
As to the fermion-impurity interaction, we allow for spin-dependent interaction \(U_{\uparrow}\neq U_{\downarrow}\). As in the previous Section, we define \(U=\frac{U_{\uparrow}+U_{\downarrow}}{2}\) and the binding energy \(E_{B}\) from the energy of the molecule formed by the impurity and a fermion with coupling \(U\).
The motivation to study this setting comes from future experiments in both ultracold atoms and solid-state devices. On the ultracold atom side, the tunability of interactions via Feshbach resonances has allowed the observation of the BEC-BCS crossover [65]. More recently, there has been some effort in implementing three-species Fermi mixtures [62, 63], also inspired by analogies with the \(SU(3)\) group relevant in high-energy physics for the theory of the strong interaction. Therefore, polaron experiments in Fermi superfluids will hopefully be realized in the near future. On the theory side, a Chevy ansatz study of polaron formation in 3D Fermi superfluids with spin-symmetric \(U_{\uparrow}=U_{\downarrow}\) interactions has been performed in [29], while in the fully spin asymmetric case \(U_{\downarrow}=0\) full spectra were reported in [30], in both 2D and 3D.
On the solid-state side, instead, we were particularly inspired by the possibility of optically probing the presence of pairing in putative excitonic insulator states found in recent transport experiments in TMD heterostructures [21, 22, 23]. In this case, a 2D Chevy ansatz study was performed in [31], where the two fermionic species correspond to two different layers and fully pseudo-spin asymmetric interactions were considered.
### Spin-balanced fermion populations
We used our ED approach to compute polaron spectra in a Fermi superfluid background. The results are reported in Fig. 5 for a \(5\times 5\) square lattice with \(N_{\uparrow}=N_{\downarrow}=3\). Similar results were obtained for a triangular lattice, or for \(N_{\uparrow}=N_{\downarrow}=4\) on a \(4\times 4\) lattice. In Fig. 5, \(U_{\sigma}\) is different but fixed in each panel, while \(E_{\text{pair}}\) is scanned. Panels a,b,c correspond respectively to \(\frac{U_{\uparrow}-U_{\downarrow}}{U}=2,1,0\) and \(E_{B}/E_{F}\simeq 0.3,0.5,0.5\). In other words, the interaction is fully spin asymmetric in (a), while it is symmetric in (c), and (b) lies in between. The solid cyan line, labelled \(E_{3}\), represents the energy of the three-body bound state in vacuum, while the dashed-dotted line \(E_{3}^{*}\) stands for the first three-body excited state.
The lowest line can be identified as the attractive polaron corresponding to the 3-body bound state. On the BCS side it is mainly the interaction with the impurity that provides the attraction, while on the BEC side one rather has a Fermi pair bound to the impurity. This follows from the fact that in the small \(E_{\text{pair}}\) limit one basically has a Fermi polaron, while for large \(E_{\text{pair}}\) a Bose polaron description is adequate. What is more unexpected is that in the spectra this occurs not just as a redshift of a single line, but for large finite \(E_{\text{pair}}/E_{F}\) a double line is clearly visible. The pink arrows signal the weakest of these two peaks in the deep BEC regime, which interestingly has lower energy than the bright peak. The transfer of oscillator strength between the two lines occurs around \(E_{\text{pair}}/E_{F}\sim 10\). Interestingly, this double line is missing in Chevy ansatz calculations and it will be interesting to investigate it further in the future.
The other non-trivial feature visible in the spectrum of panel (a) is the avoided crossing on top of the repulsive branch for \(E_{\text{pair}}\sim E_{F}\). This is perfectly consistent with the Chevy ansatz prediction of [30, 31], where the wavefunction of the state shifting with \(E_{\text{pair}}\) was shown to have \(2s\) symmetry, suggesting its relation to the Higgs mode of the superfluid on the BCS side and to the pair \(2s\) excited state on the BEC side. We remark that the fact that two completely different methods (i.e. the Chevy ansatz and ED) show the same feature is a very strong argument in its favour, since the former is obtained by a mean-field BCS theory approximation followed by a variational restriction to the space of one quasi-particle pair excitations, while the latter is an exact method suffering from finite-size effects.
Figure 5: Polaron spectra for the Fermi superfluid, with \(E_{\text{pair}}/E_{F}\) tuning along the BEC-BCS crossover. Panels a,b,c correspond to increasing spin-symmetry of the fermion-impurity interactions, namely \(\frac{U_{\uparrow}-U_{\downarrow}}{U}=2,1,0\). In the three panels we set \(E_{B}/E_{F}\simeq 0.3,0.5,0.5\) respectively and \(N_{\uparrow}=N_{\downarrow}=3\) on a \(5\times 5\) square lattice. The cyan lines stand for the energies of the two lowest three-body bound states in vacuum, the ground state \(E_{3}\) (solid line) and the first excited \(E_{3}^{*}\) (dash-dotted). The pink arrows highlight the presence of a weaker peak below the main AP line at large \(E_{\text{pair}}/E_{F}\).
### Spin-unbalanced fermion populations
Within ED one can also consider the case \(N_{\uparrow}\neq N_{\downarrow}\), i.e. a homogeneous mixture with spin-unbalanced fermion numbers. This may be achieved in both ultracold mixtures and excitonic insulator setups, even though instabilities towards Fulde-Ferrell-Larkin-Ovchinnikov states or phase separation may strongly limit the available parameter region [66, 67].
In Fig. 6, we present spectra for a \(4\times 4\) square lattice with \(N_{\uparrow}=4,N_{\downarrow}=3\). We fix \(E_{B}\simeq 0.5E_{F}\) (where the Fermi energy is computed for spin up) and vary \(E_{\rm pair}\). Panels (a-d) correspond to different fermion-impurity interactions, namely \(\frac{U_{\uparrow}-U_{\downarrow}}{U}=2,1,0,-2\).
Figure 6: Polaron spectra for spin-density unbalanced Fermi superfluid, with \(E_{\rm pair}/E_{F}\) tuning along the BEC-BCS crossover. Panels (a-d) correspond to \(\frac{U_{\uparrow}-U_{\downarrow}}{U}=2,1,0,-2\). In all panels we fix \(E_{B}\simeq 0.5E_{F}\) and unbalanced polarization \(N_{\uparrow}=4,N_{\downarrow}=3\) on a \(4\times 4\) square lattice. The lines dubbed AP\({}_{\uparrow}\) visible in the deep BEC regime of panels (a-c) come from the binding of the impurity with the extra unpaired spin up fermion, and are highlighted by the cyan dotted boxes.
We hereby highlight some interesting features. First of all, the three-body attractive polaron line does not seem to double in the present case, differently from the results shown in Fig. 5. Second, analogously to Fig. 5, an avoided crossing is well visible on the repulsive line in panels (a) and (d), corresponding to \(U_{\downarrow}=0\) and \(U_{\uparrow}=0\), respectively. Finally, in the BEC limit one has three tightly bound fermionic pairs plus one spin up unpaired fermion (for this choice of particle numbers \(N_{\uparrow}=4,N_{\downarrow}=3\)). Then, for \(U_{\uparrow}<0\) the impurity can bind to either a pair or the unpaired spin up fermion, so that the attractive lines labelled AP\({}_{\uparrow}\) are visible in panels (a-c), where we have also added the cyan dotted boxes as a guide for the eye. For \(U_{\uparrow}=0\), instead, the impurity can bind only to a pair, giving rise to only one attractive line, see panel (d).
## 6 Conclusion
To summarize, we have adapted the exact diagonalization method to the computation of polaron spectra on top of different many-body backgrounds, described by extended Fermi-Hubbard models. Varying the range and the sign of the interactions between the fermions and the polarization of the system, we have predicted that charge density waves, multiple Fermi seas and Fermi superfluids display polaron spectra with distinctive features. This supports polaron spectroscopy as an effective way of probing many-body systems.
On the technical side, ED is limited to very small system sizes. Despite this limitation, comparisons with other approximate methods or experiments (whenever possible) suggest that our ED approach yields qualitatively solid results. An important theoretical puzzle, which is worth investigating in the future with complementary methods, concerns the double line that is visible in the attractive polaron branch displayed in Fig. 5, in the case of the fermionic superfluid. Moreover, the ED results shown in Figs. 2 and 3 have pushed us towards developing a theory of polarons in charge density waves, an investigation which is in progress. Other future efforts will be directed to computing polaron spectra in topological systems using the ED method.
## Acknowledgements
We would like to thank Atac Imamoglu, Jacques Tempere, Giacomo Mazza and Haydn Adlong for stimulating discussions. All numerical calculations were performed using the Julia Programming Language [68].
Funding information: This research was financially supported by the ERC grant LATIS, the EOS project CHEQS and the FRS-FNRS (Belgium).
|
2309.10565 | A Simplified Expression for Quantum Fidelity | Quantum fidelity is one of the most important measures of similarity between
mixed quantum states. However, the usual formulation is cumbersome and hard to
understand when encountering the first time. This work shows in a novel,
elegant proof that the expression can be rewritten into a form, which is not
only more concise but also makes its symmetry property more obvious. Further,
the simpler expression gives rise to a formulation that is subsequently shown
to be more computationally efficient than the best previous methods by avoiding
any full decomposition. Future work might look for ways in which other theorems
could be affected or utilize the reformulation where fidelity is the
computational bottleneck. | Adrian Müller | 2023-09-19T12:19:12Z | http://arxiv.org/abs/2309.10565v4 | # A Simplified Expression for Quantum Fidelity
###### Abstract
Quantum fidelity is one of the most important measures of similarity between mixed quantum states. However, the usual formulation is cumbersome and hard to understand when encountering the first time. This work shows in a novel, elegant proof that the expression can be rewritten into a form, which is not only more concise but also makes its symmetry property more obvious. Further, the simpler expression gives rise to a formulation that is subsequently shown to be more computationally efficient than the best previous methods by avoiding any full decomposition. Future work might look for ways in which other theorems could be affected or utilize the reformulation where fidelity is the computational bottleneck.
## 1 Introduction
One of the most fundamental tasks in quantum information processing is the ability to tell the similarity or closeness of two quantum states. It is important in a wide range of applications, like quantum metrology, quantum machine learning, quantum communication, quantum cryptography, or quantum thermodynamics. For example, a similarity measure would be used to assess how much a message was disturbed when transported over distance [1], or to characterize phase transitions in quantum systems where the ground state might abruptly change [2].
A common method for this purpose is the quantum fidelity, also known as the Uhlmann fidelity or Uhlmann-Jozsa fidelity, which can assess the similarity of a pair of mixed states. However, the usual formulation of this most general form of quantum fidelity, where both quantum states are mixed states, has several drawbacks. One of the most important is that it is computationally expensive and might become a bottleneck if it has to be calculated often and on large density matrices [3]. Another drawback is that its symmetry property is not immediately obvious and the expression might be hard to grasp on first encounter [4].
Previous work has already shown that there exists a simpler expression that is equivalent to the usual formulation [5, 6]. However, the present work gives a more elegant proof that requires neither the geometric mean of matrices nor pseudo-inverses. The simpler expression also gives rise to a more computationally efficient formulation, which was already noted in [6]. However, the available empirical evidence is very limited so far. As a second contribution, this work seeks to close this gap through rigorous empirical testing against the most efficient previously known methods and comes with code for reproducibility.
The rest of this work is structured as follows. Firstly, the traditional formulation of quantum fidelity is given. Then, preliminaries are shown which are required for the subsequent proof and the actual main theorem is proven. Finally, the efficiency of the more recent method is investigated before the work is concluded.
## 2 Quantum fidelity
Mixed quantum states are mathematically described by density matrices, which are positive-semidefinite (PSD) complex matrices with trace equal to 1. The textbook definition of fidelity between two mixed states \(\rho\) and \(\sigma\) is
\[F(\rho,\sigma):=\operatorname{Tr}\left(\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}} \right)^{2} \tag{1}\]
where \(\rho\) and \(\sigma\) are the density matrices to compare and \(\sqrt{\rho}\) denotes the positive square root of \(\rho\)[4, 7]. It is also sometimes introduced with the
equivalent formulation
\[F(\rho,\sigma)=\|\sqrt{\rho}\sqrt{\sigma}\|_{1}^{2} \tag{2}\]
where \(\|\cdot\|_{1}\) is the trace norm [8].
While there are simplifications of this formulation for pure states, this work focuses on the most general case, where both density matrices can be mixed states. However, one relevant and well-known simplification is that if \(\rho\) and \(\sigma\) commute, then equation (1) simplifies to
\[F(\rho,\sigma)=\operatorname{Tr}\left(\sqrt{\rho\sigma}\right)^{2} \tag{3}\]
which can be seen by commuting \(\sigma\) with the second \(\sqrt{\rho}\) in equation (1). This work will reconfirm that equation (3) holds even if \(\rho\) and \(\sigma\) do not commute.
## 3 Preliminaries
### Relation between trace and eigenvalues
It is well known that the trace of a diagonalizable square matrix is equal to the sum of its eigenvalues. A deeper explanation can be found, for example, in [9, p. 296].
**Lemma 1**.: _Let \(n\in\mathbb{N}\) and \(A\in\mathbb{C}^{n\times n}\) be diagonalizable. Then_
\[\operatorname{Tr}(A)=\sum_{i}\lambda_{i}(A) \tag{4}\]
_where \(\lambda_{i}(A)\) is the \(i\)-th eigenvalue of \(A\)._
Proof [9].: Express the characteristic polynomial \(\operatorname{char}(A)\) as
\[\operatorname{char}(A) =\det(\lambda I-A)\] \[=\lambda^{n}-\operatorname{Tr}(A)\lambda^{n-1}+...+(-1)^{n}\det(A)\] \[=(\lambda-\lambda_{1}(A))(\lambda-\lambda_{2}(A))...(\lambda- \lambda_{n}(A))\]
where the second line is the standard form and the last line the factored form of the polynomial. Comparing coefficients of the \(\lambda^{n-1}\) term yields the claim.
### Cyclic property of the spectrum
Quantum fidelity is essentially a sum over eigenvalues. The two \(\sqrt{\rho}\) in (1) can be brought together using a cyclic permutation.
**Lemma 2** (Cyclicity of the spectrum).: _Let \(A,B\in\mathbb{C}^{n\times n}\) with \(n\in\mathbb{N}\). Then_
\[\sigma(AB)=\sigma(BA) \tag{5}\]
_where \(\sigma(A)\) is the spectrum of \(A\)._
Proof.: It is well-known that the products \(AB\) and \(BA\) have the same characteristic polynomial [10, 11]. Because the spectrum consists of just the roots of the characteristic polynomial, \(AB\) and \(BA\) must have the same eigenvalues, with the same multiplicities.
This would already allow one to bring the two \(\sqrt{\rho}\) in (1) together, were it not for the additional square root. To account for that, it also has to be shown that the cyclic property of the spectrum holds after applying a mapping \(f\).
**Lemma 3** (Cyclicity of the spectrum with mapping).: _Let \(n\in\mathbb{N}\), \(A,B\in\mathbb{C}^{n\times n}\), their products \(AB\) and \(BA\) be diagonalizable, and \(f\) a continuous function with domain containing \(\sigma(AB)\). Then_
\[\sigma(f(AB))=\sigma(f(BA)) \tag{6}\]
_where \(\sigma(A)\) is the spectrum of \(A\) and \(f\) operating on a matrix is defined via the eigendecomposition._
Proof.: Since \(AB\) is diagonalizable and \(f\) is defined on \(\sigma(AB)\), the transformation \(f(AB)\) is applicable. Because \(BA\) has the same spectrum as \(AB\) (Lemma 2), \(f\) is also defined on \(\sigma(BA)\), and since \(BA\) is also diagonalizable, \(f(BA)\) is applicable, as well. Finally, because \(f\) is defined to only transform the eigenvalues, the transformed spectra are the same, as well.
Intuitively speaking, since \(AB\) and \(BA\) have the same characteristic polynomial and the matrix operation \(f\) is defined to be a function only of the roots of that polynomial, the matrices \(f(AB)\) and \(f(BA)\) have the same (transformed) characteristic polynomial, as well.
Note also that if \(A,B\) are PSD matrices then the product \(AB\) is diagonalizable with non-negative eigenvalues, as well, despite generally not being PSD itself (except when \(A\) and \(B\) commute) [12, p. 486]. That it has non-negative eigenvalues also follows from Lemma 2 by observing that \(\sigma((\sqrt{B}\sqrt{A})^{\dagger}(\sqrt{B}\sqrt{A}))=\sigma(\sqrt{A}B\sqrt{A})=\sigma(AB)\).
## 4 Simplified formulation of quantum fidelity
**Theorem 1**.: _Quantum fidelity, as defined in equation (1), can be written as_
\[F(\rho,\sigma)=\operatorname{Tr}\left(\sqrt{\rho\sigma}\right)^{2} \tag{7}\]
_for any two density matrices \(\rho\) and \(\sigma\)._
Proof.: For more concise notation, the equality will be shown for the positive square roots of both sides. Firstly, use that the trace is equal to the sum of the eigenvalues (Lemma 1).
\[\operatorname{Tr}\left(\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\right)=\sum_{i} \lambda_{i}\left(\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\right) \tag{8}\]
Secondly, since \(\sqrt{\rho}\,\sigma\sqrt{\rho}=(\sqrt{\sigma}\sqrt{\rho})^{\dagger}(\sqrt{ \sigma}\sqrt{\rho})\) is PSD and thus diagonalizable with non-negative eigenvalues, \(\rho\sigma\) is diagonalizable with non-negative eigenvalues (see section 3.2), and the square root is continuous on \(\mathbb{R}_{0}^{+}\), Lemma 3 can be applied with \(A=\sqrt{\rho}\,\sigma\), \(B=\sqrt{\rho}\), and \(f(x)=\sqrt{x}\).
\[\sum_{i}\lambda_{i}\left(\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\right)=\sum_{i} \lambda_{i}\left(\sqrt{\rho\sigma}\right) \tag{9}\]
Finally, use Lemma 1 again.
\[\sum_{i}\lambda_{i}\left(\sqrt{\rho\sigma}\right)=\operatorname{Tr}\left( \sqrt{\rho\sigma}\right) \tag{10}\]
Squaring this result again yields the claim.
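As a quick numerical sanity check of Theorem 1 (not part of the original proof), one can compare the textbook expression (1) with the simplified form (7) on random density matrices. The sketch below is ours: it uses SciPy's matrix square root and a Ginibre-style construction for the random states, and all names and the chosen dimension are our own.

```python
import numpy as np
from scipy.linalg import sqrtm

def random_density_matrix(d, rng):
    # Ginibre-style construction: G G^dagger normalized to unit trace is PSD with trace 1
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

rng = np.random.default_rng(0)
rho, sigma = random_density_matrix(8, rng), random_density_matrix(8, rng)

sqrt_rho = sqrtm(rho)
f_textbook = np.real(np.trace(sqrtm(sqrt_rho @ sigma @ sqrt_rho))) ** 2  # equation (1)
f_simple = np.real(np.trace(sqrtm(rho @ sigma))) ** 2                    # equation (7)
assert np.isclose(f_textbook, f_simple)
```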
## 5 Efficiency
Applying the spectral mapping theorem [13, p. 263] to the RHS of equation (8) followed by Lemma 2 with \(A=\sqrt{\rho}\,\sigma\), \(B=\sqrt{\rho}\) provides a computationally more efficient method
\[F(\rho,\sigma)=\left(\sum_{i}\sqrt{\lambda_{i}\left(\rho\sigma\right)}\right) ^{2} \tag{11}\]
since it requires only one eigendecomposition. The form (11) was already noted before [6].
To compare the efficiency of formulation (11) with previous methods, it is important to note that there are already more efficient ways to calculate the fidelity based on the usual formulations than applying three general spectral decompositions. Since the eigenvalues of PSD matrices, and thus the matrix square root, can be calculated using the SVD, formulation (2) can be utilized by applying the SVD for \(\sqrt{\sigma}\), \(\sqrt{\rho}\), and the trace norm. In formulation (1), \(\sqrt{\rho}\sigma\sqrt{\rho}\) is a hermitian matrix, which allows one to utilize the more efficient eigendecomposition routines for hermitian matrices provided by common linear algebra libraries. Further, applying the spectral mapping theorem to formulation (8) yields
\[\sum_{i}\sqrt{\lambda_{i}\left(\sqrt{\rho}\,\sigma\sqrt{\rho}\right)} \tag{12}\]
which requires eigenvectors only for \(\sqrt{\rho}\). In contrast, equation (11) does not need eigenvectors at all, only the eigenvalues of \(\rho\sigma\). However, since \(\rho\sigma\) is generally not hermitian, the general eigendecomposition has to be used.
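A minimal NumPy implementation of (11) might therefore look as follows (a sketch with our own naming; the tiny imaginary and negative parts produced by the general, non-hermitian eigenvalue routine are discarded before taking the square root):

```python
import numpy as np

def fidelity(rho, sigma):
    """Uhlmann fidelity via equation (11): only the eigenvalues of rho @ sigma are needed."""
    lam = np.linalg.eigvals(rho @ sigma)   # general eigenvalue routine, no eigenvectors
    lam = np.clip(lam.real, 0.0, None)     # eigenvalues are non-negative up to round-off
    return np.sum(np.sqrt(lam)) ** 2
```

The hermitian-product variant of (12) would instead apply `np.linalg.eigvalsh` to \(\sqrt{\rho}\sigma\sqrt{\rho}\).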
For the experiment, \(\left\lceil 10^{4}/2^{k-3}\right\rceil\) random pairs of density matrices of different sizes corresponding to \(k=1,...,13\) qubits have been generated to compare the methods (see Figure 1). The method following equation (11) performed significantly better for the smallest and the largest tested random pairs of density matrices (about 2 times faster) and was otherwise almost on par with the best alternative method. This result can probably be further improved by observing that the product \(\rho\sigma\) has non-negative eigenvalues (see section 3.2). All methods discussed so far have a worst-case time complexity of \(\mathcal{O}(n^{3})\) in the state dimensionality. However, for more structured or sparse density matrices, the time complexity can be greatly improved, as well [14]. Furthermore, with quantum computers even an exponential speed-up could be possible [15].
Figure 1: Comparing the performance of different methods to calculate quantum fidelity. “2x sqrtm” refers to calculating two general eigendecompositions to calculate the matrix square roots of \(\rho\) and \(\sqrt{\rho}\sigma\sqrt{\rho}\) in equation (1). “3x svd” is another commonly used method based on the alternative formulation in equation (2). “sqrthh” computes fidelity as \(\sum_{i}\sqrt{\lambda_{i}(\sqrt{\rho}\sigma\sqrt{\rho})}\), utilizing the specialized eigendecomposition routine for hermitian matrices. “sqrt_svd_ + svd” works similarly, but using the SVD. Finally, “eigvals” only uses one general routine to calculate eigenvalues, following equation (11). Note that the y-axis is in log-scale. The average values are calculated over \(\left\lceil 10^{4}/2^{k-3}\right\rceil\) runs, where \(k\) is the system size. The Python library Numpy running on an Apple M1 chip has been used for the computation.
## 6 Conclusion
This work has shown an elegant way to prove that quantum fidelity can be simplified to \(F(\rho,\sigma)=\mathrm{Tr}\left(\sqrt{\rho\sigma}\right)^{2}\) for any two density matrices \(\rho\) and \(\sigma\). This form is more concise than the usual expression and makes the symmetry property immediately apparent. Further, a more efficient calculation, which avoids any full decomposition, has been discussed and empirically validated.
Future work might take a look at the consequences of this reformulation on other theorems. Additionally, the computational advantage might allow for faster results where fidelity on mixed states is the computational bottleneck. Finally, advances in reducing the time complexity for calculating eigenvalues can be directly translated into improvements to calculate quantum fidelity.
## Acknowledgements
I thank Prof. Vikas Garg at Aalto University for support and encouragement. I also thank Jonathan A. Jones, Bartosz Regula, and Danylo Yakymenko for feedback on earlier versions of this work.
|
2302.14471 | Safe Peeling for L0-Regularized Least-Squares with supplementary
material | We introduce a new methodology dubbed ``safe peeling'' to accelerate the
resolution of L0-regularized least-squares problems via a Branch-and-Bound
(BnB) algorithm. Our procedure enables to tighten the convex relaxation
considered at each node of the BnB decision tree and therefore potentially
allows for more aggressive pruning. Numerical simulations show that our
proposed methodology leads to significant gains in terms of number of nodes
explored and overall solving
time. | Théo Guyard, Gilles Monnoyer, Clément Elvira, Cédric Herzet | 2023-02-28T10:29:42Z | http://arxiv.org/abs/2302.14471v4 | # Safe peeling for \(\ell_{0}\)-regularized least-squares
###### Abstract
We introduce a new methodology dubbed _"safe peeling"_ to accelerate the resolution of \(\ell_{0}\)-regularized least-squares problems via a Branch-and-Bound (BnB) algorithm. Our procedure enables us to tighten the convex relaxation considered at each node of the BnB decision tree and therefore potentially allows for more aggressive pruning. Numerical simulations show that our proposed methodology leads to significant gains in terms of number of nodes explored and overall solving time.
Sparse model, \(\ell_{0}\) regularization, Branch-and-Bound algorithm.
## I Introduction
This paper focuses on the resolution of the so-called "\(\ell_{0}\)-regularized least-squares" problem given by
\[p^{*}=\min_{\mathbf{x}\in\mathbb{R}^{n}}\ \mathrm{P}(\mathbf{x})\triangleq \tfrac{1}{2}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_{2}^{2}+\lambda\| \mathbf{x}\|_{0}\] (1- \[\mathcal{P}\] )
where \(\mathbf{y}\in\mathbb{R}^{m}\) and \(\mathbf{A}\in\mathbb{R}^{m\times n}\) are input data, \(\lambda>0\) is a regularization parameter and \(\left\|\cdot\right\|_{0}\) denotes the \(\ell_{0}\)-pseudonorm which counts the number of non-zero elements in its argument.
Solving (1-\(\mathcal{P}\)) is of paramount interest in many scientific fields such as statistics, machine learning or inverse problems [1, 2, 3]. Unfortunately, this problem also turns out to be NP-hard [4, Th. 1]. Hence, the last decades have seen a flurry of contributions proposing tractable procedures able to recover approximate solutions of (1-\(\mathcal{P}\)). Canonical examples include greedy algorithms or methodologies based on convex relaxations, see [5, Ch. 3]. Although these procedures successfully recover the actual solutions of (1-\(\mathcal{P}\)) in "easy" setups, they usually fall short for more challenging instances of the problem. This observation, combined with some recent advances in integer optimization and hardware performance, has revived the interest in methods solving (1-\(\mathcal{P}\)) exactly. A standard approach is to use a Branch-and-Bound (BnB) algorithm that solves (1-\(\mathcal{P}\)), see [6, 7, 8, 9, 10, 11].
In this paper, we propose a new strategy, dubbed _"safe peeling"_, to accelerate the exact resolution of (1-\(\mathcal{P}\)). In a nutshell, our contribution is a computationally simple test applied at each node of the BnB decision tree to identify some intervals of \(\mathbb{R}^{n}\) which cannot contain a solution of (1-\(\mathcal{P}\)). This information allows us to construct tighter convex relaxations and to prune the nodes of the decision tree more aggressively. Our numerical experiments show that the proposed method leads to a significant reduction of the solving time as compared to state-of-the-art competing methods. The name _"safe peeling"_ comes from the fact that the proposed method enables us to reduce (or, in more figurative terms, "to peel") the feasible set of the problem at each node of the decision tree while safely preserving the correctness of the BnB procedure.
The rest of the paper is organized as follows. Sec. III describes the main ingredients of BnB methods. Our peeling strategy is presented in Sec. IV and its performance is illustrated in Sec. V. Proofs of the results presented in the following are postponed to the appendix.
## II Notations
We use the following notations. \(\mathbf{0}\) and \(\mathbf{1}\) denote the all-zero and all-one vectors. The \(i\)-th column of a matrix \(\mathbf{A}\) is denoted \(\mathbf{a}_{i}\) and the \(i\)-th entry of a vector \(\mathbf{x}\) is denoted \(x_{i}\). Any vectorial relation has to be understood component-wise, _e.g.,_ \(\mathbf{x}\in[\![\mathbf{l},\mathbf{u}]\!]\) means \(x_{i}\in[\![l_{i},u_{i}]\!],\forall i\). Moreover, \(\eta(\cdot)\) denotes the indicator function, which equals \(0\) if the condition in its argument is fulfilled and \(+\infty\) otherwise, \([x]_{+}=\max(x,0)\) refers to the positive-part function and \(|\cdot|\) denotes the cardinality of a set. Finally, \([\![1,n]\!]\) with \(n\in\mathbb{N}^{*}\) is a short-hand notation for the set \(\{1,\ldots,n\}\).
## III Principles of BnB Methods
In this section, we recall the main principles of BnB procedures. Due to space limitation, we only review the elements of interest to introduce the proposed peeling method. We refer the reader to [12, Ch. 7] for an in-depth treatment of the subject.
### _Pruning_
The crux of BnB methods consists in identifying and discarding some subsets of \(\mathbb{R}^{n}\) which do not contain a minimizer of (1-\(\mathcal{P}\)). To do so, one constructs a decision tree in which each node corresponds to a particular subset of \(\mathbb{R}^{n}\). In our context, a tree node is identified by two disjoint subsets of
\([\![1,n]\!]\), say \(\nu_{0}\) and \(\nu_{1}\). The goal at node \(\nu\triangleq(\nu_{0},\nu_{1})\) is to detect whether a solution of (1-\(\mathcal{P}\)) can be attained within
\[\mathcal{X}^{\nu}\triangleq\{\mathbf{x}\in\mathbb{R}^{n}\colon\mathbf{x}_{\nu_{ 0}}=\mathbf{0},\ \mathbf{x}_{\nu_{1}}\neq\mathbf{0}\}, \tag{2}\]
where \(\mathbf{x}_{\nu_{k}}\) denotes the restriction of \(\mathbf{x}\) to its elements in \(\nu_{k}\). In particular, let \(\mathcal{X}^{\star}\) be the non-empty set of minimizers of (1-\(\mathcal{P}\)). Then, if some upper bound \(\bar{p}\) on the optimal value \(p^{\star}\) is known and if we let
\[p^{\nu}\triangleq\inf_{\mathbf{x}\in\mathbb{R}^{n}}\mathrm{P}^{\nu}(\mathbf{x}) \tag{3}\]
with \(\mathrm{P}^{\nu}(\mathbf{x})\triangleq\mathrm{P}(\mathbf{x})+\eta(\mathbf{x} \in\mathcal{X}^{\nu})\), we obtain the implication
\[p^{\nu}>\bar{p}\implies\mathcal{X}^{\nu}\cap\mathcal{X}^{\star}=\emptyset. \tag{4}\]
In words, if the left-hand side of (4) is satisfied, \(\mathcal{X}^{\nu}\) does not contains any solution of (1-\(\mathcal{P}\)) and can therefore be discarded from the search space of the optimization problem. This operation is usually referred to as "_pruning_".
### _Bounding and relaxing_
Making a pruning decision at node \(\nu\) requires the knowledge of \(\bar{p}\) and \(p^{\nu}\). On the one hand, finding \(\bar{p}\) is an easy task since the value of the objective function in (1-\(\mathcal{P}\)) at any feasible point constitutes an upper bound on \(p^{\star}\). On the other hand, evaluating \(p^{\nu}\) is NP-hard. This issue can nevertheless be circumvented by finding a tractable lower bound \(r^{\nu}\) on \(p^{\nu}\) and relaxing (4) as
\[r^{\nu}>\bar{p}\implies\mathcal{X}^{\nu}\cap\mathcal{X}^{\star}=\emptyset. \tag{5}\]
One ubiquitous approach in the literature [7, 9, 13] to find such a lower bound consists in:
* Adding an extra term "\(\eta(\mathbf{x}\in[\![\mathbf{l},\mathbf{u}])\)" to the cost function of (1-\(\mathcal{P}\)), for some _well-chosen_ bounds \(\mathbf{l}\in\mathbb{R}^{n}_{-}\) and \(\mathbf{u}\in\mathbb{R}^{n}_{+}\).1 In particular, the new constraint "\(\mathbf{x}\in[\![\mathbf{l},\mathbf{u}]\!]\)" must lead to a problem fully equivalent to (1-\(\mathcal{P}\)), that is Footnote 1: This additional constraint usually takes the form “\(-M\leq x_{i}\leq M,\ \forall i\)” with \(M>0\) and is known as “_Big-M_” constraint, see [7, Sec. 3] \[\forall\mathbf{x}^{\star}\in\mathcal{X}^{\star}:\ \mathbf{x}^{\star}\in[\![ \mathbf{l},\mathbf{u}]\!].\] (6)
* Exploiting the convex relaxation of the function \(\left\|\cdot\right\|_{0}\) on the bounded set \(\mathcal{X}^{\nu}\cap[\![\mathbf{l},\mathbf{u}]\!]\), given by \[\|\mathbf{x}\|_{0}\geq|\nu_{1}|+\sum_{i\in\nu_{\bullet}}\frac{[x_{i}]_{+}}{u_{ i}}-\frac{[-x_{i}]_{+}}{l_{i}},\] (7) with \(\nu_{\bullet}\triangleq[\![1,n]\!]\setminus(\nu_{0}\cup\nu_{1})\) and the convention "\(0/0=0\)".
On the one hand, item _i)_ implies that the pruning test (4) involves the following quantity (rather than \(p^{\nu}\)):
\[p^{\nu}(\mathbf{l},\mathbf{u})=\inf_{\mathbf{x}\in\mathbb{R}^{n}}\ \mathrm{P}^{\nu}(\mathbf{x};\mathbf{l},\mathbf{u})\] (8- \[\!\mathcal{P}^{\nu}\] )
where \(\mathrm{P}^{\nu}(\mathbf{x};\mathbf{l},\mathbf{u})\triangleq\mathrm{P}^{\nu}( \mathbf{x})+\eta(\mathbf{x}\in[\![\mathbf{l},\mathbf{u}]\!])\). On the other hand, a lower bound \(r^{\nu}(\mathbf{l},\mathbf{u})\) on \(p^{\nu}(\mathbf{l},\mathbf{u})\) can be obtained by using (7) and solving
\[r^{\nu}(\mathbf{l},\mathbf{u})=\min_{\mathbf{x}\in\mathbb{R}^{n}}\mathrm{R}^{ \nu}(\mathbf{x};\mathbf{l},\mathbf{u})\] (9- \[\!\mathcal{R}^{\nu}\] )
where
\[\mathrm{R}^{\nu}(\mathbf{x};\mathbf{l},\mathbf{u})\triangleq\tfrac {1}{2}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_{2}^{2}+\lambda\sum_{i\in\nu_{ \bullet}}\tfrac{[x_{i}]_{+}}{u_{i}}-\tfrac{[-x_{i}]_{+}}{l_{i}}\\ +\lambda|\nu_{1}|+\eta(\mathbf{x}_{\nu_{0}}=\mathbf{0})+\eta( \mathbf{x}\in[\![\mathbf{l},\mathbf{u}]\!]).\]
We note that (9-\(\mathcal{R}^{\nu}\)) is a convex problem and can be solved efficiently to good accuracy via numerous polynomial-time numerical procedures, see _e.g.,_[14, Ch. 10].
In practice, the choice of \(\mathbf{l}\) and \(\mathbf{u}\) must respect two conflicting imperatives. First, the new constraint "\(\mathbf{x}\in[\![\mathbf{l},\mathbf{u}]\!]\)" should not modify the solution of our target problem (1-\(\mathcal{P}\)) and condition (6) must therefore be verified. Since \(\mathcal{X}^{\star}\) is obviously not accessible beforehand, this suggests that the entries of \(\mathbf{l}\) and \(\mathbf{u}\) should be chosen "large-enough" in absolute values.2 Second, the tightness of \(r^{\nu}(\mathbf{l},\mathbf{u})\) with respect to \(p^{\nu}(\mathbf{l},\mathbf{u})\) degrades with the spread of the set \([\![\mathbf{l},\mathbf{u}]\!]\).3 In particular, the right-hand side of (7) tends to zero when \(\mathbf{l}\ll\mathbf{x}\) or \(\mathbf{x}\ll\mathbf{u}\). Therefore, setting the entries of \(\mathbf{l}\) and \(\mathbf{u}\) with too large absolute values is likely to degrade the effectiveness of the relaxed pruning decision (5).
Footnote 2: Some heuristics are commonly used in the literature to select proper values of the bounds, see [7, Sec. V.B], [11, Sec. 5.1] or [15, Sec. 4].
Footnote 3: This impairment pertains to a large class of mixed-integer problems and is well known in the literature, see _e.g.,_[16].
In the next section, we propose a solution to address this problem by deriving a methodology which locally tightens the constraint \(\mathbf{x}\in[\![\mathbf{l},\mathbf{u}]\!]\) at each node of the decision tree while preserving the correctness of the BnB procedure.
## IV Peeling
In this section, we introduce our proposed peeling procedure. As an initial assumption, we suppose that some interval \([\![\mathbf{l},\mathbf{u}]\!]\) verifying condition (6) is known. This assumption will be relaxed later on in Sec. IV-C.
Our goal is to find a new interval \([\![\mathbf{l}^{\prime},\mathbf{u}^{\prime}]\!]\) such that
\[\forall\mathbf{x}\in[\![\mathbf{l},\mathbf{u}]\!]\setminus[\![ \mathbf{l}^{\prime},\mathbf{u}^{\prime}]\!]:\ \mathrm{P}^{\nu}(\mathbf{x};\mathbf{l},\mathbf{u})>\bar{p} \tag{10a}\] \[[\![\mathbf{l}^{\prime},\mathbf{u}^{\prime}]\!]\subseteq[\![\mathbf{l}, \mathbf{u}]\!]. \tag{10b}\]
These requirements imply that the pruning decision (4) made at node \(\nu\) remains unchanged when replacing constraint "\(\mathbf{x}\in[\![\mathbf{l},\mathbf{u}]\!]\)" by "\(\mathbf{x}\in[\![\mathbf{l}^{\prime},\mathbf{u}^{\prime}]\!]\)" in (8-\(\mathcal{P}^{\nu}\)). More specifically, the following result holds:
**Lemma 1**.: _Assume \([\![\mathbf{l},\mathbf{u}]\!]\) and \([\![\mathbf{l}^{\prime},\mathbf{u}^{\prime}]\!]\) verify (10a)-(10b), then_
\[p^{\nu}(\mathbf{l}^{\prime},\mathbf{u}^{\prime})>\bar{p}\iff p^{\nu}(\mathbf{l}, \mathbf{u})>\bar{p}. \tag{11}\]
A proof of this result is available in App. A. A consequence of preserving the pruning decision is that taking the new constraint "\(\mathbf{x}\in[\![\mathbf{l}^{\prime},\mathbf{u}^{\prime}]\!]\)" into account at node \(\nu\) does not alter the output of the BnB procedure. In particular, it still correctly identifies the solutions of (1-\(\mathcal{P}\)). The second requirement (10b) implies that \(r^{\nu}(\mathbf{l}^{\prime},\mathbf{u}^{\prime})\) can possibly be larger than \(r^{\nu}(\mathbf{l},\mathbf{u})\) since the lower bound in (7) is tightened by considering lower absolute values for \(\mathbf{l}\) and \(\mathbf{u}\). Overall, any choice of \([\![\mathbf{l}^{\prime},\mathbf{u}^{\prime}]\!]\) verifying (10a)-(10b) thus keeps unchanged the output of the
BnB procedure while allowing for potentially more aggressive pruning decisions.
In the rest of this section, we describe a strategy to find some interval \([\mathbf{l}^{\prime},\mathbf{u}^{\prime}]\) satisfying (10a)-(10b). Because of the symmetry of the problem at stake, we only focus on the construction of the upper bound \(\mathbf{u}^{\prime}\). The identification of a lower bound \(\mathbf{l}^{\prime}\) can be done along the same lines.
### _Target peeling strategy_
Given some index \(j\in\nu_{\bullet}\) and \(\alpha>0\), we consider the following perturbed versions of (8-\(\mathcal{P}^{\nu}\)):
\[p_{\alpha}^{\nu}(\mathbf{l},\mathbf{u})\triangleq\inf_{\mathbf{x}\in\mathbb{R }^{n}}\mathrm{P}^{\nu}(\mathbf{x};\mathbf{l},\mathbf{u})+\eta(x_{j}>\alpha). \tag{12}\]
Problem (12) corresponds to (8-\(\mathcal{P}^{\nu}\)) where \(x_{j}\) is additionally constrained to be strictly greater than \(\alpha\). The following lemma then trivially follows from the definition of \(p_{\alpha}^{\nu}(\mathbf{l},\mathbf{u})\):
**Lemma 2**.: _If \(\alpha\in[0,u_{j}[\) and_
\[p_{\alpha}^{\nu}(\mathbf{l},\mathbf{u})>\bar{p}, \tag{13}\]
_then (10a)-(10b) hold with_
\[u_{i}^{\prime}\,=\,\begin{cases}\alpha&\text{if }i=j\\ u_{i}&\text{otherwise}.\end{cases} \tag{14}\]
This result thus states that any \(\alpha\in[0,u_{j}[\) verifying (13) enables the construction of some \(\mathbf{u}^{\prime}\) automatically fulfilling (10a)-(10b). Unfortunately, evaluating (13) involves the same computational burden as solving (8-\(\mathcal{P}^{\nu}\)). This problem can nevertheless be circumvented by finding some proper lower bound on \(p_{\alpha}^{\nu}(\mathbf{l},\mathbf{u})\) as described in the next section.
### _Tractable implementation_
In App. A, we leverage Fenchel-Rockafellar duality for problem (9-\(\mathcal{R}^{\nu}\)) to show that for any \(\mathbf{w}\in\mathbb{R}^{m}\), the following lower bound on \(p_{\alpha}^{\nu}(\mathbf{l},\mathbf{u})\) holds:
\[p_{\alpha}^{\nu}(\mathbf{l},\mathbf{u})\geq\mathrm{D}^{\nu}(\mathbf{w}; \mathbf{l},\mathbf{u})+\psi_{j}(\mathbf{w};\mathbf{l},\mathbf{u})+\alpha[- \mathbf{a}_{j}^{\mathrm{T}}\mathbf{w}]_{+}, \tag{15}\]
where
\[\mathrm{D}^{\nu}(\mathbf{w};\mathbf{l},\mathbf{u}) \triangleq\tfrac{1}{2}\|\mathbf{y}\|_{2}^{2}-\tfrac{1}{2}\| \mathbf{y}-\mathbf{w}\|_{2}^{2}+\lambda|\nu_{1}|\] \[\qquad-\sum_{i\in\nu_{1}}\mu_{0,i}(\mathbf{a}_{i}^{\mathrm{T}} \mathbf{w})-\sum_{i\in\nu_{\bullet}}\mu_{\lambda,i}(\mathbf{a}_{i}^{\mathrm{ T}}\mathbf{w})\] \[\psi_{j}(\mathbf{w};\mathbf{l},\mathbf{u}) \triangleq\mu_{\lambda,j}(\mathbf{a}_{j}^{\mathrm{T}}\mathbf{w})-u_{ j}[\mathbf{a}_{j}^{\mathrm{T}}\mathbf{w}]_{+}+\lambda\]
and \(\mu_{\rho,i}(v)\triangleq[u_{i}v-\rho]_{+}+[l_{i}v-\rho]_{+}\).
Using this result, condition (13) can be relaxed as
\[\mathrm{D}^{\nu}(\mathbf{w};\mathbf{l},\mathbf{u})+\psi_{j}(\mathbf{w}; \mathbf{l},\mathbf{u})+\alpha[-\mathbf{a}_{j}^{\mathrm{T}}\mathbf{w}]_{+}> \bar{p}. \tag{16}\]
Hence, choosing any \(\alpha\in[0,u_{j}[\) verifying (16) for some \(\mathbf{w}\in\mathbb{R}^{m}\) defines a new valid constraint via (14), in the sense of (10a)-(10b). Interestingly, the left-hand side of (16) depends linearly on \(\alpha\), thus allowing to precisely characterize the range of possible values satisfying the strict inequality (16). This leads us to the main result of this section.
**Proposition 1**.: _Let \(\mathbf{w}\in\mathbb{R}^{m}\). If \(\mathbf{a}_{j}^{\mathrm{T}}\mathbf{w}\geq 0\) and_
\[\mathrm{D}^{\nu}(\mathbf{w};\mathbf{l},\mathbf{u})+\psi_{j}(\mathbf{w}; \mathbf{l},\mathbf{u})>\bar{p}, \tag{17}\]
_then \(\mathbf{u}^{\prime}\) defined as in (14) with \(\alpha=0\) fulfills (10a)-(10b). Moreover, if \(\mathbf{a}_{j}^{\mathrm{T}}\mathbf{w}<0\), then \(\mathbf{u}^{\prime}\) defined as in (14) with any \(\alpha\in[0,u_{j}[\) verifying_
\[\alpha>\bar{\alpha}\triangleq\frac{\bar{p}-\mathrm{D}^{\nu}(\mathbf{w}; \mathbf{l},\mathbf{u})-\psi_{j}(\mathbf{w};\mathbf{l},\mathbf{u})}{[- \mathbf{a}_{j}^{\mathrm{T}}\mathbf{w}]_{+}} \tag{18}\]
_fulfills (10a)-(10b)._
Our next result shows that Prop. 1 can be applied to all indices \(j\in\llbracket 1,n\rrbracket\) either sequentially or in parallel, while preserving the correctness of the BnB procedure:
**Lemma 3**.: _Let \([\mathbf{l}^{\prime},\mathbf{u}^{\prime}]\) and \([\mathbf{l}^{\prime\prime},\mathbf{u}^{\prime\prime}]\) be two intervals satisfying (10a)-(10b). Then, the interval \([\mathbf{l}^{\prime},\mathbf{u}^{\prime}]\cap[\mathbf{l}^{\prime\prime}, \mathbf{u}^{\prime\prime}]\) also fulfills (10a)-(10b)._
A proof is available in App. A. We note that in terms of complexity the parallel application of Prop. 1 to all indices \(j\in\llbracket 1,n\rrbracket\) requires the computation of the inner products \(\{\mathbf{a}_{i}^{\mathrm{T}}\mathbf{w}\}_{i=1}^{n}\) and _one single_ evaluation of \(\mathrm{D}^{\nu}(\mathbf{w};\mathbf{l},\mathbf{u})\). Interestingly, these inner products are already computed in most numerical procedures solving (9-\(\mathcal{R}^{\nu}\)) and are thus usually available at no additional cost, see _e.g.,_[11, Sec. 4.3]. The overhead complexity of applying in parallel our proposed peeling strategy thus scales as \(\mathcal{O}(n+m)\).
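For concreteness, the upper-bound part of this parallel test could be implemented along the following lines. This is a NumPy sketch with our own variable names, not the authors' implementation; the lower bounds \(\mathbf{l}\) are peeled symmetrically and are not treated here, and \(\mathbf{w}\) can be any dual candidate.

```python
import numpy as np

def peel_upper_bounds(A, y, w, l, u, nu0, nu1, lam, p_bar, eps=1e-16):
    """Parallel upper-bound peeling (Prop. 1); lower bounds are handled symmetrically."""
    def mu(rho, vv, ll, uu):
        # mu_{rho,i}(v) = [u_i v - rho]_+ + [l_i v - rho]_+
        return np.maximum(uu * vv - rho, 0.0) + np.maximum(ll * vv - rho, 0.0)

    nu0 = np.asarray(nu0, dtype=int)
    nu1 = np.asarray(nu1, dtype=int)
    v = A.T @ w                            # inner products a_i^T w
    free = np.ones(A.shape[1], dtype=bool)
    free[nu0] = False
    free[nu1] = False
    # dual objective D^nu(w; l, u)
    D = (0.5 * y @ y - 0.5 * (y - w) @ (y - w) + lam * nu1.size
         - mu(0.0, v[nu1], l[nu1], u[nu1]).sum()
         - mu(lam, v[free], l[free], u[free]).sum())

    u_new = u.copy()
    for j in np.flatnonzero(free):
        psi_j = mu(lam, v[j], l[j], u[j]) - u[j] * max(v[j], 0.0) + lam
        if v[j] >= 0.0:
            if D + psi_j > p_bar:          # test (17): the whole interval ]0, u_j] is peeled
                u_new[j] = 0.0
        else:
            alpha_bar = (p_bar - D - psi_j) / (-v[j])   # threshold (18)
            if alpha_bar < u[j]:           # any alpha in ]alpha_bar, u_j[ is valid
                u_new[j] = min(max(alpha_bar + eps, 0.0), u[j])
    return u_new
```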
### _Propagating peeling down the tree_
In this section, we emphasize that any interval \([\mathbf{l}^{\prime},\mathbf{u}^{\prime}]\) verifying (10a)-(10b) at node \(\nu\) can be used as a starting point to apply our peeling procedure at the child nodes of \(\nu\). More specifically, the following result holds:
**Lemma 4**.: _Let \([\mathbf{l}^{\prime},\mathbf{u}^{\prime}]\) be some interval verifying (10a)-(10b) at node \(\nu\) and let \(\nu^{\prime}\) be some child node of \(\nu\). Assume that the peeling procedure defined in Prop. 1 is applied at node \(\nu^{\prime}\) with \([\mathbf{l}^{\prime},\mathbf{u}^{\prime}]\) as input, rather than \([\mathbf{l},\mathbf{u}]\), to generate a new interval \([\mathbf{l}^{\prime\prime},\mathbf{u}^{\prime\prime}]\). Then we have_
\[\forall\mathbf{x}\in[\mathbf{l},\mathbf{u}]\setminus[\mathbf{l}^{ \prime\prime},\mathbf{u}^{\prime\prime}]:\,\mathrm{P}^{\nu^{\prime}}( \mathbf{x};\mathbf{l},\mathbf{u})>\bar{p} \tag{19a}\] \[[\mathbf{l}^{\prime\prime},\mathbf{u}^{\prime\prime}]\subseteq[ \mathbf{l},\mathbf{u}]. \tag{19b}\]
A proof of this result is available in App. A. In other words, Lem. 4 states that any peeled interval \([\mathbf{l}^{\prime},\mathbf{u}^{\prime}]\) computed at node \(\nu\) can be used as a starting point to apply a new peeling step at any child node \(\nu^{\prime}\). This allows to propagate the peeled interval \([\mathbf{l}^{\prime},\mathbf{u}^{\prime}]\) down the decision tree to hopefully improve sequentially the tightness of the convex relaxation (9-\(\mathcal{R}^{\nu}\)).
## V Numerical results
This section reports an empirical study demonstrating the effectiveness of the proposed peeling procedure to accelerate the resolution of (1-\(\mathcal{P}\)) on a synthetic dataset. Additional simulation results can be found in App. C.
### _Experimental setup_
We consider instances of problem (1-\(\mathcal{P}\)) with dimensions \((m,n)=(100,150)\). For each trial, new realizations of \(\mathbf{A},\mathbf{y}\) and \(\lambda\) are generated as follows. Each row of the dictionary \(\mathbf{A}\) is drawn from a multivariate normal distribution with zero mean and covariance matrix \(\mathbf{K}\in\mathbb{R}^{n\times n}\). The entry \((i,j)\) of \(\mathbf{K}\) is defined as \(K_{ij}=\rho^{|i-j|}\), \(\forall i,j\in\llbracket 1,n\rrbracket\), with \(\rho=0.1\). Each realization of \(\mathbf{y}\) is generated in two steps. We first create a \(k\)-sparse vector \(\mathbf{x}^{\dagger}\in\mathbb{R}^{n}\) with evenly-distributed non-zero components, where \(k=5\). The non-zero entries are defined as \(x_{i}^{\dagger}=\mathrm{sign}(r_{i})+r_{i}\) where \(r_{i}\) is an independent realization of a zero-mean Gaussian with variance \(\sigma^{2}\). We then set \(\mathbf{y}=\mathbf{A}\mathbf{x}^{\dagger}+\mathbf{n}\) for some zero-mean white Gaussian noise \(\mathbf{n}\). The variance of the noise is adjusted so that the \(\mathrm{SNR}\triangleq 10\log_{10}(\|\mathbf{A}\mathbf{x}^{\dagger}\|_{2}^{2}/\| \mathbf{n}\|_{2}^{2})\) is equal to \(15\)dB. The parameter \(\lambda\) is calibrated for each instance of (1-\(\mathcal{P}\)) using the cross-validation tools of the L0Learn package [17] with the default parameters. More specifically, we call the cv.fit procedure that takes \(\mathbf{y}\) and \(\mathbf{A}\) as inputs and returns couples \((\mathbf{x}_{\lambda_{i}},c_{\lambda_{i}})\) from a grid of values \(\{\lambda_{i}\}_{i\in\mathbb{N}}\) selected data-dependently by the package. The vector \(\mathbf{x}_{\lambda_{i}}\) is an approximate solution of (1-\(\mathcal{P}\)) where the weight of the \(\ell_{0}\)-norm is set to \(\lambda_{i}\) and \(c_{\lambda_{i}}\) is an associated cross-validation score computed on 10 folds of \(m/10\) randomly sampled rows in \(\mathbf{A}\) and entries in \(\mathbf{y}\). We then set
\[\lambda=\operatorname*{arg\,min}_{\lambda_{i}}\ c_{\lambda_{i}}\ \text{s.t.}\ \|\mathbf{x}_{\lambda_{i}}\|_{0}=\left\|\mathbf{x}^{\dagger}\right\|_{0}. \tag{20}\]
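For reference, the data-generation step described above can be sketched as follows. This is our own sketch: names, seeding and the interpretation of "evenly-distributed" support as equally spaced indices are assumptions, and the \(\lambda\) calibration via L0Learn is omitted.

```python
import numpy as np

def make_instance(m=100, n=150, k=5, rho=0.1, snr_db=15.0, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # rows of A ~ N(0, K) with K_ij = rho^|i-j|
    K = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    A = rng.multivariate_normal(np.zeros(n), K, size=m)
    # k-sparse ground truth with evenly spaced support
    x_true = np.zeros(n)
    support = np.linspace(0, n - 1, k, dtype=int)
    r = sigma * rng.standard_normal(k)
    x_true[support] = np.sign(r) + r
    # additive white Gaussian noise scaled to the target SNR (in dB)
    Ax = A @ x_true
    noise = rng.standard_normal(m)
    noise *= np.linalg.norm(Ax) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    y = Ax + noise
    return A, y, x_true
```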
### _Competing procedures_
We consider the following numerical solvers addressing (1-\(\mathcal{P}\)): _i)_Cplex[6], a generic mixed-integer problem solver; _ii)_L0bnb[10], a standard BnB procedure using a "breadth-first search" exploration strategy, see [10, Sec. 3.3]; _iii)_Sbnb, a standard BnB procedure using a "depth-first search" exploration strategy, see [15, Sec. 2.2]; _iv)_Sbnb-N, corresponding to Sbnb enhanced with additional "node-screening" techniques, see [11]; _v)_Sbnb-P, corresponding to Sbnb enhanced with the peeling strategy presented in this paper. L0bnb, Sbnb, Sbnb-N and Sbnb-P all use the same solving procedure for relaxed problem (9-\(\mathcal{R}^{\nu}\)), namely a coordinate descent method [18]. We use the C++ implementation of Cplex4 and the Python implementation of L0bnb.5 Sbnb, Sbnb-N and Sbnb-P are implemented in Julia.6
Footnote 4: [https://github.com/jump-dev/CPLEX.jl](https://github.com/jump-dev/CPLEX.jl)
Footnote 5: [https://github.com/hazimehh/LDLearn](https://github.com/hazimehh/LDLearn)
Footnote 6: [https://github.com/TheGuyard/BnbPeeling.jl](https://github.com/TheGuyard/BnbPeeling.jl)
Footnote 7: [https://github.com/haez/BnbPeeling.jl](https://github.com/haez/BnbPeeling.jl)
For Sbnb-P, peeling is applied at each iteration of the numerical procedure solving the relaxed problem (9-\(\mathcal{R}^{\nu}\)). We use the current iterate, say \(\mathbf{x}^{(k)}\), to define \(\mathbf{w}\triangleq\mathbf{y}-\mathbf{A}\mathbf{x}^{(k)}\) and apply the peeling rules defined in Prop. 1 in parallel, _i.e._, simultaneously for all the components of \(\mathbf{x}\). The value of \(\alpha\) satisfying (18), if any, is chosen as \(\alpha=\bar{\alpha}+10^{-16}\). The peeled intervals are propagated through the decision tree as described in Sec. IV-C.
All the solving procedures are provided with the initial bounds \(\mathbf{l}=-M\mathbf{1}\) and \(\mathbf{u}=M\mathbf{1}\) for some proper value of \(M\). This corresponds to the standard _"Big-\(M\)"_ constraint commonly considered in the literature [7, 9, 13, 15]. As far as our random simulation setup is concerned, it can be shown that (1-\(\mathcal{P}\)) admits a unique minimizer \(\mathbf{x}^{\star}\) with probability one and we thus choose \(M=\gamma\|\mathbf{x}^{\star}\|_{\infty}\) for some \(\gamma\geq 1\) in our simulations. This requires solving (1-\(\mathcal{P}\)) once beforehand to identify \(\mathbf{x}^{\star}\). This operation is only done here for the sake of comparing the sensitivity of the solving methods to the choice of \(\gamma\). In practice, we obtain \(\mathbf{x}^{\star}\) by solving a sequence of problems with an increasing value of \(M\) in the _Big-\(M\)_ constraint. More specifically, letting \(\mathbf{x}^{\star}_{M}\) denote the solution of (1-\(\mathcal{P}\)) with the additional constraint "\(-M\mathbf{1}\leq\mathbf{x}\leq M\mathbf{1}\)", we are guaranteed that \(\mathbf{x}^{\star}=\mathbf{x}^{\star}_{M}\) as soon as the strict inequality \(\|\mathbf{x}^{\star}_{M}\|_{\infty}<M\) holds. We thus compute the solution \(\mathbf{x}^{\star}_{M}\) for a sequence of \(M\) of the form \(\{\eta^{i}M_{0}\}_{i\in\mathbb{N}}\) with \(\eta=1.1\) and for some \(M_{0}>0\) and stop as soon as \(\|\mathbf{x}^{\star}_{M}\|_{\infty}<M\).
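This last step amounts to a simple loop, sketched below. The function `solver` is a hypothetical placeholder for any exact solver of (1-\(\mathcal{P}\)) under the Big-\(M\) constraint; the remaining names are ours.

```python
import numpy as np

def solve_without_prior_bigM(A, y, lam, solver, M0=1.0, eta=1.1):
    # Solve a sequence of Big-M constrained problems with increasing M until the
    # box constraint is provably inactive, i.e. x_M also solves (1-P).
    M = M0
    while True:
        x_M = solver(A, y, lam, M)       # minimizer under -M*1 <= x <= M*1
        if np.max(np.abs(x_M)) < M:      # strict inequality => x* = x_M
            return x_M
        M *= eta
```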
### _Computational gains_
Fig. 1 presents the performance of the considered solving procedures. All results are averaged over 50 problem instances. Experiments were run on one Intel Xeon E5-2660 v3 CPU clocked at 2.60 GHz with 16 GB of RAM. The left column in Fig. 1 represents the average solving time of each procedure as a function of \(\gamma\) (top) and \(\sigma\) (bottom); the right column illustrates the gain allowed by the proposed method in terms of solving time (solid) and number of nodes explored (dashed) as compared to its best competitor, that is Sbnb-N.
Fig. 1: Left: Solving time as a function of \(\gamma\) (top, \(\sigma=1\)) and \(\sigma\) (bottom, \(\gamma=5\)). Right: gain in terms of solving time (solid) and number of nodes explored (dashed) with respect to Sbnb-N.
We note that Sbnb-P leads to the smallest running time in all the considered setups. Since the latter corresponds to Sbnb with peeling added, the spacing between the red and green curves materializes the gain provided by peeling. As far as our simulation setup is concerned, we see that the proposed method enables an acceleration of almost one order of magnitude with respect to Sbnb. It is noticeable that this acceleration occurs even if \(\gamma=1\), that is, when the _Big-\(M\)_ constraint is perfectly tuned to the problem at hand. This is due to the fact that peeling can refine _individually_ each component of the initial bounds \(\mathbf{l}\) and \(\mathbf{u}\) at _each node_ of the BnB decision tree to fit the local geometry of the problem.
We also notice that Sbnb-P improves over Sbnb-N, which can be seen as another acceleration of Sbnb. In particular, Sbnb-P always performs at least as well as Sbnb-N, as emphasized by the gains in the right-hand side of Fig. 1. We note in particular the gain provided by peeling in terms of number of nodes processed by the BnB procedure: as expected, peeling allows for more aggressive pruning and thus reduces the number of nodes to be explored.
## VI Conclusion
In this paper, we presented a tractable strategy, named "_peeling_", to tighten the box constraints used in a BnB procedure tailored to \(\ell_{0}\)-regularized least-squares problems. Unlike the standard approach, which imposes _one global_ constraint on the problem, our strategy aims to locally refine the box constraints _at each node_ of the decision tree. This refinement enables us to strengthen the convex relaxations used in the pruning decisions made by the BnB procedure and can lead to significant improvements in terms of solving time, as emphasized by our simulation results. |
2309.16114 | Comparing Active Learning Performance Driven by Gaussian Processes or
Bayesian Neural Networks for Constrained Trajectory Exploration | Robots with increasing autonomy progress our space exploration capabilities,
particularly for in-situ exploration and sampling to stand in for human
explorers. Currently, humans drive robots to meet scientific objectives, but
depending on the robot's location, the exchange of information and driving
commands between the human operator and robot may cause undue delays in mission
fulfillment. An autonomous robot encoded with a scientific objective and an
exploration strategy incurs no communication delays and can fulfill missions
more quickly. Active learning algorithms offer this capability of intelligent
exploration, but the underlying model structure varies the performance of the
active learning algorithm in accurately forming an understanding of the
environment. In this paper, we investigate the performance differences between
active learning algorithms driven by Gaussian processes or Bayesian neural
networks for exploration strategies encoded on agents that are constrained in
their trajectories, like planetary surface rovers. These two active learning
strategies were tested in a simulation environment against science-blind
strategies to predict the spatial distribution of a variable of interest along
multiple datasets. The performance metrics of interest are model accuracy in
root mean squared (RMS) error, training time, model convergence, total distance
traveled until convergence, and total samples until convergence. Active
learning strategies encoded with Gaussian processes require less computation to
train, converge to an accurate model more quickly, and propose trajectories of
shorter distance, except in a few complex environments in which Bayesian neural
networks achieve a more accurate model in the large data regime due to their
more expressive functional bases. The paper concludes with advice on when and
how to implement either exploration strategy for future space missions. | Sapphira Akins, Frances Zhu | 2023-09-28T02:45:14Z | http://arxiv.org/abs/2309.16114v1 | Comparing Active Learning Performance Driven by Gaussian Processes or Bayesian Neural Networks for Constrained Trajectory Exploration
###### Abstract
Robots with increasing autonomy progress our space exploration capabilities, particularly for in-situ exploration and sampling to stand in for human explorers. Currently, humans drive robots to meet scientific objectives, but depending on the robot's location, the exchange of information and driving commands between the human operator and robot may cause undue delays in mission fulfillment. An autonomous robot encoded with a scientific objective and an exploration strategy incurs no communication delays and can fulfill missions more quickly. Active learning algorithms offer this capability of intelligent exploration, but the underlying model structure varies the performance of the active learning algorithm in accurately forming an understanding of the environment. In this paper, we investigate the performance differences between active learning algorithms driven by Gaussian processes or Bayesian neural networks for exploration strategies encoded on agents that are constrained in their trajectories, like planetary surface rovers. These two active learning strategies were tested in a simulation environment against science-blind strategies to predict the spatial distribution of a variable of interest along multiple datasets. The performance metrics of interest are model accuracy in root mean squared (RMS) error, training time, model convergence, total distance traveled until convergence, and total samples until convergence. Active learning strategies encoded with Gaussian processes require less computation to train, converge to an accurate model more quickly, and propose trajectories of shorter distance, except in a few complex environments in which Bayesian neural networks achieve a more accurate model in the large data regime due to their more expressive functional bases. The paper concludes with advice on when and how to implement either exploration strategy for future space missions.
## I Nomenclature
\(d\) \(=\) distance
\(d_{c}\) \(=\) distance until convergence
\(D\) \(=\) dataset
\(e\) \(=\) error
\(f\) \(=\) true model
\(\hat{f}\) \(=\) oracle model
\(g\) \(=\) suggestion policy
\(i\) \(=\) index
\(i_{c}\) \(=\) samples until convergence
\(I\) \(=\) objective function
\(k\) \(=\) kernel
\(N\) \(=\) normal distribution
\(\mu\) \(=\) model posterior mean
\(r\) \(=\) location in environment
\(\mathcal{R}\) \(=\) the entire environment space
\(\sigma\) \(=\) measurement noise
\(t\) \(=\) time
\(V\) \(=\) model posterior variance
\(X\) \(=\) aggregate input dataset target position
\(x\) \(=\) single training pair target position
\(Y\) \(=\) aggregate input dataset target variable of interest
\(y\) \(=\) single training pair target variable of interest
## 1 Introduction
Traditionally in robotic exploration, either robots are teleoperated by humans or autonomous robots are provided with user-defined waypoints within the environment prior to deployment. There is always human involvement. Now, intelligent, adaptive autonomous robots are needed to explore unknown, dynamic environments where little is known a priori. The robot must use its own sensors to fully understand its environment. An in-situ exploration strategy that incorporates science information and maximizes a formal cost objective generates proximal destinations of interest, yielding more efficient scientific data collection, time savings, and potentially convergence properties. Even if this exploration strategy algorithm is not fully autonomous, the generated waypoints can inform teleoperators of potential destinations of interest, which could accelerate the site selection process or affirm sites selected by teleoperators.
The science mission that motivates this technology is the search for water ice. Water ice is one of the most important resources on the Moon and Mars [1, 2]. The direct detection of surface-exposed water ice using infrared data in the lunar polar regions accelerates the progress of exploring lunar ice in-situ resources [3]. Data gathered from observations of surface-level water-ice deposits on the Moon suggest these deposits may also exist subsurface. However, we do not currently have the knowledge necessary to classify any subset of the total volume of lunar water-ice resources. Orbital InfraRed (IR) measurements suggest that water ice exists in approximately 5% of Lunar cold traps (regions where the annual maximum temperature is less than 110 K and water-ice is stable) and in up to 30% of the total exposed surface mass [3]. At present, we do not yet understand enough about the physical characteristics of lunar water-ice deposits to consider these reserves for future exploration and resource utilization efforts. The most direct way to characterize the volume of subsurface water is to conduct an in-situ investigation, necessitating human or robot surface operations.
Currently, human operators intuit the scientific value of exploring specific destinations, much like NASA's Sojourner, China's Yutu-2, and the MERs [4]. Although the most recently landed rover, MSL, shows hints of autonomy, the autonomous interactions are restricted to mobility actions - separate from any science [5]. Rovers will very likely face power and thermal limitations that depend on the time spent in a permanently shadowed region, for which the mission cannot afford extensive sampling or teleoperators stopping to intuit the next waypoint to visit. The optimization problem of space exploration is that a limited set of spacecraft resources (power) must be allocated between competing choices (destinations) in a way that maximizes science discovered and mitigates risk, a specific formulation of the Bayesian optimization problem [6].
This paper directly compares the performance of active learning strategies driven by a Gaussian process or Bayesian neural network along metrics of accuracy (RMS error), train time, and samples until convergence in a constrained trajectory exploration application. Section 2 reviews core concepts in understanding Gaussian process performance to neural network performance in driving active learning algorithms and distinguishes this work from previous work. Section 3 discusses the active learning algorithm, the algorithm implementation, the benchmark environments, and the experiments run to compare Gaussian processes to Bayesian neural networks. Section 4 reports the results of the comparison by defining the metrics for comparison, performance along these metrics, and an interpretation of performance for other applications.
## 2 Background
Active learning algorithms use historical measurements to generate an uncertainty map that suggests a location in the space with the highest uncertainty to sample next, which offers a sample-efficient method for exploring and characterizing a space. The agent is encoded with an objective function, \(J\), that aims to minimize a learned model's prediction \(\hat{f}(X,t,D,k(\cdot))\) with respect to ground truth \(f(X,t)\) at a location on the surface across a set of discretized locations \(X\in[x_{1},\cdots,x_{i}]\) using dataset \(D\) and kernel \(k(\cdot)\). This model error takes the form of the \(L_{2}\) norm or root
mean-squared (RMS) error, seen in Eq. (1). This data \(D\), defined in Eq. (2), is collected iteratively by the robot in the environment with a control policy \(g^{*}\) that chooses a proximal location \(x_{vmax}\) that has the highest variance (or uncertainty) \(V_{pred}\) in the model prediction \(\hat{f}(\cdot)\).
\[J =\ \big{\|}f(X,t)-\hat{f}(X,t,D,k(\cdot))\big{\|}_{2} \tag{1}\] \[D =\begin{bmatrix}t_{1}&x_{k,1}&y_{k,1}\\ t_{2}&x_{k,2}&y_{k,2}\\ &\vdots&\\ t_{j}&x_{k,j}=\tau_{\nu max}&y_{k,j}\\ &\vdots&\\ t_{m}&x_{k,m}&y_{k,m}\end{bmatrix} \tag{2}\]
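As a concrete illustration of Eqs. (1)-(2), the sketch below evaluates the model error \(J\) over a discretized grid and grows the dataset \(D\) one sample at a time. This is a minimal sketch rather than the authors' implementation; the function names and array layout are illustrative assumptions.

```python
import numpy as np

def model_rms_error(f_true, f_hat, X):
    """Eq. (1): error between the ground-truth surface f_true and the learned
    model f_hat, evaluated over the discretized locations X (one row per location)."""
    residual = np.asarray(f_true(X)) - np.asarray(f_hat(X))
    return float(np.linalg.norm(residual) / np.sqrt(len(residual)))  # RMS form of the L2 norm

def append_sample(D, t, x_k, y_k):
    """Eq. (2): grow the dataset D by one row of (time, sampled location, sampled value)."""
    row = np.concatenate(([t], np.atleast_1d(x_k), [y_k]))
    return row[None, :] if D is None else np.vstack([D, row])
```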
Active learning algorithms are underpinned by two components: an oracle that predicts a mean and covariance function across space \(\hat{f}(\cdot)\) and a policy that suggests the next location to sample \(g(\cdot)\). The oracle is typically a Gaussian process due to its highly expressive capacity (lends well to characterization) and convenient uncertainty quantification in the posterior prediction (lends well to exploration), but can be represented by any model that offers a mean and covariance function as the model output shown in Eq. (3), like a probabilistic or Bayesian neural network.
\[\hat{f}(X)\!\sim\!N(\mu,V) \tag{3}\]
A Gaussian process is a probabilistic kernel method that relies on a user definition of basis kernel, most commonly the radial basis function. The basis function heavily determines the performance of the Gaussian process in generating an accurate mean and covariance function to the true underlying function, which is unknown. While Gaussian processes are mathematically elegant and conceptually simple, the kernel definition can be constraining. Neural networks offer more flexible, adaptable bases to represent a wider range of underlying functions but need more data and training time to generate an accurate model. Neural networks excel in applications of large data, complex bases, and unconstrained training time. Gaussian processes excel in applications of sparse, unevenly distributed data but can be computationally prohibitive for large datasets due to the single matrix operation that relies on matrix inversion.
For the sake of exploration, a policy \(g\) chooses a location \(r_{vmax}\) that has the highest variance (or uncertainty) \(\hat{V}_{pred}\) in the model prediction \(\hat{f}\) in some space \(\mathcal{R}\).
\[r_{vmax}=g(r\in\mathcal{R})=\operatorname*{argmax}_{r\in\mathcal{R}}\ \hat{V}_{pred} \tag{4}\]
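A minimal sketch of the oracle and the unconstrained suggestion policy of Eqs. (3)-(4) is given below, assuming a scikit-learn Gaussian process as the oracle and an RBF kernel as the default basis; the authors' actual model configuration may differ.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_oracle(X_train, y_train):
    # GP oracle f_hat ~ N(mu, V) of Eq. (3), using an RBF kernel as a default basis.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(X_train, y_train)
    return gp

def unconstrained_policy(gp, R_grid):
    # Eq. (4): pick the location in the full space R with the highest predicted variance.
    _, std = gp.predict(R_grid, return_std=True)
    return R_grid[np.argmax(std ** 2)]
```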
In conventional active learning algorithms, the suggestion policy is free to select the location of high uncertainty to sample across the entire global space, like a satellite leveraging remote sensing that can point at any visible point on the Earth's surface, depicted in Figure 2 (left). But for applications involving in-situ sampling, like a robot visiting a destination in an environment and sampling at that specific destination, depicted in Figure 2 (right), an agent is limited to sampling at locations within a finite distance. This sequence of sample locations \(x_{D}=[x_{k,1},\cdots,x_{k,m}]\) can be thought of as a constrained trajectory and
Figure 1: Difference between prediction horizon \(d_{\mathit{horizon}}\) and sampling distance \(d_{\mathit{samples}}\)
Figure 2: Difference between sampling in remote sensing (left) vs. in-situ exploration (right) applications
adds nuance to how a suggestion policy may be crafted, namely defining 1) the distance between sequential samples \(d_{samples}\) and 2) the uncertainty horizon to consider sampling \(d_{horizon}\). Given in Eq. (5), the constrained trajectory suggestion policy is a modified version of the aforementioned unconstrained suggestion policy.
\[r_{vmax}=g(r\in r_{pred})=\operatorname*{argmax}_{r\in r_{pred}}\ \hat{V}_{pred} \tag{5}\]

## 3 Methods

The exploration procedure encoded on the agent proceeds through the following steps:

1. Load the environment's geometry (parabola, Townsend, or lunar crater), size (length and width of surface), and noise level (random Gaussian noise ranging in variance). Selection of these environment geometries is discussed in the following section.
2. Define the exploration strategy (spiral, snake, or active learning) and stopping condition (number of total samples in the training dataset divided by two for active learning strategies, detailed for each surface: parabola - 219 samples, Townsend - 219 samples, 3km Lunar - 83 samples, 6km Lunar - 311 samples).
3. Define the Gaussian process model and the Bayesian neural network hyperparameters as defined in Model Selection.
4. Initialize the agent's starting location.
5. Seed the training dataset with 10 training points.
   1. For spiral and snake methods, predefined pairs are utilized throughout the entire experiment.
   2. For active learning methods, a random walk generates the initial training data.
6. Explore the surface \(\mathcal{R}\) until a predefined maximum number of samples is reached (a simplified sketch of this loop is given after the list).
   1. For spiral and snake methods, continue to sample and train the model along the predefined trajectory.
   2. For active learning:
      1. Train the Gaussian process and Bayesian neural network models on the \(n\) input-output pairs in the training set thus far, \((X,Y)\rightarrow\hat{f}\), where \(X=[x_{1},\cdots,x_{n}]^{\top}\) and \(Y=[y_{1},\cdots,y_{n}]^{\top}\).
      2. Predict scalar expected values \(\hat{Y}_{pred}\) and variances \(\hat{V}_{pred}\) in the prediction horizon \(r_{pred}\).
      3. Generate a control policy \(g^{*}\) that identifies the location in the prediction horizon with the highest variance, \(r_{vmax}\), defined in Eq. (6). \[r_{vmax}=g^{*}(r\in r_{pred})=\operatorname*{argmax}_{r\in r_{pred}}\ \hat{V}_{pred} \tag{6}\]
      4. Traverse to the nearest-neighbor location in the direction of the high-variance location \(r_{vmax}\). The action \(a\) is the next location, given in Eq. (7). \[a=\operatorname*{argmin}_{r_{way}\in d_{samples}}\left\|r_{way}-r_{vmax}\right\|_{2} \tag{7}\]
      5. Sample the value \(y_{n+1}\) at this location \(a=x_{n+1}\) and append this training pair to the training set.
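The sketch below strings the listed steps together into a simplified constrained active learning loop, assuming a gridded environment, a GP oracle with a Matern 5/2 kernel, and Euclidean horizons; `sample_fn`, `grid`, and the seeding scheme are illustrative placeholders rather than the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def constrained_active_learning(sample_fn, grid, r0, n_samples=50,
                                d_horizon=0.3, d_samples=0.1):
    """Simplified loop: seed with random points, then repeatedly (i) fit the oracle,
    (ii) find the highest-variance point inside the prediction horizon (Eq. 6), and
    (iii) step toward it by at most d_samples (Eq. 7)."""
    rng = np.random.default_rng(0)
    r = np.asarray(r0, float)
    X, Y = [r], [sample_fn(r)]
    for _ in range(9):                          # simplified seeding (10 points total)
        r = grid[rng.integers(len(grid))]
        X.append(r); Y.append(sample_fn(r))
    for _ in range(n_samples):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(np.array(X), np.array(Y))
        in_horizon = grid[np.linalg.norm(grid - r, axis=1) <= d_horizon]
        _, std = gp.predict(in_horizon, return_std=True)
        r_vmax = in_horizon[np.argmax(std)]                               # Eq. (6)
        candidates = grid[np.linalg.norm(grid - r, axis=1) <= d_samples]
        r = candidates[np.argmin(np.linalg.norm(candidates - r_vmax, axis=1))]  # Eq. (7)
        X.append(r); Y.append(sample_fn(r))     # sample and append to the training set
    return np.array(X), np.array(Y), gp
```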
### Benchmark Surfaces
Each algorithm's performance is evaluated through its ability to map an environment across three distinct surfaces: parabola, Townsend, and the lunar south pole crater. Due to the surfaces' changing complexities, the Bayesian neural network and Gaussian process machine learning strategies can be evaluated on their performance in these varying conditions, allowing a deeper understanding of their strengths and weaknesses in relation to the complexity of the environment. The inclusion of multiple surfaces facilitates a comprehensive assessment of algorithm adaptability across diverse environments, thus enhancing the robustness of the assessment. Moreover, utilization of the lunar surface enables the testing of these frameworks in a real-world setting.
These surfaces have two independent dimensions (planar position \(r=(x_{1},x_{2})\)) and a third dependent dimension \(y\); the dependent variable's algebraic relationship to position is known for the parabola and Townsend benchmark surfaces but unknown for the lunar ice data. The parabola surface is defined by Eq. (8), where \(\sigma_{noise}^{2}=0.02\) or \(0\), \(x_{1}\in[-1\text{:}\,0.1\text{:}\,1]\), and \(x_{2}\in[-1\text{:}\,0.1\text{:}\,1]\). The Townsend surface is defined by Eq. (9), where \(\sigma_{noise}^{2}=0.02\) or \(0\), \(x_{1}\in[-2.5\text{:}\,0.1\text{:}\,2.5]\), and \(x_{2}\in[-2.5\text{:}\,0.1\text{:}\,2.5]\).
\[y=x_{1}^{2}+x_{2}^{2}+\sigma_{noise}^{2} \tag{8}\] \[y=-\big{(}\cos((x_{1}-0.1)x_{2})\big{)}^{2}-x_{1}\sin(3x_{1}+x_{2})+\sigma_{noise}^{2} \tag{9}\]
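The two analytical benchmark surfaces of Eqs. (8)-(9) can be generated as below; the noise term is interpreted here as additive zero-mean Gaussian noise with variance \(\sigma_{noise}^{2}\), which is an assumption about how the noise is injected.

```python
import numpy as np

def parabola(x1, x2, noise_var=0.0, rng=np.random.default_rng(0)):
    noise = rng.normal(0.0, np.sqrt(noise_var), np.shape(x1)) if noise_var > 0 else 0.0
    return x1 ** 2 + x2 ** 2 + noise                                             # Eq. (8)

def townsend(x1, x2, noise_var=0.0, rng=np.random.default_rng(0)):
    noise = rng.normal(0.0, np.sqrt(noise_var), np.shape(x1)) if noise_var > 0 else 0.0
    return -(np.cos((x1 - 0.1) * x2)) ** 2 - x1 * np.sin(3 * x1 + x2) + noise    # Eq. (9)

# Discretized grids matching the stated ranges and 0.1 spacing.
x1, x2 = np.meshgrid(np.arange(-1.0, 1.01, 0.1), np.arange(-1.0, 1.01, 0.1))
noisy_parabola = parabola(x1, x2, noise_var=0.02)
```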
The lunar surface, derived from LAMP data [14], consists of a digital elevation map (DEM) \(\left(r=(x_{1},x_{2},x_{3})\right)\) of the lunar south pole in 5 m spatial resolution and hydroxyl data \(y\) in 250 m spatial resolution. Noise is present in the data and significant gaps appear near the crater rim. The results comparing the six exploration strategies are presented in
order of ascending complexity of the surfaces shown in Figure 3: noiseless parabola, noisy parabola, noiseless Townsend, noisy Townsend, 3km lunar crater swath, and 6km lunar crater swath.
### Simulation Experiment Campaign
The Gaussian process and Bayesian neural network informed exploration strategies of various movement and prediction horizons (Table 2) were evaluated on the three surfaces of varying surface size, toggling between noiseless and noisy measurements (Table 1). The movement horizon is varied between one grid space (movement to nearest neighbor) for active learning methods and two to four grid spaces for the science-blind snake and spiral exploration strategies. For the active learning strategies, the prediction horizon is set to look at one grid space (\(1\Delta r\)), three grid spaces (\(3\Delta r\)), or globally across the entirety of the surface (\(r\in R\)). By varying these parameters, the exploration efficiency of each model can be analyzed thoroughly. Although different movement horizons are compared between the science-blind and active learning methods, the science-blind method serves as a baseline metric over a pre-determined path, unlike the active learning strategy that changes with each "step" the agent takes. Additionally, note that these tests are run for multiple trials to verify the validity of the data.
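Since Tables 1 and 2 are not reproduced here, the sketch below only illustrates how such an experiment campaign could be enumerated; the specific movement-horizon assignments for the science-blind strategies are assumptions consistent with the two-to-four grid spaces stated above.

```python
from itertools import product

surfaces   = ["parabola", "townsend", "lunar_3km", "lunar_6km"]
noise      = [False, True]                      # noiseless vs. noisy measurements
strategies = {
    "SB_snake":  {"movement": 2, "prediction_horizon": None},   # science-blind
    "SB_spiral": {"movement": 4, "prediction_horizon": None},   # science-blind
    "AL_NN":     {"movement": 1, "prediction_horizon": 1},      # nearest-neighbor horizon
    "AL_local":  {"movement": 1, "prediction_horizon": 3},      # 3-grid-space horizon
    "AL_global": {"movement": 1, "prediction_horizon": "global"},
}
campaign = [
    {"surface": s, "noisy": n, "strategy": name, **cfg}
    for s, n, (name, cfg) in product(surfaces, noise, strategies.items())
]
print(len(campaign), "experiment configurations")
```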
## IV Results
### Metrics
To comprehensively assess the performance of the Gaussian process and Bayesian neural network active learning exploration strategies, the following metrics were utilized; each is defined below together with its intended purpose.
* Training Time
  * All experiments were run on the same compute node of the University of Hawaii's high-performance computing cluster, which leveraged 32 CPUs and 128 GB of RAM. Execution time was measured using the operating system's time function prior to and after either model's training function call. Training time is a proxy for the computational intensity of either implemented algorithm.
* RMS Error upon Convergence \(e_{c}\)
  * To determine convergence loosely, the 2% settling time from control theory was adopted. The global RMS error between the model prediction and the true values is inspected to verify 1) that there are enough data points to confirm convergence and 2) that the final values of the RMS error stay within a 2% band of the final value \(e_{f}\). The 2% error band \(\Delta e_{2\%}\) is found by differencing the initial RMS error \(e_{0}\) and the final RMS error \(e_{f}\), given in Eq. (10). The RMS error upon convergence \(e_{c}\) is thus defined as the upper bound of this error band, defined by Eq. (11). \[\Delta e_{2\%}=0.02(e_{0}-e_{f}) \tag{10}\] \[e_{c}=e_{f}+\Delta e_{2\%} \tag{11}\] It is important to note that the RMS error upon convergence, along with the samples/distance until convergence (detailed below), is particularly relevant for trials that exhibit asymptotic behavior, implying convergence to a constant value. However, such convergence can only be speculated upon when the terminal value is unknown.
* Samples until Convergence \(i_{c}\)
  * The index of convergence, or samples until convergence, \(i_{c}\) is then found by minimizing the difference between the error at an index \(i\), \(e_{i}\), and the error upon convergence \(e_{c}\), given in Eq. (12). \[i_{c}=\operatorname*{argmin}_{i}\|e_{i}\ -\ e_{c}\|_{2} \tag{12}\]
* Distance until Convergence \(d_{c}\)
  * The distance traveled until convergence is the sum of the radial differences between successive waypoints up to the sample of convergence \(i_{c}\), given in Eq. (13). \[d_{c}=\sum_{i=1}^{i_{c}}\big{\|}x_{k,i+1}\ -\ x_{k,i}\big{\|}_{2} \tag{13}\] Note that the distance until convergence can provide insight regarding which methods are more effective, as a lower distance traversed until convergence implies a more efficient exploration strategy.
* Position Error in Identifying Location of Global Minimum \(e_{\text{min}}\)
  * Eq. (14) calculates the difference between the location of the true minimum and the minimum converged upon by the exploration algorithm, where the true location of the minimum of the target surface is \(r_{min}=(0,0)\) for the parabola, \(r_{min}=(-1.75,-1.75)\) for the Townsend, and \(r_{min}=(1,0.5)\) for the lunar surface. \[e_{\min}=\left\|r_{min}\ -\ \operatorname*{argmin}_{r}\hat{f}(r\in\mathcal{R})\right\|_{2} \tag{14}\] A combined sketch that computes these convergence metrics from a recorded error trace and trajectory is given below.
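The convergence metrics of Eqs. (10)-(14) can be computed from a recorded RMS-error trace and the agent's waypoint history as sketched below; this is an illustrative helper, not the authors' code.

```python
import numpy as np

def convergence_metrics(errors, waypoints, r_min_true, r_min_found):
    """Eqs. (10)-(14) applied to a recorded RMS-error trace and trajectory."""
    errors = np.asarray(errors, float)
    waypoints = np.asarray(waypoints, float)
    e0, ef = errors[0], errors[-1]
    band = 0.02 * (e0 - ef)                              # Eq. (10): 2% error band
    e_c = ef + band                                      # Eq. (11): error upon convergence
    i_c = int(np.argmin(np.abs(errors - e_c)))           # Eq. (12): samples until convergence
    d_c = float(np.sum(np.linalg.norm(np.diff(waypoints[:i_c + 1], axis=0), axis=1)))  # Eq. (13)
    e_min = float(np.linalg.norm(np.asarray(r_min_true) - np.asarray(r_min_found)))    # Eq. (14)
    return e_c, i_c, d_c, e_min
```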
### Resulting Simulations
The variety of simulations aims to emphasize the difference in performance between the exploration strategies embedded with differing models and to specifically highlight the performance of the constrained active learner. The science-blind algorithms, snake and spiral alike, offer baseline performance metrics to compare the effectiveness of intelligent strategies. The constrained active learning algorithms aim to mimic rovers, the main interest of this paper. Simulations of exploration using the constrained active learner, illustrated in Figures 5 to 7, display specific exploration trajectories and model performance over the iterative sampling experiment.
As previously displayed in Table 2, there are three different prediction horizons associated with the algorithms that utilize active learning and a constrained movement horizon: nearest neighbor (NN), local, and global prediction horizons. Each simulation of an exploration algorithm generates figures to illustrate the evolution of the underlying model's performance. Figures 5 - 7 are formatted such that the top three graphs display data regarding the BNN algorithm, the middle three graphs display data regarding the GP algorithm, and the bottom two graphs compare the GP and BNN performance, with GPs being graphed in blue and BNNs graphed in black. Figure 5 has each subplot labeled. Subplot a) illustrates the BNN prediction across the surface test location, where the colored surface is the ground truth and the gray surface is the prediction. The purple star represents the agent's location on the surface and the black lines represent the agent's historical trajectory. Subplot b) displays the BNN algorithm's uncertainty across the environment at the model's most recent evaluation. Next, subplot c) displays the BNN algorithm's error across the surface at the most recent model evaluation. Subplots d), e), and f) show similar information as the plots above them, but for a GP algorithm. Lastly, subplot g) graphs the RMS error and subplot h) graphs the variance, which is defined as the mean of the uncertainty graphed in plots b) and e).
Figure 5 illustrates a constrained active learning algorithm comparison between a BNN and a GP with a local prediction horizon across a noiseless parabola. The GP active learning strategy initially drives the agent to traverse the outer edges of the surface. The agent then moves inward and explores the remainder of the surface in a set number of total samples. The BNN active learning strategy explores one half of the space thoroughly. These behaviors carry over to the constrained AL comparison with a nearest neighbor prediction horizon across a Townsend surface, shown in Figure 6. GP-driven active learning with nearest neighbor and local prediction horizons demonstrated the best overall performance across all active learning strategies. The results from these algorithms were not only consistent across trials but also the most precise in finding the global minimum and among the most computationally efficient.
Figure 7 displays algorithm performance when a global prediction horizon is imposed on a constrained trajectory active learner. Although still one of the higher performing algorithms, the GP driven global prediction horizon algorithm does not compare to the efficiency reached with GP algorithms of smaller prediction horizons. Instead of traveling across the edges, the agent mimics the BNN algorithm's movement patterns of oversampling a region of the space. Consequently, the GP algorithm ends with higher error in finding the global minimum, as compared to other exploration strategies that utilize GPs. Along with this, the GP algorithm requires increased samples and distance to reach convergence.
Figure 6: Example of an active learning GP and BNN model with a nearest neighbor prediction horizon and constrained movement horizon on a noiseless Townsend
### Analysis
Across all surface environments and algorithm hyperparameters, the Gaussian process active learner required less time to train the model for training datasets containing up to 331 samples, the size of the largest environment. GP active learning algorithms are generally more accurate than the BNN active learning algorithms, with some exceptions. GPs usually converge to a good model in fewer samples than a BNN. Active learners (BNN and GP alike) require less roving distance to converge to an accurate model, though not much less roving distance than science-blind methods. GPs are more accurate in identifying the surface's true minimum location. These findings underscore the distinct advantages of GP-based active learning strategies in optimizing training efficiency and accuracy across diverse surface environments for constrained movement horizons.
The results are displayed in the order of the metrics defined above. The performance metrics are plotted on a log-scale y-axis in all figures except the one analyzing the error in locating the global minimum. The x-axis spans the surface type and exploration strategy. The exploration strategy is denoted by the acronyms "SB" and "AL", for science-blind and active learning, respectively. The movement horizon is indicated after the exploration strategy and is categorized as either \(1\Delta x\), \(2\Delta x\), or \(4\Delta x\). Each point in a figure represents a mean value, and the error bars around each point represent the standard deviation across all noiseless/noisy trials completed for each exploration strategy. Note that the results for both snake and spiral science-blind strategies are captured in one data point. Lastly, whether the algorithms are driven by GPs or BNNs is denoted in the legend by "GP" or "BNN", as well as by color.
Figure 7: Example of an active learning GP and BNN model with a global prediction horizon and constrained movement horizon on a 6km lunar crater swath
GP algorithms have a shorter training time compared to their BNN counterparts across all surface types, as seen in Figure 8 below. The training time across science-blind and constrained active learning strategies did not differ much when comparing GP or BNN algorithms individually. GP algorithms are generally more computationally efficient than BNN algorithms.
Figure 9 highlights a single trial in which training time per each sample is graphed. There appears to be a small increase in training time for the BNN algorithm as the number of samples increases. For example, this BNN algorithm started at 23.3 seconds per sample and ended with 35.3 seconds per sample. This instance of a GP algorithm maintained more steadiness with 0.308 seconds per sample initially and moved to 0.288 seconds per sample by the end of the simulation.
_RMS Error Upon Convergence_
GP algorithms outperform BNN algorithms, showing lower average RMS error, as shown in Figure 10 below. There are a few exceptions to this trend. One deviation can be witnessed in the baseline science-blind exploration strategy across the Townsend surface, where the GP science-blind algorithms have a higher RMS error compared to their BNN counterparts. This could be due to the complexity of the surface, which may require a more expressive basis function to model it effectively. Of note, the science-blind snake method performed poorly on the Townsend surface and did not converge; therefore, the only data shown come from the science-blind spiral method. Figure 10 illustrates that, on average, GP algorithms produce higher model accuracy compared to BNN algorithms. Comparing the active learning algorithms to the science-blind methods, note that as the surface complexity increases, there appears to be an increase in the RMS error for SB strategies. This can be seen on the Townsend surface, as well as on the lunar crater, where the RMS error for SB methods approaches that of the active learner.
Figure 8: Comparison of BNN and GP science blind and active learning strategies on training time
Figure 9: Training time versus samples taken across a 3km lunar surface utilizing the constrained active learning NN exploration strategy
The effectiveness and superior performance of the GP algorithms continue to be displayed in Figure 11, which details the number of samples taken until convergence is reached. The GP algorithms display a lower sample count until convergence for every combination of exploration strategy and movement horizon, except in one case. Figure 11 illustrates this instance on the 6km lunar crater surface, where the baseline science-blind GP algorithm does not outperform the BNN algorithm in terms of the number of samples taken. Regardless of this baseline metric, the active learning strategies that utilize GP algorithms take fewer samples than their BNN counterparts. With regard to the science-blind methods, fewer samples were taken in every instance (due to the nature of the pre-determined path), and therefore fewer samples were available when evaluating convergence. Regardless, the graph below demonstrates that science-blind methods converged to a global minimum in fewer samples than the active learning strategies.
Figure 11: Comparison of BNN and GP science blind and active learning strategies on sample until convergence
Figure 10: Comparison of BNN and GP science blind and active learning strategies on RMS error
### Distance Until Convergence
Data regarding the distance traveled until reaching convergence are illustrated in Figure 12. BNN algorithms generally travel farther than GP algorithms to converge on a model, suggesting decreased effectiveness compared to GP models. The 6km lunar surface displays an exception to this trend: the snake science-blind GP algorithm cannot converge on an accurate model over this surface and requires a slightly greater distance for convergence than the BNN algorithm. Again, the increased complexity of the surface may require a more expressive basis function, which is provided by the BNN algorithm in this instance. These data ultimately confirm that GP algorithms generally perform better when utilizing active learning exploration strategies rather than science-blind methods.
Figure 12: Comparison of BNN and GP science blind and active learning strategies on distance until convergence
Figure 13: Comparison of BNN and GP science blind and active learning strategies on position error in finding the global minimum
The GP algorithms continue to outperform the BNN algorithms, as seen in Figure 13, where all active learning GP algorithms have a lower average position error in finding the global minimum. In fact, the GP algorithm converges to the correct global minimum with zero position error in two instances, and in three other instances it approaches near-zero error. None of the BNN models could precisely identify the minimum location on any surface. As for the science-blind methods, although convergence to a global minimum with low position error did occur, this does not indicate a better-performing exploration strategy: the science-blind methods traversed the environment in a meticulous way that required the agent to travel farther than necessary. As such, the active learners provide a more cost-effective strategy.
## 5 Conclusion
This paper investigates the comparative performance of active learning algorithms driven by Gaussian processes and Bayesian neural networks, tested in various simulation environments to predict the spatial distribution of a variable of interest across multiple datasets. The active learning algorithms consistently converge to an accurate model after traversing less distance compared to science-blind methods. Note that a smaller distance traveled does not signify that fewer samples were taken, as science-blind methods require fewer samples to reach convergence than their active learning counterparts. We can also conclude that GP models are superior oracles for active learning strategies, provide higher computational efficiency, and predict with more accuracy than BNN models across all environments tested. GP algorithms outperform BNN algorithms in nearly all cases, the exceptions being when the target surface is very complex or when global prediction horizons are utilized. Instead, GP algorithms benefit from short-sightedness, where greedy actions lead to increased rewards.
This model has potential for future applications in rovers traversing planetary surfaces, such as the Moon or Mars. Not only do these algorithms have the capability to assist in the search for water-ice on these surfaces, but they can also be easily extended to pursue other science objectives. The authors recommend that Gaussian process oracle models assist in science operations, whether in real time onboard the rover or as a suggestion system for teleoperators offline. The next step in furthering this research is to encode the GP algorithm onto a physical rover and conduct field testing with real-time science data.
## Acknowledgments
This work was supported by NASA Grant HI-80NSSC21M0334. We would like to extend our sincerest appreciation to the University of Hawaii's high-performance computing (HPC) cluster IT department, who assisted in not only ensuring the full-time operation of the HPC cluster but also in solving the many technical difficulties that arose throughout the duration of our research.
|
2309.04510 | Decreasing the Computing Time of Bayesian Optimization using
Generalizable Memory Pruning | Bayesian optimization (BO) suffers from long computing times when processing
highly-dimensional or large data sets. These long computing times are a result
of the Gaussian process surrogate model having a polynomial time complexity
with the number of experiments. Running BO on high-dimensional or massive data
sets becomes intractable due to this time complexity scaling, in turn,
hindering experimentation. Alternative surrogate models have been developed to
reduce the computing utilization of the BO procedure, however, these methods
require mathematical alteration of the inherit surrogate function, pigeonholing
use into only that function. In this paper, we demonstrate a generalizable BO
wrapper of memory pruning and bounded optimization, capable of being used with
any surrogate model and acquisition function. Using this memory pruning
approach, we show a decrease in wall-clock computing times per experiment of BO
from a polynomially increasing pattern to a sawtooth pattern that has a
non-increasing trend without sacrificing convergence performance. Furthermore,
we illustrate the generalizability of the approach across two unique data sets,
two unique surrogate models, and four unique acquisition functions. All model
implementations are run on the MIT Supercloud state-of-the-art computing
hardware. | Alexander E. Siemenn, Tonio Buonassisi | 2023-09-08T14:05:56Z | http://arxiv.org/abs/2309.04510v1 | # Decreasing the Computing Time of Bayesian Optimization using Generalizable Memory Pruning
###### Abstract
Bayesian optimization (BO) suffers from long computing times when processing highly-dimensional or large data sets. These long computing times are a result of the Gaussian process surrogate model having a polynomial time complexity with the number of experiments. Running BO on high-dimensional or massive data sets becomes intractable due to this time complexity scaling, in turn, hindering experimentation. Alternative surrogate models have been developed to reduce the computing utilization of the BO procedure, however, these methods require mathematical alteration of the inherent surrogate function, pigeonholing use into only that function. In this paper, we demonstrate a generalizable BO wrapper of memory pruning and bounded optimization, capable of being used with any surrogate model and acquisition function. Using this memory pruning approach, we show a decrease in wall-clock computing times per experiment of BO from a polynomially increasing pattern to a sawtooth pattern that has a non-increasing trend without sacrificing convergence performance. Furthermore, we illustrate the generalizability of the approach across two unique data sets, two unique surrogate models, and four unique acquisition functions. All model implementations are run on the MIT Supercloud state-of-the-art computing hardware.
efficient computing, bounded search, time complexity scaling, generalizable optimization, data pruning
## I Introduction
Bayesian optimization (BO) is a data-based global optimization tool that discovers optima without an analytical model of the response function [1, 2, 3]. A standard BO procedure consists of two primary steps: (1) using a surrogate model to estimate the topology of the target response function given a collection of input data and (2) acquiring new suggested experimental conditions to run based on the estimated surrogate model means and variances [4, 5]. For the first step, a common surrogate model used in BO is a Gaussian Process (GP) regression. GPs model complex, multi-dimensional input-output response relationships using a mixture of kernel functions that interpolate the missing space between collected experiments [6, 7, 8]. For the second step, a mathematical figure of merit called an acquisition function (AF) acquires new experimental conditions to run, governed by balancing the exploitation of regions of low predicted response function means (for a minimization problem) and the exploration of regions of high predicted response function variances [4, 6, 9]. The interleaving steps of response function estimation _via_ surrogate model computation and acquisition of new experiments leverage the estimation power of the surrogate model to discover the optima of challenging experimental problems where it may be otherwise intractable to develop an analytical model representative of the response function [10, 11, 12].
However, as the complexity or dimensionality of the response function increases, more experimental data points, \(N\), are required for accurate estimation of the response function's surrogate model [13, 14]. This increased data requirement of the surrogate model becomes problematic because the time required to compute a GP regression increases polynomially following the scaling law \(O(N^{3})\)[7, 8, 15, 16, 17]. Both _in silico_ and _in situ_ optimization experiments can be significantly bottlenecked by this unfavorable scaling law if large volumes of data are being collected, hence, by selectively processing subsets of this data in tandem with bounded optimization, the computing times of the BO process can be reduced.
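The cubic scaling can be observed directly by timing exact GP fits on growing random data sets, as in the hedged sketch below; absolute times depend on hardware and the scikit-learn implementation and are not the values reported in this paper.

```python
import time
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
for n in (100, 200, 400, 800):
    X = rng.uniform(size=(n, 6))          # random 6-dimensional inputs
    y = rng.normal(size=n)                # placeholder responses
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5))
    t0 = time.perf_counter()
    gp.fit(X, y)                          # exact GP fit scales roughly as O(n^3)
    print(n, round(time.perf_counter() - t0, 3), "s")
```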
In this study, we explore the use of memory pruning and bounded surrogate models as a method to decrease the number of required experimental data points needed to accurately run an online BO procedure, therefore, decreasing the computing time of optimization. We benchmark the computing times of two surrogate models: (1) a GP and (2) a pre-trained neural network, each with four acquisition functions: (1) expected improvement (EI), (2) lower confidence bound (LCB), (3) EI Abrupt, and (4) LCB Adaptive, all run on the MIT Supercloud, a high-performance supercomputer consisting of Nvidia Volta V100 GPUs [18]. Existing literature on decreasing the computing time of BO conventionally alters the mathematics of a surrogate model to make computation more efficient [5, 7, 9, 19, 20], however, this constrains the user to only this newly developed surrogate for optimization.
In this contribution, we demonstrate the use of a generalized method of memory pruning and search space bounding to efficiently decrease BO computing times without constraining the procedure to a single surrogate model or AF. Furthermore, we demonstrate the reduction of computing times on two relevant problems: (1) optimization of a 6-dimensional analytical Ackley function to demonstrate relevance for _in silico_ experimentation and (2) optimization of a 5-dimensional real-world data set of inorganic crystalline material band gaps to demonstrate relevance for _in situ_ experimentation.
## II Related Work
A body of literature exists on decreasing the computing time of BO; however, most of this literature requires significant changes to the mathematical structure of the GP or AF. For example, a common method of decreasing the computing time of BO is to implement a Sparse Pseudo-input GP (SPGP) [19, 20, 21, 7]. A standard GP is non-parametric in nature, meaning that when constructing a prediction, the entire prior training data set is required to compute the response function of a target variable [19]. Instead of using the full number of training data points, \(N\), to compute this response function, an SPGP uses a pseudo data set of size \(M<N\), such that
\[\mathrm{X}_{\mathrm{SPGP}}=\{\mathrm{X}_{\mathrm{GP}}\}_{m=1}^{M}, \tag{1}\]
where \(\mathrm{X}_{\mathrm{SPGP}}\) is the set of input data used to compute the prediction in an SPGP and \(\mathrm{X}_{\mathrm{GP}}\) is the set of input data used to compute the predicted response in a standard GP. Hence, the spacing between pseudo data points is known. Moreover, \(||\mathrm{X}_{\mathrm{SPGP}}||=M\) and \(||\mathrm{X}_{\mathrm{GP}}||=N\). This reforged structure of a GP into an SPGP enables computing time decreases on the order of \(O(N^{3})\to O(NM^{2})\) since \(M<N\).
Another method to decrease the computing time of BO is efficient global optimization (EGO) [1, 22, 23]. Similar to standard BO, EGO implements a surrogate model to generate the input-output response function, however, EGO can acquire a global optimum in fewer online iterations than BO by bounding the derivatives of the acquisition function relative to either the target variable or the surrogate standard error [10]:
\[\begin{split}\frac{\delta\mathrm{EI}(\mathrm{X})}{\delta y( \mathrm{X})}<0\quad\mathrm{and}\\ \frac{\delta\mathrm{EI}(\mathrm{X})}{\delta s(\mathrm{X})}>0, \end{split} \tag{2}\]
where EI is the Expected Improvement acquisition function defined in the next section, \(y\) is the target response variable and \(s\) is the standard error of the surrogate. Additionally, van Stein _et al._[24] further decrease the computing time of EGO by parallelizing the computation of the gradients.
In order to decrease the computing time of BO, the methods mentioned above either (1) make significant changes to the surrogate model or (2) rely on computing the gradients of the AF to bound the search space. A downfall of computing the gradients of an AF is that it immediately constrains the AF of choice to be differentiable. Thus, the EGO studies above are constrained to using only the expected improvement AF and cannot use other AFs, even if another AF may be more advantageous. Therefore, in this study, we implement methods of decreasing the computing time of BO that do not constrain the user to a certain surrogate model or AF to run the procedure. Instead, the method used in this paper is a lightweight, gradient-free space-bounding approach from which new experiments are acquired. Furthermore, selective pruning of memory data from outside the bounded search space drives a significant decrease in BO compute time relative to standard BO. Hence, the method described in the next section supports the use of any surrogate model or AF. In this paper, we illustrate the computing times achieved using a GP surrogate model and four different AFs.
## III Methods
In this paper, we extend the implementation of a BO wrapper developed by Siemenn _et al._[25] that bounds the acquisition and search space while pruning old memory data that lay outside of these computed bounds. This approach is entitled Zooming Memory-Based Initialization (ZoMBI) and is described further in [25] with code publicly available. In brief, for a minimization objective, \(f\), the bounds for each dimension, \(d\), are computed uniquely based on the \(\min(\mathrm{X}_{d})\) and the \(\max(\mathrm{X}_{d})\) of the \(m\) best-performing memory points, _i.e._, the points that achieve the \(m\) lowest target \(f\) values, from the set X. For every loop, all data points that lie outside of the constrained space will be pruned from memory. This is computationally favorable because as the search bounds iteratively zoom in, the target space inside the bounds increases in resolution by the surrogate model while all other space decreases in resolution. A standard GP surrogate model as well as a neural network (NN) surrogate model are used in the ZoMBI optimization procedure to demonstrate its generalizability to several unique surrogate models.
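A minimal sketch of the bounding-and-pruning step described above is given below for a minimization objective; it follows the description in this section rather than the exact ZoMBI implementation of Ref. [25], and the function name is illustrative.

```python
import numpy as np

def zoom_and_prune(X, Y, m=10):
    """Keep the m best-performing memory points (lowest target values), derive
    per-dimension bounds from them, and prune all memory lying outside those bounds."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    best = np.argsort(Y)[:m]                          # indices of the m lowest target values
    lower = X[best].min(axis=0)                       # per-dimension min of the best points
    upper = X[best].max(axis=0)                       # per-dimension max of the best points
    inside = np.all((X >= lower) & (X <= upper), axis=1)
    return X[inside], Y[inside], (lower, upper)
```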
The computing times of four unique AFs implemented using ZoMBI are benchmarked against their standard BO counterparts. These four AFs are expected improvement (EI), lower confidence bound (LCB), EI Abrupt, and LCB Adaptive. Each of these mathematical figures of merit uniquely balances the exploitation of surrogate posterior means and the exploration of surrogate posterior variances.
EI is defined as [26, 27, 28]:
\[\begin{split} a_{\text{EI}}(\mathrm{X},\mathrm{Y};\xi,\eta)=& \left(\mu(\mathrm{X})-\min(\mathrm{Y})-\xi\right)\Phi(Z)+\sigma( \mathrm{X})\psi(Z),\\ \mathrm{where}\quad Z=&\frac{\mu(\mathrm{X})-\min( \mathrm{Y})-\xi}{\sigma(\mathrm{X})},\end{split} \tag{3}\]
where \(\mathrm{X}\) is the set of input data \(\{x_{1},x_{2},...x_{N}\}\), \(x_{j}\in\mathbb{R}^{d}\) for \(d\) dimensions, \(\mathrm{Y}\) is the set of corresponding response values \(\{y_{1},y_{2},...y_{N}\}\), \(y_{j}\in\mathbb{R}\), \(\xi\) is a hyperparameter tuned to favor exploration or exploitation of the surrogate, and \(\Phi(\cdot)\) and \(\psi(\cdot)\) are the normal cumulative and probability density functions, respectively. EI strikes a balance between exploration and exploitation while considering the prior best-performing response variable of the set, \(\min(\mathrm{Y})\).
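Eq. (3) can be evaluated on a grid of surrogate posterior means and standard deviations as sketched below; the sketch follows the sign convention exactly as printed above, and the small jitter on \(\sigma\) is an added numerical safeguard rather than part of the definition.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best, xi=0.01):
    """Eq. (3) as printed, with incumbent y_best = min(Y) and exploration margin xi."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    sigma = np.maximum(sigma, 1e-12)          # numerical safeguard, not part of Eq. (3)
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)
```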
LCB is defined as [29, 28]:
\[a_{\text{LCB}}(\mathrm{X};\beta)=\mu(\mathrm{X})-\beta\sigma(\mathrm{X}), \tag{4}\]
where \(\beta\) is a hyperparameter tuned to favor exploration or exploitation of the surrogate means, \(\mu\), and variances, \(\sigma\). A
higher \(\beta\) favors exploration of surrogate variances while a lower \(\beta\) favors exploitation of surrogate means.
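Eq. (4) is a one-liner on the same posterior quantities; for a minimization objective, candidates would typically be ranked by their smallest LCB value (an implementation detail assumed here, not stated explicitly above).

```python
import numpy as np

def lower_confidence_bound(mu, sigma, beta=2.0):
    """Eq. (4): low values favor exploitation of low means; large beta favors exploration."""
    return np.asarray(mu, float) - beta * np.asarray(sigma, float)
```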
EI Abrupt is defined as [24]:
\[a_{\text{EI Abrupt}}(\text{X},\text{Y};\beta,\xi,\eta)= \tag{5}\] \[\begin{cases}a_{\text{EI}}(\text{X},\text{Y};\xi,\eta),&\text{if }| \Delta\{y_{N-3...N}\}|\leq\eta\\ a_{\text{LCB}}(\text{X};\beta),&\text{otherwise}\end{cases}\]
where the mode of acquisition is abruptly switched between EI and LCB depending on whether the finite difference between the \(\{y_{N-3...N}\}\) previous response values is below a hyperparameter threshold, \(\eta\). EI Abrupt provides another level of tunable exploration-exploitation by actively swapping between these modes as more data is collected.
LCB Adaptive is defined as [24, 30, 31]:
\[a_{\text{LCB Adaptive}}(\text{X},N;\beta,\epsilon)=\mu(\text{X})-\epsilon^{N }\beta\sigma(\text{X}), \tag{6}\]
where the hyperparameter, \(\beta\), is actively tuned as the number of collected data points, \(N=||\text{X}||\), increases. LCB Adaptive exponentially decays from being more explorative to then becoming more exploitative as \(N\) increases. Since ZoMBI
Fig. 1: Computing times of a Bayesian optimization procedure with a Gaussian process surrogate model. (A) Wall-clock computing times per experiment for only the GP computation component across \(N\) data points on a 6D analytical Ackley function. (B) Wall-clock computing times per experiment for the acquisition of new data from a GP computed across a mesh grid of 10k data points on a 6D analytical Ackley function. Each panel represents the computing times of four AFs, from top to bottom: EI, LCB, EI Abrupt, and LCB Adaptive. The colored scatter points represent the compute times per experiment of twelve independent optimization procedures using the memory pruning ZoMBI method for each AF. The black scatter points represent the benchmark compute times per experiment of one independent optimization using standard BO for each AF. For ZoMBI, \(N\leq 20\)_via_ memory pruning and for standard BO, \(N\) is the number of experiments. All compute times are wall-clock compute times measured from the MIT Supercloud Nvidia Volta V100 GPUs. The \(y\)-axes are shown in log scale.
actively prunes the set \(\mathrm{X}\), which decreases \(N\) until more experiments are collected, LCB Adaptive is always switching acquisition modes throughout the optimization procedure.
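EI Abrupt and LCB Adaptive (Eqs. (5)-(6)) can then be built on top of the EI and LCB sketches above; the window of four recent responses follows Eq. (5), while the default hyperparameter values shown here are placeholders rather than the tuned values used in the experiments.

```python
import numpy as np

def ei_abrupt(mu, sigma, Y, eta=1e-3, beta=2.0, xi=0.01):
    """Eq. (5): use EI while the last few responses differ by no more than eta,
    otherwise fall back to LCB. Reuses expected_improvement and
    lower_confidence_bound from the sketches above."""
    recent = np.asarray(Y[-4:], float)
    if np.all(np.abs(np.diff(recent)) <= eta):
        return expected_improvement(mu, sigma, np.min(Y), xi)
    return lower_confidence_bound(mu, sigma, beta)

def lcb_adaptive(mu, sigma, n_samples, beta=2.0, epsilon=0.9):
    """Eq. (6): the exploration weight decays as epsilon**N with the number of
    collected samples N, shifting from exploration toward exploitation."""
    return np.asarray(mu, float) - (epsilon ** n_samples) * beta * np.asarray(sigma, float)
```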
In this paper, we demonstrate the computing times of each of these AFs implemented with the ZoMBI bounding and pruning method as well as implemented with just standard BO. The computing times are further bifurcated into (1) the surrogate model compute times per experiment alone on \(N\) data points and (2) the surrogate + AF compute times per experiment on a mesh grid of 10k data points since acquisition of new data points requires the computation of the surrogate across a mesh grid of points in the space.
First, the computing times of each AF with a GP surrogate model are measured on a 6-dimensional analytical Ackley function [32] for 1000 experiments for both ZoMBI and standard BO implementations. Second, the computing times of just the ZoMBI AF implementations with an NN surrogate model are measured for 200 experiments on a real-world 5-dimensional data set of inorganic crystalline material band gaps, available as open access from the Materials Project [33]. Both of these experiments are run on the MIT Supercloud high-performance supercomputer to measure the wall-clock computing times of both the surrogate models and the acquisition functions [18].
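For reference, a standard \(d\)-dimensional Ackley function (used here with \(d=6\)) can be written as below; the specific constants and input bounds used in the experiments are not stated in this section, so the common defaults are assumed.

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2 * np.pi):
    """Standard d-dimensional Ackley function; the constants a, b, c are the
    common defaults, assumed rather than taken from the paper."""
    x = np.atleast_2d(x)
    d = x.shape[1]
    term1 = -a * np.exp(-b * np.sqrt(np.sum(x ** 2, axis=1) / d))
    term2 = -np.exp(np.sum(np.cos(c * x), axis=1) / d)
    return term1 + term2 + a + np.e

# Global minimum of 0 at the origin, surrounded by many local minima.
print(ackley(np.zeros((1, 6))))   # -> [~0.]
```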
## IV Results
### _Gaussian Process Surrogate on a 6D Analytical Data Set_
In this section, we demonstrate a decrease in BO computing times, relative to standard BO, using the ZoMBI method of memory pruning on an _in silico_ optimization experiment of an analytical 6-dimensional Ackley function [24, 32]. This _in silico_ optimization experiment is run on the MIT Supercloud Nvidia Volta V100 GPU [18].
Figure 1 illustrates (A) the time to compute a GP surrogate model for each iterative experiment and (B) the time to acquire new data points by computing the GP surrogate across a mesh grid for each iterative experiment. One operation is included in the measurement of GP compute time: (1) the fitting of training data, \(\mathrm{X}\), to a GP model using a mixture of kernel functions, in this case, Matern 5/2 kernels. Two operations are included in the measurement of GP + acquisition compute time: (1) the prediction and storage of the response values \(\mathrm{Y}\) from the GP for a mesh grid of 10k data points from a bounded set of \(\mathrm{X}\) and (2) the computation of the acquisition figure of merit from one of Equations 3-6, hence, GP + acquisition computing times are higher than just GP computing times alone.
In Figure 1(A), a square wave pattern in ZoMBI computing times is shown by the colored scatter points. This is a result of the ZoMBI process selecting the top performing acquired experiments every \(N=20\) experiments and pruning the rest from the memory to bound the surrogate mesh grid computation. Hence, for every 20 experiments, a drop in computing times is noted for ZoMBI, whereas computing times for standard BO continue to increase polynomially per experiment as additional experiments are collected to calculate the GP. As a result of this memory pruning, ZoMBI computing times per experiment demonstrate a non-increasing trend, even after 1000 experiments are acquired. Therefore, the memory pruning procedure significantly decreases the computing time per experiment relative to standard BO. Furthermore, a low spread between scatter points plotted from each of the twelve independent trials demonstrates the high reproducibility of results using the MIT Supercloud GPUs.
Similar to the GP surrogate computing time results, a significant decrease in computing times using memory pruning is demonstrated for the acquisition of new data points, shown in Figure 1(B). A log-transformed sawtooth pattern is shown between memory pruning steps where a drop in compute time occurs. Again, low variance between the twelve independent trials is demonstrated due to the high overlap between scatter points.
Fig. 2: Loss of standard and ZoMBI Bayesian optimization on a 6-dimensional Ackley function. (A) Loss traces over the 1000-experiment optimization procedure from Figure 1. Only the minimum standard BO loss trace is shown for clarity. (B) Final loss values after 1000 sampled experiments. The colored bars and traces illustrate the median minimum values discovered by twelve independent trials of the memory pruning ZoMBI method for each AF with the 5th and 95th percentile indicated by (A) the shaded region and (B) the error bars. The grey bars illustrate the minimum values discovered by one independent trial of standard BO for each AF.
Using ZoMBI, the time to compute the GP surrogate alone is approximately 0.2 seconds per experiment (Figure 1(A)), while the time to compute and acquire new data points from the surrogate model takes approximately 1 second per experiment (Figure 1(B)). This difference arises due to the number of computations being performed: Figure 1(A) computes the GP across only \(N\leq 20\) points while Figure 1(B) computes the GP and the acquisition value (Equations 3-6) across 10k points in a mesh grid to acquire new data points. After 1000 experiments are collected, the ZoMBI method still achieves computing times of 1 second per experiment, whereas the standard BO method polynomially approaches compute times of 100 seconds per experiment, a factor of 100x slower. Therefore, computing times of BO are significantly reduced using a memory pruning and bounded optimization approach. But does this pruning and bounding process adversely impact optimization performance?
Figure 2 illustrates the convergence of ZoMBI and standard BO on the global minimum of the 6-dimensional Ackley function. Not only does this memory pruning and bounded optimization procedure not adversely impact optimization performance, but it is also demonstrated to outperform standard BO on the 6-dimensional Ackley function. ZoMBI EI achieves the lowest function values after 1000 experiments, with LCB, then EI Abrupt, then LCB Adaptive following, in that order.
Fig. 3: Computing times of a Bayesian optimization procedure with a pre-trained neural network surrogate model. (A) Wall-clock computing times per experiment for only the NN computation component across \(N\) data points on a 5D real-world materials data set. (B) Wall-clock computing times per experiment for the acquisition of new data from a NN computed across a mesh grid of 10k data points on a 5D real-world materials data set. Each panel represents the computing times of four AFs, from top to bottom: EI, LCB, EI Abrupt, and LCB Adaptive. The median values are shown by the solid line and the 5th and 95th percentile range is shown by the shaded region. All compute times are wall-clock compute times measured from the MIT Supercloud Nvidia Volta V100 GPUs.
The reverse is noted for standard BO. This implies that without the memory pruning and search space bounding features of ZoMBI, the actively adapting acquisition functions, EI Abrupt and LCB Adaptive, perform better than the conventional EI and LCB acquisition functions. Moreover, we note that standard BO, shown as the black trace in Figure 2(A), stops learning after fewer than 50 experiments due to local minima and the sharpness of the Ackley function global minimum [32, 34], while all ZoMBI methods continue to learn by continuously zooming in the search space bounds.
### _Neural Network Surrogate on a 5D Real-world Data Set_
In this section, we demonstrate a decrease in BO computing times, relative to standard BO, using the ZoMBI method of memory pruning on an optimization problem translatable to _in situ_ experimentation. The data set optimized is a 5-dimensional open-access data set of inorganic crystals with the objective of optimizing the properties density, formation energy, energy above hull, Fermi energy to find a material with 1.4eV band gap [33, 35]. A pre-trained NN is used as the surrogate model instead of the GP to demonstrate the generalizability of the memory pruning method to various surrogate models.
Figure 3 illustrates the time to fit a pre-trained NN to the set \(\mathrm{X}\) on the MIT Supercloud [18]. Figure 3(A) illustrates fitting the NN surrogate to a maximum of \(N=20\) points using ZoMBI, whereas Figure 3(B) illustrates fitting the NN and computing the respective AF over a mesh grid of 10k points using ZoMBI. The combination of the NN being fit to only a few data points and also being pre-trained produces a noisy trace of computing times in Figure 3(A). However, as the number of evaluation points increases from 20 to 10k, a much clearer trend in computing times can be seen in Figure 3(B).
Similar to the GP surrogate results on the 6D Ackley function in Figure 1, a sawtooth pattern, resetting every 20 experiments, is shown for the NN + AF computing times in Figure 3 for the 5D real-world data set. Although each of these AFs has a similarly structured compute-time curve, each \(y\)-axis has a different scale, and LCB is noted to have the highest computing time. This is likely due to LCB's explorative nature constantly generating a wide search bound that encompasses many more data points than any of the greedier AF methods. Furthermore, an interesting pattern is seen in the EI Abrupt curves, where the first rising segment has a different structure than the second rising segment; this reflects the abrupt switch between EI and LCB sampling modes, which changes the bounding and, in turn, the number of data points kept in memory.
Overall, the NN surrogate run on the 5D real-world data set produces similar non-increasing computing times per experiment to the GP surrogate run on the 6D analytical Ackley function. This demonstrates the potential for the ZoMBI memory pruning and bounding optimization method to generalize, without modification, to various surrogate models and to decrease the computing time of BO.
## V Summary & Conclusions
In this paper, we demonstrate the capabilities of search space bounding and memory pruning in Bayesian optimization to significantly decrease the optimization procedure's computing time. We demonstrate this decrease in compute time by up to 100x across two unique data sets, two unique surrogate models, and four unique acquisition functions, all of which are run on the high-performance MIT Supercloud supercomputer [18].
The method of bounding and memory pruning using Zooming Memory-Based Initialization (ZoMBI) [24] implemented in this paper takes the best-performing memory points and uses those values to construct a constrained search region for the acquisition function to sample from. Upon consecutive constraints, prior data points that lay outside of these bounds are pruned from memory, decreasing the number of data points used to fit a surrogate model, in turn, decreasing the time required to compute the surrogate model and its acquisition function.
We demonstrate that this iterative constraining and pruning process achieves a sawtooth computing time pattern per experiment, relative to standard BO that exhibits a polynomially increasing computing time trend following \(O(N^{3})\) for \(N\) experiments. The sawtooth computing time pattern is shown to reset back to near-zero after each memory pruning update, hence, producing a non-increasing computing time trend per experiment. Furthermore, this decreased computing time is shown to persist across analytical and real-world data sets, across Gaussian Process regression and neural network surrogate models, and across four acquisition functions: expected improvement, lower confidence bound, abrupt expected improvement, and adaptive lower confidence bound. The results demonstrated in this paper are also shown to be reproducible with low variance across several independent trials by being run on the MIT Supercloud supercomputer. Hence, in this paper, we demonstrate the reproducibility and generalizability of the proposed ZoMBI memory pruning and bounded optimization method to decrease the computing times of Bayesian optimization across a variety of data sets, surrogate models, and acquisition functions.
|
2309.04276 | Impact of bio-inspired V-formation on flow past arrangements of
non-lifting objects | Inspired by the energy-saving character of group motion, great interest is
directed toward the design of efficient swarming strategies for groups of
unmanned aerial/underwater vehicles. While most of the current research on
drone swarms addresses controls, communication, and mission planning, less
effort is put toward understanding the physics of the flow around the members
of the group. Currently, a large variety of drones and underwater vehicles
consist of non-lifting frames for which the available formation flight
strategies based on lift-induced upwash are not readily applicable. Here, we
explore the V-formations of non-lifting objects and discuss how such a
configuration alters the flow field around each member of the array compared to
a solo flyer and how these changes in flow physics affect the drag force
experienced by each member. Our measurements are made in a water tunnel using a
multi-illumination particle image velocimetry technique where we find that in
formations with an overlap in streamwise projections of the members, all the
members experience a significant reduction in drag, with some members seeing as
much as 45% drag reduction. These findings are instrumental in developing
generalized energy-saving swarming strategies for aerial and underwater
vehicles irrespective of the body shapes. | Prasoon Suchandra, Shabnam Raayai-Ardakani | 2023-09-08T11:53:03Z | http://arxiv.org/abs/2309.04276v1 | # Impact of bio-inspired V-formation on flow past arrangements of non-lifting objects
###### Abstract
Inspired by the energy-saving character of group motion, great interest is directed toward the design of efficient swarming strategies for groups of unmanned aerial/underwater vehicles. While most of the current research on drone swarms addresses controls, communication, and mission planning, less effort is put toward understanding the physics of the flow around the members of the group. Currently, a large variety of drones and underwater vehicles consist of non-lifting frames for which the available formation flight strategies based on lift-induced upwash are not readily applicable. Here, we explore the V-formations of non-lifting objects and discuss how such a configuration alters the flow field around each member of the array compared to a solo flyer and how these changes in flow physics affect the drag force experienced by each member. Our measurements are made in a water tunnel using a multi-illumination particle image velocimetry technique where we find that in formations with an overlap in streamwise projections of the members, all the members experience a significant reduction in drag, with some members seeing as much as 45% drag reduction. These findings are instrumental in developing generalized energy-saving swarming strategies for aerial and underwater vehicles irrespective of the body shapes.
**Keywords:** Flight formation, swarms, drag reduction, particle image velocimetry
Collective behavior is a common pattern observed in nature. Group travel is ubiquitous among swarms of insects [1, 2, 3], formation flights of Northern bald ibises [4], geese [5, 6, 7], pelicans [8], pigeon flocks [9], and schools of fish [10, 11]. The local interactions between the numerous members in the groups are driven by complex leadership and decision-making tactics [12], leading to reduced energy expenditure [4, 8, 11], and lower recorded muscle activities [10]. Additionally, arrangements of vegetation patches in riverfront and coastal areas are able to control flood and prevent soil erosion [13, 14, 15, 16, 17]. Studies of flow past solid arrays are also essential for engineering applications, such as heat exchangers in power plants [18, 19, 20], and designs of marine structures [21, 18]. The benefits of group maneuver have been reported as far back as World War I with higher rates of successful missions among aircraft flying in formations [22], up to recent demonstrations in the commercial aviation [23], as well as drafting techniques used in sports and Formula 1 competitions [24, 25].
Studies of group motion have been mainly focused on the neuro-biological, behavioral, and social aspects such as patterns of decision-making and compromise [26, 27, 28], or motion tracking and
trajectory estimations [29]. Among all, the **V**-shaped flight pattern of migratory birds has inspired the development of flight formation strategies for fixed-wing aircraft where two or more birds/aircraft flying at certain distances from each other require less energy input compared to a solo flyer. Theoretical models of formation flight [30, 31, 32, 33, 34, 35, 36] developed on the basis of potential flow, focus on the wingtip vortices generated by a finite-span lifting body and how the resulting induced upwash outside of the wake can be advantageous to another lifting body positioned at a proper distance or it could turn into a catastrophic horizontal tornado [30, 37] for one in a wrong position. While these theories limit the applicability of the formation flight to lifting bodies, they are not able to explain the benefits of columnar swimming patterns of spiny lobsters [38] or the drafting techniques used in sports [24, 25] which are not lift related.
The recent advances in unmanned aerial vehicles (UAVs) have resulted in a variety of drone swarm strategies, focusing mainly on control and communication [39, 40, 41, 42], and path and mission planning [43, 44]. Drone swarms are important for security and surveillance [45, 46], provision of wireless connectivity [45, 46], and environmental monitoring [47], and with fewer safety hazards, are able to take advantage of tight formations to extend their range. Most vertical (short) take-off and landing (V/STOL) UAVs use propellers for lift and maneuvering, and their frames are mostly non-lifting. This places UAVs in a different situation compared with fixed-wing aircraft and the available theories for formation flight are not fully applicable to these UAVs.
To be able to effectively implement such formation flight strategies for unmanned vehicles, we need a detailed understanding of the physics of flow past general arrays of obstacles. Previous experiments using laser diagnostic techniques such as particle image velocimetry (PIV) have considered the flow on the exterior [14, 17] or in the wake of the arrays [48, 49, 13, 15, 21], with limited access to the inside due to obstructions of illumination paths and only numerical simulations have been able to provide the details of the inside flow [50, 51, 16]. Only a handful of experimental studies have quantitatively looked at the inside of the array [52], using refractive index-matched samples [53, 54, 55].
Here, we focus on the case of non-lifting objects in a **V**-formation to demonstrate the applicability of formation strategies for a wider range of applications. We employ a multi-light sheet, Computer Numerically Controlled (CNC) consecutive-overlapping imaging approach [56, 57] to overcome the limitations of a two-dimensional two-component (2D-2C) PIV experiment in water. We use this procedure to study the physics of the flow field and find the total force experienced by each member of the array as a measure of the enhancement/deterioration of performance compared with a single-member case.
## V-formation of non-lifting bodies
Consider a group of \(\mathcal{N}\) stationary non-lifting objects, cylinders of diameter \(d\) here, arranged in **V**-formations in the flow (Fig. 1). The geometry of this formation is defined by the angle, \(\phi\), of the **V** and the distance between the rows of the members, which is kept at \(2.5d\). Here, we focus on the case of 3-, 5-, and 7-member groups, at two formation angles of \(36.87^{\circ}\) and \(67.38^{\circ}\), denoted as "Narrow" (cases N3, N5, and N7) and "Wide" (cases W3, W5, and W7), respectively. In the N-formations, the direct streamwise projections of all the members are partially obstructed by \((1/6)d\) of another member in their front/back (green dashed lines in Fig. 1). These N-formations closely resemble the **V** angles observed in nature for Canada geese [5]. In the case of the wide or W-formation, the streamwise views of the members are not obstructed. Members are numbered as shown in Fig. 1. Member 1 along with even-numbered members make up the upper echelon/branch and member 1 along with odd-numbered members make up the lower echelon/branch. As a reference, all the flow responses are compared against the solo cylinder case (S1). The free-stream speed for all the cases is \(U_{\infty}\sim 18.3\) cm/s and the Reynolds number is \(\text{Re}_{d}=\rho U_{\infty}d/\mu\approx 1100\), which corresponds to the turbulent wake behind a cylinder [58, 59].
We use 2D-2C PIV (Methods section 2) [60, 61] to capture the velocity field. The key challenge in performing these experiments is the shadows that are inevitable when a single light sheet is used
with non-transparent samples [49, 62]. For a single item in the flow, a dual-light-sheet strategy, where an incoming pulsed laser beam is divided into two beams using a beam splitter, has been demonstrated [56, 57] to be effective in accessing all sides of an opaque sample. This method is used here to measure the velocity field in the S1 case and the mean normalized streamwise and normal velocity fields, \(u/U_{\infty}\) and \(v/U_{\infty}\) respectively, normalized mean vorticity field, \(\omega d/U_{\infty}\), and normalized turbulent kinetic energy, \(k/U_{\infty}^{2}=0.5(\overline{u^{\prime}u^{\prime}+v^{\prime}v^{\prime}})/U_{ \infty}^{2}\) (definitions in supplementary section A.3), are shown in Fig. 2 for reference. As is expected, the flow is symmetric about the line of \(y=0\), with a clear view of the flow slowing down in \(x/d<-0.5\) due to the stagnation point (Fig. 2(a)). The velocity deficit in the wake extends multiple diameters past the member, and the detached shear layers are seen in Figs. 2(a-c). The wake turns turbulent downstream (Fig. 2(d)) starting at about \(1.2d\), and reaching its maximum \(k\) at a vortex formation length [63] of \(L_{f}=2.6d\) from the center of the cylinder which agrees with values of \(L_{f}\) reported in the literature [64]. Lastly, using the velocity fields, we calculate the drag force on the solo cylinder (supplementary section A.4) and find the drag coefficient \(C_{D}=D/(0.5\rho U_{\infty}^{2}d)=1.09\pm 0.05\) which closely matches the \(C_{D}\) values reported in the literature [65, 66, 57, 13, 67].
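For readers who wish to reproduce these statistics, a minimal sketch of the quantities defined above is given below, assuming a stack of 2D-2C PIV snapshots \(u(t,y,x)\), \(v(t,y,x)\) on a uniform grid with spacing \(dx=dy\); this is not the processing code used for the measurements, and the control-volume drag calculation of supplementary section A.4 is only referenced, not reproduced.

```python
import numpy as np

def flow_statistics(u, v, U_inf, d, dx):
    """Mean fields, turbulent kinetic energy, and mean vorticity from PIV snapshots."""
    u_mean, v_mean = u.mean(axis=0), v.mean(axis=0)            # time-averaged velocity fields
    up, vp = u - u_mean, v - v_mean                            # fluctuations u', v'
    k = 0.5 * ((up**2).mean(axis=0) + (vp**2).mean(axis=0))    # turbulent kinetic energy
    omega = np.gradient(v_mean, dx, axis=1) - np.gradient(u_mean, dx, axis=0)  # dv/dx - du/dy
    return {
        "u/U_inf": u_mean / U_inf,
        "v/U_inf": v_mean / U_inf,
        "k/U_inf^2": k / U_inf**2,
        "omega*d/U_inf": omega * d / U_inf,
    }

# With the drag force per unit span D obtained from a control-volume momentum balance
# (supplementary section A.4), the drag coefficient follows as
# C_D = D / (0.5 * rho * U_inf**2 * d).
```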
With multiple members, illumination access to the inside of the arrays gets obstructed [49] and even a dual-light-sheet setup is not sufficient (supplementary Fig. A1). Thus, we expand the technique and employ a quadruple-light-sheet setup [56], where with two additional beam splitters, we illuminate the area around and inside of the arrays (Fig. 8 in the Methods section 2). Contours of the normalized mean streamwise and normal velocities for all the considered formations are shown in Fig. 3(A) and supplementary Fig. A3.
## Interactions between members
The presence of multiple members inevitably leads to interactions between the flow fields past the members, coming down to how the fluid is able to maneuver the obstacles in its way. Overall, there are three main phenomena that regulate the flow (Fig. 3(B)): (i) the _slow-down_ of the flow upstream of any solid object resulting in the stagnation point around the leading edge of the member. (ii) The second phenomenon is the _velocity deficit_ due to the wake behind a solid boundary which happens to all the members. The S1 case also has a wake deficit (Fig. 2(a)), with the difference that this wake is free to develop downstream while for the multi-member formations, the wake deficits turn into the incoming flow upstream of
Figure 1: A summary of (a) Narrow (N) and (b) Wide (W) V-formations of cylinders of diameter \(d\) with angles \(\phi=37^{\circ}\) and \(\phi=67^{\circ}\) respectively. (The black dotted lines show the extent of the V for each of the cases.) The space between all the vertical rows is kept constant at \(2.5d\). Each formation is considered for three cases with 3, 5, and 7 members as denoted below each figure. The green dashed lines mark the extent of the edges of the members, placing approximately \(1/6d\) of the members in the streamwise projections of the upstream/downstream members in the N cases but not in the W cases. All members are numbered with the leading member as number 1. Member 1 is also shared with the S1 (solo member case). The upper and lower echelons of the V-formations are also shown, along with the coordinate axes.
another member for all members besides member 1. (iii) Lastly, we have the flow passing through the spacing between members, called the "_bleeding flow_"[16, 68, 21, 50, 69]. The obstructive nature of the formation results in the bleeding flow acting like a jet of faster fluid passing through the space between the members and thus counteracting the slow-downs in the vicinity of the stagnation points and the velocity deficits in the wakes. In general, the larger the bleeding flow around a member, the greater the drag force on it [16, 50]. (Also see supplementary Fig. A2).
When more members are added to the formation, the flow field downstream of member 1 gets altered (Fig. 3(A)). Among the wakes of all the members, only the wakes of the leading members in both N and W-formations maintain a symmetric form similar to that of S1 (Fig. 3(C)). However, in all the N-formations, the vertical extent of the wake of member 1 becomes slightly larger than that of the S1, especially when it gets close to members 2 and 3 where the two upcoming stagnation points enhance this process. These two slow-downs thus strongly oppose the bleeding flow and the bleeding flow moving through the gap between members 2 and 3 has an average velocity (supplementary Eq. A1) of about 70% of the free-stream velocity (Fig. 4). However, in the W-formations, with a larger opening available for the bleeding flow, the wake of member 1 becomes pointed and distinctly separate from the stagnation points of members 2 and 3. Thus, the average velocity of bleeding flow between members 2 and 3 recovers to about 95% of the free-stream velocity (see Fig. 4).
Besides the leading member, we categorize the rest of the members into two groups, the _interior_ members which are guarded in both up/down-stream directions, and the _trailing_ members (\(\mathcal{N}\) and \(\mathcal{N}-1\)) which only see members upstream. In the N3 case (no interior members), a small degree of disparity in the streamwise location of cylinders during experiments leads to flow turning towards member 3 which is slightly downstream of member 2. This is similar to a three-cylinder fluidic pinball [52, 70] undergoing a pitchfork bifurcation [71, 72, 73].
Unlike the leading member, the trailing members of any N-formation experience an asymmetric flow field, where the stagnation points are shifted toward the outside of the array (away from \(y=0\)), and the bodies of the members in the inside of the array experience the bleeding flows moving in between the members (Fig. 3(A)(a-c)). The presence of the slow-moving fluid in the vicinity of the stagnation points on the outside, the faster-moving bleeding flow inside the array, as well as the remnants of the wake of the upstream members all cause the wakes of these members to bend slightly outward (away from \(y=0\)) and then move back inward (toward \(y=0\), Fig. 3(C)). As the two wakes develop downstream, they completely absorb the bleeding flow in between the \(\mathcal{N}-1\) and \(\mathcal{N}\) members and turn into a combined wake.
Similarly, trailing members of W-formations experience a mild asymmetry in the flow with the wake only slightly bending inward (Fig. 3(C)). However, the faster bleeding flow between the trailing member and its closest upstream neighbor along their respective echelons (\(\mathcal{N}\geqslant 5\)) with
Figure 2: Contours of (a) mean streamwise velocity, \(u\), (b) mean normal velocity, \(v\), both normalized with \(U_{\infty}\), (c) mean vorticity, \(\omega\), normalized by \(U_{\infty}/d\), and (d) turbulent kinetic energy, \(k=0.5(\overline{u^{\prime}u^{\prime}}+\overline{v^{\prime}v^{\prime}})\) normalized by \(U_{\infty}^{2}\), for flow past a single cylinder. The value of normalized vortex formation length \(L_{f}/d\) is also shown in part (d). Figures are cropped in order to show relevant flow dynamics.
an average velocity (Fig. 4) of close to 60% of the free-stream velocity, guides the wake to stay nearly streamwise as it develops downstream.
The trailing members (\(\mathcal{N}\) and \(\mathcal{N}-1\)) of N-formations experience larger deviations from symmetry compared with W-formations (compare the bend in the red dash-dotted centerlines Fig. 3(C)). The outer boundaries of the wakes of the trailing members of N-formations spread in a similar manner as the wake of the S1 case but the inner boundary spreads inward (toward \(y=0\)) as the slower bleeding flows with average velocities of about 30% of free-stream velocity are not able to guide the flow as much as in the W-formations (check bleeding flow between echelon members in Fig. 4).
Interior members, placed in between the leading and trailing members, are only present in formations with \(\mathcal{N}\geqslant 5\). In N-formations, the overlap in the projections results in the wake of the upstream member being in direct sight of the
Figure 3: (A) Contours of normalized mean streamwise velocity \(u(x,y)/U_{\infty}\) for all the experimental cases (a) N3, (b) N5, (c) N7, (d) W3, (e) W5, and (f) W7. The array members are numbered as per Fig. 1. (B) Schematic of the mean streamwise velocity between two array members along the upper echelon, qualitatively showing the three phenomena of (i) flow slow-down due to upcoming stagnation point, (ii) velocity deficit in the wake, and (iii) bleeding flow between the array members, as indicated by dashed red circles. The thin black lines indicate the contours of iso-velocity lines (streamwise). The thick black line and the blue arrows indicate the streamwise velocity profile between the two array members (behavior along the lower echelon is similar but mirrored). (C) Qualitative schematics of the structure of the wake for the leading, interior, and trailing members of the upper echelon of the solid arrays, for both the narrow and wide formations. The black dotted lines indicate iso-velocity contours (streamwise). The arrows indicate the bleeding flows. The dash-dotted red lines denote the centerline of the wakes.
interior members and thus pushing the stagnation points of the interior members outward (away from \(y=0\)). On the other hand, the overlap results in the downstream members also regulating the development of the wake of the interior members and bending the entire wake inward (Fig. 3(C)). However, all these are also bounded by the presence of the sister member in the same row which also experiences a similar flow behavior. These two interior members act nearly as mirrors to each other and limit the extent to which the wakes of the interior members can bend inward. Ultimately, the two wakes from the sister members in a row (for example members 2 and 3 in N5), the bleeding flow between them (\(U_{2-3}^{\text{bled}}\)), and the two bleeding flows between the interior member and their down-stream echelon members (\(U_{2-4}^{\text{bled}}\) and \(U_{3-5}^{\text{bled}}\)) all combine into the bleeding flow moving through the two downstream members, \(U_{4-5}^{\text{bled}}\).
While the general idea is also transferable to the W-formations, the larger distance between the two echelons of this formation and the zero overlap in the projections of the members keep the stagnation points of the interior members at almost the same location as in the S1 case, with the iso-velocity contours in the vicinity of the stagnation area being pushed outward. In these formations, the bleeding flow between an interior member and its upstream member (same echelon) is faster than that of the N-formations (Fig. 4) and directs the upstream wake to move away from the interior member. Similarly, downstream, the bleeding flow guides the wake of the current member to also be slightly bent inward and not in the sight of the downstream member. As a result, the velocity contours of interior members of W-formations have a closer resemblance to the contours of the S1 case than the N-formations (Fig. 3(A)).
## Turbulence
In the S1 case, the flow with \(\text{Re}_{d}\approx 1100\) stays laminar up to \(x/d=1.2\), after which the wake turns turbulent. Similarly, the flow immediately past the leading member 1 of all the arrays stays in a laminar condition (Fig. 5). In N-formations, there are no visible levels of turbulence in the wake of member 1, and turbulence only sets in past members 2 and 3. The significant slow-downs due to the combination of the wake deficit and the upcoming stagnation points result in lower levels of turbulence compared with the S1 case, and peak \(k\) values in the wakes of most of the members are about 50% of those of the S1 case. However, in W-formations, the wakes of all members exhibit a pattern of turbulence resembling that observed in the S1 case. Similar decreases in turbulence have been previously observed with increasing density of circular arrays of cylinders [16, 50]. However, as the number of members increases, even for N-formations, significant wake-to-wake and wake-to-cylinder interactions lead to high levels of turbulence in the downstream portion of the array (similar to previous reports [16]), with peak \(k\) values resembling those of the S1 case (more details are available in the supplementary sections A.2 and A.3).
## Forces on array members
To evaluate the performance of each of the members in the formations and compare it with the
Figure 4: Equivalent “bleeding flow” speeds, normalized by the free-stream speed \(U_{\infty}\), between the array members (in \(i-j\) pairs), for all the experimental cases. ‘N’ denotes narrow formations and ‘W’ denotes wide formations and \(i\) and \(j\) are the respective member numbers. The bleeding flows in between members of each echelon and the space between sister members within one row are grouped together for clarity. Note that identical symbols are used for formations (N or W) with the same number of members and marked by N or W on the plot to differentiate between them. The error bars are derived from the uncertainties in the PIV statistics calculated based on equations presented by Wieneke [74] and Sciacchitano & Wieneke [75] (details in supplementary section A.5).
S1 case, we focus on the drag force experienced by each member of the group, as shown in Fig. 6. The leading member of all formations, both N and W, is able to experience a reduction in the drag force. The blockage caused by all the interior and trailing members of the N-formations results in the drag of member 1 decreasing as \(\mathcal{N}\) is increased (drag reduction of 29% for N3 and 38% for N7 (Fig. 6) (refer to supplementary section A.2 and supplementary Fig. A3 for more details on the effects of blockage caused by array members on mean normal velocity). The reductions experienced by member 1 of W-formations are similar for all cases (about 6-7%). This drag reduction is mostly due to slower incoming flow upstream of the leading member as the multi-body array slows down the flow (see Fig. 3(A) and supplementary Fig. A3). In general, the slower the incoming flow or the bleeding flow around a member, the lower the momentum transfer from the fluid to the solid, and the lower the drag force [16, 50]. The drag reduction is more drastic for the leading member of N-formations because in addition to the slowing of the incoming flow, the presence of interior members 2 and 3 in the path of the leading member's wake (Fig. 1(a)) results in a pressure recovery as the flow slows down approaching the stagnation points of members 2 and 3 (Fig. 3(B), for more details, compare supplementary Figs. A13(d), A14(a), and A15). This leads to a smaller
Figure 5: Contours of turbulent kinetic energy, \(k=0.5(\overline{u^{\prime}u^{\prime}}+\overline{v^{\prime}v^{\prime}})\), normalized by \(U_{\infty}^{2}\) for experimental cases (a) N3, (b) N5, (c) N7, (d) W3, (e) W5, and (f) W7. The array members are numbered as per Fig. 1.
Figure 6: Drag coefficients \(C_{D}\), for all the experimental cases plotted as a function of the member number. The dashed black line represents the drag on the single member in the S1 case for reference. Note that identical symbols are used for formations (N or W) with the same number of members and marked by N or W on the plot to differentiate between them. The error bars denote the variations in \(C_{D}\) with different sizes of control volumes (CV) chosen for drag calculation (details in supplementary section A.4).
difference in pressure on the upstream and downstream portions of the leading member, which in turn results in further reductions in drag. This pressure recovery behind an array member can be equivalently thought of as receiving a "_forward push_" from the downstream member when there is an overlap of streamwise projections, as shown in Fig. 1, leading to drag reduction. Such a forward push is absent for members of W-formations where there is no overlap of streamwise projections.
In N-formations, all the interior and trailing members also experience a considerable reduction in drag, with members 2 and 3 experiencing the most reduction. Members 2 and 3 see a very slow incoming flow (bleeding flows between members 1-2 and members 1-3 have average velocities around 15% of the free-stream velocity; Fig. 4). For N-formations with \(\mathcal{N}>3\), members 2 and 3 also get a forward push from members 4 and 5, respectively. This leads to the largest drag reductions observed for members 2 and 3 of N-formations (reduction of 43-45% compared with S1).
For N5 and N7 formations, members 4 and higher see a faster incoming flow (corresponding bleeding flows being 25-35% of \(U_{\infty}\) - see Fig. 4) and their trailing members don't receive any forward push due to the absence of any downstream members. This leads to \(C_{D}\) for members 4 and higher being larger than that for members 2 and 3. For the case of N7, trailing members 6 and 7 experience a drag force which is about 1.2 times the drag force on member 1 of the same formation.
For each W-formation, \(C_{D}\) increases in going from member 1 to downstream members. We also observe that members 2, 3, 4, and 5 of W5 experience a greater drag than members 2, 3, 4, and 5 of W7, respectively. These can be explained using Fig. 4 where we see that the bleeding flow between the members of each echelon of W-formations increases slightly in going towards the downstream members and the bleeding flow speeds for W5 members along an echelon are greater than those for W7 members. Overall, \(C_{D}\) for the members of W-formations remains close to the \(C_{D}\) for a single cylinder.
## Outlook
As demonstrated, the benefit of formations is not limited to lifting bodies, and arrangements of non-lifting objects, such as \(\mathsf{V}\)-formation can offer substantial reductions in the drag force experienced by each member of the group. This can partially explain the total energy savings of 11-14% achieved by pelicans in \(\mathsf{V}\)-formation [8], or the extreme case of 95% drag reductions observed by a cyclist located deep inside a tightly-packed cycling peloton [24].
The results of this work can guide researchers in controls, robotics, and autonomous systems to develop algorithms for the control and maneuvering of the swarm members where the variations in the drag experienced by different members might make it necessary for such algorithms to include intentional position changes during the flight time for uniform battery usage among the members. In other situations, one might choose to protect one or two members by placing them in the second row of a narrow formation to incur the least drag throughout the travel time. Other scenarios might include actively adjusting the angle of the formation to optimize the flow physics against other objectives of the group.
Clearly, the methods and discussions presented are not limited to the case of formations for vehicles and can readily be applied to other fields. The understanding of the organization and orientations of natural vegetation offers design ideas and solutions for man-made structures to control soil erosion in floodplains and coastal areas. In addition, the results of this study, especially augmented with the introduction of rotary wings, can also be effectively used for both V/STOL vehicles as well as the design of green energy infrastructure such as wind turbines where the placement of the turbines can have significant effects on the energy that can be harvested.
Acknowledgments.This work is supported by the Rowland Fellows program at Harvard University. The authors would like to express gratitude to Richard Christopher Stokes for his support with the electronics and Dr. Shuangjiu Fu for providing assistance during the experiments.
|
2309.14065 | AsymFormer: Asymmetrical Cross-Modal Representation Learning for Mobile
Platform Real-Time RGB-D Semantic Segmentation | Understanding indoor scenes is crucial for urban studies. Considering the
dynamic nature of indoor environments, effective semantic segmentation requires
both real-time operation and high accuracy.To address this, we propose
AsymFormer, a novel network that improves real-time semantic segmentation
accuracy using RGB-D multi-modal information without substantially increasing
network complexity. AsymFormer uses an asymmetrical backbone for multimodal
feature extraction, reducing redundant parameters by optimizing computational
resource distribution. To fuse asymmetric multimodal features, a Local
Attention-Guided Feature Selection (LAFS) module is used to selectively fuse
features from different modalities by leveraging their dependencies.
Subsequently, a Cross-Modal Attention-Guided Feature Correlation Embedding
(CMA) module is introduced to further extract cross-modal representations. The
AsymFormer demonstrates competitive results with 54.1% mIoU on NYUv2 and 49.1%
mIoU on SUNRGBD. Notably, AsymFormer achieves an inference speed of 65 FPS (79
FPS after implementing mixed precision quantization) on RTX3090, demonstrating
that AsymFormer can strike a balance between high accuracy and efficiency. | Siqi Du, Weixi Wang, Renzhong Guo, Ruisheng Wang, Yibin Tian, Shengjun Tang | 2023-09-25T11:57:16Z | http://arxiv.org/abs/2309.14065v7 | AsymFormer: Asymmetrical Cross-Modal Representation Learning for Mobile Platform Real-Time RGB-D Semantic Segmentation
###### Abstract
In the realm of robotic intelligence, achieving efficient and precise RGB-D semantic segmentation is a key cornerstone. State-of-the-art multimodal semantic segmentation methods, primarily rooted in symmetrical skeleton networks, find it challenging to harmonize computational efficiency and precision. In this work, we propose AsymFormer, a novel network for real-time RGB-D semantic segmentation, which targets the minimization of superfluous parameters by optimizing the distribution of computational resources and introduces an asymmetrical backbone to allow for the effective fusion of multimodal features. Furthermore, we explore techniques to bolster network accuracy by redefining feature selection and extracting multi-modal self-similarity features without a substantial increase in the parameter count, thereby ensuring real-time execution on robotic platforms. Additionally, a Local Attention-Guided Feature Selection (LAFS) module is used to selectively fuse features from different modalities by leveraging their dependencies. Subsequently, a Cross-Modal Attention-Guided Feature Correlation Embedding (CMA) module is introduced to further extract cross-modal representations. This method is evaluated on NYUv2 and SUNRGBD datasets, with AsymFormer demonstrating competitive results with 52.0% mIoU on NYUv2 and 49.1% mIoU on SUNRGBD. Notably, AsymFormer achieves an inference speed of 65 FPS and, after implementing mixed precision quantization, attains an impressive inference speed of 79 FPS on RTX3090. This significantly outperforms existing multi-modal methods, thereby demonstrating that AsymFormer can strike a balance between high accuracy and efficiency for RGB-D semantic segmentation.
**Code:**[https://github.com/Fourier7754/AsymFormer](https://github.com/Fourier7754/AsymFormer)
## I Introduction
Real-time semantic segmentation methods are important for robots as they rely on obtaining semantic information in real-time to support navigation and task decision-making. Current Real-Time semantic segmentation networks achieve competitive results and reach over 100 FPS inference speed on simple outdoor environments (Cityscapes 19 classes) [3, 4, 2, 1]. However, these networks perform poorly on complex indoor environments, such as NYU Depth v2 (40 classes) [5] and SUNRGBD (37 classes) [6]. Several studies have explored how to improve indoor scene semantic segmentation performance by integrating RGB-D information [7, 8, 9, 10], which is widely available from RGB-D sensors on indoor robots. RGB-D information consists of RGB (color, texture and shape) and Depth (boundaries and relative location) features, which are somewhat complementary [7, 8].
At present, RGB-D semantic segmentation methods have achieved state-of-the-art performance by employing dual-branch backbones for feature extraction and attention mechanisms for feature selection [8]. However, there is a dearth of discourse on whether these operations can be executed with less redundancy and higher efficiency. Primarily, most methodologies utilize two symmetrical, computationally intensive backbones, which effectively doubles the overall computational complexity [12]. Secondly, numerous methods concentrate on designing complex feature fusion modules to enhance accuracy, but do not exhibit significant advantages over some methods that employ simpler attention modules, like ESANet [9], which indicates that as more computational resources are invested, the incremental improvement in accuracy decreases. Lastly, the use of cross-attention modules to jointly utilize multi-modal features has been demonstrated to be effective in improving network accuracy [8, 28]. However, most methods do not discuss how to implement cross-attention more efficiently. Therefore, the reduction of redundancy in dual-branch networks and the construction of more efficient attention modules remain significant topics for discussion.
To address this issue, this study suggests a balanced approach between computational efficiency and feature richness in multi-modal representation learning. RGB images are found to contain more information than Depth images; thus, it is not necessary to use a backbone of the same size for extracting Depth information. The performance improvements achieved by using attention mechanisms for various tasks come with minimal increase in computational cost. While existing modules like SE [34] are efficient for feature selection, they lack the ability to model correlations in spatial dimensions. Increasing parameters can facilitate spatial dimension feature selection, but careful design is needed to maintain computational efficiency. For feature extraction, the commonly used Multi-Head Self Attention (MHSA) [22] is effective for single-modal features despite its small parameter count. Extending this efficient feature extraction to multiple modalities is feasible, but consideration should be given to feature fusion while maintaining computational efficiency.
This paper introduces AsymFormer, a high-performance real-time network for RGB-D integration semantic segmentation that employs an asymmetrical backbone design. This
includes a larger parameter backbone for RGB features and a smaller backbone for the Depth branch. Regarding framework selection, Vision Transformers generally perform better, but run slower than CNNs of the same computational complexity due to a lack of hardware optimization [40]. In order to speed up the main branch, this paper employs a hardware-friendly CNN [16] for the RGB branch and a Transformer [17] with fewer parameters but similar performance for the Depth branch to further compress the parameters. This paper also discusses feature selection modeling and constructs the Local Attention-Guided Feature Selection (LAFS) module. This module computes the spatial-channel attention weights in parallel to enhance inference speed. Moreover, this paper establishes a novel approach to model spatial attention. This module can estimate the differences in features from different modalities by utilizing learnable channel weights and calculating spatial attention weights for each pixel. Additionally, a Cross-Modal Attention (CMA) module is introduced to embed cross-modal integration learning information into pixel-wise fused features. Finally, we employ a lightweight MLP-Decoder [17] to decode semantic information from shallow features.
This paper evaluates AsymFormer on two classic indoor scene semantic segmentation datasets: NYUv2 and SUNRGBD. Meanwhile, the inference speed test is also performed on the Nvidia RTX 3090 platform. AsymFormer achieves 52.0% mIoU on NYUv2 and 49.1% mIoU on SUNRGBD, with a 65 FPS inference speed. After performing mixed precision quantization on the model using the TensorRT framework, AsymFormer achieves an impressive inference speed of 79 FPS on the RTX 3090, which significantly surpasses existing methods. Our experiments highlight AsymFormer's ability to achieve high accuracy and efficiency at the same time. Our main contributions are summarized as follows:
* We employed an asymmetric backbone that compressed the parameters of the Depth feature extraction branch, thus reducing redundancy.
* We introduced the LAFS module for feature selection, utilizing learnable feature weights to calculate spatial attention weights.
* We introduce a novel, efficient cross-modal attention (CMA) module for modeling self-similarity in multi-modal features, validating its capability to enhance network accuracy with minimal additional model parameters.
## II Related Works
**RGB-D Representation Learning.** One of the earliest works on RGB-D semantic segmentation, FCN [35], treated RGB-D information as a single input and processed it with a single backbone. However, subsequent works have recognized the need to extract features from RGB and Depth information separately, as they have different properties. Therefore, most of them have adopted two symmetric backbones for RGB and Depth feature extraction [36, 37, 9, 11, 27, 8]. Further, the ACNet [25] proposed a three-stream backbone to process RGB, Depth and Fusion features independently. Moreover, some works have considered the variability of RGB-D features and used asymmetric backbones for their feature extraction. For example, TSNet [39] used ResNet [31] for RGB feature extraction and VGG (without residual connections) [38] for Depth feature extraction. The PSCNet [12] also used an asymmetric backbone and reduced the redundant computational cost by cutting the parameters of the Depth branch.
**RGB-D feature fusion.** The design of RGB-D backbones for feature extraction has been mostly based on two or three stream architectures [8]. However, the performance of different frameworks depends largely on how they fuse RGB and Depth features. Some early works, such as FuseNet[36] and RedNet [37], fused RGB and Depth feature maps pixel-wise in the backbone. Later works, such as ACNet [25], ESANet [9] and EMSANet[11], proposed channel attention to select features from different channels, as RGB and Depth feature maps may not align well on the corresponding channels. PSCNet [12] further extended channel attention to both spatial and channel directions and achieved better performance. Recently, more complex models have been proposed to exploit cross-modal information and select features for RGB-D fusion. For example, SAGate [27] proposed a gated attention mechanism that can leverage cross-modal information for feature selection. CANet [28] extended non-local attention [20] to cross-modal semantic information and achieved significant improvement. CMX [8] extended SA-Gate to spatial and channel directions and proposed a novel cross-modal attention with global receptive field, which can enhance cross-modal complementarity using the similarity of two modalities. However, integrating cross-modal information and learning cross-modal similarity is still an open question in vision tasks.
## III Method
### _Framework Overview_
This paper proposes AsymFormer, a novel network for RGB-D integration semantic segmentation. The network employs a ConvNext [16] based backbone to extract features from RGB images and a Mix-Transformer [17] based backbone to process RGB-D fused features. We introduce the LAFS module for feature selection, utilizing learnable feature weights to calculate spatial attention weights, and we integrate modeling of self-similarity in multi-modal features, validating its capability to enhance network accuracy with minimal additional model parameters. The overall framework of AsymFormer is illustrated in Figure 1.
### _Local Attention-Guided Feature Selection_
Attention mechanisms have been shown to be effective in selecting complementary features from RGB and Depth features, thus improving efficiency and performance [25, 1, 12, 8]. Current spatial attention methods extract global information using pixel maximum or average values, neglecting the differences in channels from multiple modalities. In this work, we extend the previous spatial attention by proposing a more flexible spatial attention mechanism. The
core design of LAFS is a trainable spatial attention weight. This weight is learned through an MLP in a similar manner to SE [34].
Specifically, Figure 1 (b)-1 illustrates the details of LAFS. The LAFS module takes as input the concatenation of the RGB feature \(RGB\), with dimension \((C_{1},H,W)\), and the current-layer Depth feature \(Depth\), with dimension \((C_{2},H,W)\).
For the computation of the attention weight, we first extract global information vectors \(Avg\) of each feature map by adaptive AvgPool. For the channel attention, we process the global information by a feed-forward network with a squeeze-and-excitation structure.
\[W_{C}=\text{Sigmoid}(\text{FFN}(Avg)) \tag{1}\]
For the spatial attention, we process the global information \(Avg\) by another feed-forward network with a squeeze-and-excitation structure. The output vector is split into \(R_{Max}\) and \(R_{Avg}\), which represent two different descriptions of pixel spatial-similarity.
\[R_{Avg}=\text{FFN}(Avg) \tag{2}\]
Furthermore, the global spatial information \(I_{S}\) of feature map can be extracted by calculating the inner product similarity from \(R_{Avg}\) and the \(Input\) (In computation, this is equivalent to the weighted sum of the features from each channel). Then, the spatial attention weight \(W_{S}\) is computed by sigmoid normalization.
\[W_{S}=\text{Sigmoid}(\frac{\text{Dot}(Input.\text{Reshape}(C,H\times W)^{T}, R_{Avg})}{C^{2}}) \tag{3}\]
Here, the result is divided by \(\sqrt{C}\) to prevent sigmoid overflow. Finally, the input features are selected by the weights in the spatial (\(W_{S}\)) and channel (\(W_{C}\)) directions:
\[Output=W_{C}\times W_{S}\times Input \tag{4}\]
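A minimal PyTorch sketch of LAFS as described by Eqs. (1)-(4) is given below for illustration; the FFN reduction ratio, the \(\sqrt{C}\) normalization, and the omission of the \(R_{Max}\) branch (only \(R_{Avg}\) appears in Eq. (2)) are assumptions, so this should be read as a sketch rather than the released implementation.

```python
import torch
import torch.nn as nn

class LAFS(nn.Module):
    def __init__(self, channels, r=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Squeeze-and-excitation style FFNs: one for channel weights, one for R_Avg.
        self.ffn_channel = nn.Sequential(
            nn.Linear(channels, channels // r), nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels))
        self.ffn_spatial = nn.Sequential(
            nn.Linear(channels, channels // r), nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels))

    def forward(self, rgb, depth):
        x = torch.cat([rgb, depth], dim=1)                 # (B, C1+C2, H, W)
        b, c, _, _ = x.shape
        avg = self.pool(x).flatten(1)                      # global descriptor, Avg
        w_c = torch.sigmoid(self.ffn_channel(avg))         # Eq. (1): channel weights
        r_avg = self.ffn_spatial(avg)                      # Eq. (2): learnable channel weights
        # Eq. (3): per-pixel weighted sum over channels, then sigmoid normalization.
        w_s = torch.sigmoid(torch.einsum('bchw,bc->bhw', x, r_avg) / c ** 0.5)
        # Eq. (4): select features in the channel and spatial directions.
        return x * w_c.view(b, c, 1, 1) * w_s.unsqueeze(1)
```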
### _Cross Modal Attention-Guided Feature Correlation Embedding_
In this paper, we introduce a novel cross-modal attention (CMA) module that incorporates cross-modal self-similarity information into pixel-wise fused features. Current MHSA [21] is limited to learning the self-similarity of a single modality, whereas in multi-modal processing, our goal is to jointly use information from multiple modalities for representation learning. To achieve this, CMA is proposed to learn the self-similarity of multiple modalities. The key insight of CMA is to independently learn the self-similarity of the two modalities and embed the sum of the results into the fused feature, which is theoretically different from standard MHSA. Specifically, for each pixel \((i,j)\) in the feature map, we separately embed its RGB and Depth features:
\[\begin{split} Key_{RGB}&=(Kr_{1,i,j},Kr_{2,i,j},...,Kr_{N,i,j})\\ Query_{RGB}&=(Qr_{1,i,j},Qr_{2,i,j},...,Qr_{N,i,j}) \\ Key_{Depth}&=(Kd_{1,i,j},Kd_{2,i,j},...,Kd_{N,i,j}) \\ Query_{Depth}&=(Qd_{1,i,j},Qd_{2,i,j},...,Qd_{N,i,j}) \end{split} \tag{5}\]
Fig. 1: Overview of AsymFormer.
The cross-modal similarity of pixel \((i_{0},j_{0})\) and other pixel in feature map are defined as:
\[W(i,j)=\sum_{n=1}^{N}(Kr_{n,i,j}\cdot Qr_{n,i_{0},j_{0}})+\sum_{n=1}^{N}(Kd_{n,i,j} \cdot Qd_{n,i_{0},j_{0}}) \tag{6}\]
Moreover, the CMA embeds cross-modal similarity into the fused feature \(Value\) with the same operation as [21, 20, 22]. Following the above idea, the CMA first embeds \(Key\), \(Query\) and \(Value\) independently from the three feature modalities. Then, we extend self-attention to the multimodal case, which involves two designs: 1. information mixing and redistribution of \(Key\) and \(Query\); 2. learning two different representation subspaces for the same position. The overall framework of CMA is demonstrated in Figure
**Independent linear embedding:** According to our definition, the \(Key\), \(Query\) and \(Value\) should be embedded independently. Assume the input \(RGB\) feature has dimension \((C_{0},H,W)\); the Enhanced Depth feature \(Depth_{En.}\) has dimension \((C_{1},H,W)\); and the previous fused feature \(Fus_{1}\) has dimension \((C_{2},H,W)\). We first normalize the RGB feature and Depth feature. Then, we use six independent convolution layers to generate the self-attention keys and queries from the RGB and Depth features:
\[RGB_{emb}=\left\{\begin{array}{c}K_{RGB}=\text{Conv}_{f\times f}(RGB)\\ Q_{RGB}=\text{Conv}_{1\times 1}(RGB)\end{array}\right. \tag{7}\]
\[Depth_{emb}=\left\{\begin{array}{c}K_{Depth}=\text{Conv}_{f\times f}(Depth _{En.})\\ Q_{Depth}=\text{Conv}_{1\times 1}(Depth_{En.})\end{array}\right. \tag{8}\]
\[Fused_{emb}=\left\{\begin{array}{c}V=\text{Conv}_{f\times f}(Fused_{1})\\ V_{1},V2=\text{Split}(V,2)\end{array}\right. \tag{9}\]
The dimensions of \(K_{RGB}\) and \(K_{Depth}\) are \((\frac{C_{1}}{4},\frac{H}{f},\frac{W}{f})\), where \(f\) is the down-sample rate and equals the convolution kernel-size and stride. This reduces the feature map resolution to \(\frac{1}{f}\). The dimensions of \(Q_{RGB}\) and \(Q_{Depth}\) are \((\frac{C_{1}}{4},H,W)\). The \(V_{1}\) and \(V_{2}\) are reshaped into dimension \((\frac{C_{2}}{4},\frac{H\times W}{f^{2}})\).
**Information mixing and redistribution:** We concatenate the cross-modal keys and queries to obtain the \(Key\) and \(Query\):
\[Key,Query=\left\{\begin{array}{c}Key=\text{Cat}[K_{RGB},K_{Depth}]\\ Query=\text{Cat}[Q_{RGB},Q_{Depth}]\end{array}\right. \tag{10}\]
The first \(\frac{C_{2}}{4}\) channels of \(Key\) and \(Query\) contain RGB information. Similarly, the second half of the \(Key\) and \(Query\) channels contain Depth information. To obtain \(K_{1}\), \(K_{2}\) and \(Q_{1}\), \(Q_{2}\), which integrate RGB-D information for computing the two feature subspaces \(W_{1}\), \(W_{2}\), we perform a channel shuffle of \(Key\) and \(Query\) and split them into the required vectors:
\[\begin{split} K_{1},K_{2}=\text{Split}(\text{Shuffle}(Key),2) \\ Q_{1},Q_{2}=\text{Split}(\text{Shuffle}(Query),2)\end{split} \tag{11}\]
**Representation subspaces learning:** The \(Q_{1}\), \(Q_{2}\) are reshaped into dimension \((\frac{C_{1}}{4},H\times W)\) and the \(K_{1}\), \(k_{2}\) are reshaped into dimension \((\frac{C_{1}}{4},\frac{H\times W}{f^{2}})\). Then, the two representation subspaces \(W_{1}\) and \(W_{2}\) are computed by dot product of \(K\) and \(Q\) respectively:
\[\begin{split} W_{1}=\text{Softmax}(\frac{Q_{1}\cdot K_{1}^{T}}{ \sqrt{C_{1}/4}})\\ W_{2}=\text{Softmax}(\frac{Q_{2}\cdot K_{2}^{T}}{\sqrt{C_{1}/4}}) \end{split} \tag{12}\]
The \(W_{1}\) and \(W_{2}\) are embedded into \(V_{1}\) and \(V_{2}\) by dot product. The fused feature \(Fused_{2}\) is the concatenation of \(V_{1}\) and \(V_{2}\):
\[Fused_{2}=\text{Cat}[W_{1}\cdot V_{1},W_{2}\cdot V_{2}] \tag{13}\]
Finally, \(Fused_{2}\) is adapted to the output channel dimension \(C_{2}\) and added to the residual connection \(Fused_{1}\):
\[Fused_{2}=\text{ConvBN}_{1\times 1}(Fused_{2})+Fused_{1} \tag{14}\]
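To make the data flow of Eqs. (5)-(14) concrete, the sketch below gives one possible PyTorch reading of the CMA block. It is only an illustration of the operations described above, not the reference implementation: the argument names (`c_rgb`, `c_attn`, ...), the normalization layers and the exact channel bookkeeping are our own assumptions.

```python
# Illustrative PyTorch sketch of the CMA block (Eqs. (5)-(14)); names and channel
# bookkeeping are assumptions, not the authors' reference code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    # Interleave the RGB half and the Depth half of the channel dimension (Eq. (11)).
    b, c, n = x.shape
    return x.view(b, groups, c // groups, n).transpose(1, 2).reshape(b, c, n)


class CMA(nn.Module):
    def __init__(self, c_rgb, c_depth, c_fused, c_attn, f=2):
        super().__init__()
        self.scale = (c_attn // 2) ** -0.5               # c_attn//2 plays the role of C1/4 in Eq. (12)
        self.norm_rgb, self.norm_depth = nn.BatchNorm2d(c_rgb), nn.BatchNorm2d(c_depth)
        # Independent linear embeddings, Eqs. (7)-(9): f x f convs (stride f) for the keys
        # and the value, 1 x 1 convs for the queries.
        self.k_rgb = nn.Conv2d(c_rgb, c_attn // 2, f, stride=f)
        self.q_rgb = nn.Conv2d(c_rgb, c_attn // 2, 1)
        self.k_depth = nn.Conv2d(c_depth, c_attn // 2, f, stride=f)
        self.q_depth = nn.Conv2d(c_depth, c_attn // 2, 1)
        self.v = nn.Conv2d(c_fused, c_fused // 2, f, stride=f)
        self.proj = nn.Sequential(nn.Conv2d(c_fused // 2, c_fused, 1),
                                  nn.BatchNorm2d(c_fused))       # output adaptation, Eq. (14)

    def forward(self, rgb, depth, fused):
        b, _, h, w = fused.shape
        rgb, depth = self.norm_rgb(rgb), self.norm_depth(depth)
        # Concatenate cross-modal keys/queries (Eq. (10)), then shuffle and split (Eq. (11)).
        key = torch.cat([self.k_rgb(rgb), self.k_depth(depth)], dim=1).flatten(2)
        query = torch.cat([self.q_rgb(rgb), self.q_depth(depth)], dim=1).flatten(2)
        k1, k2 = channel_shuffle(key).chunk(2, dim=1)
        q1, q2 = channel_shuffle(query).chunk(2, dim=1)
        v1, v2 = self.v(fused).flatten(2).chunk(2, dim=1)
        heads = []
        for q, k, v in ((q1, k1, v1), (q2, k2, v2)):
            attn = F.softmax(q.transpose(1, 2) @ k * self.scale, dim=-1)   # Eq. (12)
            heads.append((attn @ v.transpose(1, 2)).transpose(1, 2))       # Eq. (13)
        fused2 = torch.cat(heads, dim=1).view(b, -1, h, w)
        return self.proj(fused2) + fused                                   # residual, Eq. (14)


if __name__ == "__main__":
    cma = CMA(c_rgb=96, c_depth=32, c_fused=64, c_attn=32, f=2)
    out = cma(torch.randn(1, 96, 30, 40), torch.randn(1, 32, 30, 40), torch.randn(1, 64, 30, 40))
    print(out.shape)  # torch.Size([1, 64, 30, 40])
```

As in the equations above, the keys and the value are computed at the reduced \(\frac{H}{f}\times\frac{W}{f}\) resolution, so the attention maps have size \(HW\times\frac{HW}{f^{2}}\) rather than \(HW\times HW\).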
## IV Experiment Results and Analysis
### _Implementation Details_
To evaluate our Real-Time semantic segmentation network design, we conduct a series of experiments on two widely-used datasets: NYUv2[5] (795 training and 654 testing RGB-D images) and SUNRGBD[6] (5825 training and 5050 testing RGB-D images).
We conduct the model training and testing on different platforms. For training, we use an Nvidia A100-40G GPU. For evaluation and inference speed testing, we use an Nvidia RTX 3090 GPU, Ubuntu 20.04, CUDA 12.0 and PyTorch 2.0.1. We apply data augmentation to all datasets by random flipping (p=0.5), random scaling in [1.0, 2.0], random cropping to 480\(\times\)640 and random HSV. We adopt Mix Transformer [17] and ConvNext [16] backbones pretrained on ImageNet-1k [24]. In the augmented setting, both backbones are pretrained on ADE20k [23]. For the CMA implementation, we set the down-sample rate \(f\)=1 in CMA-1 and \(f\)=2 in CMA-2,3. The MLP-decoder in AsymFormer has the same structure as Segformer and an embedding dimension of 256. We choose the AdamW optimizer with a weight decay of 0.01. The initial learning rate is \(5e^{-5}\) and we use a poly learning rate schedule \((1-\frac{iter}{max_{iter}})^{0.9}\) with a warm-up of 10 epochs. We train with a batch size of 8 for NYUv2 (500 epochs) and SUNRGBD (200 epochs). We employ cross-entropy as the loss function and do not use any auxiliary loss during the training process. The evaluation metric is mean Intersection over Union (mIoU).
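As a small illustration of the optimization schedule just described, the snippet below implements the poly decay \((1-\frac{iter}{max_{iter}})^{0.9}\) with a linear warm-up as a PyTorch `LambdaLR`; the warm-up length in iterations and the linear warm-up shape are assumptions on our part.

```python
# Sketch of the poly learning-rate schedule with linear warm-up; the warm-up length
# in iterations below is an illustrative assumption.
import torch

def poly_warmup(warmup_iters, max_iters, power=0.9):
    def fn(it):
        if it < warmup_iters:                     # linear warm-up phase
            return (it + 1) / warmup_iters
        progress = (it - warmup_iters) / max(1, max_iters - warmup_iters)
        return (1.0 - progress) ** power          # (1 - iter/max_iter)^0.9 decay
    return fn

model = torch.nn.Linear(8, 8)                      # placeholder for the real network
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer,
                                              lr_lambda=poly_warmup(1000, 50_000))

for step in range(50_000):
    # ... forward / backward pass would go here ...
    optimizer.step()
    scheduler.step()
```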
### _Model Assessment_
This paper proposes two different models based on the AsymFormer framework with different backbones. AsymFormer uses ConvNext-T [16] for RGB representation learning and MiT-B0 [17] for fused feature processing. This model has 33.0 million parameters and a computational cost of 36.0 GFLOPs, and it achieves an inference speed of 65 FPS on an RTX 3090. In particular, with mixed precision quantization, AsymFormer achieves a real-time inference speed of 79 FPS on the RTX 3090 GPU. It also achieves a real-time inference speed of 29.4 FPS on the Tesla T4 GPU, which has more limited computational resources (65 TFLOPS FP16).
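For context, throughput figures of this kind are usually obtained by timing repeated forward passes after a warm-up phase, with the GPU synchronized around the timed region. The sketch below illustrates such a measurement under FP16 autocast; the placeholder model, input shape and iteration counts are illustrative assumptions and not the exact protocol used here.

```python
# Minimal FP16 inference throughput (FPS) measurement sketch; requires a CUDA GPU.
# The one-layer model is a stand-in, not AsymFormer.
import time
import torch

model = torch.nn.Conv2d(4, 40, 3, padding=1).cuda().eval()
x = torch.randn(1, 4, 480, 640, device="cuda")          # one 480x640 RGB-D frame

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.float16):
    for _ in range(20):                                  # warm-up passes
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    iters = 200
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()
    print(f"{iters / (time.perf_counter() - start):.1f} FPS")
```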
### _Ablation Experiment_
We conduct a series of ablation experiments on the NYUv2 dataset to evaluate the effectiveness of the LAFS and CMA modules. We set two common feature fusion methods as our comparative baselines: **1. Cat:** This method directly concatenates two features and then uses convolution layers to adjust the channel numbers. Essentially, it is a pixel-wise fusion without feature selection. **2. SE+MHSA-H2:** This method combines the popular SE attention [34] and MHSA attention [21] for feature fusion. Here, SE is used for feature selection in the channel direction, while MHSA is employed for further feature extraction on the fused features. Specifically, we use an MHSA with 2 heads to align with CMA-H2.
In our experiments, the Cat fusion method, used as a baseline, achieved a segmentation accuracy of 47.0 mIoU and an inference speed of 77.5 FPS. When using LAFS for feature selection alone, we achieved a performance improvement of 2.1% while sacrificing only 1.8 FPS of inference speed. This demonstrates that LAFS is a highly efficient feature selection method that provides performance gains without significantly impacting inference speed. In comparison to the other baseline, Cat+MHSA-H2, which resulted in an 11.8 FPS reduction in inference speed, only a 2.9% improvement in segmentation accuracy was achieved. This further highlights the efficiency of LAFS.
Furthermore, we conducted experiments to investigate whether incorporating the mining of self-similarity within the two modalities after feature selection could further improve segmentation accuracy. When using CMA-H2 alone, we observed a 2.6% improvement in segmentation accuracy but encountered a significant decrease in inference speed of 10.1 FPS. Compared to LAFS, CMA showed a more noticeable reduction in inference speed. In comparison to SE+MHSA-H2, CMA-H2 achieved similar accuracy, but it is important to note that no feature selection was performed in this case, indicating that relying solely on mining multi-modal self-similarity can achieve accuracy similar to the traditional feature fusion baseline SE+MHSA.
Finally, we combined LAFS with CMA (LAFS+CMA). Since LAFS had minimal impact on inference speed and served a different purpose than CMA, the network's inference speed only decreased by 2 FPS while achieving a significant improvement of 5.0% compared to the baseline Cat. At this point, the inference speed of LAFS+CMA was similar to SE+MHSA, but with a 2.1% performance improvement. This validates our experimental hypothesis: by re-modeling feature selection and mining cross-modal self-similarity, we can enhance the segmentation performance of the network without sacrificing inference speed compared to existing models. This demonstrates that we have indeed improved the efficiency of the network.
### _Comparison With State-of-The-Arts_
#### Iv-D1 NYUv2 Comparison Results
Table II presents the results of our network on the NYUv2 dataset with different backbone sizes and pretraining settings. Even without ImageNet-1k pretraining, which other methods use for a fair comparison, our method still achieves leading scores. Our efficient real-time method, AsymFormer, achieves 52.0% mIoU, demonstrating competitive accuracy compared to high-performance heavy designs. AsymFormer also has a faster inference speed than other methods. For instance, AsymFormer outperforms PSCNet-T [12] by 6.6% mIoU with about a 30% inference speed improvement. Similarly, AsymFormer is two times faster than ESANet [9] with the same parameters and performance. In particular, AsymFormer has two times fewer parameters than CMX-B2 [8] and is about three times faster than CMX-B2, with a small performance degradation. In terms of semantic segmentation accuracy, AsymFormer does not show a significant disadvantage compared to the methods included in the comparison. Additionally, it demonstrates a substantial lead over other methods in terms of inference speed. This validates the effectiveness of our various efforts in reducing redundant network parameters and improving inference speed.
#### Iv-D2 SUNRGBD Comparison Results
Table III reports the performance of AsymFormer on the SUNRGBD dataset. AsymFormer achieves competitive accuracy with 49.1% mIoU. Since the input image size is the same as for NYUv2, all methods have the same inference speed as shown in Table II. However, the advantage of AsymFormer is not as significant as in the NYUv2 experiment. For example, AsymFormer improves by 1.8 mIoU over SA-Gate [27] on the NYUv2 dataset (52.0% vs 50.2% mIoU), but is 0.3 mIoU lower on the SUNRGBD dataset (49.1% vs 49.4% mIoU). A similar performance degradation can be observed in the CMX-B2 result, which also uses a Transformer-based backbone. We conjecture that this phenomenon may be caused by the low-quality depth images in the SUNRGBD dataset. The aim of our research is not to construct a state-of-the-art method that has a marginal mIoU improvement over other methods, but to construct a method that has a better performance-speed balance and is more suitable for robot platforms. Given that AsymFormer still has a faster inference speed than other methods, we consider this performance acceptable for AsymFormer.
### _Visualization_
**1) LAFS Attention Map:** As shown in Figure 2, to demonstrate that LAFS performs better than CBAM [18] in selecting features in the spatial dimension, we visualized the spatial attention weights of both methods. It can be observed that LAFS provides better coverage of informative regions in the image while maintaining consistency within objects and preserving the integrity of edges.
**2) Semantic Segmentation Results:** Figure 3 shows the segmentation results of AsymFormer on the NYUv2 dataset. As observed, while maintaining a significantly faster inference speed than other methods, AsymFormer achieves semantic segmentation accuracy comparable to mainstream approaches.
## V Conclusions
In this work, we proposed AsymFormer, which aims to construct a less redundant real-time indoor scene understanding system. To enhance efficiency and reduce redundant parameters, we implemented the following improvements: 1. We employed an asymmetric backbone that compresses the parameters of the Depth feature extraction branch, thus reducing redundancy. 2. We introduced the LAFS module for feature selection, utilizing learnable feature weights to calculate spatial attention weights. This resulted in a significant performance improvement while almost maintaining the inference speed. 3. Expanding on feature selection, we integrated modeling of self-similarity in multi-modal features, validating its capability to enhance network accuracy with minimal additional model parameters. The experiments demonstrated that AsymFormer surpasses existing SOTA methods in terms of speed, showing that our method achieves a balance between accuracy and speed. Moving forward, we will continue to optimize the modules and address issues such as self-supervised pre-training of the model, aiming for further improvements.
Fig. 3: Visualization of AsymFormer Semantic Segmentation Results.
Fig. 2: Difference between CBAM and LAFS’s spatial attention map. |
2309.12414 | FDTD Full Wave Simulations of Reconfigurable Intelligent Surfaces | This paper presents the analysis of metasurfaces, here called reconfigurable
intelligent surface. The analysis is performed by numerical simulations that
implement the finite-difference time-domain method. The metasurface has been
modeled by metallic patches interconnected by varactor diodes. The
electromagnetic source consists of randomly generated plane wave. This kind of
analysis allows us to investigate the response of the metasurface when it is
hit by a random source. | Emanuel Colella, Luca Bastianelli, Valter Mariani Primiani, Franco Moglie | 2023-09-21T18:29:07Z | http://arxiv.org/abs/2309.12414v1 | # FDTD Full Wave Simulations of Reconfigurable Intelligent Surfaces
###### Abstract
This paper presents the analysis of metasurfaces, here called reconfigurable intelligent surfaces. The analysis is performed by numerical simulations that implement the finite-difference time-domain method. The metasurface has been modeled by metallic patches interconnected by varactor diodes. The electromagnetic source consists of a randomly generated plane wave. This kind of analysis allows us to investigate the response of the metasurface when it is hit by a random source.
## I Introduction
Metamaterials are artificially created materials, obtained thanks to the periodic arrangement of dielectric and metallic elements with sub-wavelength dimensions and spacings much smaller than the wavelength [1]. These materials display unique electromagnetic properties and have attracted growing interest in science and technology over the past fifteen years [2, 3, 4, 5, 6]. Thanks to metamaterials it is possible to obtain unconventional light-matter interaction effects, such as a negative refractive index or super lenses. Furthermore, with the correct positioning of the elements constituting the metamaterial, it is possible to manipulate electromagnetic fields. However, practical 3D applications of metamaterials have been hampered by significant structural challenges [7, 8]. For this reason, interest has focused more on 2D implementations, called metasurfaces, which are much simpler to implement, less expensive and more manageable, being compatible with already established planar machining techniques on silicon. Electromagnetic metasurfaces are based on the idea of radio frequency transmitting/receiving antenna arrays, consisting of an array of 2D resonant elements capable of locally modifying both the phase and the amplitude of an incident electromagnetic wave [9, 10, 11]. The work developed by Yu et al., based on nano-antennas capable of modifying the wavefront of an electromagnetic wave to obtain anomalous reflection/refraction effects [12], has led to the creation of various models to increase the bandwidth and efficiency of these devices, opening the door to a new research sector based on 2D surface optics and photonics. Furthermore, it has been demonstrated that it is possible to use reconfigurable dielectric or semiconductor resonators to realize smart screens able to control the propagation of electromagnetic fields in telecommunications. Screens of this type, which can automatically reconfigure themselves according to the environmental radio conditions, are called reconfigurable intelligent surfaces (RISs) [13, 14, 15]. RISs are made up of antennas or radiating elements that can be dynamically modified to control and manipulate the propagation of electromagnetic waves in a wide range of frequencies [16]. RISs can improve the performance of wireless communication, reduce interference, extend coverage, increase the security of wireless networks and improve the energy efficiency of communication devices [17]. RIS represents a promising technology for future wireless communication networks and can be used in various scenarios, presenting a great opportunity to enhance 5G communications in future smart radio environments. In this paper, we evaluate the performance of RISs in different electromagnetic conditions by computational electrodynamics simulations to analyze their impact on the smart radio environment.
## II Simulation set-up
Electromagnetic simulations involve analyzing the dynamic re-configurations of a RIS to evaluate its focusing performance. To study the electromagnetic response of the RIS, numerical simulations were carried out using the finite difference time domain (FDTD) method [18]. The FDTD method is a numerical technique used to reproduce the behavior of electromagnetic waves with a full-wave approach. This technique makes it possible to study complex geometries of different kinds with high precision and to obtain very accurate results [19, 20]. The entire FDTD code has been implemented following the standard mathematical procedure. Our group developed a parallel home-made C code which runs on a supercomputer. The simulations have been performed in a working space of \(220\times 110\times 220\) cells, for a total of \(5.52\cdot 10^{6}\) cubic cells. The cell size is 1 mm. The electromagnetic source that hits the RIS is a randomly generated plane wave. The plane wave(s) are generated by a dedicated script and then read by the FDTD code. The whole simulated domain is divided into three domains: i) the RIS domain of \(120\times 10\times 120\) cells; ii) the total-field domain of \(30\times 30\times 30\) cells; iii) the scattered-field domain of \(20\times 20\times 20\) cells, as represented in Fig. 3. A detailed explanation of the plane wave generation and of the implementation of the separation plane between the total-field and scattered-field regions is reported in [21]. To analyze the electromagnetic behavior of metasurfaces in a 5G radio environment, the frequency band 0.8 GHz - 8.4 GHz with \(f_{0}\)=4.6 GHz has been chosen. The angles of incidence in spherical coordinates are \(\alpha=1.57\), \(\theta=0.78\) and \(\phi=0.78\) for all simulations [21].
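As a quick sanity check of the quoted discretization, the following few lines verify the spatial resolution at the highest simulated frequency and the three-dimensional Courant (CFL) stability limit for 1 mm cubic cells, which is consistent with the 1.5 ps time step used in the simulations.

```python
# Sanity check of the FDTD grid settings: cells per wavelength at the highest
# frequency of the band and the 3-D Courant stability limit for 1 mm cubic cells.
c0 = 299_792_458.0            # speed of light in vacuum [m/s]
dx = 1e-3                     # cell size: 1 mm
f_max = 8.4e9                 # upper edge of the simulated band [Hz]

lambda_min = c0 / f_max
print(f"cells per wavelength at {f_max/1e9:.1f} GHz: {lambda_min/dx:.1f}")  # ~35.7

dt_max = dx / (c0 * 3 ** 0.5)  # Courant limit for a cubic 3-D grid
print(f"maximum stable time step: {dt_max*1e12:.2f} ps")                    # ~1.93 ps
```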
The whole RIS is composed of [22]: i) metal patches; ii) varactor diodes; iii) a dielectric substrate; iv) a ground plane. Figure 2 reports the side view of the simulated RIS. The substrate, i.e. the dielectric support, has been simulated with \(\epsilon_{r}=4.4\) and \(\sigma=0.0025\) S/m, whereas the ground plane is simulated as a perfect electric conductor (PEC).
The RIS consists of \(10\times 10\) resonant structures, i.e. square metal patches modeled as PEC. The patches are interconnected by 180 varactor diodes. Each varactor diode is modeled as a 1 mm\({}^{3}\) cell connecting two patches, as shown in Fig. 1, and a single patch communicates with two other patches. The capacitance of the varactor is simulated simply by forcing the corresponding constitutive relationship for the fields in the FDTD cell [18]. The RIS is placed in the plane \(xy\). The dielectric material inside the working domain was air and the absorbing boundary conditions for the whole domain are perfectly matched layers (PMLs). The capacitance values of the diodes were 1 pF and 0.1 pF, simulating a short circuit or an open circuit. The electric and magnetic fields have been recorded at different points \(P\), as shown in Fig. 4. The temporal step was 1.5 ps for a total of 100 periods in all simulations. In all simulations the RIS was placed on the \(xz\) plane at the center of the workspace. The simulation set-up is shown in Fig. 4. During the analysis we also simulated the RIS as a PEC surface.
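The lumped-capacitor treatment mentioned above can be pictured as follows: enforcing the capacitor's constitutive relation in a single cell is equivalent to giving that cell a larger effective permittivity. The snippet below is a schematic illustration of this idea for a capacitor filling one 1 mm cell; it is our own simplified reading, not the code used for the simulations.

```python
# Schematic lumped-capacitor cell: a capacitance C across a cell of size dx*dy*dz
# enters the E-field update like a local effective permittivity eps + C*dz/(dx*dy).
EPS0 = 8.854187817e-12   # vacuum permittivity [F/m]

def effective_permittivity(C, dx=1e-3, dy=1e-3, dz=1e-3, eps_r=1.0):
    return eps_r * EPS0 + C * dz / (dx * dy)

def update_e(e_old, curl_h, C, dt=1.5e-12):
    # Standard explicit update E += dt/eps * (curl H), with eps replaced by the
    # effective value in the capacitor-loaded cell (curl_h: discrete curl of H there).
    return e_old + dt / effective_permittivity(C) * curl_h

for C in (0.1e-12, 1.0e-12):   # the two varactor states used in the simulations
    print(f"C = {C*1e12:.1f} pF -> effective relative permittivity "
          f"{effective_permittivity(C)/EPS0:.0f}")
```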
## III Results
In the first simulation, to test the effectiveness of the separation between the near field and the far field, we report the electric field on a plane consisting of \(10\times 10\) points in the case without RIS at a height of 50 cells. The result is shown in Fig. 5. The reflected field values obtained are lower than 0.2 mV/m. In that plot all the probed points are reported, namely \(100\) points where the three Cartesian components of the electric field \((E_{x},E_{y}\) and \(E_{z})\) are collected. In the second case, we report the electric field value, only the \(E_{y}\) component, collected at the point \(P(10,100,10)\). In this case we compare the electric field when the capacitors of the RIS are set to \(1.0\) pF and \(0.1\) pF, respectively. Moreover, in the same plot, the two capacitor configurations are compared to a RIS simulated by a PEC surface, Fig. 6. At this point (\(P(10,100,10)\)) there is no difference between the diode configurations. We did the same comparison, in terms of the electromagnetic response of the previous configurations, at the point \(P(10,10,100)\), as shown in Fig. 7. We also investigated the electric field \((E_{y})\) at point \(P(210,10,210)\) for the two different capacitor configurations and for the RIS simulated as a PEC surface, Fig. 8. In Figs. 7 and 8 the electric field \((E_{y})\) shows some differences depending on the capacitor configuration, while a higher value is obtained with the PEC surface. In Figs. 6, 7 and 8 the main time oscillation corresponds to the central frequency, \(4.6\) GHz, of the spectrum of the exciting pulse. Figures 9 and 10 report the field distributions for a 2D plane of the 3D simulated domain. The field distribution is evaluated over the RIS surface. In Fig. 9 all capacitances are set to \(0.1\) pF whereas in Fig. 10 all of them are set to \(1.0\) pF. In the first configuration, the magnitude of the electric field distribution over the RIS is higher than in the case when all capacitances are set to \(1.0\) pF.
Fig. 1: RIS configuration. Representation of the simulated RIS of \(10\times 10\) PEC patches of 1 cm\({}^{2}\) connected by 180 varactor diodes, vertically and horizontally polarized. The dielectric support has \(\epsilon_{r}=3\) and 1 mm of thickness.
Fig. 2: Side view of the simulated RIS. On the top are reported the PEC patches interconnected by varactor diodes. The substrate layer is in the middle, whereas the ground plane is reported on the bottom.
Fig. 3: FDTD domains. The innermost domain is the RIS domain, the domain in the middle is the total field domain, while the outermost domain is the scattered domain.
Fig. 4: Simulation set-up. Representation of the simulation configuration with RIS, plane wave and detecting point P.
Fig. 5: First simulation results. Electric field on the plane of \(50\times 50\) cells at a height of 50 cells without RIS.
Fig. 6: Second simulation results. Electric field at \(P(10,100,10)\) in the case of the PEC surface and in the case of the RIS with diodes in the 1 pF and 0.1 pF configurations.
Fig. 7: Third simulation results. Electric field at \(P(10,100,100)\) in the case of the PEC surface and in the case of the RIS with diodes in the 1 pF and 0.1 pF configurations.
Fig. 9: Field distribution on a 2D plane of the 3D domain. All the capacitors are set to 0.1 pF.
## IV Discussion and Conclusion
In this work we reported the preliminary analysis of a RIS performed by FDTD simulations. The electromagnetic source that hits the RIS consists of a randomly generated plane wave. We investigated the capability of the RIS to reflect and focus the electromagnetic field at a fixed spatial point within the simulation domain, in particular inside the scattered field domain. Currently, two RIS configurations have been simulated, namely with all capacitors set to \(0.1\) or \(1.0\) pF, respectively. These two configurations have also been compared to a RIS made of a PEC surface. The simulation campaign is in progress and the next steps are: i) optimizing the diode configurations by implementing an optimization algorithm; ii) considering a set of plane waves that act as a random source; iii) considering more capacitor configurations. FDTD simulation is a useful and reliable tool to speed up the analysis and the design of RISs.
## Acknowledgment
This work has been supported by EU H2020 RISE-6G project under the grant number 101017011. We acknowledge PRACE for awarding us access to Joliot-Curie KNL at GENCI@CEA, France.
|
2309.05168 | Multiple Non-radial Solutions for Coupled Schrödinger Equations | The paper deals with the existence of non-radial solutions for an $N$-coupled
nonlinear elliptic system. In the repulsive regime with some structure
conditions on the coupling and for each symmetric subspace of rotation
symmetry, we prove the existence of an infinite sequence of non-radial positive
solutions and an infinite sequence of non-radial nodal solutions. | Xiaopeng Huang, Haoyu Li, Zhi-Qiang Wang | 2023-09-10T23:40:14Z | http://arxiv.org/abs/2309.05168v1 | # Multiple Non-radial Solutions for Coupled Schrodinger Equations
###### Abstract
The paper deals with the existence of non-radial solutions for an \(N\)-coupled nonlinear elliptic system. In the repulsive regime with some structure conditions on the coupling and for each symmetric subspace of rotation symmetry, we prove the existence of an infinite sequence of non-radial positive solutions and an infinite sequence of non-radial nodal solutions.
**Keywords:** coupled Schrodinger equations; non-radial solutions; \(\mathbb{Z}_{p}\) index theory.
**2010 Mathematics Subject Classification:** 35B05, 35B32, 35J50, 58C40.
## 1 Introduction
In this paper, we consider the following \(N\)-coupled system of nonlinear elliptic equations
\[\begin{cases}-\Delta u_{j}+\lambda_{j}u_{j}=\mu_{j}u_{j}^{3}+\sum_{k\neq j} \beta_{jk}u_{j}u_{k}^{2},&\text{in }\Omega,\\ u_{j}=0,&\text{on }\partial\Omega\end{cases}\qquad\text{for each }j=1,\ldots,N. \tag{1}\]
This class of systems arises naturally when seeking standing wave solutions to the time-dependent Schrodinger system (2) which models many physical problems
\[\begin{cases}\mathrm{i}\frac{\partial}{\partial t}\Phi_{j}+\Delta\Phi_{j}+\mu_{j }|\Phi_{j}|^{2}\Phi_{j}+\sum_{i\neq j}\beta_{ij}|\Phi_{i}|^{2}\Phi_{j}=0&\text {for $t>0$, $x\in\mathbb{R}^{n}$,}\\ \Phi_{j}(x,0)=\Phi_{j0}(x),&j=1,\ldots,N.\end{cases} \tag{2}\]
Such equations are found in Kerr-like photorefractive media [1], the Hartree-Fock theory for Bose-Einstein condensates [8], and other physical phenomena. Solutions to the system (1) can also describe the steady states of the distribution of different species in some diffusion systems. In physical terms, when the coupling constant \(\beta_{ij}\) (\(i\neq j\)) is positive, it is referred to as attractive, and when the coupling constant \(\beta_{ij}\) is negative, it is said to be repulsive, leading to very different behaviors of solutions for different coupling regimes.
There have been extensive studies in the last twenty years or so on these systems and related systems, revealing a rich set of interesting and important phenomena. See the seminal paper of Lin and Wei [12], the subsequent works [2, 3, 4, 7, 11, 13, 14, 15, 16, 17, 19, 22, 27, 28] and references therein. In particular, for the repulsive cases, there have been works on the existence and multiplicity of segregation-type solutions, symmetry breaking of solutions, and phase separations, among other things. One common feature of this class of systems is the existence of many so-called semi-trivial solutions (solutions being non-zero as vectors but containing at least one zero component). In terms of the classification of solutions, Liu and Wang [14, 15] gave the existence of infinitely many non-trivial solutions (solutions with every component non-zero) regardless of how many semi-trivial solutions exist. [3, 7, 28] give the existence of an infinite sequence of positive solutions, further showing the disparity in qualitative properties of solutions compared with the counterpart scalar field equation \(-\Delta w+w=w^{3}\), for which uniqueness of positive solutions is well known ([9, 10]). All these works in [3, 7, 28] explore some symmetry structure of the coupling matrix \(\mathcal{B}=(\beta_{ij})\) when writing \(\beta_{jj}=\mu_{j}\); the work of [7, 28] was done by using variational methods and that of [3] by a global bifurcation approach. The work [7, 28] was further extended to the \(N\)-system for any \(N\geq 2\) in [23] (see also related work in [22]). When the domain is radially symmetric (including the case of the entire space \(\mathbb{R}^{n}\)), the above results of [3, 7, 28] also give the existence of an infinite sequence of radial positive solutions. A natural question is to study non-radial solutions in the setting of radially symmetric domains, for which little work has been done so far. Some numerical work was done in [5] indicating the existence of a rich variety of different types of non-radial solutions. In [27], Wei and Weth constructed non-radial ground state solutions in some symmetric subspaces, and the bifurcation results in [3] also pointed out various types of non-radial positive solutions. Motivated by these works, one major goal of our current paper is to investigate the existence of multiple non-radial positive solutions. More precisely, for bounded radially symmetric domains, within each symmetric subspace we construct an infinite sequence of non-radial positive solutions. Furthermore, our ideas can also be adapted partially to the study of non-radial nodal (sign-changing) solutions. For nodal solutions, there have been quite a few works in the literature such as [18], [6], [13], [21], [11], etc. In [13], the existence of an infinite sequence of nodal solutions was established by using a minimax method in the presence of invariant sets of a negative pseudo-gradient flow, which gives a sequence of radial nodal solutions when the domain is radially symmetric. Further classification was done in [11], in which an infinite sequence of radial nodal solutions with prescribed component-wise nodal numbers was constructed, with the whole sequence of solutions sharing the same nodal data. Another goal of the current paper is to study multiple non-radial nodal solutions.
Now let us describe the main results of the paper for system (1). Here, \(\Omega\) is a radially symmetric domain in \(\mathbb{R}^{n}\) with \(n=2\) or \(3\), i.e., \(\Omega\) is either a ball or an annulus.
Let \(p\) be a prime factor of \(N\) and write \(N=pB\). We assume that \(\lambda_{j}\), \(\mu_{j}\), \(\beta_{ij}\) satisfy
* (A) \(\lambda_{pb-p+1}=\lambda_{pb-p+2}=\cdots=\lambda_{pb}>0\) for \(b=1,\ldots,B\).
* (B) For \(i,j=1,\ldots,N\) and \(i\neq j\), \(\beta_{ij}=\beta_{ji}\leq 0\) and \(\mu_{j}>0\).
* (C) For \(b=1,\ldots,B\), \(\mathcal{B}=\left(\beta_{ij}\right)_{N\times N}\) is invariant under the action of \[\prod_{i=1}^{p-1}C_{pb-p+i,pb-p+i+1}\circ R_{pb-p+i,pb-p+i+1},\] where \(R_{ij}\) is the transformation of exchanging the \(i\)-th row and the \(j\)-th row of a matrix, and \(C_{ij}\) is the counterpart for column exchanging.
* (D) For \(b=1,\ldots,B\) and \(pb-p+1\leq j\leq pb\), it holds \[\mu_{j}+\sum_{pb-p+1\leq i\leq pb;i\neq j}\beta_{ij}\leq 0.\]
We define
\[\mathscr{R}_{\theta}u(x)=u(R_{-\theta}x),\]
where
\[R_{\theta}=\begin{cases}\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}&\text{if }n=2,\\ \begin{pmatrix}\cos\theta&-\sin\theta&0\\ \sin\theta&\cos\theta&0\\ 0&0&1\end{pmatrix}&\text{if }n=3.\end{cases} \tag{3}\]
For \(U=(u_{1},\ldots,u_{N})\in(H_{0}^{1}(\Omega))^{N}\), set \(\mathscr{R}_{\theta}U=(\mathscr{R}_{\theta}u_{1},\ldots,\mathscr{R}_{\theta}u _{N})\).
**Theorem 1**.: _Fix an integer \(k\geq 1\). Then under the assumptions (A)-(D), Problem (1) admits an unbounded sequence of positive solutions \(\{(u_{1,l},\cdots,u_{N,l})\colon l\in\mathbb{N}\}\) such that each \(u_{i,l}\) is \(\mathscr{R}_{2\pi/k}\)-invariant but not \(\mathscr{R}_{2\pi/(pk)}\)-invariant. In particular, \(u_{i,l}\) is non-radial._
By taking \(k=p^{j}\), we have the following corollary.
**Corollary 1**.: _Fix an integer \(j\geq 0\). Then under the assumptions (A)-(D), Problem (1) admits an unbounded sequence \(S^{(j)}=\{(u_{1,l}^{(j)},\cdots,u_{N,l}^{(j)})\colon l\in\mathbb{N}\}\) of positive solutions such that each \(u_{i,l}^{(j)}\) is \(\mathscr{R}_{2\pi/p^{j}}\)-invariant but not \(\mathscr{R}_{2\pi/p^{j+1}}\)-invariant. In particular, \(u_{i,l}^{(j)}\) is non-radial. Furthermore, for all \(j_{1}\neq j_{2}\), we have \(S^{(j_{1})}\cap S^{(j_{2})}=\varnothing\)._
**Remark 1**.: _We remark that in [27] for two equations it was proved that in the entire space setting there is a ground state solution in each symmetric subspace corresponding to the integer \(k\) which is a non-radial positive solution of (1), while our result above shows that within each symmetric subspace corresponding to the integer \(k\) there exists an infinite sequence of non-radial positive solutions. Furthermore, with \(k=p^{j}\) for different \(j\geq 0\), the infinite sequences \(S^{(j_{1})}\) and \(S^{(j_{2})}\) are mutually different when \(j_{1}\neq j_{2}\). We do not know whether our multiplicity results would hold for the setting of the entire space._
Our methods can be adapted to the studies of nodal (sign-changing) solutions. As an analogue of Theorem 1, we have the following results for nodal solutions.
**Theorem 2**.: _Fix an integer \(k\geq 1\). Then under the assumptions (A)-(D), Problem (1) admits an unbounded sequence of nodal solutions \(\{(u_{1,l},\cdots,u_{N,l})\colon l\in\mathbb{N}\}\) such that each \(u_{i,l}\) is \(\mathscr{R}_{2\pi/k}\)-invariant but not \(\mathscr{R}_{2\pi/(2k)}\)-invariant. In particular, \(u_{i,l}\) is non-radial._
By taking \(k=2^{j}\), we have the following corollary.
**Corollary 2**.: _Fix an integer \(j\geq 0\). Then under the assumptions (A)-(D), Problem (1) admits an unbounded sequence \(S^{(j)}=\{(u_{1,l}^{(j)},\cdots,u_{N,l}^{(j)})\colon l\in\mathbb{N}\}\) of nodal solutions such that each \(u_{i,l}^{(j)}\) is \(\mathscr{R}_{2\pi/2^{j}}\)-invariant but not \(\mathscr{R}_{2\pi/2^{j+1}}\)-invariant. In particular, \(u_{i,l}^{(j)}\) is non-radial. Furthermore, for all \(j_{1}\neq j_{2}\), we have \(S^{(j_{1})}\cap S^{(j_{2})}=\varnothing\)._
For \(u\in H^{1}_{0}(\Omega)\) and \(\omega\in(0,+\infty)\), we say that \(u\) is periodic with period \(\omega\) if \(\mathscr{R}_{\omega}u=u\) a.e. in \(\Omega\). If \(\omega\) is the minimal positive number with this property, it is said to be the _minimal period_ of \(u\).
We have proved that (1) has infinitely many solutions with period \(2\pi/k\) for each positive integer \(k\). It is of interest to know whether (1) admits solutions with minimal period \(2\pi/k\). For the 2-coupled system
\[\begin{cases}-\Delta u+\lambda u=u^{3}+\beta uv^{2}&\text{in }\Omega,\\ -\Delta v+\lambda v=v^{3}+\beta vu^{2}&\text{in }\Omega,\\ u=v=0&\text{on }\partial\Omega,\end{cases} \tag{4}\]
we have the following result.
**Theorem 3**.: _Assume \(\lambda>0\) and \(\beta<0\). For each positive integer \(k\), the problem (4) admits a positive solution \((u,v)\) such that \(u,v\) has minimal period \(2\pi/k\)._
The paper is organized as follows. In Section 2, we prove the results on non-radial positive solutions. Section 3 is devoted to the proofs on non-radial nodal solutions. In Section 4, we study the minimal period of the solutions in the angular variable. Finally, in Section 5, we discuss some further extensions of our main results.
## 2 Multiple Non-radial Positive Solutions
### The Variational Framework
For \(u\in H^{1}_{0}(\Omega)\), we set \(\|u\|_{i}^{2}=\int_{\Omega}(|\nabla u|^{2}+\lambda_{i}u^{2})\). The functional corresponding to (1) is
\[E^{+}(u_{1},\ldots,u_{N})=\sum_{i=1}^{N}\left(\frac{1}{2}\|u_{i}\|_{i}^{2}- \frac{\mu_{i}}{4}\int_{\Omega}(u_{i}^{+})^{4}\right)-\frac{1}{2}\int_{\Omega} \sum_{i\neq j}^{N}\beta_{ij}u_{i}^{2}u_{j}^{2}\]
where \(u^{+}=\max(u,0)\). It is known (see for instance [23, Lemma 2.1]) that \(E^{+}\) is a \(C^{2}\) functional and that every nontrivial critical point of \(E^{+}\) is a positive classical solution of (1).
Set for each positive integer \(k\) the subspace
\[\mathcal{M}_{k}^{+}=\{U=(u_{1},\ldots,u_{N})\in(H^{1}_{0}(\Omega) )^{N}\colon u_{i}=\mathscr{R}_{2\pi/k}u_{i}\text{ for }i=1,2,\ldots,N\] \[\text{ and }u_{j+1}=\mathscr{R}_{2\pi/(pk)}u_{j}\text{ for }p \nmid j\}.\]
For example, when \(p=2\) and \(B=2\), the element \((u_{1},u_{2},u_{3},u_{4})\in\mathcal{M}_{k}^{+}\) is of the form
\[(u_{1},\mathscr{R}_{2\pi/(2k)}u_{1},u_{3},\mathscr{R}_{2\pi/(2k)}u_{3})\]
and each \(u_{j}\) is \(\mathscr{R}_{2\pi/k}\)-invariant.
Next we define the Nehari-type manifold in \(\mathcal{M}_{k}^{+}\) by
\[\mathcal{N}_{k}^{+}=\{U=(u_{1},\ldots,u_{N})\in\mathcal{M}_{k}^{+}\colon u_{j} \neq 0,\partial_{j}E^{+}(U)u_{j}=0,\forall j=1,2,\ldots,N\},\]
where
\[\partial_{j}E^{+}(U)u_{j}=\int_{\Omega}(|\nabla u_{j}|^{2}+\lambda_{j}u_{j}^{2 })-\mu_{j}\int_{\Omega}(u_{j}^{+})^{4}-\int_{\Omega}\sum_{\begin{subarray}{c }k=1\\ k\neq j\end{subarray}}^{N}\beta_{jk}u_{j}^{2}u_{k}^{2}.\]
Under the assumptions (A)-(D), the functional \(E^{+}\) possesses a \(\mathbb{Z}_{p}=\langle\sigma\mid\sigma^{p}=\mathrm{id}\rangle\) symmetry in the sense that
\[E^{+}(\sigma u)=E^{+}(u)\]
for each \(u\in(H^{1}_{0}(\Omega))^{N}\). Here and throughout the paper, \(\sigma\) denotes the permutation given by
\[\begin{split}&\sigma(u_{1},u_{2},\ldots,u_{p};\ldots;u_{N-p+1},u_{N -p+2},\ldots,u_{N})\\ =&(u_{2},u_{3},\ldots,u_{p},u_{1};\ldots;u_{N-p+2},u_ {N-p+3},\ldots,u_{N},u_{N-p+1}).\end{split} \tag{5}\]
**Lemma 1**.: _The subspace \(\mathcal{M}^{+}_{k}\) and the submanifold \(\mathcal{N}^{+}_{k}\) are natural constraints, i.e., every constrained critical point of \(E^{+}\) on them is also a critical point of \(E^{+}\)._
Proof.: It is easy to see that \(\mathcal{M}^{+}_{k}\) is a fixed point space of an isometric representation of some group. Indeed, we can define the action of \(\mathbb{Z}_{pk}=\langle g\mid g^{pk}=\mathrm{id}\rangle\) on \((H^{1}_{0}(\Omega))^{N}\) as
\[g\circ(u_{1},\ldots,u_{N})=\mathscr{R}_{2\pi/pk}\sigma(u_{1},\ldots,u_{N}).\]
Note that \(\sigma^{p}=\mathrm{id}\) and hence that \(g^{p}\circ(u_{1},\ldots,u_{N})=\mathscr{R}_{2\pi/k}(u_{1},\ldots,u_{N})\), so the fixed point space of the action is precisely \(\mathcal{M}^{+}_{k}\). Then it follows from the symmetric criticality principle that \(\mathcal{M}^{+}_{k}\) is a natural constraint. The rest of the proof is similar to that in [7] and [23].
This lemma makes it legitimate to reduce the problem to seeking critical points on \(\mathcal{N}^{+}_{k}\).
The Palais-Smale condition also holds for \(E^{+}\).
**Lemma 2**.: _The restricted functional \(E^{+}\mid_{\mathcal{N}^{+}_{k}}\) satisfies the Palais-Smale condition, i.e., each sequence \((u_{j})\subset\mathcal{N}^{+}_{k}\) such that \(E^{+}(u_{j})\) is bounded and \(\nabla(E^{+}\mid_{\mathcal{N}^{+}_{k}})(u_{j})\to 0\) has a convergent subsequence._
The proof of this lemma is exactly same as that in [23].
### A \(\mathbb{Z}_{p}\) Index
In this section, we introduce a \(\mathbb{Z}_{p}\) index theory, which is used in the estimate of the number of critical points. The classical works here are [25, 26].
Now we define an index associated with \(\mathbb{Z}_{p}\), where the action of \(\mathbb{Z}_{p}=\langle\sigma\mid\sigma^{p}=\mathrm{id}\rangle\) is defined by \(\sigma\) as (5).
**Definition 1**.: _For any closed \(\sigma\)-invariant subset \(A\subset\mathcal{N}^{+}_{k}\), define the index_
\[\gamma(A)=\min\Big{(}\{\infty\}\cup\{m\in\mathbb{N}\colon\exists h\in C(A; \mathbb{C}^{m}\setminus\{0\})\text{ satisfying }h(\sigma U)=\mathrm{e}^{2\pi\mathrm{i}/p}h(U)\}\Big{)}.\]
In particular, \(\gamma(\varnothing)=0\) and \(\gamma(A)=\infty\) if \(A\) contains a fixed point of \(\sigma\).
In order to use the \(\mathbb{Z}_{p}\) index theory, we need the following lemma to exclude the existence of fixed point under the action of \(\mathbb{Z}_{p}\).
**Lemma 3**.: _Under the assumptions (A)-(D), for \(p\mid j\), \(u_{j+1}=u_{j+2}=\cdots=u_{j+p}\) cannot hold for \((u_{1},\ldots,u_{N})\in\mathcal{N}_{k}^{+}\). Therefore, there is no fixed point on \(\mathcal{N}_{k}^{+}\) under the action of \(\mathbb{Z}_{p}\)._
Proof.: Suppose the assertion of the lemma is false. Without loss of generality, assume that \(U=(u_{1},\ldots,u_{N})\in\mathcal{N}_{k}^{+}\) and \(u_{1}=u_{2}=\cdots=u_{p}\). By the assumption (D) and the definition of \(\mathcal{N}_{k}^{+}\), we obtain
\[0 < \,\int_{\Omega}(|\nabla u_{1}|^{2}+\lambda_{1}u_{1}^{2})\] \[= \,\mu_{1}\int_{\Omega}(u_{1}^{+})^{4}+\int_{\Omega}\sum_{k=2}^{N }\beta_{1k}u_{1}^{2}u_{k}^{2}\] \[\leq \,\left(\mu_{1}+\sum_{k=2}^{p}\beta_{1k}\right)\int_{\Omega}u_{1} ^{4}\] \[\leq \,0,\]
which is impossible.
**Remark 2**.: _As \(u_{j+1}=\mathscr{R}_{2\pi/(pk)}u_{j}\) for \(p\nmid j\) in \(\mathcal{N}_{k}^{+}\), this lemma implies that functions in \(\mathcal{N}_{k}^{+}\) cannot be \(\mathscr{R}_{2\pi/(pk)}\)-invariant._
Let \(\mathcal{N}_{k}^{c}=\{U\in\mathcal{N}_{k}^{+}\colon E^{+}(U)\leq c\}\). The Palais-Smale condition ensures the validity of the following deformation lemma.
**Proposition 1**.: _Let \(c\in\mathbb{R}\), and let \(\mathcal{O}\) be a \(\sigma\)-invariant open neighborhood of \(K_{c}\) in \(\mathcal{N}_{k}^{+}\). Then there exists \(\varepsilon>0\) and a \(C^{1}\)-deformation \(\eta\colon[0,1]\times\mathcal{N}_{k}^{c+\varepsilon}\setminus\mathcal{O} \rightarrow\mathcal{N}_{k}^{c+\varepsilon}\) such that_
* \(\eta(0,\cdot)=\mathrm{id}\)_,_
* \(\eta(1,\mathcal{N}_{k}^{c+\varepsilon}\setminus\mathcal{O})\subset\mathcal{N }_{k}^{c-\varepsilon}\)_,_
* \(\eta(t,\sigma(\cdot))=\sigma\eta(t,\cdot)\) _for each_ \(t\in[0,1]\)_._
With the deformation lemma, one can prove the following elementary properties of the index.
**Proposition 2**.: _Let \(A,B\subset\mathcal{N}_{k}^{+}\) be closed and \(\sigma\)-invariant._
* _If_ \(A\subset B\)_, then_ \(\gamma(A)\leq\gamma(B)\)_._
* \(\gamma(A\cup B)\leq\gamma(A)+\gamma(B)\)_._
_._
3. _If_ \(g\colon A\to\mathcal{N}_{k}^{+}\) _is continuous and_ \(\sigma\)_-equivariant, i.e.,_ \[g(\sigma(U))=\sigma g(U),\qquad\text{for all }U\in A,\] _then_ \(\gamma(A)\leq\gamma(\overline{g(A)})\)_._
4. _If_ \(\gamma(A)>1\) _and_ \(A\) _does not contain fixed points of_ \(\sigma\)_, then_ \(A\) _is an infinite set._
5. _If_ \(A\) _is compact and_ \(A\) _does not contain fixed points of_ \(\sigma\)_, then_ \(\gamma(A)<\infty\)_, and there exists a relatively open and_ \(\sigma\)_-invariant neighborhood_ \(N\) _of_ \(A\) _in_ \(\mathcal{N}_{k}^{+}\) _such that_ \(\gamma(A)=\gamma(\bar{N})\)_._
6. _If_ \(S\) _is the boundary of a bounded and_ \(\sigma\)_-invariant neighborhood of zero in an_ \(m\)_-dimensional complex normed vector space and_ \(\Psi\colon S\to\mathcal{M}\) _is a continuous map satisfying_ \(\Psi(e^{2\pi i/p}U)=\sigma(\Psi(U))\)_, then_ \(\gamma(\Psi(S))\geq m\)_._
Define
\[c_{j}=\inf\{c\in\mathbb{R}\colon\gamma(\mathcal{N}_{k}^{c})\geq j\}\]
for each positive integer \(j\).
**Proposition 3**.: _For all \(c\in\mathbb{R}\), we have \(\gamma(K_{c})<\infty\), and there exists \(\varepsilon>0\) such that_
\[\gamma(\mathcal{N}_{k}^{c+\varepsilon})\leq\gamma(\mathcal{N}_{k}^{c- \varepsilon})+\gamma(K_{c}).\]
**Proposition 4**.: _If \(c:=c_{j}=c_{j+1}=\cdots=c_{j+d}<\infty\), then \(\gamma(K_{c})\geq d+1\)._
The proofs of these propositions are similar to those in [23] and [7], and we omit them.
### Multiplicity Result
We can derive the existence of infinitely many solutions by showing each \(c_{j}<\infty\).
**Proposition 5**.: _Denote by \(\mathbb{S}^{2m-1}\) the unit sphere in \(\mathbb{C}^{m}\). For each positive integer \(m\), there exists a continuous map \(\psi\colon\mathbb{S}^{2m-1}\to\mathcal{N}_{k}^{+}\), such that_
\[\psi(e^{2\pi i/p}z)=\sigma\psi(z).\]
Proof.: Let
\[\Omega_{\mathrm{sec}}=\begin{cases}\{(r\cos\theta,r\sin\theta)\in\Omega\colon 0<\theta<\frac{2\pi}{pk}\}&\text{if }n=2,\\ \{(r\cos\theta,r\sin\theta,z)\in\Omega\colon 0<\theta<\frac{2\pi}{pk}\}&\text{if }n=3. \end{cases}\]
For each \(i\in\{1,\ldots,m\}\), \(j\in\{1,\ldots,B\}\), choose a radially symmetric domain \(\Omega_{i,j}\subset\Omega\) and a function \(\hat{U}_{i}^{(j)}\in C_{0}^{\infty}(\Omega_{\mathrm{sec}}\cap\Omega_{i,j}) \setminus\{0\}\) such that all the \(\Omega_{i,j}\)-s are pairwise disjoint.
Next we put
\[U_{i}^{(j)}=\sum_{s=1}^{k}\mathscr{R}_{2\pi s/k}\hat{U}_{i}^{(j)}\qquad\text{for $i= 1,\dots,m$, $j=1,\dots,B$.}\]
Thus, we have
* (U1) \(U_{i}^{(j)}\) is \(\mathscr{R}_{2\pi/k}\)-invariant.
* (U2) \(\operatorname{supp}\mathscr{R}_{\theta_{1}}U_{i_{1}}^{(j_{1})}\cap \operatorname{supp}\mathscr{R}_{\theta_{2}}U_{i_{2}}^{(j_{2})}=\varnothing\) for all \(\theta_{1},\theta_{2}\in\mathbb{R}\) and \((i_{1},j_{1})\neq(i_{2},j_{2})\).
* (U3) \(\operatorname{supp}\mathscr{R}_{\frac{2\pi s}{pk}}U_{i}^{(j)}\cap \operatorname{supp}\mathscr{R}_{\frac{2\pi t}{pk}}U_{i}^{(j)}=\varnothing\) if \(s\neq t\) and \(s,t\in\{0,1,\dots,p-1\}\).
Define \(\psi_{0}\colon\mathbb{S}^{2m-1}\to(H_{0}^{1}(\Omega))^{N}\) by
\[\psi_{0}(z)=\psi_{0}(r_{1}\mathrm{e}^{\mathrm{i}\theta_{1}}, \dots,r_{m}\mathrm{e}^{\mathrm{i}\theta_{m}})\] \[= \left(U^{(1)},\mathscr{R}_{\frac{2\pi}{pk}}U^{(1)},\dots, \mathscr{R}_{\frac{2\pi(p-1)}{pk}}U^{(1)};\quad\dots;\quad U^{(B)},\mathscr{R }_{\frac{2\pi}{pk}}U^{(B)},\dots,\mathscr{R}_{\frac{2\pi(p-1)}{pk}}U^{(B)}\right)\]
where
\[U^{(j)}=\sum_{i=1}^{m}r_{i}\mathscr{R}_{\theta_{i}/k}U_{i}^{(j)},\qquad\text{ for $j=1,\dots,B$.}\]
Since \(U_{i}^{(j)}\) are \(\mathscr{R}_{2\pi/k}\)-invariant, the functions \(U^{(j)}\) are well-defined. Then we have
\[\psi_{0}(\mathrm{e}^{2\pi\mathrm{i}/p}z) =\psi_{0}(r_{1}\mathrm{e}^{\mathrm{i}(\theta_{1}+2\pi/p)}, \dots,r_{m}\mathrm{e}^{\mathrm{i}(\theta_{m}+2\pi/p)})\] \[=\mathscr{R}_{2\pi/(pk)}\psi_{0}(z)\] \[=\sigma\psi_{0}(z).\]
From (U1)-(U3), it follows that all components of \(\psi_{0}(z)\) are \(\mathscr{R}_{2\pi/k}\)-invariant and their supports are pairwise disjoint, and hence that \(\psi_{0}(\mathbb{S}^{2m-1})\subset\mathcal{M}_{k}^{+}\setminus\{0\}\).
Next we construct a \(C^{1}\) map \(\Lambda\colon\psi_{0}(\mathbb{S}^{2m-1})\to\mathcal{N}_{k}^{+}\) such that \(\Lambda\sigma=\sigma\Lambda\). For each \((U_{1},\dots,U_{N})\in\psi_{0}(\mathbb{S}^{2m-1})\), note that the supports of each component are pairwise disjoint, so by a direct computation we have
\[\Lambda(U_{1},\dots,U_{N}):=\left(\frac{\|U_{1}\|_{1}}{\|U_{1}^{+}\|_{L^{4}( \Omega)}^{2}}U_{1},\dots,\frac{\|U_{N}\|_{N}}{\|U_{N}^{+}\|_{L^{4}(\Omega)}^{2 }}U_{N}\right)\in\mathcal{N}_{k}^{+}.\]
Since \(\Lambda\circ\sigma=\sigma\circ\Lambda\), i.e., the corresponding diagram commutes, the proof is completed by setting \(\psi=\Lambda\circ\psi_{0}\).
Together with Lemma 3 and (f) of Proposition 2, this proposition ensures the existence of sets with arbitrarily large index, and hence we have \(c_{j}<\infty\) for each positive integer \(j\). Now we can complete the proof of Theorem 1.
Proof of Theorem 1.: Recall that every nontrivial critical point of \(E^{+}\) is a positive classical solution of (1). Combining Proposition 2 and Proposition 4 gives the existence of infinitely many positive solutions in \(\mathcal{N}_{k}^{+}\) for each positive integer \(k\). By Remark 2 and the definition of \(\mathcal{N}_{k}^{+}\), we conclude that these solutions are \(\mathscr{R}_{2\pi/k}\)-invariant but not \(\mathscr{R}_{2\pi/(pk)}\)-invariant, and the proof is completed.
By taking \(k=p^{j}\) for \(j=0,1,2,...\), we obtain Corollary 1.
## 3 Multiple Non-radial Nodal Solutions
In this section, we set
\[E(u_{1},\ldots,u_{N})=\sum_{i=1}^{N}\left(\frac{1}{2}\|u_{i}\|_{i}^{2}-\frac{\mu_{i}}{4}\int_{\Omega}(u_{i})^{4}\right)-\frac{1}{2}\int_{\Omega}\sum_{i\neq j}^{N}\beta_{ij}u_{i}^{2}u_{j}^{2},\] \[\mathcal{M}_{k}^{\mathrm{nod}}=\{U=(u_{1},\ldots,u_{N})\in(H_{0}^{1}(\Omega))^{N}\colon u_{j}=-\mathscr{R}_{2\pi/2k}u_{j}\ \text{ for }j=1,2,\ldots,N\},\]
and the Nehari-type manifold
\[\mathcal{N}_{k}^{\mathrm{nod}}=\{U=(u_{1},\ldots,u_{N})\in\mathcal{M}_{k}^{ \mathrm{nod}}\colon u_{j}\neq 0,\partial_{j}E(U)u_{j}=0,\forall j=1,2,\ldots,N\},\]
where
\[\partial_{j}E(U)u_{j}=\int_{\Omega}(|\nabla u_{j}|^{2}+\lambda_{j}u_{j}^{2})- \mu_{j}\int_{\Omega}(u_{j})^{4}-\int_{\Omega}\sum_{\begin{subarray}{c}k=1\\ k\neq j\end{subarray}}^{N}\beta_{jk}u_{j}^{2}u_{k}^{2},\qquad\forall j=1,2, \ldots,N.\]
The functional \(E\) is of \(C^{2}\) and every nontrivial critical point of \(E\) is a classical solution of (1).
**Remark 3**.: _According to the definition of \(\mathcal{M}_{k}^{\mathrm{nod}}\), every component of \(u\in\mathcal{M}_{k}^{\mathrm{nod}}\) is \(\mathscr{R}_{2\pi/k}\)-invariant, since \(u_{j}=-\mathscr{R}_{2\pi/2k}u_{j}\) implies \(u_{j}=\mathscr{R}_{2\pi/k}u_{j}\)._
Similarly to that in the previous section, since \(\mathcal{M}_{k}^{\mathrm{nod}}\) is a fixed point space of an isometric representation of group \(\mathbb{Z}_{2k}=\langle g\mid g^{2k}=\mathrm{id}\rangle\) defined as
\[g\circ(u_{1},\ldots,u_{N})=-\mathscr{R}_{2\pi/2k}(u_{1},\ldots,u_{N}),\]
we have the following lemma.
**Lemma 4**.: _The subspace \(\mathcal{M}_{k}^{\mathrm{nod}}\) and the submanifold \(\mathcal{N}_{k}^{\mathrm{nod}}\) are natural constraints, i.e., every constrained critical point of \(E\) on them is also a critical point of \(E\)._
To use the \(\mathbb{Z}_{p}\) index theory, we need the counterparts of Propositions 1-5. These counterparts can be derived using the same method, except for Proposition 5, where the situation is more delicate. We present the details separately as follows.
**Proposition 6**.: _Denote by \(\mathbb{S}^{2m-1}\) the unit sphere in \(\mathbb{C}^{m}\). For any positive integer \(m\), there exists a continuous map \(\psi\colon\mathbb{S}^{2m-1}\to\mathcal{N}_{k}^{\mathrm{nod}}\), such that_
\[\psi(e^{2\pi i/p}z)=\sigma\psi(z).\]
Proof.: For each \(i\in\{1,\ldots,m\}\), \(j\in\{1,\ldots,B\}\) and \(\theta\in\mathbb{R}/(2\pi\mathbb{Z})\), one can find \(U_{\theta}^{i,j}\in C_{0}^{\infty}(\Omega)\setminus\{0\}\) satisfying the following conditions.
* (U1') The map \(\mathbb{R}/(2\pi\mathbb{Z})\to C_{0}^{\infty}(\Omega)\colon\theta\mapsto U_{\theta}^{i,j}\) is continuous and \(U_{\theta}^{i,j}=-\mathscr{R}_{2\pi/2k}U_{\theta}^{i,j}\).
* (U2') \(\operatorname{supp}U_{\theta_{1}}^{i_{1},j_{1}}\cap\operatorname{supp}U_{\theta_{2}}^{i_{2},j_{2}}=\varnothing\) for all \(\theta_{1},\theta_{2}\in\mathbb{R}/(2\pi\mathbb{Z})\) and \((i_{1},j_{1})\neq(i_{2},j_{2})\).
* (U3') \(\operatorname{supp}U_{\theta}^{i,j}\cap\operatorname{supp}U_{\theta+\frac{2\pi s}{p}}^{i,j}=\varnothing\) if \(\theta\in\mathbb{R}/2\pi\mathbb{Z}\) and \(s\in\{1,\ldots,p-1\}\).
To be more precise, for each \(i\in\{1,\ldots,m\}\), \(j\in\{1,\ldots,B\}\) and \(\ell\in\{1,2,\ldots,2p\}\), find a nonzero function \(\varphi_{\ell}^{i,j}\in C_{0}^{\infty}(\Omega)\) such that \(\varphi_{\ell}^{i,j}=-\mathscr{R}_{2\pi/2k}\varphi_{\ell}^{i,j}\) and all the supports of \(\varphi_{\ell}^{i,j}\) are pairwise disjoint. Then choose \(2p\) functions \(\eta_{1},\ldots,\eta_{2p}\in C_{0}^{\infty}(\mathbb{R}/(2\pi\mathbb{Z}))\) such that for each \(\ell\in\{1,2,\ldots,2p\}\)
\[\eta_{\ell}>0\text{ in }\bigg{(}\frac{(\ell-1)\pi}{p},\frac{(\ell+1)\pi}{p} \bigg{)}+2\pi\mathbb{Z}\subset\mathbb{R}/(2\pi\mathbb{Z}),\]
and vanishes outside. Then we set
\[U_{\theta}^{i,j}=\sum_{\ell=1}^{2p}\eta_{\ell}(\theta)\varphi_{\ell}^{i,j}.\]
Thus, \(U_{\theta}^{i,j}\) satisfies (U1')-(U3'). The validity of (U1') and (U2') is obvious, while (U3') follows from our choice of \(\operatorname{supp}\eta_{\ell}\). In fact, when \(\theta\in(\frac{t\pi}{p},\frac{(t+1)\pi}{p})\) for some \(t\in\{0,1,\ldots,2p-1\}\), \(\eta_{\ell}\neq 0\) if and only if \(\ell\equiv t\) or \(t+1\pmod{2p}\). Setting \(\varphi_{\ell+2p}^{i,j}=\varphi_{\ell}^{i,j}\) for \(\ell=1,2,\ldots,2p\), we conclude that \(U_{\theta}^{i,j}\neq 0\) and that
\[\operatorname{supp}U_{\theta}^{i,j}\subset\left(\operatorname{supp}\varphi_{t} ^{i,j}\cup\operatorname{supp}\varphi_{t+1}^{i,j}\right).\]
Similarly, for \(s\in\{1,2,\ldots,p-1\}\), we conclude that
\[\operatorname{supp}U_{\theta+\frac{2\pi s}{p}}^{i,j}\subset\left(\operatorname {supp}\varphi_{t+2s}^{i,j}\cup\operatorname{supp}\varphi_{t+2s+1}^{i,j}\right),\]
and hence that
\[\operatorname{supp}U_{\theta}^{i,j}\cap\operatorname{supp}U_{\theta+\frac{2\pi s}{p}}^{i,j}=\varnothing.\]
The case of \(\theta=\frac{t\pi}{p}\) for some \(t\) is similar, and we omit it.
Hence, we can define \(\psi_{0}\colon\mathbb{S}^{2m-1}\to(H_{0}^{1}(\Omega))^{N}\) by
\[\psi_{0}(z)=\psi_{0}(r_{1}\mathrm{e}^{\mathrm{i}\theta_{1}},\dots,r_{m}\mathrm{e}^{\mathrm{i}\theta_{m}})\] \[= \left(U_{1}^{(1)},\dots,U_{p}^{(1)};\quad\dots;\quad U_{1}^{(B)},\dots,U_{p}^{(B)}\right)\]
where
\[U_{\ell}^{(j)}=\sum_{i=1}^{m}r_{i}U_{\theta_{i}+\frac{2\pi\ell}{p}}^{i,j},\qquad\text{for $\ell=1,\dots,p$, $j=1,\dots,B$.}\]
We thus have
\[\psi_{0}(\mathrm{e}^{2\pi\mathrm{i}/p}z) =\psi_{0}(r_{1}\mathrm{e}^{\mathrm{i}(\theta_{1}+2\pi/p)},\dots,r_{m}\mathrm{e}^{\mathrm{i}(\theta_{m}+2\pi/p)})\] \[=\sigma\psi_{0}(z).\]
From (U1')-(U3'), it follows that \(\psi_{0}(z)\in\mathcal{M}_{k}^{\mathrm{nod}}\), and that the supports of all components of \(\psi_{0}(z)\) are pairwise disjoint. Since the \(r_{i}\)-s are not all zero for \((r_{1}\mathrm{e}^{\mathrm{i}\theta_{1}},\dots,r_{m}\mathrm{e}^{\mathrm{i}\theta_{m}})\in\mathbb{S}^{2m-1}\), each component of \(\psi_{0}(z)\) is nonzero by the definition. These facts allow one to construct a \(C^{1}\) map \(\Lambda\colon\psi_{0}(\mathbb{S}^{2m-1})\to\mathcal{N}_{k}^{\mathrm{nod}}\) such that \(\Lambda\sigma=\sigma\Lambda\). The rest of the proof is similar to that of Proposition 5.
Now we can finish the proof of Theorem 2.
Proof of Theorem 2.: The existence part of the theorem can be obtained using the same method. By the definition of \(\mathcal{M}_{k}^{\mathrm{nod}}\), \(u\in\mathcal{M}_{k}^{\mathrm{nod}}\) cannot be \(\mathscr{R}_{2\pi/(2k)}\)-invariant provided \(u\neq 0\), and this gives the last part of the theorem.
By taking \(k=2^{j}\) for \(j=0,1,2,...\), we obtain Corollary 2.
## 4 Minimal Period
In this section, we will restrict ourselves to \(2\)-coupled system (4). Without loss of generality, we assume \(\lambda=1\). Then the system becomes
\[\begin{cases}-\Delta u+u=u^{3}+\beta uv^{2}\\ -\Delta v+v=v^{3}+\beta vu^{2}.\end{cases} \tag{6}\]
We follow the notations used in Section 2. We will show that, for \(\beta<0\), each component of the least energy solution of (6) on \(\mathcal{N}_{k}\) has a minimal period of \(2\pi/k\).
### The pull-back on \(H^{1}_{0}(\Omega)\) of the Functional
Our proof starts with the following observation.
**Lemma 5**.: _The subspace \(\mathcal{M}^{+}_{k}\) is toplinear isomorphic to \(H^{1}_{0}(\Omega)\) in the sense that there is a continuous bijective linear map from \(H^{1}_{0}(\Omega)\) onto \(\mathcal{M}^{+}_{k}\)._
Proof.: Using polar coordinate system, we define \(\Psi_{k}\colon H^{1}_{0}(\Omega)\to\mathcal{M}^{+}_{k}\) by
\[(\Psi_{k}u)(r,\theta)=(u(r,k\theta),u(r,k\theta+\pi)).\]
It is easy to check that the map \(\Psi_{k}\) is well-defined and has the desired properties.
From now on, \(\Psi_{k}\) denotes the map given above. Let \(\hat{E}^{+}_{k}\), the pullback of \(E^{+}\) by \(\Psi_{k}\), be defined by \(\hat{E}^{+}_{k}=E^{+}\circ\Psi_{k}\). A direct computation gives
\[\hat{E}^{+}_{k}(u)=\int_{0}^{\infty}\int_{0}^{2\pi}\left(u_{r}^{2}+\frac{k^{2 }}{r^{2}}u_{\theta}^{2}\right)r\,\mathrm{d}\theta\,\mathrm{d}r+\int_{\Omega}u ^{2}-\frac{1}{2}\int_{\Omega}(|u^{+}|^{4}+\beta u^{2}(x)u^{2}(-x))\,\mathrm{d}x,\]
Besides, it is easily seen that, for every critical point \(U\) of \(E^{+}\), the function \(\hat{u}:=\Psi_{k}^{-1}(U)\) is a critical point of \(\hat{E}_{k}\), and that \(\hat{u}\) satisfies
\[-\Delta_{k}\hat{u}(x)+\hat{u}(x)=\hat{u}^{3}(x)+\beta\hat{u}(x)\hat{u}^{2}(-x), \tag{7}\]
where
\[\Delta_{k}u(r,\theta):=\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{ \partial u}{\partial r}\right)+\frac{k^{2}}{r^{2}}\frac{\partial^{2}u}{ \partial\theta^{2}},\qquad\forall u\in C^{2}(\Omega), \tag{8}\]
writing in polar coordinates.
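For the reader's convenience, we record the short chain-rule computation behind (7)-(8). If \(u_{1}(r,\theta)=\hat{u}(r,k\theta)\), then, in polar coordinates,

\[\Delta u_{1}(r,\theta)=\frac{1}{r}\frac{\partial}{\partial r}\left(r\,\hat{u}_{r}(r,k\theta)\right)+\frac{k^{2}}{r^{2}}\,\hat{u}_{\theta\theta}(r,k\theta)=\left(\Delta_{k}\hat{u}\right)(r,k\theta),\]

which is exactly how the operator \(\Delta_{k}\) arises when the equation is pulled back through \(\Psi_{k}\).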
Let \(\hat{\mathcal{N}}^{+}_{k}=\Psi_{k}^{-1}(\mathcal{N}^{+}_{k})\) be the Nehari-type manifold associated with \(\hat{E}^{+}_{k}\). We have the following lemma.
**Lemma 6**.: _Assume \(\beta<0\). Let \(u\in H^{1}_{0}(\Omega)\) satisfy_
\[\int_{\Omega}|u^{+}|^{4}+\beta\int_{\Omega}u^{2}(x)u^{2}(-x)\,\mathrm{d}x>0. \tag{9}\]
_Then there is a unique \(\lambda>0\) such that \(\lambda u\in\hat{\mathcal{N}}^{+}_{k}\) and_
\[\hat{E}^{+}_{k}(\lambda u)=\sup_{t>0}\hat{E}^{+}_{k}(tu).\]
Proof.: Inequality (9) implies
\[\lim_{t\to\infty}\hat{E}^{+}_{k}(tu)=-\infty.\]
On the other hand, by Sobolev inequality and Cauchy-Schwarz inequality,
\[\frac{1}{2}\int_{\Omega}(|u^{+}|^{4}+\beta u^{2}(x)u^{2}(-x))\,\mathrm{d}x\leq C \left(\int_{\Omega}\lvert\nabla u\rvert^{2}\,\mathrm{d}x\right)^{2}.\]
This implies \(0\) is an isolated local minimum of \(\hat{E}\). Hence, the function \(t\mapsto\hat{E}(tu)\) achieves its maximum at some \(\lambda>0\). An easy computation shows that
\[\lambda=\sqrt{\frac{\int_{0}^{1}\int_{0}^{2\pi}\left(u_{r}^{2}+\frac{k^{2}}{r^ {2}}u_{\theta}^{2}\right)r\,\mathrm{d}\theta\,\mathrm{d}r+\int_{\Omega}u^{2}} {\int_{\Omega}\lvert u^{+}\rvert^{4}+\beta\int_{\Omega}u^{2}(x)u^{2}(-x)\, \mathrm{d}x}},\]
which is the unique critical point of \(t\mapsto\hat{E}(tu)\) on \((0,+\infty)\), and thus \(\lambda u\in\hat{\mathcal{N}}_{k}\).
### The Symmetry of the Minimizer
We begin with the existence of the minimizer of \(\hat{E}_{k}^{+}\) on \(\hat{\mathcal{N}}_{k}^{+}\).
**Proposition 7**.: _Assume \(\beta<0\). For each positive integer \(k\), there exists \((u,v)\in\mathcal{N}_{k}^{+}\) such that \(E^{+}(u,v)=\inf_{\mathcal{N}_{k}}E^{+}\), i.e.,_
\[\hat{E}_{k}^{+}(\Psi^{-1}(u,v))=\inf_{\hat{\mathcal{N}}_{k}^{+}}\hat{E}_{k}^{+}.\]
Proof.: The proof is similar to that of [20, Theorem 1.1] and will be omitted.
Recall that a function \(u\colon\Omega\to\mathbb{R}\) is said to be _foliated Schwarz symmetric_ with respect to \(e\in\partial B_{1}(0)\) if for a.e. \(r>0\) such that \(\partial B_{r}(0)\subset\Omega\) and for every \(c\in\mathbb{R}\) the set \(\{x\in\partial B_{r}(0)\colon u(x)\geq c\}\) is either equal to \(\partial B_{r}(0)\) or to a geodesic ball in \(\partial B_{r}(0)\) centered at \(re\). In other words, \(u\) is foliated Schwarz symmetric with respect to \(e\in\partial B_{1}(0)\) if \(u(x)\) depends only on \((r,\theta)=(\lvert x\rvert,\arccos(x\cdot e/\lvert x\rvert))\) and is non-increasing in \(\theta\).
We will show that the minimizer of \(\hat{E}_{k}^{+}\) on \(\hat{\mathcal{N}}_{k}^{+}\) is foliated Schwarz symmetric. For this, we introduce the notion of polarization, following Tavares and Weth [20]. The classical works here are [24, 29]. Define
\[\mathcal{H}_{0}=\{H\subset\mathbb{R}^{n}\colon\text{ $H$ is a closed half-space in $\mathbb{R}^{n}$ and $0\in\partial H$}\}.\]
For each \(H\in\mathcal{H}_{0}\), we use \(\sigma_{H}\) to denote the reflection with respect to \(\partial H\), and for \(u\in H^{1}(\Omega)\) we define \(u_{H}\) by
\[u_{H}(x)=\begin{cases}\max\{u(x),u(\sigma_{H}(x))\}&x\in\Omega\cap H,\\ \min\{u(x),u(\sigma_{H}(x))\}&x\in\Omega\setminus H.\end{cases}\]
We can now state our main tool, which is an analogue of [20, Theorem 4.3].
**Lemma 7**.: _Assume \(\beta<0\). Let \(\hat{u}\in C^{2}(\Omega)\cap C^{1}(\bar{\Omega})\) be a classical solution of Equation (7). If \(\hat{u}_{H}\) is still a classical solution of (7) for every \(H\in\mathcal{H}_{0}\), then \(\hat{u}\) is foliated Schwarz symmetric._
One can rewrite Equation (7) as
\[\begin{cases}-\Delta_{k}\hat{u}+\hat{u}=\hat{u}^{3}+\beta\hat{u} \hat{v}^{2},\\ -\Delta_{k}\hat{v}+\hat{v}=\hat{v}^{3}+\beta\hat{v}\hat{u}^{2},\\ \hat{u}(x)=\hat{v}(-x)\qquad\text{for all }x\in\Omega.\end{cases}\]
Since the operator \(\Delta_{k}\) given by (8) is still a uniformly elliptic operator for which the strong maximum principle holds, the same proof as in [20, Theorem 4.3] still works.
**Lemma 8**.: _Assume \(\beta<0\). For each \(u\in H^{1}_{0}(\Omega)\) and \(H\in\mathcal{H}_{0}\), \(\hat{E}^{+}_{k}(u_{H})\leq\hat{E}^{+}_{k}(u)\)._
Proof.: We write
\[G(u)=2\int_{0}^{1}\int_{0}^{2\pi}\left(u_{r}^{2}+\frac{k^{2}}{r^ {2}}u_{\theta}^{2}\right)r\,\mathrm{d}\theta\,\mathrm{d}r,\] \[P(u)=\int_{\Omega}2u^{2}-\frac{1}{2}|u^{+}|^{4}\,\mathrm{d}x, \qquad\text{and }Q(u)=-\beta\int_{\Omega}u^{2}(x)u^{2}(-x)\,\mathrm{d}x.\]
Without loss of generality, we assume \(H=\{(x,y)\colon x\geq 0\}\). Using polar coordinates, we have
\[u_{H}(r,\theta)=\begin{cases}\max\{u(r,\theta),u(r,-\theta)\}& \theta\in[0,\pi],\\ \min\{u(r,\theta),u(r,-\theta)\}&\theta\in[-\pi,0].\end{cases}\]
Thus, we have \(G(u_{H})=G(u)\) by a direct computation. It follows from a change of variables formula that \(P(u_{H})=P(u)\). It remains to show \(Q(u_{H})\leq Q(u)\). Indeed, by the rearrangement inequality and the definition of \(u_{H}\), we have
\[\int_{\Omega}u_{H}^{2}(x)u_{H}^{2}(-x)\,\mathrm{d}x\] \[= \int_{\Omega\cap H}u_{H}^{2}(x)u_{H}^{2}(-x)+u_{H}^{2}(\sigma_{H} (x))u_{H}^{2}(-\sigma_{H}(x))\,\mathrm{d}x\] \[\leq \int_{\Omega\cap H}u^{2}(x)u^{2}(-x)+u^{2}(\sigma_{H}(x))u^{2}(- \sigma_{H}(x))\,\mathrm{d}x\] \[= \int_{\Omega}u^{2}(x)u^{2}(-x)\,\mathrm{d}x.\]
Combining these gives \(\hat{E}^{+}_{k}(u_{H})\leq\hat{E}^{+}_{k}(u)\).
**Lemma 9**.: _Assume \(\beta<0\). Let \(\hat{u}\) be the minimizer of \(\hat{E}_{k}\) on \(\hat{\mathcal{N}}_{k}\). Then \(\hat{u}_{H}\) is still in \(\hat{\mathcal{N}}_{k}\), i.e., \(\Psi_{k}\hat{u}_{H}\in\mathcal{N}_{k}\)._
Proof.: The condition \(\hat{u}\in\hat{\mathcal{N}}_{k}\) implies that
\[\int_{\Omega}\lvert\hat{u}^{+}\rvert^{4}+\beta\int_{\Omega}\hat{u}^{2}(x)\hat{ u}^{2}(-x)\,\mathrm{d}x=\int_{\Omega}k^{2}\lvert\nabla\hat{u}\rvert^{2}+\hat{u}^{2 }>0\]
From the third part of the proof of Lemma 8, we conclude that
\[\int_{\Omega}\hat{u}_{H}^{2}(x)\hat{u}_{H}^{2}(-x)\,\mathrm{d}x\leq\int_{ \Omega}\hat{u}^{2}(x)\hat{u}^{2}(-x)\,\mathrm{d}x,\]
and hence that
\[\int_{\Omega}\lvert\hat{u}_{H}^{+}\rvert^{4}+\beta\int_{\Omega}\hat{u}_{H}^{ 2}(x)\hat{u}_{H}^{2}(-x)\,\mathrm{d}x>0.\]
Thus, by Lemma 6, there is a \(\lambda\) such that \(\lambda\hat{u}_{H}\in\hat{\mathcal{N}}_{k}\) and
\[\hat{E}_{k}(\lambda\hat{u}_{H})=\sup_{t>0}\hat{E}_{k}(t\hat{u}_{H}).\]
Write \(m=\hat{E}_{k}(\hat{u})=\inf_{\hat{\mathcal{N}}_{k}}\hat{E}_{k}\). Since \((\lambda\hat{u})_{H}=\lambda\hat{u}_{H}\), we have
\[m\leq\hat{E}_{k}(\lambda\hat{u}_{H})\leq\hat{E}_{k}(\lambda\hat{u})\leq\hat{E}_{k}(\hat{u})=m.\]
By the uniqueness given by Lemma 6, we have \(\lambda=1\). Hence, \(\hat{u}_{H}\in\hat{\mathcal{N}}_{k}\).
**Theorem 4**.: _The minimizer \(\hat{u}\) of \(\hat{E}_{k}\) on \(\hat{\mathcal{N}}_{k}\) is foliated Schwarz symmetric._
Proof.: By Lemmas 8 and 9, we see that, for every \(H\in\mathcal{H}_{0}\), \(\hat{u}_{H}\) is still a minimizer of \(\hat{E}_{k}\) on \(\hat{\mathcal{N}}_{k}\), and hence a classical solution of (7). Then from Lemma 7, we conclude that \(\hat{u}\) is foliated Schwarz symmetric with respect to some point.
**Corollary 3**.: _The minimizer \(\hat{u}\) of \(\hat{E}_{k}\) on \(\hat{\mathcal{N}}_{k}\) has minimal period \(2\pi\) and each component of \(U=\Psi_{k}\hat{u}\) has minimal period \(2\pi/k\)._
## 5 Extensions
In the preceding sections, for clarity and simplicity of presentation, we have focused on radially symmetric domains and solutions invariant under rotation symmetries. Our methods can be adapted to more general domains, as well as to solutions invariant under more general group actions. We list some extensions here, with the proofs omitted or only sketched.
**Remark 4**.: _Checking through the proofs in Sections 2 and 3, we see that, in Theorems 1 and 2, for dimension \(n=3\), we only need \(\Omega\) to be invariant under rotation symmetries with respect to some fixed axis; e.g., \(\Omega\) can be a cylinder-type or cone-type domain. We leave the precise statements to the interested reader._
**Remark 5**.: _The non-radial solutions we have constructed so far are all invariant under some rotation-type symmetries. One may wonder whether the same can be done for functions invariant under other types of groups (as subgroups of \(O(n)\)). It turns out that this issue is more delicate than one might think. We can do this for positive non-radial solutions, but we are not sure about nodal solutions. We discuss this next; it was motivated by the work of [27], where, again, within each symmetric class of functions, the authors prove the existence of a ground state which is a non-radial positive solution. Our goal is to show the existence of an infinite sequence of non-radial positive solutions that share the same group invariance. We sketch a proof here._
Let \(\mathcal{G}\) be a nontrivial subgroup of \(O(n)\). We denote by \(\mathcal{G}x\) the set \(\{gx\in\mathbb{R}^{n}\colon g\in\mathcal{G}\}\), the orbit of \(x\) under \(\mathcal{G}\). A function \(u\colon\Omega\to\mathbb{R}\) is said to be \(\mathcal{G}\)-symmetric if \(u=u\circ g\) for all \(g\in\mathcal{G}\).
**Definition 2**.: _Let \(\mathfrak{b}\in O(n)\), and let \(\mathcal{G}\) be a nontrivial compact subgroup of \(O(n)\). We call the pair \((\mathcal{G},\mathfrak{b})\) admissible if_
* (a) \(\mathfrak{b}\) _is contained in the normalizer of_ \(\mathcal{G}\)_, and_ \(\mathfrak{b}^{p}\in\mathcal{G}\)_._
* (b) _There exists_ \(x_{0}\in\mathbb{R}^{n}\setminus\{0\}\) _such that_ \[\mathcal{G}(\mathfrak{b}^{s}x_{0})\cap\mathcal{G}(\mathfrak{b}^{t}x_{0})=\varnothing\qquad\text{for }s\not\equiv t\pmod{p}.\]
**Remark 6**.: _In [27], Wei and Weth were the first to introduce the definition of an admissible pair, in order to study the existence of solutions of a coupled system in \(\mathbb{R}^{N}\) that are invariant under a group \(\mathcal{G}\). Our assumptions are somewhat weaker than those in [27], but this is sufficient for our purpose._
Condition (b) implies that \(\mathfrak{b},\mathfrak{b}^{2},\ldots,\mathfrak{b}^{p-1}\notin\mathcal{G}\) and \(\mathfrak{b}^{p}\in\mathcal{G}\). Condition (a) ensures that \(\mathcal{G}_{\mathfrak{b}}=\mathcal{G}\cup\mathfrak{b}\mathcal{G}\cup\cdots \cup\mathfrak{b}^{p-1}\mathcal{G}\) is a group and therefore makes it legitimate to define an action \(\star\) of \(\mathcal{G}_{\mathfrak{b}}\) on \((H^{1}_{0}(\Omega))^{N}\) as
\[\mathfrak{b}\star(u_{1},\ldots,u_{N})=\sigma(u_{1}\circ\mathfrak{ b}^{-1},\ldots,u_{N}\circ\mathfrak{b}^{-1}),\] \[\mathfrak{g}\star(u_{1},\ldots,u_{N})=(u_{1}\circ\mathfrak{g}^{- 1},\ldots,u_{N}\circ\mathfrak{g}^{-1})\qquad\text{for $\mathfrak{g}\in\mathcal{G}$}.\]
The fixed point space of \(\star\) is precisely
\[\mathcal{M}_{(\mathcal{G},\mathfrak{b})}=\{(u_{1},\ldots,u_{N}) \in(H^{1}_{0}(\Omega))^{N} \colon u_{i}\text{ is $\mathcal{G}$-symmetric for $i=1,2,\ldots,N$}\] \[\text{ and $u_{j+1}=u_{j}\circ\mathfrak{b}^{-1}$ for $p\nmid j$}\}.\]
We proceed by defining the Nehari-type manifold in \(\mathcal{M}_{(\mathcal{G},\mathfrak{b})}\) by
\[\mathcal{N}_{(\mathcal{G},\mathfrak{b})}=\{U=(u_{1},\ldots,u_{N})\in\mathcal{M}_{ (\mathcal{G},\mathfrak{b})}\colon u_{j}\neq 0,\partial_{j}E^{+}(U)u_{j}=0, \forall j=1,2,\ldots,N\}.\]
**Example 1**.:
1. _Let_ \(k\) _be a positive integer and let_ \(\mathfrak{b}=R_{2\pi/(pk)}\) _(we recall that_ \(R_{\theta}\) _is defined by (_3_)). Put_ \(\mathcal{G}_{0}=\{\mathrm{id},\mathfrak{b}^{p},\ldots,\mathfrak{b}^{(k-1)p}\}\)_. Then_ \(\mathcal{G}_{0}\) _is a finite subgroup of_ \(O(n)\) _and_ \((\mathcal{G}_{0},\mathfrak{b})\) _clearly satisfies the admissibility condition (a). Taking_ \(x_{0}=(1,0,0)\)_, one can easily check that (b) also holds. In this case,_ \(\mathcal{M}_{(\mathcal{G}_{0},\mathfrak{b})}\) _is precisely the subspace_ \(\mathcal{M}_{k}^{+}\) _we have defined in Section_ 2_._
2. _We use the notation from the previous example. Let_ \(\mathcal{G}\) _be the group generated by the reflection_ \(F\colon(x,y,z)\mapsto(x,y,-z)\) _and elements in_ \(\mathcal{G}_{0}\)_. Since_ \(\mathfrak{b}\) _commutes with_ \(F\)_, and_ \(\mathcal{G}x_{0}=\mathcal{G}_{0}x_{0}\)_,_ \((\mathcal{G},\mathfrak{b})\) _is also an admissible pair._
3. _Let_ \(n=3\) _and_ \(N=2\)_. Set_ \(\mathcal{G}=\{R_{\theta}\colon\theta\in[0,2\pi)\}\) _and let_ \(\mathfrak{b}\) _be the reflection_ \((x,y,z)\mapsto(x,y,-z)\)_. Since_ \(\mathfrak{b}R_{\theta}=R_{\theta}\mathfrak{b}\)_, the admissibility condition (a) is satisfied for_ \((\mathcal{G},\mathfrak{b})\)_. Taking_ \(x_{0}=(0,0,1)\)_, we have_ \(\mathcal{G}x_{0}=\{(0,0,1)\}\)_,_ \(\mathcal{G}(\mathfrak{b}x_{0})=\{(0,0,-1)\}\) _and hence_ \(\mathcal{G}x_{0}\cap\mathcal{G}(\mathfrak{b}x_{0})=\varnothing\)_. Therefore,_ \((\mathcal{G},\mathfrak{b})\) _is an admissible pair._
4. _(An example from_ _[_27_]__) Let_ \(n=3\) _and_ \(N=2\)_. Consider the tetrahedral group_ \(\mathcal{G}\) _generated by the coordinate permutations_ \((x_{1},x_{2},x_{3})\mapsto(x_{\pi_{1}},x_{\pi_{2}},x_{\pi_{3}})\) _and the map_ \((x_{1},x_{2},x_{3})\mapsto(x_{1},-x_{2},-x_{3})\)_. Let_ \(\mathfrak{b}\) _be the reflection_ \(x\mapsto-x\)_. Then_ \(\mathfrak{b}\) _commutes with each elements of_ \(\mathcal{G}\)_. Taking_ \(x_{0}=(1,1,1)\) _we have_ \[\mathcal{G}x_{0}=\{(1,1,1),(-1,-1,1),(1,-1,-1),(-1,1,-1)\},\] _and_ \[\mathcal{G}(\mathfrak{b}x_{0})=\{(-1,-1,-1),(1,1,-1),(-1,1,1),(1,-1,1)\}.\] _Hence_ \(\mathcal{G}x_{0}\cap\mathcal{G}(\mathfrak{b}x_{0})=\varnothing\) _and_ \((\mathcal{G},\mathfrak{b})\) _is an admissible pair._
Similarly to Section 2, we have the following lemma.
**Lemma 10**.: _The subspace \(\mathcal{M}_{(\mathcal{G},\mathfrak{b})}\) and the submanifold \(\mathcal{N}_{(\mathcal{G},\mathfrak{b})}\) are natural constraints, i.e., every constrained critical point of \(E\) on them is also a critical point of \(E\)._
Likewise, we only need to prove the following proposition.
**Proposition 8**.: _Let \((\mathcal{G},\mathfrak{b})\) be an admissible pair. Denote by \(\mathbb{S}^{2m-1}\) the unit sphere in \(\mathbb{C}^{m}\). For each positive integer \(m\), there exists a continuous map \(\psi\colon\mathbb{S}^{2m-1}\to\mathcal{N}_{(\mathcal{G},\mathfrak{b})}\), such that_
\[\psi(e^{2\pi i/p}z)=\sigma\psi(z).\]
Proof.: For each \(i\in\{1,\ldots,m\}\), \(j\in\{1,\ldots,B\}\) and \(\theta\in\mathbb{R}/(2\pi\mathbb{Z})\), one can find \(U_{\theta}^{i,j}\in C_{0}^{\infty}(\Omega)\setminus\{0\}\) satisfying the following conditions.
* (U1\({}^{\prime\prime}\)) The map \(\mathbb{R}/(2\pi\mathbb{Z})\to C^{\infty}_{0}(\Omega)\colon t\mapsto U^{i,j}_{t}\) is continuous and \(U^{i,j}_{t}\) is \(\mathcal{G}\)-symmetric.
* (U2\({}^{\prime\prime}\)) \(\operatorname{supp}U^{i_{1},j_{1}}_{\theta_{1}}\cap\operatorname{supp}U^{i_{2},j_{2}}_{\theta_{2}}=\varnothing\) for all \(\theta_{1},\theta_{2}\in\mathbb{R}/(2\pi\mathbb{Z})\) and \((i_{1},j_{1})\neq(i_{2},j_{2})\).
* (U3\({}^{\prime\prime}\)) \(\operatorname{supp}U^{i,j}_{\theta}\cap\operatorname{supp}U^{i,j}_{\theta+ \frac{2\pi s}{p}}=\varnothing\) and \(U^{i,j}_{\theta+\frac{2\pi s}{p}}=U^{i,j}_{\theta}\circ(\mathfrak{b}^{-1})^{s}\) if \(\theta\in\mathbb{R}/2\pi\mathbb{Z}\) and \(s\in\{1,\ldots,p-1\}\).
To be more precise, for each \(i\in\{1,\ldots,m\}\), \(j\in\{1,\ldots,B\}\) and \(\ell\in\{1,2,\ldots,p\}\), choose radially symmetric domains \(\Omega_{i,j},\hat{\Omega}_{i,j}\subset\Omega\) such that all the \(\Omega_{i,j}\)-s and \(\hat{\Omega}_{i,j}\)-s are pairwise disjoint. Let \(x_{0}\) be given by (b) in Definition 2. Choose \(\lambda^{i,j},\hat{\lambda}^{i,j}\in\mathbb{R}\) such that \(\lambda^{i,j}x_{0}\in\Omega_{i,j}\) and \(\hat{\lambda}^{i,j}x_{0}\in\hat{\Omega}_{i,j}\). Since \(\mathcal{G}\subset O(n)\) and \(\mathfrak{b}\in O(n)\), it follows that \(\mathcal{G}(\mathfrak{b}^{s}\lambda^{i,j}x_{0})\subset\Omega_{i,j}\) and \(\mathcal{G}(\mathfrak{b}^{s}\hat{\lambda}^{i,j}x_{0})\subset\hat{\Omega}_{i,j}\) for \(s=0,1,\ldots,p\). The set \(\mathcal{G}x_{0}\) is compact, and, in consequence, there exists a \(\delta>0\) such that
\[N_{\delta}(\mathcal{G}(\mathfrak{b}^{s}\lambda^{i,j}x_{0})) \cap N_{\delta}(\mathcal{G}(\mathfrak{b}^{t}\lambda^{i,j}x_{0}))=\varnothing, \tag{10}\] \[\text{and }N_{\delta}(\mathcal{G}(\mathfrak{b}^{s}\hat{\lambda}^{i,j}x_{0}))\cap N_{\delta}(\mathcal{G}(\mathfrak{b}^{t}\hat{\lambda}^{i,j}x_{0} ))=\varnothing\]
for \(s\not\equiv t\pmod{p}\) and all \(i,j\), by the assumption (b) of admissible pair in Definition 2. Here and subsequently,
\[N_{\delta}(A)=\{x\in\mathbb{R}^{n}\colon d(x,A)<\delta\}\qquad\text{for }A \subset\mathbb{R}^{n}.\]
We can also choose \(\delta\) sufficiently small such that
\[N_{\delta}(\mathcal{G}(\mathfrak{b}^{s}\lambda^{i,j}x_{0}))\subset\Omega_{i,j }\text{ and }N_{\delta}(\mathcal{G}(\mathfrak{b}^{s}\hat{\lambda}^{i,j}x_{0})) \subset\hat{\Omega}_{i,j}.\]
Then we can find a nonzero function \(\varphi^{i,j}\in H^{1}_{0}(N_{\delta}(\mathcal{G}\lambda^{i,j}x_{0}))\) and \(\hat{\varphi}^{i,j}\in H^{1}_{0}(N_{\delta}(\mathcal{G}\hat{\lambda}^{i,j}x_{ 0}))\) such that \(\varphi^{i,j}\) and \(\hat{\varphi}^{i,j}\) are \(\mathcal{G}\)-symmetric. Indeed, let \(\varphi_{0}\in C^{\infty}_{0}(B_{\delta}(\lambda^{i,j}x_{0}))\) and let
\[\varphi^{i,j}(x)=\sup_{\mathfrak{g}\in\mathcal{G}}\varphi_{0}(\mathfrak{g}^{- 1}x)\qquad\forall x\in\Omega.\]
Since the upper envelope (pointwise maximum) of functions preserves the property of being Lipschitz, \(\varphi^{i,j}\) is at least Lipschitz and hence in \(H^{1}_{0}(N_{\delta}(\mathcal{G}\lambda^{i,j}x_{0}))\). The construction for \(\hat{\varphi}^{i,j}\) is the same.
Let \(\eta\in C^{\infty}_{0}(\mathbb{R}/(2\pi\mathbb{Z}))\) be a function such that
\[\eta>0\text{ in }\left(-\frac{\pi}{p},\frac{\pi}{p}\right)+2\pi\mathbb{Z} \subset\mathbb{R}/(2\pi\mathbb{Z}),\]
and vanishes outside. Set
\[\eta_{\iota}(\theta)=\eta\left(\theta-\frac{\iota\pi}{p}\right)\]
for each \(\iota\in\{1,2,\ldots,2p\}\). Thus,
\[\operatorname{supp}\eta_{1}=\left[0,\frac{2\pi}{p}\right]+2\pi\mathbb{Z},\ \ \operatorname{supp}\eta_{2}=\left[\frac{\pi}{p},\frac{3\pi}{p}\right]+2\pi \mathbb{Z},\ldots,\ \operatorname{supp}\eta_{2p}=\left[-\frac{\pi}{p},\frac{\pi}{p}\right]+2\pi \mathbb{Z}.\]
Then we set \(\varphi_{\ell}^{i,j}=\varphi^{i,j}\circ(\mathfrak{b}^{-1})^{\ell}\), \(\hat{\varphi}_{\ell}^{i,j}=\hat{\varphi}^{i,j}\circ(\mathfrak{b}^{-1})^{\ell}\) and
\[U_{\theta}^{i,j}=\sum_{\ell=1}^{p}\eta_{2\ell-1}(\theta)\varphi_{\ell}^{i,j}+ \eta_{2\ell}(\theta)\hat{\varphi}_{\ell}^{i,j}.\]
Thus, \(U_{\theta}^{i,j}\) satisfies (U1\({}^{\prime\prime}\))-(U3\({}^{\prime\prime}\)). The validity of (U1\({}^{\prime\prime}\)) and (U2\({}^{\prime\prime}\)) is obvious, while (U3\({}^{\prime\prime}\)) follows from our choice of \(\eta_{\ell}\) and \(\varphi_{\ell}^{i,j}\). In fact, when \(\theta\in(\frac{t\pi}{p},\frac{(t+1)\pi}{p})\) for some \(t\in\{0,1,\ldots,2p-1\}\), \(\eta_{\iota}\neq 0\) if and only if \(\iota\equiv t\) or \(t+1\ (\operatorname{mod}\,2p)\). Setting \(\eta_{\iota+2p}=\eta_{\iota}\), \(\varphi_{\ell+p}=\varphi_{\ell}\) and \(\hat{\varphi}_{\ell+p}=\hat{\varphi}_{\ell}\), we conclude that
\[U_{\theta}^{i,j}=\eta_{t}(\theta)\varphi_{\ell_{1}}^{i,j}+\eta_{t+1}(\theta) \hat{\varphi}_{\ell_{2}}^{i,j}\neq 0,\]
where \(\ell_{1}=\lceil t/2\rceil\) and \(\ell_{2}=\lceil(t+1)/2\rceil\). Similarly, for \(s\in\{1,2,\ldots,p-1\}\), we conclude that
\[U_{\theta+\frac{2\pi s}{p}}^{i,j} =\eta_{t+2s}\left(\theta+\frac{2\pi s}{p}\right)\varphi_{\ell_{1} }^{i,j}+\eta_{t+2s+1}\left(\theta+\frac{2\pi s}{p}\right)\hat{\varphi}_{\ell_{ 2}}^{i,j},\] \[=\eta_{t}(\theta)\varphi_{\ell_{1}+s}^{i,j}+\eta_{t+1}(\theta) \hat{\varphi}_{\ell_{2}+s}^{i,j}\] \[=\left(\eta_{t}(\theta)\varphi_{\ell_{1}}^{i,j}+\eta_{t+1}(\theta )\hat{\varphi}_{\ell_{2}}^{i,j}\right)\circ(\mathfrak{b}^{-1})^{s}.\]
From (10), it follows that
\[\operatorname{supp}U_{\theta}^{i,j}\cap\operatorname{supp}U_{\theta+\frac{2 \pi s}{p}}^{i,j}=\varnothing.\]
The case that \(\theta=\frac{t\pi}{p}\) for some \(t\) is similar, and we omit it.
Hence, we can define \(\psi_{0}\colon\mathbb{S}^{2m-1}\to(H_{0}^{1}(\Omega))^{N}\) by
\[\psi_{0}(z)=\psi_{0}(r_{1}\mathrm{e}^{\mathrm{i}\theta_{1}}, \ldots,r_{m}\mathrm{e}^{\mathrm{i}\theta_{m}})\] \[= \left(U_{1}^{(1)},\ldots,U_{p}^{(1)};\quad\ldots;\quad U_{1}^{(B) },\ldots,U_{p}^{(B)}\right)\]
where
\[U_{\ell}^{(j)}=\sum_{i=1}^{m}r_{i}U_{\theta_{i}+\frac{2\pi\ell}{p}}^{i,j},\qquad \text{for }\ell=1,\ldots,p,\,j=1,\ldots,B.\]
We thus have
\[\psi_{0}(\mathrm{e}^{2\pi\mathrm{i}/p}z) =\psi_{0}(r_{1}\mathrm{e}^{\mathrm{i}(\theta_{1}+2\pi/p)}, \ldots,r_{m}\mathrm{e}^{\mathrm{i}(\theta_{m}+2\pi/p)})\] \[=\sigma\psi_{0}(z).\]
From (U1\({}^{\prime\prime}\))-(U3\({}^{\prime\prime}\)), it follows that \(\psi_{0}(z)\in\mathcal{M}_{(\mathcal{G},\mathfrak{b})}\), and that the supports of all components of \(\psi_{0}(z)\) are pairwise disjoint. Since \(r_{i}\)-s are not all zero for \((r_{1}\mathrm{e}^{\mathrm{i}\theta_{1}},\ldots,r_{m}\mathrm{e}^{\mathrm{i} \theta_{m}})\in\mathbb{S}^{2m-1}\), each component of \(\psi_{0}(z)\) is nonzero by the definition. These facts allow one to construct a \(C^{1}\) map \(\Lambda\colon\psi_{0}(\mathbb{S}^{2m-1})\to\mathcal{N}_{(\mathcal{G},\mathfrak{ b})}\) such that \(\Lambda\sigma=\sigma\Lambda\). Then the rest of the proof runs as before.
With these preparations and using the framework in Section 2, we can prove the following theorem.
**Theorem 5**.: _Let \((\mathcal{G},\mathfrak{b})\) be an admissible pair. Then under the assumptions (A)-(D), Problem (1) admits infinitely many positive solutions of the form \((u_{1},\ldots,u_{N})\) such that \(u_{i}\) is \(\mathcal{G}\)-symmetric but not \(\mathfrak{b}\)-symmetric for \(i=1,2,\ldots,N\) and \(u_{j+1}=u_{j}\circ\mathfrak{b}^{-1}\) for \(p\nmid j\). In particular, each \(u_{i}\) is non-radial._
Using the third example in Example 1, we have the following result.
**Corollary 4**.: _Assume \(n=3\) and \(N=2\). Then under the assumptions (A)-(D), Problem (1) admits an unbounded sequence \(S=\{(u_{1,l},\cdots,u_{N,l})\colon l\in\mathbb{N}\}\) of positive solutions such that each \(u_{i,l}\) is radial in \((x_{1},x_{2})\) and even in \(x_{3}\), but is not radial in \((x_{1},x_{2},x_{3})\)._
**Data availability statement.** Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
|
2305.20015 | AI for Low-Code for AI | Low-code programming allows citizen developers to create programs with
minimal coding effort, typically via visual (e.g. drag-and-drop) interfaces. In
parallel, recent AI-powered tools such as Copilot and ChatGPT generate programs
from natural language instructions. We argue that these modalities are
complementary: tools like ChatGPT greatly reduce the need to memorize large
APIs but still require their users to read (and modify) programs, whereas
visual tools abstract away most or all programming but struggle to provide easy
access to large APIs. At their intersection, we propose LowCoder, the first
low-code tool for developing AI pipelines that supports both a visual
programming interface (LowCoder_VP) and an AI-powered natural language
interface (LowCoder_NL). We leverage this tool to provide some of the first
insights into whether and how these two modalities help programmers by
conducting a user study. We task 20 developers with varying levels of AI
expertise with implementing four ML pipelines using LowCoder, replacing the
LowCoder_NL component with a simple keyword search in half the tasks. Overall,
we find that LowCoder is especially useful for (i) Discoverability: using
LowCoder_NL, participants discovered new operators in 75% of the tasks,
compared to just 32.5% and 27.5% using web search or scrolling through options
respectively in the keyword-search condition, and (ii) Iterative Composition:
82.5% of tasks were successfully completed and many initial pipelines were
further successfully improved. Qualitative analysis shows that AI helps users
discover how to implement constructs when they know what to do, but still fails
to support novices when they lack clarity on what they want to accomplish.
Overall, our work highlights the benefits of combining the power of AI with
low-code programming. | Nikitha Rao, Jason Tsay, Kiran Kate, Vincent J. Hellendoorn, Martin Hirzel | 2023-05-31T16:44:03Z | http://arxiv.org/abs/2305.20015v1 | # AI for Low-Code for AI
###### Abstract.
Low-code programming allows citizen developers to create programs with minimal coding effort, typically via visual (e.g. drag-and-drop) interfaces. In parallel, recent AI-powered tools such as Copilot and ChatGPT generate programs from natural language instructions. We argue that these modalities are complementary: tools like ChatGPT greatly reduce the need to memorize large APIs but still require their users to read (and modify) programs, whereas visual tools abstract away most or all programming but struggle to provide easy access to large APIs. At their intersection, we propose LowCoder, the first low-code tool for developing AI pipelines that supports both a visual programming interface (LowCoderVP) and an AI-powered natural language interface (LowCoderNL). We leverage this tool to provide some of the first insights into whether and how these two modalities help programmers by conducting a user study. We task 20 developers with varying levels of AI expertise with implementing four ML pipelines using LowCoder, replacing the LowCoderNL component with a simple keyword search in half the tasks. Overall, we find that LowCoder is especially useful for (i) Discoverability: using LowCoderNL, participants discovered new operators in 75% of the tasks, compared to just 32.5% and 27.5% using web search or scrolling through options respectively in the keyword-search condition, and (ii) Iterative Composition: 82.5% of tasks were successfully completed and many initial pipelines were further successfully improved. Qualitative analysis shows that AI helps users discover _how_ to implement constructs when they know _what_ to do, but still fails to support novices when they lack clarity on what they want to accomplish. Overall, our work highlights the benefits of combining the power of AI with low-code programming.
natural language interface helped both novice and non-novice users to successfully compose pipelines (85% of tasks) and then further refine their pipelines (72.5% of tasks) during the study when using LowCoderNL. Additionally, LowCoderNL helped users discover previously-unknown operators in 75% of the tasks, compared to just 32.5% using other methods like web search when LowCoderNL was not available. In addition, despite being trained on a different dataset, LowCoderNL accurately answered real user queries. In summary, this paper makes three main contributions:
1. Low-Code for AI: We introduce LowCoder, a new low-code tool that combines visual programming and PBNL to help develop AI pipelines.
2. AI for Low-Code: We benchmark various AI models and develop a novel task formulation to build an AI-powered natural language interface for LowCoder.
3. User Study: We analyze the trade-offs between the two modalities and study the effects of using AI for low-code programming through a user study involving 20 participants with varying levels of AI expertise using LowCoder.
## 2. Related Work
AI for **Low-code** for AI: In adopting a visual programming approach to low-code, we follow a long tradition (Dal
sklearn operators for tabular data that includes both visual programming (VP) and natural language (NL) modalities, which complement each other by mitigating the limitations of either modality separately. Building this tool provided us with the opportunity to examine the impact of both modalities on users. Figure 2 highlights the main features and inputs of LowCoder.
To support multiple low-code modalities, we follow the lead of projectional code editors (Srivastava et al., 2017) by adopting the model-view-controller pattern. Specifically, we treat visual programming as a read-write view, PBNL as a write-only view, and let users inspect data in a read-only view. The tool keeps these three views in sync by representing the program in a domain-specific language (DSL). The domain for the DSL is AI pipelines. A corresponding, practical desideratum is that the DSL is compatible with sklearn (Krishnan et al., 2017), the most popular library for building AI pipelines, and is a subset of the Python language, in which sklearn is implemented, which also enables us to use AI models pretrained on Python code. The open-source Lale library (Bordes et al., 2017) satisfies these requirements, and in addition, describes hyper-parameters in JSON schema format (Krishnan et al., 2017), which our tool also uses. The current version of our tool supports 143 sklearn operators. LowCoder uses a client-server architecture with a Python Flask back-end server and a front-end based on the Blockly (Krishnan et al., 2017) meta-tool for creating block-based visual programming tools. The front-end converts the block-based representation to Lale, which is then sent to the back-end. The back-end validates the given Lale pipeline using internal schemas, then evaluates the pipeline against a given dataset. The results of this evaluation (including any error messages) are returned to the front-end and presented to the user.
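As a concrete illustration, the following is a minimal sketch (not LowCoder's actual back-end code) of how the pipeline from Figure 2 can be written in the Lale DSL; the operator imports follow lale.lib.sklearn, and the dataset is an arbitrary stand-in rather than one used in the paper.

```python
# Minimal sketch: the Figure 2 pipeline expressed in the Lale DSL.
# Assumes the `lale` and `scikit-learn` packages are installed.
from lale.lib.sklearn import SimpleImputer, StandardScaler, DecisionTreeClassifier
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # stand-in dataset, not one from the user study
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The >> combinator mirrors snapping blocks together after the Start block:
# SimpleImputer -> StandardScaler -> DecisionTreeClassifier.
pipeline = SimpleImputer() >> StandardScaler() >> DecisionTreeClassifier()

trained = pipeline.fit(X_train, y_train)
print(accuracy_score(y_test, trained.predict(X_test)))
```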
### Visual Programming Interface
LowCoderVP is our block-based visual programming interface for composing and modifying AI pipelines. One goal that this tool shares with other block-based visual tools such as Scratch (Scratch, 2017) is to encourage a highly interactive experience. The block visual metaphor allows for blocks that correspond to sklearn operators to be snapped together to form an AI pipeline. The shape of the blocks suggests how operators can connect. Their color indicates how they affect data: red for operators that transform data (with a _transform()_ method) and purple for other operators that make predictions, such as classifiers and regressors (with a _predict()_ method).
A _palette (1)_ on the left side of the interface contains all of the available operator blocks. Blocks can be dragged-and-dropped from the palette to the _canvas (2)_. For ease of execution, our tool only allows for one valid pipeline at a time, so blocks must be attached downstream of the pre-defined _Start_ block to be considered part of the active pipeline. Figure 2 shows an example of blocks defining a pipeline where the SimpleImputer, StandardScaler, and DecisionTreeClassifier blocks are connected to the _Start_ block and each other. Input data are transformed by the first two operators (SimpleImputer and StandardScaler) and then sent to DecisionTreeClassifier for training and then scoring. Blocks not attached to the _Start_ block are disabled but can be left on the canvas without affecting the execution of the active pipeline. Selected operator blocks also display a _hyper-parameter configuration pane_ (3) on the right. The pane lists each hyper-parameter for an operator along with a description (when hovering over the hyper-parameter name) and default values along with input boxes to modify each hyper-parameter.
Our tool provides a _stage (4)_ with _Before_ and _After_ tables to give immediate feedback with every input on how the current pipeline affects the given dataset. When a tabular dataset is loaded, the _Before_ table displays its target column on the left and feature columns on the right. When a pipeline that transforms input data is executed, the _After_ table shows the results of the transformations. At any time, a pipeline can be executed on the given dataset by pressing the "Run Pipeline" button. Executing a pipeline will attempt to train the given pipeline on the training portion of the given dataset and then return a preview of all data transformations on the training data in a second table. For instance, in the example shown in Figure 2, executing the pipeline with SimpleImputer and StandardScaler transforms data from the _Before_ table by imputing missing values and standardizing all feature values in the _After_ table. If training is successful, then the trained pipeline is scored against the test set and the score (usually accuracy) is displayed. LowCodervp also encourages liveness (Love et al., 2019) by executing the pipeline when either the active pipeline is modified or hyper-parameters are configured. For example, adding a PCA operator and setting the n_components hyper-parameter to 2 for the prior example will reduce the feature columns in the _After_ table to 2. This gives the user immediate feedback on the effect that pipeline changes have on the dataset without requiring separate training or scoring steps. This liveness encourages a high degree of interactivity (Srivastava et al., 2017).
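To make the execution semantics concrete, here is a rough plain-sklearn sketch of what conceptually happens when "Run Pipeline" is pressed for the PCA example above; the dataset is a placeholder and this is not the tool's actual implementation.

```python
# Rough sketch of the "Run Pipeline" behavior: preview the transformed
# training data ("After" table) and report the test-set score shown in the UI.
import pandas as pd
from sklearn.datasets import load_wine  # placeholder dataset
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

preprocessing = Pipeline([("impute", SimpleImputer()),
                          ("scale", StandardScaler()),
                          ("pca", PCA(n_components=2))])
after = preprocessing.fit_transform(X_train)              # "After" preview: 2 feature columns
print(pd.DataFrame(after).head())

full = Pipeline(preprocessing.steps + [("clf", DecisionTreeClassifier())])
print(full.fit(X_train, y_train).score(X_test, y_test))   # score displayed to the user
```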
### Natural Language Interface
Figure 2. LowCoder interface with labeled components, described in the text.

A potential weakness of visual low-code tools is that users have trouble discovering the right components to use (Krishnan et al., 2017). For instance, the palette of LowCoderVP contains more than a hundred operator blocks. Rather than requiring users to know the exact name of the operator or scroll through so many operators, we provide LowCoderNL, which allows users to describe a desired operation in the _NL interface (5)_ text box and press the "Predict Pipeline" button. The tool then infers relevant operator(s) and any applicable hyper-parameters using an underlying natural language to code translation model and automatically adds the most relevant operator to the end of the pipeline. The palette is also filtered to only display any relevant operator(s), as in Figure 2. Pressing the "Reset Palette" button will undo filtering (so the palette shows all available operators again) without clearing the active pipeline or canvas. Depending on the NL search, the automatically added operator may either have hyper-parameters explicitly defined or potentially relevant hyper-parameters highlighted. As an example, the NL search _"PCA with 2 components"_ will automatically add the PCA operator with the n_components hyper-parameter set to 2 and may highlight other hyper-parameters such as random_state for the user to consider setting. Section 4 describes the design and implementation of this model in detail. A potential weakness of natural language low-code tools is that the generated programs can be incorrect, due to a lack of clarity or ambiguity in the query, or a lack of context for the model providing inferences (Bahdan et al., 2015). In comparison, visual inputs and representations are unambiguous (Krishnan et al., 2017), requiring no probabilistic interpretation, so users can easily understand and manipulate the results returned by LowCoderNL.
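As an illustration of how such a prediction can drive the interface, the following simplified sketch (a hypothetical helper, not the tool's actual code) parses a hybrid prediction such as `PCA(n_components = 2, random_state = MASK)` into the operator to add, the hyper-parameters to set, and the hyper-parameters to merely highlight.

```python
# Simplified sketch: turning a predicted hybrid invocation into UI actions.
import ast

def parse_prediction(prediction):
    call = ast.parse(prediction, mode="eval").body        # e.g. PCA(...)
    set_values, highlighted = {}, []
    for kw in call.keywords:
        value = ast.get_source_segment(prediction, kw.value)
        if value == "MASK":
            highlighted.append(kw.arg)                    # highlight; let the user decide
        else:
            set_values[kw.arg] = value                    # fill in explicitly stated values
    return call.func.id, set_values, highlighted

print(parse_prediction("PCA(n_components = 2, random_state = MASK)"))
# -> ('PCA', {'n_components': '2'}, ['random_state'])
```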
To ground our evaluation of LowCoderNL, we also provide a version of the tool without a trained language model to users in our study (described in Section 5). In this setting, the _NL interface (5)_ text box becomes a simple substring keyword search that matches the query against operator names. For example, inputting _"classifier"_ filters the palette to only display sklearn operators that contain _'classifier'_ in the name such as RandomForestClassifier (but notably not all classifiers such as SVC).
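For comparison, this keyword-search baseline amounts to nothing more than a substring match over operator names, along the lines of the following sketch (operator list truncated for illustration).

```python
# Sketch of the keyword-search baseline: case-insensitive substring matching.
OPERATORS = ["RandomForestClassifier", "DecisionTreeClassifier", "SVC",
             "StandardScaler", "SimpleImputer", "PCA"]

def keyword_filter(query, names=OPERATORS):
    return [name for name in names if query.lower() in name.lower()]

print(keyword_filter("classifier"))  # finds *Classifier operators but misses SVC
```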
## 4. AI for Low-Code
This section discusses the AI that went into LowCoderNL.
### Data Collection
Our goal is to make a large API accessible through a low-code tool by allowing users to describe _what_ they want to do when they do not know _how_. More specifically, we want to enable users to build sklearn pipelines in a low-code setting, using a natural language interface that can be used as an _intelligent search_ tool. This problem can be solved using language models that can be trained to translate a natural language query into the corresponding line of code (Krishnan et al., 2017). However, such models heavily rely on data to learn such behaviour and would need to be trained on an aligned dataset of natural language queries and the corresponding sklearn line(s) of code demonstrating how a user would want to use such an intelligent search tool. Naturally, we cannot collect such a dataset without this tool, creating a circular dependency. To overcome this challenge, we curate a _proxy dataset_ using 140K Python Kaggle notebooks that were collected as part of the Google AI4Code challenge.1 From these notebooks, we extracted aligned Natural Language (NL) & Code cells related to machine learning and data science tasks. While the distribution of the NL in the markdown cells is not completely representative of the NL queries that users would enter in the low-code setting, they provide the model with a broad range of such examples. Results in Section 5.1.5 show that this is indeed effective.
Footnote 1: [https://www.kaggle.com/competitions/AI4Code](https://www.kaggle.com/competitions/AI4Code)
### Data Preprocessing
We first filter out notebooks that do not contain any sklearn code. This leaves 84,783 notebooks - evidently, many notebooks involve sklearn. We further filter out notebooks with non-English descriptions in all of the markdown cells, resulting in 59,569 notebooks. We then create a proxy dataset by extracting all code cells containing sklearn code and pairing these with their preceding NL cell to get a total of 211,916 aligned NL-code pairs. We remove any duplicate NL-code pairs, leaving 102,750 unique pairs. For each code cell, we then extract the line(s) of code corresponding to an sklearn operation invocation statement.
We discard any code cells that do not include sklearn operation invocation statements but include other sklearn code, leaving a final total of 79,372 NL-Code pairs. We separate these into train/validation/test splits, resulting in 64,779 train samples, 7,242 validation samples, and 7,351 test samples. See Section B in the appendix for more details.
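A simplified sketch of this pairing step is shown below; the notebook JSON structure and the helper name are assumptions for illustration, not the exact pipeline used to build the dataset.

```python
# Simplified sketch: pair each sklearn code cell with its preceding markdown cell.
import json

def extract_nl_code_pairs(notebook_path):
    cells = json.load(open(notebook_path, encoding="utf-8"))["cells"]
    pairs, last_markdown = [], None
    for cell in cells:
        text = "".join(cell["source"])
        if cell["cell_type"] == "markdown":
            last_markdown = text
        elif cell["cell_type"] == "code" and "sklearn" in text and last_markdown:
            pairs.append((last_markdown, text))   # (NL query proxy, code cell)
    return pairs
```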
### Tasks
Given the NL query, our model aims to generate a line of sklearn code corresponding to an operation invocation that can be used to build the next step of the pipeline. We consider a range of formulations of the task with different levels of details, as illustrated in Table 1. Additional examples can be found in Section A of the appendix.
#### 4.3.1. Operator Name Generation
The simplest task is generating only the operator name from the NL query. This alone can significantly help a developer with navigating the extensive sklearn API. We process the aligned dataset to map the query to the name(s) of operator(s) invoked in the code cell, discarding any other information such as hyper-parameters.
#### 4.3.2. Complete Operator Invocation Generation
At the other extreme, we task the model with synthesizing the complete operation invocation statement, including all the hyper-parameter names and values. Preliminary results (discussed in Section 5.1.4) show that the model often makes up arbitrary hyper-parameter values, resulting in lines of code that can rarely be used directly by developers.
#### 4.3.3. Masked Operator Invocation Generation
In this scenario, we mask out all the hyper-parameter values from the invocation statement, keeping only their names. The goal of this formulation is to ensure that the model learns to predict the specific invocation signature, even if it is unaware of the values to provide for the hyper-parameters.
#### 4.3.4. Hybrid Operator Invocation Generation (HOI)
Manual inspection of the NL-code pairs revealed that the queries sometimes explicitly describe a subset of the hyper-parameter names and values to be used in the code. When this is the case, the model has the necessary context to predict at least those hyper-parameter values. Supporting this form of querying enables users to express the most salient hyper-parameters up-front. Therefore, we formulated a new hybrid task, where we keep the hyper-parameter values if they are explicitly stated in the NL query and mask them otherwise. This gives the model an opportunity to learn the hyper-parameter names and values if they are explicitly stated in the description,
and unburdens it from making up values that it lacks the context to predict by allowing it to generate placeholders (masks) for them.
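A rough sketch of how such hybrid targets can be constructed is shown below; the value-matching heuristic (keep a hyper-parameter value only if it literally appears in the query) and the restriction to keyword arguments are simplifying assumptions for illustration, and may differ from the exact procedure used for the dataset.

```python
# Rough sketch: build a hybrid (HOI) target by masking hyper-parameter values
# that are not literally mentioned in the NL query.
import ast

def to_hybrid_target(nl_query, invocation):
    call = ast.parse(invocation, mode="eval").body
    parts = []
    for kw in call.keywords:
        value = ast.get_source_segment(invocation, kw.value)
        keep = value is not None and value.strip("'\"").lower() in nl_query.lower()
        parts.append(f"{kw.arg} = {value if keep else 'MASK'}")
    name = ast.get_source_segment(invocation, call.func)
    return f"{name}({', '.join(parts)})"

print(to_hybrid_target("Random forest with balanced class weight",
                       "RandomForestClassifier(n_estimators = 100, class_weight = 'balanced')"))
# -> RandomForestClassifier(n_estimators = MASK, class_weight = 'balanced')
```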
**Evaluation:** To evaluate the feasibility of predicting code using the different task formulations, we train a simple sequence-to-sequence model (detailed in Section 4.4.1) and compare the results for the various training tasks in Section 5.1.4. We find HOI to be the most accurate/reliable formulation for our setting. We therefore proceed to use this task formulation for training the models.
### Modeling
All tasks from Section 4.3 are sequence-to-sequence tasks. We compare and contrast three different deep learning paradigms for this type of task, illustrated in Figure 3: 1) train a standard sequence-to-sequence transformer _from scratch_, 2) fine-tune (calibrate) a pretrained "medium" sized model, 3) query a Large Language Model (LLM) by means of few-shot prompting (Srivastava et al., 2015). We elaborate on these models below. Note that we use top-k sampling for our top-5 results. (A comparison of results with other decoding strategies can be found in Sections 3 and 4 of the supplementary material.)
#### 4.4.1. Transformer (from scratch)
We train a sequence-to-sequence Transformer model (Vaswani et al., 2017) with randomly initialized parameters on the training data. Our relatively small dataset of ca. 70K training samples limits the size of a model that can be trained in this manner. We use a standard model size, with 6 encoder and decoder layers and 512-dimensional attention across 8 attention heads and a batch size of 32 sequences with up to 512 tokens each. We use a sentence piece tokenizer (trained on Python code) with a vocabulary size of 50K tokens. The model uses an encoder-decoder architecture that jointly learns to encode (extract a representation of) the natural language sequence and decode (generate) the corresponding sklearn operator sequences.
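One way to instantiate a comparable randomly initialized encoder-decoder model is sketched below; the use of a T5-style configuration is a stand-in, since the paper does not tie itself to a specific implementation.

```python
# Sketch: a randomly initialized encoder-decoder of roughly the described shape.
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config(
    vocab_size=50_000,       # sentence-piece vocabulary trained on Python code
    d_model=512,             # 512-dimensional attention
    num_heads=8,             # 8 attention heads
    num_layers=6,            # 6 encoder layers
    num_decoder_layers=6,    # 6 decoder layers
)
model = T5ForConditionalGeneration(config)   # no pretrained weights: trained from scratch
print(f"~{model.num_parameters() / 1e6:.0f}M parameters")
```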
#### 4.4.2. Fine-tuning CodeT5
CodeT5 is a pretrained encoder-decoder transformer model (Krizhevsky et al., 2014) that has shown strong results when fine-tuned (calibrated) on various code understanding and generation tasks (Krizhevsky et al., 2014). CodeT5 was pretrained on a corpus of six programming languages from the CodeSearchNet dataset (Krizhevsky et al., 2014) and fine-tuned on several tasks from the CodeXGLUE benchmark (Krizhevsky et al., 2014) in a multi-task learning setting, where the task type is prepended to the input string to inform the model of the task. We fine-tune CodeT5 on the HOI generation task by adding the 'Generate Python' prefix to all NL queries. We experiment with different size CodeT5 models: CodeT5-small (60M parameters), base (220M) and large (770M).
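A hedged sketch of this setup is shown below; the Hub checkpoint name and the exact prefix formatting are assumptions, but they follow common CodeT5 usage.

```python
# Sketch: one fine-tuning step of CodeT5 on a hybrid operator-invocation target.
from transformers import AutoTokenizer, T5ForConditionalGeneration

checkpoint = "Salesforce/codet5-base"                 # assumed name of the 220M variant
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

source = "Generate Python: Random forest with balanced class weight"  # task prefix + NL query
target = "RandomForestClassifier(n_estimators = MASK, class_weight = 'balanced')"

inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
labels = tokenizer(target, return_tensors="pt", truncation=True, max_length=512).input_ids

loss = model(**inputs, labels=labels).loss            # cross-entropy over the target tokens
loss.backward()                                       # optimizer step, batching, etc. omitted
```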
#### 4.4.3. Few-Shot Learning With CodeGen
Lastly, we explore large language models (LLMs) that are known to perform well in a task-agnostic few-shot setting (Krizhevsky et al., 2014). More specifically, we look at CodeGen, a family of LLMs that are based on standard transformer-based autoregressive language modeling (Krizhevsky et al., 2014). Pretrained CodeGen models are available in a broad range of sizes, including 350M, 2.7B, 6.1B and 16.1B parameters. These were all trained on three different datasets, starting with a large, predominantly English corpus, followed by a multi-lingual programming language corpus, and concluding with fine-tuning on just Python data, which we use in this work. The largest model trained this way was shown to be competitive with Codex (Krizhevsky et al., 2014) on a Python benchmark (Krizhevsky et al., 2014).
Models at this scale are expensive to fine-tune and are instead commonly used for inference by means of "few-shot prompting" (Srivastava et al., 2015). LLMs are remarkably capable of providing high-quality completions given an expanded prompt containing examples demonstrating the task (Krizhevsky et al., 2014). We prompt our model with 5 such NL-code examples. Figure 4 illustrates an example prompt with 3 such pairs. The model learns from the examples in the prompt and completes the sequence task which results in generating the HOI code.
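The sketch below illustrates the idea with a small CodeGen checkpoint; the "NL/code" prompt format and the three demonstration pairs are assumptions for illustration (the study used five shots, following the template in Figure 4), and the checkpoint name is assumed from the HuggingFace Hub.

```python
# Sketch: few-shot prompting a CodeGen checkpoint with NL -> code demonstrations.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Salesforce/codegen-350M-mono"           # Python-tuned CodeGen variant
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

examples = [
    ("Standardize the features", "StandardScaler()"),
    ("PCA with 2 components", "PCA(n_components = 2)"),
    ("Train a decision tree", "DecisionTreeClassifier(max_depth = MASK)"),
]
query = "Random forest with balanced class weight"
prompt = "".join(f"# NL: {nl}\n{code}\n\n" for nl, code in examples) + f"# NL: {query}\n"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_k=50,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```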
## 5. Evaluation
This section describes the evaluations for the AI modeling that enables LowCoderNL along with the user studies that we conducted to analyze the benefits and challenges of using low-code for developing AI pipelines using LowCoder.
| Task Formulation | Code for the NL query: _Random forest with balanced class weight_ |
| --- | --- |
| Operator Name | RandomForestClassifier |
| Complete Operator Invocation | RandomForestClassifier(n_estimators = 100, class_weight = 'balanced') |
| Masked Operator Invocation | RandomForestClassifier(n_estimators = MASK, class_weight = MASK) |
| Hybrid Operator Invocation | RandomForestClassifier(n_estimators = MASK, class_weight = 'balanced') |

Table 1. Task formulations highlighting the code components: mask, operator name, hyper-parameter name, and hyper-parameter value. The Hybrid Operator Invocation setting does not mask 'balanced' as it appears in the query.
Figure 3. Overview of the “trifecta” of training approaches used in contemporary deep learning: smaller models are directly trained from scratch on downstream task data; medium sized models (100M-1B parameters) are pretrained with a generic training signal and then fine-tuned on task data; large models (>1B parameters) are only pretrained on very large datasets and are prompted with examples from the training data as demonstration followed by the query.
### Modeling
#### 5.1.1. Experimental Setup
All of our models are implemented using PyTorch transformers and the HuggingFace interface. We use the latest checkpoints of the CodeT5 (Zhu et al., 2017) and CodeGen (Zhu et al., 2018) models. Our models were trained on a single machine with multiple 48 GB NVIDIA Quadro RTX 8000 GPUs until they reached convergence on the validation loss. We clip input and output sequence lengths to 512 tokens, but reduce the latter to 64 when using the model in LowCoder to reduce inference time. We find in additional experiments that since few predictions are longer than this threshold, this incurs no significant decrease in accuracy, but speeds up inference by 34%. We use a batch size of 32 for training and fine-tuning all of our Transformer and CodeT5 models, except for CodeT5-large, for which we used a batch size of 64 to improve stability during training.
#### 5.1.2. Test Datasets
To ensure a well-rounded evaluation, we look at two different test datasets.
**(i) Test data (from notebooks)** - We use the NL-code pairs from the Kaggle notebooks we created in Section 4.2 containing 7,351 samples. These are noisy - some samples contain vague and underspecified Natural Language (NL) queries, such as - _"Data pre-processing"_, _"Build a model"_, _"Using a clustering model"_. Others contain multiple operator invocation statements corresponding to a single NL query, even though the NL description only mentions one of them, e.g., _"Model # 2 - Decision Trees"_ corresponds to DecisionTreeClassifier() and confusion_matrix(y_true, y_pred). Furthermore, these samples were collected from Kaggle notebooks, so the distribution of the NL queries collected from the markdown cells are not necessarily representative of NL queries that real users may enter into LowCoderNL.
**(ii) Real user data** - We log all the NL queries that users searched for in LowCoder during the user studies along with the list of operators that the model returned. This gives us a more accurate distribution of NL queries that developers use to search for operators in LowCoderNL. We obtained a total of 218 samples in this way, which we then manually annotated to check whether (i) the predictions were accurate, that is, if the operators in any of the predictions matches the inferred intent in the query and (ii) the NL query was clear, with an inter-rater agreement of 97.7% and a negotiated agreement (Kumar et al., 2018) of 100%. (See Section E in appendix for details on annotation guidelines.)
#### 5.1.3. Test Metrics
We use both greedy (top-1) and top-K (top-5) decoding (see Section C in appendix) when generating the operator invocation sequences for each NL query. We evaluate the models' ability to generate just the operator name as well as the entire operator invocation (including all the hyper-parameter names and values) based on the hybrid formulation.
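The sketch below shows how such a top-5 operator-name accuracy can be computed; `generate_top5` is a placeholder for the model's sampling call (e.g. `model.generate(..., do_sample=True, num_return_sequences=5)`).

```python
# Sketch: top-5 accuracy on operator names over a set of NL queries.
import re

def operator_name(code):
    match = re.match(r"\s*([A-Za-z_][A-Za-z0-9_]*)\s*\(", code)
    return match.group(1) if match else code.strip()

def top5_name_accuracy(queries, targets, generate_top5):
    hits = 0
    for query, target in zip(queries, targets):
        candidates = generate_top5(query)          # five sampled completions per query
        hits += any(operator_name(c) == operator_name(target) for c in candidates)
    return hits / len(targets)
```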
#### 5.1.4. Task Comparison
We first train a series of randomly initialized 6-layer Transformer models from scratch on each task formulation from Section 4.3. We compare the model's ability to correctly generate the operator name and the operator invocation based on the formulation corresponding to the training task using top-1 and top-5 accuracy as shown in Figure 5. We find that the hybrid formulation of the operation invocation task, while challenging, is indeed feasible and allowed the model to achieve reasonably strong performance when generating the entire operation invocation statement. Contrary to the other task formulations, a model trained with the HOI signal also achieved comparable performance to the model trained solely on operator names when evaluated purely on operator name prediction (ignoring the generated hyper-parameter string). These results highlight that the hybrid representation helps the model learn by unburdening it from inferring values that it lacks the context to predict.
#### 5.1.5. Model Comparison
We next evaluate the performance of the trifecta of modeling strategies from Section 4.4 on the task of Hybrid Operation Invocation (HOI) generation. We benchmark across different model sizes and compare the performance for both operator name and operator invocation generation using top-5 accuracy in Figure 6. (See Section D in the appendix for additional results and ablation studies.) The results show that the 0.77B parameter fine-tuned CodeT5 is the best performing model with an accuracy of 73.57% and 41.27% on the test data for the operation name and operation invocation generation respectively. The 0.22B parameter fine-tuned CodeT5 model has comparable performance, but its inference time is approximately 2-3 seconds faster than the 0.77B fine-tuned CodeT5 model, making it more desirable for integration with the tool.
#### 5.1.6. Performance in Practice
Figure 4. Example of a few (3) shot prompting template for querying a large language model in our study.

Figure 5. Accuracy of Transformer models trained from scratch on various task formulations. ‘Invocation’ test results refer to the specific invocation formulation of the training task, while ‘Names only’ just considers whether the generated code starts with the correct operator name. Only the Hybrid Operator Invocation setting yields useful quality on both tasks.

Up to this point, all our evaluations have been based on the proxy dataset from Kaggle. To get a better idea of the model's performance in the real world, we further evaluate the performance of the fine-tuned 0.22B parameter CodeT5-base from the tool on real user data that was collected during the user studies. The distribution of NL queries collected from the user studies represents the "true" distribution of queries that can be expected from users in a low-code setting. Out of the 218 samples that were collected, we found only one sample in which a user explicitly specified a hyper-parameter value in their query.
We therefore only compute the accuracy of the operation name generated rather than the entire operation invocation (as they would use default values anyway and so the scores remain the same except for that one sample).
Out of 218 query requests, the fine-tuned CodeT5-base model that was used in our tool answered 150 queries correctly, which would suggest an overall accuracy of 68.8%. However, 33 of these requests targeted actions that are not supported by the sklearn API, such as dropping a column (commonly the territory of the Pandas library). Disregarding such unsupported usage, LowCoderNL answered 141 out of 185 queries correctly for an overall accuracy of **76.2%**. For 33 additional samples, neither annotator could infer a reasonable ground truth since the prompt was unclear (e.g.: "empty"). Leaving these out, i.e., when the prompt is both clear _and_ the operator is supported by the tool, LowCoderNL was accurate in over **90%** (137/152) of completions (refer to Section F in appendix for additional results).
### User Study
We conducted a user study with 20 participants with varying levels of AI expertise to create AI pipelines using LowCoder across four tasks, replacing LowCoderNL with a simple keyword search in half the tasks. We collect and analyze data to investigate the following research questions:
* How do LowCoderNL and other features help participants discover previously-unknown operators?
* Are participants able to compose and then iteratively refine AI pipelines in our tool?
* What are the benefits and challenges of using low-code for AI?
#### 5.2.1. Study Methodology
We recruited 20 participants within the same large technology company via internal messaging channels. We expect that citizen developers without formal programming training may also have varying levels of AI expertise, so we intentionally solicited participants of all backgrounds. Potential participants filled out a short pre-study survey to self-report experience in the following: machine learning, data preprocessing, and sklearn using a 1 (no experience) to 5 (expert) scale. Participants include a mix of roles, including developers, data scientists, and product managers working in a variety of domains such as AI, business informatics, quantum computing, and software services. 25% of the participants are female and the remaining 75% are male. 40% of the participants self-reported being novices in machine learning by indicating a 1 or 2 in the pre-study survey.
The study design is within-subjects (Krishna et al., 2017) where each participant was exposed to two conditions: using LowCoder with (_NL condition_) and without (_keyword condition_) the natural language (NL) interface powered by LowCoderNL. The keyword condition used a simple substring filter for operator names. Each participant performed four tasks total across the two conditions. For each task, participants were instructed to create AI pipelines with data pre-processing and classifier steps on a sample dataset with as high a score (accuracy on the test set) as possible during a time period of five to ten minutes. Each sample dataset was split beforehand into separate train and test sets. Tasks were open-ended with no guidance on what preprocessing steps or classifiers should be used.
There were four sample datasets in total and each participant was exposed to all four. The sample datasets are public tabular datasets from the UCI Machine Learning Repository (Krishna et al., 2017). Two of the tasks (A and D) require a specific data preprocessing step in order to successfully create a pipeline while two (B and C) technically do not require preprocessing to proceed. For each participant, the order of the conditions and the order of the tasks were shuffled such that there is a uniform distribution of the order of conditions and tasks.
As our study included machine learning novices, we gave each participant a short overview of the basics of machine learning with tabular datasets and data preprocessing. We avoided using specific terms or names of operators in favor of more general descriptions of data-related problems.
We then gave each participant an overview of LowCoder. To mitigate potential biasing or priming, the tool overview used a fifth dataset from the UCI repository (Krishna et al., 2017). To avoid operators that were potentially useful in user tasks, the overview used both a non-sklearn operator that was not available in the study versions of the tool as well as sklearn's DummyClassifier that generates predictions without considering input features. Participants were allowed to use external resources such as web search engines or documentation pages. Nudges were given by the study administrators after five minutes if necessary to help participants progress in a task. Nudges were in the form of reminders to use tool features such as the NL interface, external resources, or to include missing steps such as data preprocessing or classifiers. Nudges did not mention specific operator names nor guidance on specific actions to take.
Figure 6. Accuracy vs. model size based on top-5 sampling. (The 16B CodeGen uses top-3 due to memory constraints.) We compare the three modeling paradigms, namely training transformer from scratch, finetuning CodeT5, and fewshot prompting CodeGen, on both Operator Name generation and Hybrid Operator Invocation generation.
For each version of the tool, study administrators would describe the unique features of the particular version and then have participants perform tasks using two out of four sample datasets. After performing tasks using both versions of the tool and all four sample datasets, participants were asked to provide open-ended feedback and/or reactions for both LowCoder and the comparison between the NL and keyword conditions.
#### 5.2.2. Data Collection and Analysis
To answer our research questions, for each participant, we collect and analyze both quantitative and qualitative data. For quantitative data, we report on the incidence of participants discovering a previously-unknown operator (RQ1) and the incidence of completing the task and iterating or improving the pipeline (RQ2). We consider an operator 'previously-unknown' if the participant found and used the operator without using the exact or similar name. For example, using an NL query such as _"deal with missing values"_ to find the SimpleImputer operator is considered discovering a previously-unknown operator while a query such as _"simpleimpute"_ is not. We report discovery using the following methods: through LowCoderNL, a generic web search engine (Google), and scrolling through the palette. Participants may discover multiple unknown operators during the same task, possibly using different methods. For each participant's task, we consider it 'complete' if the composed pipeline successfully trains against the dataset's training set and returns a score against the test set. We consider the pipeline iterated if a participant modifies an already-complete pipeline. More specifically, we consider the following forms of iteration: a preprocessing operator block is added or swapped, a classifier block is swapped, or hyper-parameters are tuned. We report each of these as separate types of pipeline iteration. Participants may perform multiple types of iteration during the same task. Both sets of quantitative metrics are counted per task (80 tasks total for 20 participants, 40 tasks per condition).
We use qualitative data to answer RQ3. This data focuses on the participants' actions in LowCoder, commentary while using the tool and performing tasks, and answers to open-ended questions after the study. Specifically, the same two authors that administered the user study analyzed the notes generated by the study along with the audio and screen recordings when the notes were insufficient, using discrete actions and/or quotations as the unit of analysis. The first round of analysis performed open coding (Krishnan et al., 2017) on data from 16 studies to elicit an initial set of 73 themes. The two authors then iteratively refined the initial themes through discussion along with identifying 13 axial codes which are summarized in Figure 7. The same authors then performed the same coding process on a hold-out set of 4 studies. No additional themes were derived from the hold-out set of studies, suggesting saturation.
#### 5.2.3. Study Results
We answer RQ1 and RQ2 using quantitative data collected from observing participant actions per task and answer RQ3 through open coding of qualitative data.
**RQ1: How do LowCoderNL and other features help participants discover previously-unknown operators?**
Table 2 reports how often participants discovered previously-unknown operators during their tasks. 80% of the participants discovered an unknown operator across 63.8% of all 80 tasks in the study. Participants discovered unknown operators in 82.5% of the 40 NL condition tasks compared to 45% of the 40 keyword condition tasks. The odds of discovering an unknown operator are significantly greater in the NL condition than keyword (\(p\ll 0.001\)) using Barnard's exact test. We examine the methods of discovery in more detail, noting that LowCoderNL is only available in the NL condition whereas web search and scrolling through the operator palette are available in both conditions. We note that the participants were not able to use the keyword search to discover unknown operators due to needing at least part of the exact name. Using LowCoderNL, participants discovered unknown operators in 75% of tasks in the NL condition as opposed to an average of 22.5% using web search engines (12.5% in the NL condition and 32.5% in the keyword condition) and an average of 20% by scrolling through the operator palette (12.5% in the NL condition and 27.5% in the keyword condition). Within the NL condition, the odds of an unknown operator being discovered are significantly greater using LowCoderNL as opposed to both web search (\(p\ll 0.001\)) and scrolling (\(p\ll 0.001\)). When splitting on the experience of the participant, we find statistically greater chances of novices discovering operators in the NL condition using LowCoderNL as opposed to web search (p=0.013) but not scrolling (p=0.086). Non-novices were significantly more likely to discover operators using LowCoderNL compared to web search or scrolling (\(p\ll 0.001\), \(p\ll 0.001\)). Results do not change if considering web searches or scrolling across all 80 tasks. These results suggest that LowCoderNL is particularly helpful in discovering previously-unknown operators, especially compared to web search, but novices still face some challenges. We discuss these challenges in RQ3.
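The kind of comparison reported above can be sketched as a Barnard's exact test on the 2x2 incidence table (33 of 40 NL tasks vs. 18 of 40 keyword tasks with a discovered operator); the alternative hypothesis and other settings below are illustrative defaults, not necessarily the exact configuration used for the reported p-values:

```python
from scipy.stats import barnard_exact

# 2x2 incidence table: tasks with vs. without a previously-unknown operator discovered.
#                 discovered  not discovered
table = [[33, 7],   # NL condition      (82.5% of 40 tasks)
         [18, 22]]  # keyword condition (45.0% of 40 tasks)

result = barnard_exact(table, alternative="two-sided")
print(f"p-value = {result.pvalue:.2e}")
```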
**RQ2: Are participants able to compose and then iteratively refine AI pipelines in our tool?**
Table 3 reports how often participants iterated on pipelines. Participants completed 82.5% of the 80 tasks in the study and further iterated their pipelines in 72.5% of the tasks. Splitting on condition, the NL condition has 85% task completion and 72.5% further iteration while the keyword condition has 80% task completion and 72.5% iteration rate. Swapping classifiers was the most common form of iteration at 48.8%, followed by adding or swapping preprocessors at 43.8% and setting hyper-parameters at 30%. Comparing novices to non-novices, both types of participants are mostly successful in iterating pipelines with no significant differences in iteration rate using Barnard's exact test (p=0.109). This result holds when iterating preprocessors (p=0.664) but not classifiers (p=0.038) nor hyper-parameters (p=0.005). Non-novices are more likely to complete the task than novices (p=0.002). Regardless of experience, both novices and non-novices are able to iteratively refine their pipelines, but novices face some challenges compared to non-novices regarding actually completing the task. These challenges are discussed in the next research question.

| **Condition** | **Participant** | LowCoderNL | Web search | Palette |
| --- | --- | --- | --- | --- |
| NL | All | 30 (75.0%) | 5 (12.5%) | 5 (12.5%) |
| NL | Novice | 8 (50.0%) | 2 (12.5%) | 4 (25.0%) |
| NL | Non-Novice | 22 (91.7%) | 3 (12.5%) | 1 (4.2%) |
| Keyword | All | _Not available in this condition._ | 13 (32.5%) | 11 (27.5%) |
| Keyword | Novice | | 3 (18.8%) | 5 (31.3%) |

Table 2. Incidence of tasks where participants find previously-unknown operators per condition (40 tasks for all, 16 tasks by novices, and 24 by non-novices). Note that rows may not sum to 100% as participants can use multiple methods to discover operators for a given task or not discover operators at all.
**RQ3: What are the benefits and challenges of using low-code for AI?**
Figure 7 shows our 13 axial codes for answering RQ3. These codes broadly represent three overarching themes regarding working with low-code and machine learning: 1) _Discovery_ of machine learning operators relevant for the task at hand, 2) _Iterative Composition_ of the operators in the tool, and 3) _Challenges_ that participants, particularly novices, face regarding working with machine learning and/or using low-code tools. We also collect _Feedback_ from participants to inform future development of LowCoder. Due to space limitations, we only report on a selection of the 13 axial codes and 73 codes derived from open coding (refer to Section G in the appendix for the full list of codes).
For the first category of **Discovery**, our analysis derived two axial codes related to the participants' goal while attempting to discover operators: 1) _Know "What" Not "How"_ where participants have a desired action in mind but do not know the exact operator that performs that action (19 out of 20 participants experienced this axial code) and 2) _Know "What" And "How"_ where participants have a particular action and operator in mind (18/20). We dive deeper into _Know "What" Not "How"_ which includes the code where participants _Discover a previously-unknown operator using NL_ (16/20). We found in RQ1 that LowCoder\({}_{\text{NL}}\) was helpful in finding unknown operators compared to other methods. The qualitative data suggests that participants were able to find unknown operators using LowCoder\({}_{\text{NL}}\) during cases where they have an idea of the action to perform but do not know the exact operator name for a variety of reasons. For example, when discovering SimpleImputer with LowCoder\({}_{\text{NL}}\), P11 noted that they _"never used SimpleImputer but had an idea of what I wanted to do, even though I generally remove NaNs in Pandas._" Another example is P16 who _"preferred the [NL version of LowCoder ], even when I was doing Google searches, they... didn't give me options, your tool at least returns some options that I can try out and swap out._" As a novice, P16 had difficulties finding the names of useful operators from web search results as opposed to the LowCoder\({}_{\text{NL}}\) which directly returned actionable operators. We note that challenges regarding general web search is also an axial code.
For the second category of **Iterative Composition**, we derived four axial codes related to participant behaviors while attempting to compose and iterate on pipelines: 1) _General Exploratory_ (13/20) iteration, 2) Exploratory iteration but where participants will select operators or hyper-parameters seemingly at _Random_ (18/20), 3) _Targeted_ (19/20) iteration where participants select operators or hyper-parameters with a particular intent, and 4) _Seeking Documentation_ (15/20) where participants search for documentation to inform iteration decisions. We note that for both forms of Exploratory iteration and Targeted iteration, we find examples of participants iterating classifiers, preprocessors, and hyper-parameters. For the axial code of seemingly _Random_ iteration, participants, especially (but not exclusively) novices, when unsure of how to proceed, tended to try out arbitrary preprocessors or classifiers. This was more common for more difficult tasks that required particular data preprocessing to proceed. For example, non-novice P9 remarked _"I'm not familiar enough with it, so do I Google it or brute force it? [...] I don't even know what to Google to figure this out... I guess I'll do some light brute-forcing"_ and proceeded to swap in and out preprocessors from the palette. In contrast, the axial code of _Targeted_ (19/20) iteration has codes that reflect particular intentions that participants derived from observations within the tool, such as _Noticing error messages_ (10/20) or _Making use of data tables in task_ (14/20). As an example of the data tables case, P11 realized through the _Before_ data table that the given dataset had _"too many columns"_ and added the IncrementalPCA operator along with setting its n_components hyper-parameter to 5. Upon seeing the change in data in the _After_ data table, they remarked, _"Wow... I really like that I can see all the hyper-parameters that I can play with"_ and proceeded to tune various hyper-parameters.
The third category is the variety of **Challenges** that participants faced while using LowCoder and performing the machine learning tasks, where we derive six axial codes: 1) _General_ challenges (10/20) faced by participants that are not particular to our tool or tasks, 2) _Not Knowing "What"_ (15/20) where participants experienced difficulties due to knowing neither "what" nor "how" to begin, 3) _General Discovery_ challenges (15/20), 4) Discovery challenges around using _Web search_ (14/20), 5) Discovery challenges when using _Tool search_ (17/20) or specifically using LowCoderNL, and 6) _Tool Functionality_ (19/20) which describes challenges participants faced using (or not using) LowCoder features. We dive deeper into the axial code of _Not Knowing "What"_ and note its contrast to the _Know "What" Not "How"_ axial code where participants may have intentions but not know how to execute them or the _Exploratory_ iteration axial code where participants may not have specific intentions but know how to iterate. All novices (8/8) and most non-novices (7/12) experienced this challenge. The primary code is that participants _Did not know "what" they wanted to do_ (11/20). One possible cause of this lack of progression is choice paralysis, for example on P17's first task, _"first things first, I don't even know where to begin... right now it's super overwhelming, I guess I'll start throwing stuff in there."_ We also describe the axial code of _Tool search_ (17/20) where participants had difficulties forming search queries for LowCoderNL.

| **Iteration Type** | _Total Tasks_ (80) | _Novice_ (32) | _Non-Novice_ (48) |
| --- | --- | --- | --- |
| Task Completion | 66 (82.5%) | 21 (65.6%) | 45 (93.8%) |
| Swap Classifier | 39 (48.8%) | 11 (34.4%) | 28 (58.3%) |
| Add/Swap Preprocessors | 35 (43.8%) | 15 (46.9%) | 20 (41.7%) |
| Set Hyper-parameters | 24 (30.0%) | 4 (19.0%) | 20 (41.7%) |
| All Iterations | 58 (72.5%) | 20 (62.5%) | 38 (79.2%) |

Table 3. Incidence of tasks where participants complete and iterate on preprocessors, classifiers, and hyper-parameters.
Figure 7. Axial codes from our qualitative analysis.
Participants noted that despite the interface being intended for general natural language, the interface still _Needed a specific vocabulary_ (8/20). As P19, a novice, described it, _"I get the idea of how it's supposed to work but it's hit and miss... even if I use very layman's terms... it expects a non-naive explanation of what needs to be done."_ Part of this challenge may be due to a mismatch in the natural language in Kaggle notebooks used to train LowCoderNL and the language used by novices.
## 6. Discussion
Our results show that both the LowCoderVP and LowCoderNL components were helpful with aspects like operator discovery (RQ1) or iteratively composing pipelines (RQ2), even for novice participants. This is useful for citizen developers who have an idea of _what_ they would like to do but do not fully know _how_ to accomplish that, perhaps due to a lack of formal programming training. In fact, our qualitative analysis (RQ3) reveals that a number of our participants (including all novices who participated) struggled with knowing _what_ to do. End-users writing software face similar "design barriers" (Krishnan et al., 2017), where it is difficult for a non-programmer to even conceptualize a solution. In contrast to other popular low-code domains such as traditional software (Krishnan et al., 2017), the domain of developing machine learning systems is particularly difficult in this regard due to its experimental nature where progress has a high degree of uncertainty (Krishnan et al., 2017). This uncertainty then requires an abundance of judgment calls that rely heavily on prior machine learning experience (Krishnan et al., 2017) that novices lack. Some participants in our studies echo this, identifying that some ML knowledge is necessary to use our tool. That suggests that citizen developers who have some data science knowledge but lack programming training, such as statisticians, may benefit the most from our tool. A further improved low-code machine learning tool could thus be made more suitable for novice citizen developers by guiding them to discover the _what_ along with the _how_, i.e., by helping developers acquire the necessary ML knowledge.
A potential extension, offered by a study participant, is to provide suggestions in the form of templates or recipes for pipelines. These suggestions could also be contextual to the given dataset or active pipeline, for example automatically suggesting encoders when detecting categorical features. Ko et al. (Kolley et al., 2017) also suggest templates as a possible solution for design barriers. A related suggestion made by a number of our study participants is to implement data visualization and summarization tools for the given dataset, such as plots, charts, confusion matrices, etc. These visualizations could themselves inform contextual suggestions - a histogram detecting a non-standard distribution may suggest the need for a StandardScaler. These contextual suggestions may also help in guiding developers in _what_ to do, making for a more generally useful low-code tool for both citizen and experienced developers alike.
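As a rough sketch of such a contextual suggestion (a hypothetical extension, not a current LowCoder feature), a heuristic over the active dataset might look like the following; the thresholds and operator choices are illustrative assumptions:

```python
import pandas as pd

def suggest_preprocessors(df: pd.DataFrame) -> list:
    """Return a list of sklearn operator names that a contextual assistant might surface."""
    suggestions = []
    if df.isna().any().any():
        suggestions.append("SimpleImputer")      # missing values present
    if any(df[col].dtype == object for col in df.columns):
        suggestions.append("OneHotEncoder")      # categorical (string) features present
    numeric = df.select_dtypes("number")
    if not numeric.empty and numeric.std().max() > 10 * numeric.std().min():
        suggestions.append("StandardScaler")     # widely differing feature scales
    return suggestions

df = pd.DataFrame({"age": [25, 32, None, 51],
                   "country": ["US", "DE", "US", "FR"],
                   "income": [30_000, 54_000, 41_000, 72_000]})
print(suggest_preprocessors(df))  # ['SimpleImputer', 'OneHotEncoder', 'StandardScaler']
```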
**Threats to Validity:** The user study for LowCoder has several limitations. The study focused on relatively small, public tabular datasets and scikit-learn operators and may not be indicative of other machine learning tasks such as deep learning on large datasets. Participants also all come from the same large technology company and may not be representative of general users. However, we did intentionally elicit participation from a variety of groups and experience levels to mitigate this. As our user study has a within-subjects design, there may be potential learning effects between tasks and conditions. In fact, we observed some cases of this (8/20), with some participants explicitly mentioning selecting particular operators due to the previous task. We mitigated this learning effect by randomizing the order of tasks and conditions, as well as by having two tasks (A and D) require the use of preprocessing operators that were not applicable to other tasks.
## 7. Conclusion
We developed LowCoder, a low-code tool that combines visual programming, LowCoderVP, and programming by natural language (PBNL), LowCoderNL, to help developers of all backgrounds create AI pipelines. We used LowCoder to provide some of the first insights into whether and how visual programming and PBNL help programmers by conducting user studies across four tasks with (NL condition) and without (keyword condition) LowCoderNL. Overall, LowCoder helped developers compose (85% of tasks) and iterate (72.5% of tasks) over AI pipelines. Furthermore, LowCoderNL helped users discover previously-unknown operators in 75% of tasks, compared to just 22.5% (12.5% in the NL condition and 32.5% in the keyword condition) using web search. Our qualitative analysis showed that PBNL helped users discover _how_ to implement various parts of the pipeline when they knew _what_ to do. However, it failed to support novices when they lacked clarity on what they wanted to accomplish, which may suggest a worthwhile target for improving AI-based program assistants. Our work demonstrates the promise of combining both an AI-powered natural language interface and a visual interface for helping developers of all backgrounds create AI pipelines without writing code.
## 8. Data Availability
The implementation of LowCoder, datasets for training and evaluating LowCodernNL, results of additional experiments, as well as the material from the user study, incl. the full set of (axial) codes & anonymized quantitative and qualitative data are available at: [https://doi.org/10.5281/zenodo.7042296](https://doi.org/10.5281/zenodo.7042296). |
2302.14317 | Shock formation for 2D Isentropic Euler equations with self-similar
variables | We study the 2D isentropic Euler equations with the ideal gas law. We exhibit
a set of smooth initial data that give rise to shock formation at a single
point near the planar symmetry. These solutions are associated with non-zero
vorticity at the shock and have uniform-in-time 1/3-H\"older bound. Moreover,
these point shocks are of self-similar type and share the same profile, which
is a solution to the 2D self-similar Burgers equation. Our proof, following the
3D shock formation result of Buckmaster, Shkoller and Vicol, is based on the
stable 2D self-similar Burgers profile and the modulation method. | Wenze Su | 2023-02-28T05:16:58Z | http://arxiv.org/abs/2302.14317v1 | # Shock formation for 2D isentropic Euler equations
###### Abstract.
We study the 2D isentropic Euler equations with the ideal gas law. We exhibit a set of smooth initial data that give rise to shock formation at a single point near the planar symmetry. These solutions are associated with non-zero vorticity at the shock and have uniform-in-time \(1/3\)-Holder bound. Moreover, these point shocks are of self-similar type and share the same profile, which is a solution to the 2D self-similar Burgers equation. Our proof, following Buckmaster, Shkoller and Vicol [13], is based on the stable 2D self-similar Burgers profile and the modulation method.
Key words and phrases: 2D Isentropic Euler Equations; Shock Formation; Self-similar solution. 2010 Mathematics Subject Classification: 35Q31, 35L67, 35B44
## 1. Introduction
The two-dimensional compressible isentropic Euler equations read
\[\begin{cases}\partial_{t}\rho+\nabla_{\mathrm{x}}\cdot(\rho u)=0\\ \partial_{t}(\rho u)+\mathrm{div}_{\mathrm{x}}(\rho u\otimes u)+\nabla_{ \mathrm{x}}p=0\\ p=\frac{1}{\gamma}\rho^{\gamma},\end{cases} \tag{1.1}\]
where \(\mathrm{x}=(\mathrm{x}_{1},\mathrm{x}_{2})\in\mathbb{R}^{2}\) and \(\mathrm{t}\in\mathbb{R}\) are space and time coordinates respectively, the unknown scalar \(\rho\) is the fluid density, \(u=(u_{1},u_{2})\) is the velocity field of fluid, \(p=\frac{1}{\gamma}\rho^{\gamma}\) is the pressure with adiabatic index \(\gamma>1\). This system describes the evolution of a two-dimensional compressible ideal gas without viscosity.
We define the vorticity \(\omega=\partial_{\mathrm{x}_{1}}u_{2}-\partial_{\mathrm{x}_{2}}u_{1}\) and the specific vorticity \(\zeta=\omega/\rho\) at those points where \(\rho>0\). One can deduce from (1.1) that \(\zeta\) is purely transported by the velocity field:
\[\partial_{t}\zeta+u\cdot\nabla_{\mathrm{x}}\zeta=0. \tag{1.2}\]
Our main result can be stated roughly as follows:
**Theorem 1.1** (Rough statement of the main theorem).: _There exists a set of initial data \((u_{0},\rho_{0})\) with \(|\nabla(u_{0},\rho_{0})|=\mathcal{O}(1/\varepsilon)\), such that their corresponding solutions to (1.1) develop a shock-type singularity within time \(\mathcal{O}(\varepsilon)\)._
It is well known that, for the compressible Euler equations, shocks can develop from smooth initial data. In the one-dimensional case, this fact can be seen by studying the dynamics of the Riemann variables, which were first introduced in Riemann's foundational work [32]. See the discussion in John [20], Liu [21], and Majda [24].
In multi-dimensional cases, Sideris [33] first proved a blow-up result; however, the shock formation problem remained open. In 2007, Christodoulou [14] studied relativistic fluids and found an open set of irrotational initial data that eventually develop a shock-type singularity; this was considered to be the first proof of shock formation for the compressible Euler equations in multi-dimensional cases. Later, in Christodoulou-Miao [17], the authors established
the shock formation for non-relativistic and irrotational flow. In the case of irrotational flow, one can rewrite the isentropic Euler equations as a scalar second-order quasilinear wave equation. Alinhac [2, 3] proved the first blow-up result for 2D quasilinear wave equations, which do not satisfy Klainerman's null condition. Using geometric methods, shock formation for the 3D quasilinear wave equations was studied in Speck-Holzegel-Luk-Wong [35], Speck [34], Miao-Yu [29]. The first result on shock formation that admits non-zero vorticity for the compressible Euler equations was given by Luk-Speck [22]. They used the geometric framework and developed new methods to study the vorticity transport. Later in [23], they proved shock formation for the full compressible Euler equations in 3D with non-trivial vorticity and variable entropy. In An-Chen-Yin [4, 5, 6, 7, 8], the authors proved the low regularity ill-posedness for elastic waves and MHD equations and showed that the ill-posedness is driven by shock formation. As to the shock development problem for the compressible Euler equations, one could refer to the discussions in Christodoulou-Lisibach [16], Christodoulou [15], Abbrescia-Speck [1] and Buckmaster-Drivas-Shkoller-Vicol [9].
In [12], Buckmaster, Shkoller, and Vicol utilized the modulation method to construct shock solutions to the 2D Euler equations with azimuthal symmetry. Later in [13], they extend this method to the 3D case with no symmetry assumptions. After a dynamical rescaling, the solutions they constructed are close to a profile \(\overline{W}\), which solves the self-similar Burgers equation. By a singular coordinate transformation controlled by several modulation variables, proving shock formation is equivalent to showing global existence in the self-similar coordinate. This approach, known as the modulation method or dynamical rescaling, was successfully applied in [25, 26, 27] for the blow-up of Schrodinger equations and in [28] for the nonlinear heat equation. The proof in [12] is \(L^{\infty}\) based since there is no derivative in the forcing term, whereas in [13], an additional \(L^{2}\) based energy estimate was used to overcome the derivative loss in the \(L^{\infty}\)-type argument. They also analyzed the non-isentropic case in [11].
Following the work in [13], we utilize the self-similar Burgers ansatz to construct shock solutions. To keep track of the curvature of shock formation while maintaining the solution's stationarity in the far field, we make a minor modification to the construction in [13]. Different from the construction in [12], we consider shock solutions without any symmetry.
The shock we attempt to construct is of self-similar type. We introduce a self-similar coordinate transformation \((\mathrm{t},\mathrm{x})\mapsto(s,y)\), where \((\mathrm{t},\mathrm{x})\) is the original Cartesian coordinate and \((s,y)\) is the self-similar coordinate. The new coordinate is aligned to the shock formation and will become singular when \(t\) approaches the blow-up time \(T_{*}\). Roughly speaking, we have that
\[y_{1}\approx(T_{*}-t)^{-3/2}\mathrm{x}_{1},\ \ y_{2}\approx(T_{*}-t)^{-1/2} \mathrm{x}_{2}.\]
Thus \(y\) is a zoom-in version of \(\mathrm{x}\). In self-similar coordinates, the Riemann invariant \(W\) (to be defined in the next subsection) will converge to a profile \(\overline{W}\), uniformly on any compact set of \(y\). Moreover, \(\overline{W}\) solves the self-similar Burgers equation:
\[-\frac{1}{2}\overline{W}+\left(\frac{3}{2}y_{1}+\overline{W}\right)\partial_{ y_{1}}\overline{W}+\frac{1}{2}y_{2}\partial_{y_{2}}\overline{W}=0.\]
In this sense, the constructed blow-up solution of the Euler equations is close to a fixed shape on a smaller and smaller scale.
To better understand what happens, we shall examine the simplest 1D inviscid Burgers model, whose well-localized solutions are proved to become singular in finite time. It is pointed out explicitly in [18, 19] that as we are approaching the blow-up point, the blow-up solution can be well modeled by a dynamically rescaled version of a fixed profile, which belongs to a countable family \(\mathcal{F}\) of functions, and the members in \(\mathcal{F}\) are solutions to the self-similar Burgers equation. The choice of profile only depends on the derivatives of initial data at the point that achieves the minimum negative slope. Thus the family \(\mathcal{F}\) of solutions to the self-similar Burgers equation plays an important role in the blow-up phenomenon of the Burgers equation. For a detailed discussion, see [18], or the toy model in appendix A.
After the asymptotic blow-up behavior of the inviscid Burgers equation was clarified systematically in [18], the self-similar Burgers profiles have been used to explore blow-up phenomena in various systems, see [10, 13, 30, 31, 36]. The modulation method that was developed in the context of nonlinear dispersive equations, is the suitable way to utilize the self-similar Burgers profiles.
## 2. Preliminaries
In this section we introduce a series of coordinate transformations and the Riemann variables.
### Coordinates adapted to the shock
We introduce the sound speed \(\sigma=\frac{1}{\alpha}\rho^{\alpha}\), where \(\alpha=\frac{\gamma-1}{2}>0\), then the system of \((u,\rho)\) is transformed into a system of \((u,\sigma)\), which reads
\[\begin{cases}\partial_{t}\sigma+u\cdot\nabla_{\mathrm{x}}\sigma+\alpha\sigma \nabla_{\mathrm{x}}\cdot u=0\\ \partial_{t}u+(u\cdot\nabla)u+\alpha\sigma\nabla_{\mathrm{x}}\sigma=0,\end{cases} \tag{2.1}\]
By defining \(t=\frac{1+\alpha}{2}\mathrm{t}\), the equations become
\[\begin{cases}\frac{1+\alpha}{2}\partial_{t}\sigma+u\cdot\nabla_{ \mathrm{x}}\sigma+\alpha\sigma\nabla_{\mathrm{x}}\cdot u=0\\ \frac{1+\alpha}{2}\partial_{t}u+(u\cdot\nabla)u+\alpha\sigma\nabla_{ \mathrm{x}}\sigma=0,\end{cases} \tag{2.2}\]
The vorticity is defined as
\[\omega=\partial_{\mathrm{x}_{1}}u_{2}-\partial_{\mathrm{x}_{2}}u_{1}. \tag{2.3}\]
We also introduce the specific vorticity \(\zeta:=\omega/\rho\), which satisfies
\[\frac{1+\alpha}{2}\partial_{t}\zeta+u\cdot\nabla_{\mathrm{x}}\zeta=0. \tag{2.4}\]
To keep track of shock formation, we introduce six time-dependent modulation variables \(\xi=(\xi_{1},\xi_{2})\in\mathbb{R}^{2},n=(n_{1},n_{2})\in\mathbb{S}^{1},\tau \in\mathbb{R},\phi\in\mathbb{R}\), and \(\kappa\in\mathbb{R}\). \(\xi\) records the location of the shock; \(n\) records the direction of the shock; \(\tau\) records the slope of the Riemann invariant \(w\); \(\phi\) measures the "curvature" of the shock; \(\kappa\) records the value of the Riemann invariant at \(\mathrm{x}=\xi(t)\).
Using modulation variables \(\xi,n,\tau,\phi\), we define the coordinate adapted to the shock formation.
#### 2.1.1. Tracing the location and direction of the shock
With the time-dependent vector \(\xi(t)=(\xi_{1}(t),\xi_{2}(t))\), and the normal vector \(n(t)=(n_{1}(t),n_{2}(t))\), we define a coordinate transformation \(\tilde{x}=R(t)^{T}(\mathrm{x}-\xi(t))\), where
\[R(t)=\begin{bmatrix}n_{1}&-n_{2}\\ n_{2}&n_{1}\end{bmatrix}\in SO(2) \tag{2.5}\]
The origin of the \(\tilde{x}\) coordinate coincides with \(\xi(t)\), which dynamically tracks the spatial location of the shock formation, and \(\tilde{e}_{1}\) aligns with \(n(t)\), direction of the shock.
The functions should also be rewritten in the new coordinate:
\[\begin{cases}\tilde{u}(\tilde{x},t)=R(t)^{T}u(\mathrm{x},t)\\ \tilde{\rho}(\tilde{x},t)=\rho(\mathrm{x},t)\\ \tilde{\sigma}(\tilde{x},t)=\sigma(\mathrm{x},t)\\ \tilde{\zeta}(\tilde{x},t)=\zeta(\mathrm{x},t).\end{cases} \tag{2.6}\]
Then \((\tilde{u},\tilde{\sigma})\) satisfies
\[\begin{cases}\frac{1+\alpha}{2}\partial_{t}\tilde{\sigma}+(\tilde{u}+\tilde{v}) \cdot\nabla_{\tilde{x}}\tilde{\sigma}+\alpha\tilde{\sigma}\nabla_{\tilde{x}} \cdot\tilde{u}=0\\ \frac{1+\alpha}{2}\partial_{t}\tilde{u}-\frac{1+\alpha}{2}Q\tilde{u}+\left[( \tilde{u}+\tilde{v})\cdot\nabla_{\tilde{x}}\right]\tilde{u}+\alpha\tilde{ \sigma}\nabla_{\tilde{x}}\tilde{\sigma}=0,\end{cases} \tag{2.7}\]
where \(Q(t)=\frac{dR(t)^{T}}{dt}R(t)=\dot{R}(t)^{T}R(t)\), and \(\tilde{v}(\tilde{x},t)=\frac{1+\alpha}{2}(Q\tilde{x}-R^{T}\dot{\xi})\).
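Since \(R(t)\in SO(2)\), differentiating \(R(t)^{T}R(t)=\mathrm{Id}\) shows that \(Q(t)\) is antisymmetric; explicitly,

\[Q(t)=\left(n_{1}\dot{n}_{2}-n_{2}\dot{n}_{1}\right)\begin{bmatrix}0&1\\ -1&0\end{bmatrix}.\]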
The equation of specific vorticity is transformed into
\[\frac{1+\alpha}{2}\partial_{t}\tilde{\zeta}+(\tilde{u}+\tilde{v})\cdot\nabla_{ \tilde{x}}\tilde{\zeta}=0. \tag{2.8}\]
#### 2.1.2. Tracking the curvature of shock front
In order to track the curvature of the shock, we introduce a time-dependent scalar function \(\tilde{f}(\tilde{x}_{1},\tilde{x}_{2},t)\).
We denote \(\phi(t)\in\mathbb{R}\) as the "curvature" of the "wavefront of the shock formation" at the origin, and we assume that \(\tilde{f}\) satisfies \(\partial_{\tilde{x}_{2}}^{2}\tilde{f}(0,0,t)=\phi(t)\). In particular, we construct \(\tilde{f}\) as follows. Let \(\theta\in C_{c}^{\infty}(-\frac{5}{4},\frac{5}{4})\) be a bump function such that \(\theta(\tilde{x}_{2})\equiv 1\) when \(|\tilde{x}_{2}|\leq 1\), then we define
\[\tilde{f}(\tilde{x}_{1},\tilde{x}_{2},t)=\theta(\varepsilon^{-\frac{1}{2}} \tilde{x}_{1})\int_{0}^{\tilde{x}_{2}}\phi(t)\tilde{x}_{2}^{\prime}\theta( \varepsilon^{-\frac{1}{6}}\tilde{x}_{2}^{\prime})d\tilde{x}_{2}^{\prime}, \tag{2.9}\]
where \(\varepsilon\) is a small constant to be specified. Note that \(\tilde{f}(\tilde{x}_{1},\tilde{x}_{2},t)=\frac{1}{2}\phi\tilde{x}_{2}^{2}\) when \(|\tilde{x}|\) is small. This guarantees that, in the forcing terms of \(W,Z,A\) (to be defined in (2.24)), those related to the coordinate transformation vanish when \(y\) is far from the origin, while not affecting the computation near the origin.
Now we introduce the coordinate transformation that adapted to the shock front:
\[\begin{cases}x_{1}=\tilde{x}_{1}-\tilde{f}(\tilde{x}_{1},\tilde{x}_{2},t)\\ x_{2}=\tilde{x}_{2}.\end{cases} \tag{2.10}\]
Let \(f(x_{1},x_{2},t):=\tilde{f}(\tilde{x}_{1},\tilde{x}_{2},t)\), then we have
\[\begin{cases}\tilde{x}_{1}=x_{1}+f(x_{1},x_{2},t)\\ \tilde{x}_{2}=x_{2}.\end{cases} \tag{2.11}\]
We define
\[J(\tilde{x}_{1},\tilde{x}_{2},t)=|\nabla_{\tilde{x}}x_{1}|=\sqrt{(1-\tilde{f }_{\tilde{x}_{1}})^{2}+\tilde{f}_{\tilde{x}_{2}}^{2}}=\frac{\sqrt{1+f_{x_{2}}^ {2}}}{1+f_{x_{1}}}, \tag{2.12}\]
\[N=J^{-1}\nabla_{\tilde{x}}x_{1}=\frac{(1-\tilde{f}_{\tilde{x}_{1}},-\tilde{f} _{\tilde{x}_{2}})}{\sqrt{(1-\tilde{f}_{\tilde{x}_{1}})^{2}+\tilde{f}_{\tilde{ x}_{2}}^{2}}}=\frac{1}{\sqrt{1+f_{x_{2}}^{2}}}(1,-f_{x_{2}}), \tag{2.13}\]
\[T=N^{\perp}=\frac{(\tilde{f}_{\tilde{x}_{2}},1-\tilde{f}_{\tilde{x}_{1}})}{ \sqrt{(1-\tilde{f}_{\tilde{x}_{1}})^{2}+\tilde{f}_{\tilde{x}_{2}}^{2}}}=\frac {1}{\sqrt{1+f_{x_{2}}^{2}}}(f_{x_{2}},1). \tag{2.14}\]
Note that \(\{N,T\}\) forms an orthonormal basis.
\(J,N,T\) can also be viewed as functions of \((x_{1},x_{2},t)\) and we overload their names for the sake of convenience. One can verify that
\[\operatorname{supp}_{x}(N-\tilde{e}_{1},T-\tilde{e}_{2})\subset\left\{|x_{1}| \leq\frac{3}{2}\varepsilon^{\frac{1}{2}},|x_{2}|\leq\frac{3}{2}\varepsilon^{ \frac{1}{6}}\right\}. \tag{2.15}\]
Now the functions are redefined as
\[\begin{cases}\hat{u}(x,t)=\tilde{u}(\tilde{x},t)\\ \hat{\rho}(x,t)=\tilde{\rho}(\tilde{x},t)\\ \hat{\sigma}(x,t)=\tilde{\sigma}(\tilde{x},t)\\ \hat{\zeta}(x,t)=\tilde{\zeta}(\tilde{x},t)\\ v(x,t)=\tilde{v}(\tilde{x},t),\end{cases}\]
and the system can be written as
\[\begin{cases}\partial_{t}\hat{u}-Q\hat{u}+\left[-\frac{\partial_{t}f}{1+f_{x_{ 1}}}+2\beta_{1}(\hat{u}+v)\cdot JN\right]\partial_{x_{1}}\hat{u}+2\beta_{1}( \hat{u}_{2}+v_{2})\partial_{x_{2}}\hat{u}=-2\beta_{3}JN\hat{\sigma}\partial_{x _{1}}\hat{\sigma}-2\beta_{3}\hat{\sigma}\partial_{x_{2}}\hat{\sigma}\tilde{e}_ {2}\\ \partial_{t}\hat{\sigma}+\left[-\frac{\partial_{t}f}{1+f_{x_{1}}}+2\beta_{1}( \hat{u}+v)\cdot JN\right]\partial_{x_{1}}\hat{\sigma}+2\beta_{1}(\hat{u}_{2}+ v_{2})\partial_{x_{2}}\hat{\sigma}=-2\beta_{3}\hat{\sigma}JN\cdot\partial_{x _{1}}\hat{u}-2\beta_{3}\hat{\sigma}\partial_{x_{2}}\hat{u}_{2},\end{cases} \tag{2.16}\]
where
\[\beta_{1}=\frac{1}{1+\alpha},\ \beta_{2}=\frac{1-\alpha}{1+\alpha},\ \beta_{3}=\frac{ \alpha}{1+\alpha}. \tag{2.17}\]
We can also deduce the equation governing the evolution of \(\hat{\zeta}\):
\[\partial_{t}\hat{\zeta}+\left[-\frac{\partial_{t}f}{1+f_{x_{1}}}+2\beta_{1}( \hat{u}+v)\cdot JN\right]\partial_{x_{1}}\hat{\zeta}+2\beta_{1}(\hat{u}_{2}+ v_{2})\partial_{x_{2}}\hat{\zeta}=0. \tag{2.18}\]
#### 2.1.3. Riemann variables
We define the Riemann variables by
\[\begin{cases}w(x,t)=\hat{u}(x,t)\cdot N+\hat{\sigma}(x,t)\\ z(x,t)=\hat{u}(x,t)\cdot N-\hat{\sigma}(x,t)\\ a(x,t)=\hat{u}(x,t)\cdot T.\end{cases} \tag{2.19}\]
Then the system of \((\hat{u},\hat{\sigma})\) can be rewritten in terms of \((w,z,a)\) as
\[\begin{split}\partial_{t}& w+\left(-\frac{\partial_{t}f}{1+f_{x_{1}}}+2\beta_{1}v\cdot JN+Jw+\beta_{2}Jz \right)\partial_{x_{1}}w+(2\beta_{1}v_{2}+N_{2}w+\beta_{2}N_{2}z+2\beta_{1}aT _{2})\,\partial_{x_{2}}w\\ =&-2\beta_{3}\hat{\sigma}\partial_{x_{2}}aT_{2}+aT \cdot(\partial_{t})_{x}N+aQ_{ij}T_{j}N_{i}+2\beta_{1}(\hat{u}\cdot NN_{2}+aT _{2}+v_{2})aT\cdot\partial_{x_{2}}N\\ &-2\beta_{3}\sigma(a\partial_{x_{2}}T_{2}+\hat{u}\cdot N\partial_{x _{2}}N_{2})-\left(-\frac{\partial_{t}f}{1+f_{x_{1}}}+2\beta_{1}v\cdot JN+2 \beta_{1}J\hat{u}\cdot N\right)a\partial_{x_{1}}T\cdot N\\ \partial_{t}& z+\left(-\frac{\partial_{t}f}{1+f_{x_{1}}}+2\beta_{1}v \cdot JN+\beta_{2}Jw+Jz\right)\partial_{x_{1}}z+(2\beta_{1}v_{2}+\beta_{2}N _{2}w+N_{2}z+2\beta_{1}aT_{2})\,\partial_{x_{2}}w\\ =& 2\beta_{3}\hat{\sigma}\partial_{x_{2}}aT_{2}+aT \cdot(\partial_{t})_{x}N+aQ_{ij}T_{j}N_{i}+2\beta_{1}(\hat{u}\cdot NN_{2}+aT _{2}+v_{2})aT\cdot\partial_{x_{2}}N\\ &+2\beta_{3}\hat{\sigma}(a\partial_{x_{2}}T_{2}+\hat{u}\cdot N \partial_{x_{2}}N_{2})-\left(-\frac{\partial_{t}f}{1+f_{x_{1}}}+2\beta_{1}v \cdot JN+2\beta_{1}J\hat{u}\cdot N\right)a\partial_{x_{1}}T\cdot N\\ \partial_{t}& a+\left(-\frac{\partial_{t}f}{1+f_{x_{1}}}+2\beta_{1}v \cdot JN+\beta_{1}Jw+\beta_{1}Jz\right)\partial_{x_{1}}a+2\beta_{1}\left(v_{2} +\frac{w+z}{2}N_{2}+aT_{2}\right)\partial_{x_{2}}a\\ =&-2\beta_{3}\hat{\sigma}T_{2}\partial_{x_{2}}\hat{\sigma}+ \hat{u}\cdot TN\cdot(\partial_{t})_{x}T+\hat{u}\cdot NQ_{ij}N_{j}T_{i}\\ &+2\beta_{1}(\hat{u}\cdot NN_{2}+aT_{2}+v_{2})\hat{u}\cdot NN \cdot\partial_{x_{2}}T-\left(-\frac{\partial_{t}f}{1+f_{x_{1}}}+2\beta_{1}v \cdot JN+2\beta_{1}J\hat{u}\cdot N\right)\hat{u}\cdot N\partial_{x_{1}}N\cdot T.\end{split} \tag{2.22}\]
#### 2.1.4. Self-similar transformation
We introduce self-similar variables as follows
\[\begin{cases}s(t)=-\log(\tau(t)-t)\\ y_{1}=\dfrac{x_{1}}{(\tau-t)^{3/2}}=x_{1}e^{\frac{3}{2}s}\\ y_{2}=\dfrac{x_{2}}{(\tau-t)^{1/2}}=x_{2}e^{\frac{s}{2}},\end{cases} \tag{2.23}\]
where \(\tau(t)\) is a parameter to be determined.
Now the original time \(t\) is transformed into the self-similar time \(s\), and the space variable \(x\) is transformed into the self-similar space variable \(y\). At each fixed time \(t\), \(y\) is a dilation of \(x\). In the \(y\) coordinate, we can closely observe the behavior of the solution around the shock location.
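A direct computation from (2.23) gives the operator identities

\[\partial_{x_{1}}=e^{\frac{3}{2}s}\partial_{y_{1}},\qquad\partial_{x_{2}}=e^{\frac{s}{2}}\partial_{y_{2}},\qquad\partial_{t}\big|_{x}=(1-\dot{\tau})e^{s}\left(\partial_{s}+\frac{3}{2}y_{1}\partial_{y_{1}}+\frac{1}{2}y_{2}\partial_{y_{2}}\right),\]

since \(\frac{ds}{dt}=(1-\dot{\tau})e^{s}\); this is the source of the dilation terms \(\frac{3}{2}y_{1}\partial_{1}\) and \(\frac{1}{2}y_{2}\partial_{2}\) appearing below.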
Now we assume that
\[\begin{cases}w(x,t)=e^{-\frac{s}{2}}W(y,s)+\kappa(t)\\ z(x,t)=Z(y,s)\\ a(x,t)=A(y,s),\end{cases} \tag{2.24}\]
where \(\kappa\) is also a modulation parameter to be determined.
In the self-similar variables, the system becomes
\[\begin{cases}\left(\partial_{s}-\dfrac{1}{2}\right)W+\left(\dfrac{3}{2}y_{1}+ g_{W}\right)\partial_{1}W+\left(\dfrac{1}{2}y_{2}+h_{W}\right)\partial_{2}W=F_{W} \\ \partial_{s}Z+\left(\dfrac{3}{2}y_{1}+g_{Z}\right)\partial_{1}Z+\left(\dfrac{ 1}{2}y_{2}+h_{Z}\right)\partial_{2}Z=F_{Z}\\ \partial_{s}A+\left(\dfrac{3}{2}y_{1}+g_{A}\right)\partial_{1}A+\left(\dfrac{ 1}{2}y_{2}+h_{A}\right)\partial_{2}A=F_{A}.\end{cases} \tag{2.25}\]
Here and throughout the paper we use the notation \(\partial_{j}=\partial_{y_{j}}\), and \(\beta_{\tau}:=\frac{1}{1-\dot{\tau}}\). The transport terms and the forcing terms are given by
\[\begin{cases}g_{W}=\beta_{\tau}JW+\beta_{\tau}e^{\frac{s}{2}}\left[-\dfrac{\partial_{t}f}{1+f_{x_{1}}}+J\left(\kappa+\beta_{2}Z+2\beta_{1}V\cdot N\right)\right]=\beta_{\tau}JW+G_{W}\\ g_{Z}=\beta_{2}\beta_{\tau}JW+\beta_{\tau}e^{\frac{s}{2}}\left[-\dfrac{\partial_{t}f}{1+f_{x_{1}}}+J\left(\beta_{2}\kappa+Z+2\beta_{1}V\cdot N\right)\right]=\beta_{2}\beta_{\tau}JW+G_{Z}\\ g_{A}=\beta_{1}\beta_{\tau}JW+\beta_{\tau}e^{\frac{s}{2}}\left[-\dfrac{\partial_{t}f}{1+f_{x_{1}}}+J\left(\beta_{1}\kappa+\beta_{1}Z+2\beta_{1}V\cdot N\right)\right]=\beta_{1}\beta_{\tau}JW+G_{A},\end{cases} \tag{2.26}\]
\[\begin{cases}h_{W}=\beta_{\tau}e^{-s}N_{2}W+\beta_{\tau}e^{-\frac{s}{2}}\left(2\beta_{1}V_{2}+N_{2}\kappa+\beta_{2}N_{2}Z+2\beta_{1}AT_{2}\right)\\ h_{Z}=\beta_{2}\beta_{\tau}e^{-s}N_{2}W+\beta_{\tau}e^{-\frac{s}{2}}\left(2\beta_{1}V_{2}+\beta_{2}N_{2}\kappa+N_{2}Z+2\beta_{1}AT_{2}\right)\\ h_{A}=\beta_{1}\beta_{\tau}e^{-s}N_{2}W+\beta_{\tau}e^{-\frac{s}{2}}\left(2\beta_{1}V_{2}+\beta_{1}N_{2}\kappa+\beta_{1}N_{2}Z+2\beta_{1}AT_{2}\right),\end{cases} \tag{2.27}\]
and
\[\begin{cases}F_{W}=&-2\beta_{3}\beta_{\tau}S\partial_{2}AT_{2}+\beta_{\tau}e^{-\frac{s}{2}}AT\cdot(\partial_{t})_{x}N+\beta_{\tau}e^{-\frac{s}{2}}Q_{ij}AT_{j}N_{i}\\ &+2\beta_{1}\beta_{\tau}\left(V_{2}+U\cdot NN_{2}+AT_{2}\right)AT\cdot\partial_{2}N-2\beta_{3}\beta_{\tau}S(U\cdot N\partial_{2}N_{2}+A\partial_{2}T_{2})\\ &-\beta_{\tau}e^{s}\left(-\frac{\partial_{t}f}{1+f_{x_{1}}}+2\beta_{1}V\cdot JN+2\beta_{1}JU\cdot N\right)A\partial_{1}T\cdot N-\beta_{\tau}e^{-\frac{s}{2}}\dot{\kappa}\\ F_{Z}=&2\beta_{3}\beta_{\tau}e^{-\frac{s}{2}}S\partial_{2}AT_{2}+\beta_{\tau}e^{-s}AT\cdot(\partial_{t})_{x}N+\beta_{\tau}e^{-s}Q_{ij}AT_{j}N_{i}\\ &+2\beta_{1}\beta_{\tau}e^{-\frac{s}{2}}(V_{2}+U\cdot NN_{2}+AT_{2})AT\cdot\partial_{2}N+2\beta_{3}\beta_{\tau}e^{-\frac{s}{2}}(A\partial_{2}T_{2}+U\cdot N\partial_{2}N_{2})\\ &-\beta_{\tau}e^{\frac{s}{2}}\left(-\frac{\partial_{t}f}{1+f_{x_{1}}}+2\beta_{1}V\cdot JN+2\beta_{1}JU\cdot N\right)A\partial_{1}T\cdot N\\ F_{A}=&-2\beta_{3}\beta_{\tau}e^{-\frac{s}{2}}ST_{2}\partial_{2}S+\beta_{\tau}e^{-s}U\cdot NN\cdot(\partial_{t})_{x}T+\beta_{\tau}e^{-s}Q_{ij}(U\cdot NN_{j}+AT_{j})T_{i}\\ &+2\beta_{1}\beta_{\tau}e^{-\frac{s}{2}}(V_{2}+U\cdot NN_{2}+AT_{2})U\cdot NN\cdot\partial_{2}T\\ &-\beta_{\tau}e^{\frac{s}{2}}\left(-\frac{\partial_{t}f}{1+f_{x_{1}}}+2\beta_{1}V\cdot JN+2\beta_{1}JU\cdot N\right)U\cdot N\partial_{1}N\cdot T,\end{cases} \tag{2.28}\]
where \(U\), \(V\), \(S\) are the self-similar versions of \(\hat{u}\), \(v\), \(\hat{\sigma}\), for example \(S(y,s)=\hat{\sigma}(x,t)\).
If we write the transport terms as
\[\begin{cases}\mathcal{V}_{W}=\left(\frac{3}{2}y_{1}+g_{W},\frac{1}{2}y_{2}+h_{ W}\right)\\ \mathcal{V}_{Z}=\left(\frac{3}{2}y_{1}+g_{Z},\frac{1}{2}y_{2}+h_{Z}\right)\\ \mathcal{V}_{A}=\left(\frac{3}{2}y_{1}+g_{A},\frac{1}{2}y_{2}+h_{A}\right), \end{cases} \tag{2.29}\]
then the equation of \((W,Z,A)\) can be written in a compact form:
\[\begin{cases}\partial_{s}W-\frac{1}{2}W+\mathcal{V}_{W}\cdot\nabla W=F_{W}\\ \partial_{s}Z+\mathcal{V}_{Z}\cdot\nabla Z=F_{Z}\\ \partial_{s}A+\mathcal{V}_{A}\cdot\nabla A=F_{A}.\end{cases} \tag{2.30}\]
We also deduce the equations of \((U,S)\):
\[\begin{cases}\partial_{s}U_{i}-\beta_{\tau}e^{-s}Q_{ij}U_{j}+\mathcal{V}_{A}\cdot\nabla U=-2\beta_{3}\beta_{\tau}e^{\frac{s}{2}}S\partial_{1}SJN_{i}-2\beta_{3}\beta_{\tau}e^{-\frac{s}{2}}S\partial_{2}S\delta_{i2}\\ \partial_{s}S+\mathcal{V}_{A}\cdot\nabla S=-2\beta_{3}\beta_{\tau}e^{\frac{s}{2}}S\partial_{1}U\cdot JN-2\beta_{3}\beta_{\tau}e^{-\frac{s}{2}}S\partial_{2}U_{2}.\end{cases} \tag{2.31}\]
We can see that \((U,S)\) are transported in the same way as \(A\). The transport terms \(g_{A}\), \(h_{A}\) in the equation of \(A\) can also be expressed in terms of \(U\), \(S\):
\[\begin{cases}g_{A}=\beta_{\tau}e^{\frac{s}{2}}\left[2\beta_{1}(U+V)\cdot JN-\frac{\partial_{t}f}{1+f_{x_{1}}}\right]\\ h_{A}=2\beta_{1}\beta_{\tau}e^{-\frac{s}{2}}(U_{2}+V_{2}).\end{cases} \tag{2.32}\]
Here we record the relation between \((U,S)\) and \((W,Z,A)\):
\[\begin{cases}U=\frac{1}{2}\left(e^{-\frac{s}{2}}W+Z+\kappa\right)N+AT\\ S=\frac{1}{2}\left(e^{-\frac{s}{2}}W-Z+\kappa\right),\end{cases} \tag{2.33}\]
and
\[\begin{cases}W=e^{\frac{s}{2}}(U\cdot N+S-\kappa)\\ Z=U\cdot N-S\\ A=U\cdot T.\end{cases} \tag{2.34}\]
Although we introduce the self-similar version of functions like \(V(y,s)\) of \(v(x,t)\), we overload the functions \(f\), \(J\), \(N\), \(T\) as functions of \((y,s)\). For example, in the self-similar coordinates, we view \(N\) as the map \(y\mapsto N(x(y),t(s))\), and \(\partial_{2}N(y)=\partial_{y_{2}}[N(x(y),t(s))]\).
### Self-similar 2D Burgers profile
We first introduce the 1D self-similar Burgers profile
\[W_{1d}(y_{1})=\left(-\frac{y_{1}}{2}+\left(\frac{1}{27}+\frac{y_{1}^{2}}{4} \right)^{\frac{1}{2}}\right)^{\frac{1}{3}}-\left(\frac{y_{1}}{2}+\left(\frac{ 1}{27}+\frac{y_{1}^{2}}{4}\right)^{\frac{1}{2}}\right)^{\frac{1}{3}}, \tag{2.35}\]
which solves the 1D self-similar Burgers equation (cf. [18]):
\[-\frac{1}{2}W_{1d}+\left(\frac{3}{2}y_{1}+W_{1d}\right)\partial_{y_{1}}W_{1d }=0. \tag{2.36}\]
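Indeed, (2.35) is Cardano's formula for the unique real root of \(W_{1d}^{3}+W_{1d}+y_{1}=0\), i.e. \(y_{1}=-W_{1d}-W_{1d}^{3}\). Differentiating this implicit relation gives

\[\frac{3}{2}y_{1}+W_{1d}=-\frac{1}{2}W_{1d}\left(1+3W_{1d}^{2}\right),\qquad W_{1d}^{\prime}=-\frac{1}{1+3W_{1d}^{2}},\]

so that \(-\frac{1}{2}W_{1d}+\left(\frac{3}{2}y_{1}+W_{1d}\right)W_{1d}^{\prime}=-\frac{1}{2}W_{1d}+\frac{1}{2}W_{1d}=0\), which is (2.36).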
Moreover, we introduce
\[\overline{W}(y_{1},y_{2})=\langle y_{2}\rangle W_{1d}(\langle y_{2}\rangle^{- 3}y_{1}), \tag{2.37}\]
where \(\langle y_{2}\rangle=\sqrt{1+y_{2}^{2}}\). One can verify that \(\overline{W}\) is a solution to the 2D self-similar Burgers equation:
\[-\frac{1}{2}\overline{W}+\left(\frac{3}{2}y_{1}+\overline{W}\right)\partial_ {y_{1}}\overline{W}+\frac{1}{2}y_{2}\partial_{y_{2}}\overline{W}=0. \tag{2.38}\]
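This can be checked directly: writing \(\xi=\langle y_{2}\rangle^{-3}y_{1}\), we have \(\partial_{1}\overline{W}=\langle y_{2}\rangle^{-2}W_{1d}^{\prime}(\xi)\) and \(\partial_{2}\overline{W}=\frac{y_{2}}{\langle y_{2}\rangle}W_{1d}(\xi)-3y_{1}y_{2}\langle y_{2}\rangle^{-4}W_{1d}^{\prime}(\xi)\), so that

\[-\frac{1}{2}\overline{W}+\left(\frac{3}{2}y_{1}+\overline{W}\right)\partial_{1}\overline{W}+\frac{1}{2}y_{2}\partial_{2}\overline{W}=\frac{1}{\langle y_{2}\rangle}\left[-\frac{1}{2}W_{1d}+\left(\frac{3}{2}\xi+W_{1d}\right)W_{1d}^{\prime}\right](\xi)=0\]

by (2.36).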
#### 2.2.1. Properties of \(\overline{W}\)
It can be checked via the explicit formula of \(W_{1d}\) that
\[|W_{1d}(y_{1})|\leq\min\left(|y_{1}|,\frac{|y_{1}|}{\frac{1}{3}+|y_{1}|^{\frac {2}{3}}}\right)\leq\min\left(|y_{1}|,|y_{1}|^{\frac{1}{3}}\right), \tag{2.39}\]
\[|W_{1d}^{\prime}(y_{1})|\leq\langle y_{1}\rangle^{-\frac{2}{3}},\ |W_{1d}^{\prime\prime}(y_{1})|\leq \langle y_{1}\rangle^{-\frac{2}{3}}, \tag{2.40}\]
\[|W_{1d}(y_{1})W_{1d}^{\prime}(y_{1})|\leq\frac{1}{3}\langle y_{1}\rangle^{- \frac{1}{3}},\ |(W_{1d}W_{1d}^{\prime})^{\prime}(y_{1})|\leq\min\left(\langle y_{1} \rangle^{-\frac{4}{3}},\frac{1}{7}|y_{1}|^{-1}\langle y_{1}\rangle^{-\frac{1} {3}}\right). \tag{2.41}\]
Define \(\eta(y)=1+y_{1}^{2}+y_{2}^{6}\), \(\tilde{\eta}(y)=1+|y|^{2}+y_{2}^{6}\), then the above inequalities imply that
\[\left|\overline{W}\right|\leq(1+y_{1}^{2})^{\frac{1}{6}}\leq\eta^{\frac{1}{6 }}, \tag{2.42}\]
\[|\partial_{1}\overline{W}|\leq\tilde{\eta}^{-\frac{1}{3}},\ |\partial_{2} \overline{W}|\leq\frac{2}{3}, \tag{2.43}\]
\[|\partial_{11}\overline{W}|\leq\tilde{\eta}^{-\frac{5}{6}},\ |\partial_{12}\overline{W}|\leq 2\eta^{-\frac{1}{2}},\ | \partial_{22}\overline{W}|\leq\frac{6}{7}\eta^{-\frac{1}{6}}. \tag{2.44}\]
At the origin we can check by the expression of \(\overline{W}\) that
\[\overline{W}(0)=0,\quad\nabla\overline{W}(0)=\begin{pmatrix}-1\\ 0\end{pmatrix},\quad\nabla^{2}\overline{W}(0)=\begin{pmatrix}0&0\\ 0&0\end{pmatrix},\quad\partial_{1}\nabla^{2}\overline{W}(0)=\begin{pmatrix}6&0 \\ 0&2\end{pmatrix}. \tag{2.45}\]
### Evolution of \(\tilde{W}\) and higher order derivatives of the unknowns
If we define \(\widetilde{W}=W-\overline{W}\), then \(\widetilde{W}\) satisfies
\[\left(\partial_{s}-\frac{1}{2}+\beta_{\tau}J\partial_{1}\overline{W}\right) \widetilde{W}+\mathcal{V}_{W}\cdot\nabla\widetilde{W}=\widetilde{F}_{W}, \tag{2.46}\]
where
\[\widetilde{F}_{W}=F_{W}+\left[(1-\beta_{\tau}J)\overline{W}-G_{W}\right] \partial_{1}\overline{W}-h_{W}\partial_{2}\overline{W}. \tag{2.47}\]
For a multi-index \(\gamma=(\gamma_{1},\gamma_{2})\) satisfying \(|\gamma|\geq 1\), we have the evolution equation for \((\partial^{\gamma}W,\partial^{\gamma}Z,\partial^{\gamma}A)\):
\[\left\{\begin{aligned} &\left(\partial_{s}+\frac{3\gamma_{1}+\gamma_{2}-1}{2}+\beta_{\tau}\left(1+\gamma_{1}\mathbbm{1}_{\gamma_{1}\geq 2}\right)J\partial_{1}W\right)\partial^{\gamma}W+\mathcal{V}_{W}\cdot\nabla\partial^{\gamma}W=F_{W}^{(\gamma)}\\ &\left(\partial_{s}+\frac{3\gamma_{1}+\gamma_{2}}{2}+\beta_{2}\beta_{\tau}\gamma_{1}J\partial_{1}W\right)\partial^{\gamma}Z+\mathcal{V}_{Z}\cdot\nabla\partial^{\gamma}Z=F_{Z}^{(\gamma)}\\ &\left(\partial_{s}+\frac{3\gamma_{1}+\gamma_{2}}{2}+\beta_{1}\beta_{\tau}\gamma_{1}J\partial_{1}W\right)\partial^{\gamma}A+\mathcal{V}_{A}\cdot\nabla\partial^{\gamma}A=F_{A}^{(\gamma)},\end{aligned}\right. \tag{2.48}\]
where the forcing terms are
\[\begin{split} F_{W}^{(\gamma)}=&\partial^{\gamma}F_ {W}-\beta_{\tau}\partial_{1}W[\partial^{\gamma},J]W-\beta_{\tau}\mathbbm{1}_{| \gamma|\geq 2}\sum_{\begin{subarray}{c}|\beta|=|\gamma|-1\\ \beta_{1}=\gamma_{1}\end{subarray}}\binom{\gamma}{\beta}\partial^{\gamma- \beta}(JW)\partial_{1}\partial^{\beta}W\\ &-\beta_{\tau}\mathbbm{1}_{|\gamma|\geq 3}\sum_{ \begin{subarray}{c}1\leq|\beta|\leq|\gamma|-1\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}( JW)\partial_{1}\partial^{\beta}W-\sum_{0\leq\beta<\gamma}\binom{\gamma}{\beta} \left(\partial^{\gamma-\beta}G_{W}\partial_{1}\partial^{\beta}W+\partial^{ \gamma-\beta}h_{W}\partial_{2}\partial^{\beta}W\right),\end{split} \tag{2.49}\]
\[\begin{split} F_{Z}^{(\gamma)}=&\partial^{\gamma}F_ {Z}-\beta_{2}\beta_{\tau}\sum_{\begin{subarray}{c}|\beta|=|\gamma|-1\\ \beta_{1}=\gamma_{1}\end{subarray}}\binom{\gamma}{\beta}\partial^{\gamma- \beta}(JW)\partial_{1}\partial^{\beta}Z\\ &-\beta_{2}\beta_{\tau}\mathbbm{1}_{|\gamma|\geq 2}\sum_{ \begin{subarray}{c}0\leq|\beta|\leq|\gamma|-2\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}( JW)\partial_{1}\partial^{\beta}Z-\sum_{0\leq\beta<\gamma}\binom{\gamma}{\beta} \left(\partial^{\gamma-\beta}G_{Z}\partial_{1}\partial^{\beta}Z+\partial^{ \gamma-\beta}h_{Z}\partial_{2}\partial^{\beta}Z\right),\end{split} \tag{2.50}\]
\[\begin{split} F_{A}^{(\gamma)}=&\partial^{\gamma}F_{A}-\beta_{1}\beta_{\tau}\sum_{\begin{subarray}{c}|\beta|=|\gamma|-1\\ \beta_{1}=\gamma_{1}\end{subarray}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}(JW)\partial_{1}\partial^{\beta}A\\ &-\beta_{1}\beta_{\tau}\mathbbm{1}_{|\gamma|\geq 2}\sum_{\begin{subarray}{c}0\leq|\beta|\leq|\gamma|-2\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}(JW)\partial_{1}\partial^{\beta}A-\sum_{0\leq\beta<\gamma}\binom{\gamma}{\beta}\left(\partial^{\gamma-\beta}G_{A}\partial_{1}\partial^{\beta}A+\partial^{\gamma-\beta}h_{A}\partial_{2}\partial^{\beta}A\right).\end{split} \tag{2.51}\]
Similarly we can deduce the equation of \(\partial^{\gamma}\widetilde{W}\):
\[\left[\partial_{s}+\frac{3\gamma_{1}+\gamma_{2}-1}{2}+\beta_{\tau}J(\partial_{1 }\overline{W}+\gamma_{1}\partial_{1}W)\right]\partial^{\gamma}\widetilde{W}+ \mathcal{V}_{W}\cdot\nabla\partial^{\gamma}\widetilde{W}=\widetilde{F}_{W}^{( \gamma)}, \tag{2.52}\]
where
\[\begin{split}\widetilde{F}_{W}^{(\gamma)}=&\partial^{ \gamma}\widetilde{F}_{W}-\sum_{0\leq\beta<\gamma}\begin{pmatrix}\gamma\\ \beta\end{pmatrix}\left[\partial^{\gamma-\beta}G_{W}\partial_{1}\partial^{ \beta}\widetilde{W}+\partial^{\gamma-\beta}h_{W}\partial_{2}\partial^{\beta} \widetilde{W}+\beta_{\tau}\partial^{\gamma-\beta}(J\partial_{1}\overline{W}) \partial^{\beta}\widetilde{W}\right]\\ &-\beta_{\tau}\gamma_{2}\partial_{2}(JW)\partial_{1}^{\gamma_{1}+1} \partial_{2}^{\gamma_{2}-1}\widetilde{W}-\beta_{\tau}1_{|\gamma|\geq 2}\sum_{ \begin{subarray}{c}0\leq|\beta|\leq|\gamma|-2\\ \beta\leq\gamma\end{subarray}}\begin{pmatrix}\gamma\\ \beta\end{pmatrix}\partial^{\gamma-\beta}(JW)\partial_{1}\partial^{\beta} \widetilde{W}.\end{split} \tag{2.53}\]
## 3. Main result
In this section, we state the initial condition and the main shock formation result of the 2D compressible Euler equations. The proof of the main theorem will be given in section 14.
### Initial data in physical variables
We assume that the initial time is \(t=-\varepsilon\) with \(\varepsilon\) to be determined. For modulation variables, we assume that
\[\kappa(-\varepsilon)=\kappa_{0},\ \ \xi(-\varepsilon)=0,\ \ n_{2}(-\varepsilon)=0,\ \ \tau(- \varepsilon)=0,\ \ \ \phi(-\varepsilon)=\phi_{0}=0, \tag{3.1}\]
where
\[\kappa_{0}\geq\frac{3}{1-\max(\beta_{1},\beta_{2})}. \tag{3.2}\]
Since \(n_{2}(-\varepsilon)=0\) and \(\xi(-\varepsilon)=0\), the \(\mathrm{x}\)-coordinates and the \(\tilde{x}\)-coordinates coincide at \(t=-\varepsilon\), and
\[\begin{cases}x_{1}=\tilde{f}(\mathrm{x}_{1},\mathrm{x}_{2},-\varepsilon)\\ x_{2}=\mathrm{x}_{2}.\end{cases} \tag{3.3}\]
Now we prescribe the initial data:
\[u_{0}(\mathrm{x}):=u(\mathrm{x},-\varepsilon),\quad\rho_{0}(\mathrm{x}):=\rho (\mathrm{x},-\varepsilon),\quad\sigma_{0}:=\frac{\rho_{0}^{\alpha}}{\alpha}. \tag{3.4}\]
We choose \(u_{0}\) and \(\rho_{0}\) such that the corresponding Riemann variables satisfy the conditions stated in this section. The initial data of the Riemann variables are denoted as
\[\begin{split}\widetilde{w}_{0}(\mathrm{x}):&=u_{0}(\mathrm{x})\cdot N(\mathrm{x},-\varepsilon)+\sigma_{0}(\mathrm{x})=:w_{0}(x)\\ \widetilde{z}_{0}(\mathrm{x}):&=u_{0}(\mathrm{x})\cdot N(\mathrm{x},-\varepsilon)-\sigma_{0}(\mathrm{x})=:z_{0}(x)\\ \widetilde{a}_{0}(\mathrm{x}):&=u_{0}(\mathrm{x})\cdot T(\mathrm{x},-\varepsilon)=:a_{0}(x).\end{split} \tag{3.5}\]
First we assume that
\[\mathrm{supp}_{\mathrm{x}}\ (\widetilde{w}_{0}-\kappa_{0},\widetilde{z}_{0}, \widetilde{a}_{0})\subset\mathcal{x}_{0}:=\left\{|\mathrm{x}_{1}|\leq\frac{1} {2}\varepsilon^{\frac{1}{2}},|\mathrm{x}_{2}|\leq\varepsilon^{\frac{1}{6}} \right\}. \tag{3.6}\]
This implies that
\[\mathrm{supp}_{x}\ (w_{0}-\kappa_{0},z_{0},a_{0})\subset\left\{|x_{1}|\leq \varepsilon^{\frac{1}{2}},|x_{2}|\leq\varepsilon^{\frac{1}{6}}\right\}. \tag{3.7}\]
The function \(\widetilde{w}_{0}(\mathrm{x})\) is chosen such that
\[\text{the minimum negative slope of }\widetilde{w}_{0}\ \text{occurs in the x}_{1}\ \text{ direction}, \tag{3.8}\]
\[\partial_{\mathrm{x}_{1}}\widetilde{w}_{0}\ \text{attains its global minimum at x}=0. \tag{3.9}\]
and
\[\nabla_{\mathrm{x}}\partial_{\mathrm{x}_{1}}\widetilde{w}_{0}(0)=0. \tag{3.10}\]
We also assume that
\[\widetilde{w}_{0}(0)=\kappa_{0},\quad\partial_{\mathrm{x}_{1}}\widetilde{w}_{0 }(0)=-\frac{1}{\varepsilon},\quad\partial_{\mathrm{x}_{2}}\widetilde{w}_{0}(0 )=0, \tag{3.11}\]
Define
\[\overline{w}_{\varepsilon}(x):=\varepsilon^{\frac{1}{2}}\overline{W}(\varepsilon^{- \frac{3}{2}}x_{1},\varepsilon^{-\frac{1}{2}}x_{2}), \tag{3.11}\]
and we set
\[\widehat{w}_{0}(\mathrm{x}):=\widetilde{w}_{0}(\mathrm{x})-\overline{w}_{ \varepsilon}(\mathrm{x}_{1}-\tilde{f}(\mathrm{x},t),\mathrm{x}_{2})=w_{0}(x)- \overline{w}_{\varepsilon}(x)=\varepsilon^{\frac{1}{2}}\widetilde{W}(y,-\log \varepsilon)+\kappa_{0}. \tag{3.12}\]
We assume that for \(\mathrm{x}\) such that \(\left|(\varepsilon^{-\frac{3}{2}}\mathrm{x}_{1},\varepsilon^{-\frac{1}{2}} \mathrm{x}_{2})\right|\leq 2\varepsilon^{-\frac{1}{10}}\), the following bounds hold:
\[\begin{split}|\widehat{w}_{0}(\mathrm{x})-\kappa_{0}|&\leq\varepsilon^{\frac{1}{10}}\left(\varepsilon^{3}+\mathrm{x}_{1}^{2}+\mathrm{x}_{2}^{6}\right)^{\frac{1}{6}},\\ |\partial_{\mathrm{x}_{1}}\widehat{w}_{0}(\mathrm{x})|&\leq\varepsilon^{\frac{1}{11}}\left(\varepsilon^{3}+\mathrm{x}_{1}^{2}+\mathrm{x}_{2}^{6}\right)^{-\frac{1}{3}},\\ |\partial_{\mathrm{x}_{2}}\widehat{w}_{0}(\mathrm{x})|&\leq\frac{1}{2}\varepsilon^{\frac{1}{12}}.\end{split} \tag{3.13}\]
For \(\mathrm{x}\) such that \(\left|(\varepsilon^{-\frac{3}{2}}\mathrm{x}_{1},\varepsilon^{-\frac{1}{2}} \mathrm{x}_{2})\right|\leq 1\), we assume that
\[|\partial_{\mathrm{x}}^{\gamma}\widehat{w}_{0}(\mathrm{x})|\overset{|\gamma|=4}{\leq}\frac{1}{2}\varepsilon^{\frac{5}{8}-\frac{1}{2}(\gamma_{1}+\gamma_{2})}. \tag{3.14}\]
At \(\mathrm{x}=0\), we assume that
\[|\partial_{\mathrm{x}}^{\gamma}\widehat{w}_{0}(0)|\overset{|\gamma|=3}{\leq}\frac{1}{2}\varepsilon^{1-\frac{1}{2}(3\gamma_{1}+\gamma_{2})-\frac{4}{2k-7}}. \tag{3.15}\]
For \(\mathrm{x}\in x_{0}\) such that \(\left|(\varepsilon^{-\frac{3}{2}}\mathrm{x}_{1},\varepsilon^{-\frac{1}{2}} \mathrm{x}_{2})\right|\geq\frac{1}{2}\varepsilon^{-\frac{1}{10}}\), we assume that
\[\begin{split}|\widetilde{w}_{0}(\mathrm{x})-\kappa_{0}|& \leq(1+\varepsilon^{\frac{1}{11}})\left(\varepsilon^{4}+\mathrm{x}_{1}^{2}+ \mathrm{x}_{2}^{6}\right)^{\frac{1}{6}},\\ |\partial_{\mathrm{x}_{1}}\widetilde{w}_{0}(\mathrm{x})|& \leq(1+\varepsilon^{\frac{1}{12}})\left(\varepsilon^{4}+\mathrm{x}_{1}^{2}+ \mathrm{x}_{2}^{6}\right)^{-\frac{1}{3}},\\ |\partial_{\mathrm{x}_{2}}\widetilde{w}_{0}(\mathrm{x})|& \leq\frac{2}{3}+\varepsilon^{\frac{1}{13}}.\end{split} \tag{3.16}\]
For all \(\mathrm{x}\in x_{0}\), we assume that
\[\begin{split}|\partial_{\mathrm{x}_{1}}^{2}\widetilde{w}_{0}( \mathrm{x})|&\leq\varepsilon^{-\frac{3}{2}}\left(\varepsilon^{3} +\mathrm{x}_{1}^{2}+\mathrm{x}_{2}^{6}\right)^{-\frac{1}{3}},\\ |\partial_{\mathrm{x}_{1}\mathrm{x}_{2}}\widetilde{w}_{0}(\mathrm{ x})|&\leq\frac{1}{2}\varepsilon^{-\frac{1}{2}}\left(\varepsilon^{3}+ \mathrm{x}_{1}^{2}+\mathrm{x}_{2}^{6}\right)^{-\frac{1}{3}},\\ |\partial_{\mathrm{x}_{2}}^{2}\widetilde{w}_{0}(\mathrm{x})|& \leq\frac{1}{2}\left(\varepsilon^{3}+\mathrm{x}_{1}^{2}+\mathrm{x}_{2}^{6} \right)^{-\frac{1}{6}}.\end{split} \tag{3.17}\]
Also at \(\mathrm{x}=0\) we assume that
\[\left|\partial_{\mathrm{x}_{2}}^{2}\widetilde{w}_{0}(0)\right|\leq 1. \tag{3.18}\]
For the initial data of \(\widetilde{z}_{0}\) and \(\widetilde{a}_{0}\) we assume that
\[\begin{split}|\widetilde{z}_{0}(\mathrm{x})|\leq\varepsilon, \quad|\partial_{\mathrm{x}_{1}}\widetilde{z}_{0}(\mathrm{x})|\leq 1,\quad| \partial_{\mathrm{x}_{2}}\widetilde{z}_{0}(\mathrm{x})|\leq\frac{1}{2} \varepsilon^{\frac{1}{2}},\\ |\partial_{\mathrm{x}_{1}}^{2}\widetilde{z}_{0}(\mathrm{x})|\leq \varepsilon^{-\frac{3}{2}},\quad|\partial_{\mathrm{x}_{1}\mathrm{x}_{2}} \widetilde{z}_{0}(\mathrm{x})|\leq\frac{1}{2}\varepsilon^{-\frac{1}{2}},\quad| \partial_{\mathrm{x}_{2}}^{2}\widetilde{z}_{0}(\mathrm{x})|\leq\frac{1}{2}, \end{split} \tag{3.19}\]
and
\[|\widetilde{a}_{0}(\mathrm{x})|\leq\varepsilon,\quad|\partial_{\mathrm{x}_{1} }\widetilde{a}_{0}(\mathrm{x})|\leq 1,\quad|\partial_{\mathrm{x}_{2}}\widetilde{a}_{0}( \mathrm{x})|\leq\frac{1}{2}\varepsilon^{\frac{1}{2}},\quad|\partial_{ \mathrm{x}_{2}}^{2}\widetilde{a}_{0}(\mathrm{x})|\leq\frac{1}{2}. \tag{3.20}\]
For the initial specific vorticity, we assume that
\[\left\|\frac{\mathrm{curl}\,u_{0}(\mathrm{x})}{\rho_{0}(\mathrm{x})}\right\|_{L^ {\infty}}\leq 1. \tag{3.21}\]
Finally, for the Sobolev norms of the initial data, we assume that for a fixed \(k\) with \(k\geq 18\) and every multi-index \(\gamma\) with \(|\gamma|=k\), the following holds:
\[\varepsilon^{2}\|\partial_{\mathrm{x}}^{\gamma}\widetilde{w}_{0}\|_{L^{2}}^{2}+\|\partial_{\mathrm{x}}^{\gamma}\widetilde{z}_{0}\|_{L^{2}}^{2}+\|\partial_{\mathrm{x}}^{\gamma}\widetilde{a}_{0}\|_{L^{2}}^{2}\leq\frac{1}{2}\varepsilon^{\frac{7}{2}-3\gamma_{1}-\gamma_{2}}. \tag{3.22}\]
**Theorem 3.1** (_Main result in physical variables_).: If
* the initial values of the modulation variables satisfy (3.1)(3.2);
* the initial data \((u_{0},\rho_{0})\) of the Euler equations is smooth and guarantees that the corresponding Riemann variables \((w_{0},z_{0},a_{0})\) satisfy the initial conditions (3.8)-(3.22),
then the corresponding solution \((u,\rho)\) to (1.1) blows up in finite time \(-\varepsilon<T_{*}=O(\varepsilon^{2})<+\infty\). Moreover, we have the following description of the shock:
1. _Blow-up speed_. We have the following inequalities for \((u,\sigma)\): \[\frac{c}{T_{*}-t}\leq\|\nabla_{\!\!x}u(t)\|_{L^{\infty}}\leq\frac{C}{T_{*}-t},\] (3.23) \[\frac{c}{T_{*}-t}\leq\|\nabla_{\!\!x}\sigma(t)\|_{L^{\infty}}\leq\frac{C}{T_{* }-t}.\] (3.24)
2. _Blow-up location_. For arbitrary \(\delta\in(0,1)\), there holds that \[\|\nabla_{\!\!x}u(t)\|_{L^{\infty}(B_{\delta}^{c}(\xi(t)))}+\|\nabla_{\!\!x}\sigma(t)\|_{L^{\infty}(B_{\delta}^{c}(\xi(t)))}\leq C(\delta),\] (3.25) while the gradient is unbounded along \(\xi(t)\): \[|\nabla_{\!\!x}u(\xi(t),t)|\geq\frac{c}{T_{*}-t},\ \ |\nabla_{\!\!x}\sigma(\xi(t),t)|\geq\frac{c}{T_{*}-t}.\] (3.26) Moreover, the limit of \(\xi(t)\) exists: \[\lim_{t\to T_{*}}\xi(t)=\xi_{*}\in\mathbb{R}^{2}.\] (3.27)
3. _Direction of the shock_. The gradient of \((u,\sigma)\) blows up only in one direction: \[|[(R(t)N)\cdot\nabla_{\!\!x}]u(\xi(t),t)|\geq\frac{c}{T_{*}-t},\ \ |(R(t)N)\cdot\nabla_{\!\!x} \sigma(\xi(t),t)|\geq\frac{c}{T_{*}-t};\] (3.28) \[\|[(R(t)T)\cdot\nabla_{\!\!x}]u(t)\|_{L^{\infty}}+\|[(R(t)T)\cdot\nabla_{\!\! x}]\sigma(t)\|_{L^{\infty}}\leq C.\] (3.29) Moreover, we have \(n(t)=R(t)N(0,t)\), and the limit of \(n(t)\) exists: \[\lim_{t\to T_{*}}n(t)=n_{*}\in\mathbb{S}^{1}.\] (3.30)
4. _1/3-Hölder continuity_. The solution has a uniform-in-time \(C^{1/3}\) bound. More precisely, we have that \[(u,\sigma)\in L_{t}^{\infty}([-\varepsilon,T_{*}),C_{\!\!x}^{1/3}).\] (3.31) Proof of the main result will be given in section 14.
### Initial data in self-similar variables
Since \(\tau(-\varepsilon)=0\), we have that the initial self-similar time is \(s=-\log\varepsilon\).
When \(s=-\log\varepsilon\), we have \(y_{1}=x_{1}\varepsilon^{-\frac{3}{2}}\) and \(y_{2}=x_{2}\varepsilon^{-\frac{1}{2}}\); hence from (3.7) the initial data of \(W,Z,A\) are supported in
\[\mathcal{X}_{0}=\{|y_{1}|\leq\varepsilon^{-1},|y_{2}|\leq\varepsilon^{-\frac{ 1}{3}}\}. \tag{3.32}\]
Now we introduce a large constant \(M=M(\alpha,\kappa_{0},k)\) to absorb universal constants; here \(k\) is the order of the energy estimate to be established in section 6. In terms of \(M\) and \(\varepsilon\), we define a small scale \(l\) and a large scale \(L\) by
\[l=(\log M)^{-5}, \tag{3.33a}\] \[L=\varepsilon^{-\frac{1}{10}}. \tag{3.33b}\]
From (3.13)(3.14)(3.15) we know that \(\widetilde{W}(y,-\log\varepsilon)\) satisfies
\[\eta^{-\frac{1}{6}}\big{|}\widetilde{W}(y,-\log\varepsilon) \big{|}\,\mathbb{1}_{|y|\leq L} \leq\varepsilon^{\frac{1}{10}}, \tag{3.34a}\] \[\eta^{\frac{1}{3}}\left|\partial_{1}\widetilde{W}(y,-\log \varepsilon)\right|\mathbb{1}_{|y|\leq L} \leq\varepsilon^{\frac{1}{11}},\] (3.34b) \[\left|\partial_{2}\widetilde{W}(y,-\log\varepsilon)\right| \mathbb{1}_{|y|\leq L} \leq\varepsilon^{\frac{1}{12}},\] (3.34c) \[\left|\partial^{\gamma}\widetilde{W}(y,-\log\varepsilon)\right| \mathbb{1}_{|y|\leq l} \stackrel{{|\gamma|=4}}{{\leq}}\varepsilon^{\frac{1}{8}}\] (3.34d) \[\left|\partial^{\gamma}\widetilde{W}(0,-\log\varepsilon)\right| \stackrel{{|\gamma|=3}}{{\leq}}\varepsilon^{\frac{1}{2}-\frac{1} {k-3}}. \tag{3.34e}\]
For \(W(y,-\log\varepsilon)\), we have from (3.16) that for all \(y\in\mathcal{X}_{0}\cap\{|y|\geq L\}\), the following hold:
\[\eta^{-\frac{1}{6}}\left|W(y,-\log\varepsilon)\right| \leq 1+\varepsilon^{\frac{1}{11}}, \tag{3.35}\] \[\eta^{\frac{1}{3}}\left|\partial_{1}W(y,-\log\varepsilon)\right| \leq 1+\varepsilon^{\frac{1}{12}},\] \[\left|\partial_{2}W(y,-\log\varepsilon)\right| \leq\frac{3}{4}.\]
and from (3.17) we have that for all \(y\in\mathcal{X}_{0}\) the following hold:
\[\eta^{\frac{1}{3}}\left|\partial_{11}W(y,-\log\varepsilon)\right| \leq 1, \tag{3.36}\] \[\eta^{\frac{1}{3}}\left|\partial_{12}W(y,-\log\varepsilon)\right| \leq 1,\] \[\eta^{\frac{1}{6}}\left|\partial_{22}W(y,-\log\varepsilon)\right| \leq 1.\]
From (3.19)(3.20), we have that the initial data of \(Z\) and \(A\) satisfy
\[|\partial^{\gamma}Z(y,-\log\varepsilon)| \leq\begin{cases}\varepsilon^{\frac{3}{2}},&\gamma_{1}>0,\ |\gamma|=2\\ \varepsilon,&\gamma_{1}=0,\ |\gamma|\leq 2,\end{cases} \tag{3.37}\] \[|\partial^{\gamma}A(y,-\log\varepsilon)| \leq\begin{cases}\varepsilon^{\frac{3}{2}},&\gamma=(1,0)\\ \varepsilon,&\gamma_{1}=0,\ |\gamma|\leq 2.\end{cases} \tag{3.38}\]
Furthermore, from (3.21) we know the specific vorticity satisfies
\[\left\|\Omega(\cdot,-\log\varepsilon)\right\|_{L^{\infty}}\leq 1. \tag{3.39}\]
Finally from (3.22) we have
\[\varepsilon\|W(\cdot,-\log\varepsilon)\|_{\dot{H}^{k}}^{2}+\|Z(\cdot,-\log \varepsilon)\|_{\dot{H}^{k}}^{2}+\|A(\cdot,-\log\varepsilon)\|_{\dot{H}^{k}} ^{2}\leq\varepsilon. \tag{3.40}\]
**Theorem 3.2** (Main theorem in self-similar coordinates).: Suppose that \(W(y,-\log\varepsilon)\), \(Z(y,-\log\varepsilon)\), \(A(y,-\log\varepsilon)\in H^{k}(\mathbb{R}^{2})\) with integer \(k\) large enough satisfy (3.32)-(3.40), and that the initial data of the modulation variables \((\kappa,\xi,n_{2},\tau,\phi)\) satisfy (3.1)(3.2). Then there exists a choice of \(\varepsilon\ll 1\) such that the system (2.25) coupled with (7.8)(7.17)(7.9)(7.15) admits a global solution, and the solution \((W,Z,A,\kappa,\phi,\tau,\xi)\) satisfies the bootstrap assumptions (which are stated in the next section) for all time.
## 4. Bootstrap argument
To establish global existence in the self-similar coordinates, we set up a bootstrap argument.
### Bootstrap assumption
We first state the bootstrap assumptions.
1. _Assumptions on modulation variables_. For the modulation variables, we assume that \[\begin{cases}\frac{1}{2}\kappa_{0}\leq\kappa\leq 2\kappa_{0},&|\dot{\kappa}|\leq M \\ |\tau|\leq M\varepsilon^{2},&|\dot{\tau}|\leq Me^{-s}\\ |\xi|\leq M^{\frac{1}{4}}\varepsilon,&|\dot{\xi}|\leq M^{\frac{1}{4}}\\ |n_{2}|\leq M^{2}\varepsilon^{\frac{3}{2}},&|\dot{n}_{2}|\leq M^{2} \varepsilon^{\frac{1}{2}}\\ |\phi|\leq M^{2}\varepsilon,&|\dot{\phi}|\leq M^{2}.\end{cases}\] (B-M)
2. _Assumptions on spatial support_. We define \(\mathcal{X}(s):=\left\{|y_{1}|\leq 2\varepsilon^{\frac{1}{2}}e^{\frac{3}{2}s},|y_{2}|\leq 2\varepsilon^{\frac{1}{6}}e^{\frac{s}{2}}\right\}\), and assume that \[\operatorname{supp}(DW,DZ,DA)\subset\mathcal{X}(s).\] (B-S) We will show in Lemma 5.1 that this assumption, together with (2.15), implies \(\operatorname{supp}(DU,DS)\subset\mathcal{X}(s)\).
3. _Assumptions on \(W\) and \(\widetilde{W}\)_. For \(|\gamma|\leq 2\), we assume that either \(\partial^{\gamma}W\) is close to \(\partial^{\gamma}\overline{W}\), or it behaves like \(\partial^{\gamma}\overline{W}\). More precisely, we assume that \[\begin{cases}|W|\leq(1+\varepsilon^{\frac{1}{20}})\eta^{\frac{1}{6}},&|\partial_{1}W|\leq 2\eta^{-\frac{1}{3}},&|\partial_{2}W|\leq 1\\ |\partial_{11}W|\leq M^{\frac{1}{3}}\eta^{-\frac{1}{3}},&|\partial_{12}W|\leq M^{\frac{2}{3}}\eta^{-\frac{1}{3}},&|\partial_{22}W|\leq M\eta^{-\frac{1}{6}}.\end{cases}\] (B-\(W\)) Noting that \(\operatorname{supp}(DW)\subset\mathcal{X}(s)\) and \(W(0)=\overline{W}(0)=0\), we have
\[|W(y)|\leq\int_{0}^{y_{1}}2\eta^{-\frac{1}{3}}(y_{1}^{\prime},0)dy_{1}^{\prime }+\|\partial_{2}W\|_{L^{\infty}}|y_{2}|\lesssim\varepsilon^{\frac{1}{6}}e^{ \frac{s}{2}}. \tag{4.1}\]
For \(\widetilde{W}\) we assume that
\[\begin{cases}\left|\widetilde{W}\right|\mathbb{1}_{|y|\leq L}\leq\varepsilon^{\frac{1}{14}}\eta^{\frac{1}{6}}\\ \left|\partial_{1}\widetilde{W}\right|\mathbb{1}_{|y|\leq L}\leq\varepsilon^{\frac{1}{14}}\eta^{-\frac{1}{3}}\\ \left|\partial_{2}\widetilde{W}\right|\mathbb{1}_{|y|\leq L}\leq\varepsilon^{\frac{1}{14}},\end{cases}\] (B-\(\widetilde{W}\)-1) where \(L=\varepsilon^{-\frac{1}{10}}\), and
\[\left|\partial^{\gamma}\widetilde{W}\right|\mathbb{1}_{|y|\leq l}\leq\log^{4}M\varepsilon^{\frac{1}{10}}|y|^{4-|\gamma|}+M\varepsilon^{\frac{1}{4}}|y|^{3-|\gamma|}\qquad(\forall|\gamma|\leq 3),\] (B-\(\widetilde{W}\)-2)
\[\left|\partial^{\gamma}\widetilde{W}\right|\mathbb{1}_{|y|\leq l}\leq\frac{1}{2}\log^{|\gamma|}M\varepsilon^{\frac{1}{10}}\qquad(\forall|\gamma|=4),\] (B-\(\widetilde{W}\)-3)
where \(l=(\log M)^{-5}\), and \[\left|\partial^{\gamma}\widetilde{W}(0,s)\right|\leq\varepsilon^{\frac{1}{4}} \hskip 14.226378pt(\forall|\gamma|=3,\forall s\geq s_{0}).\] (B-\(\widetilde{W}^{0}\))
4. _Assumptions on \(Z\) and \(A\)_. For \(Z\), \(A\) and their derivatives up to second order, we assume they are small or have decay properties. More precisely, we assume that \[\begin{cases}|Z|\leq M\varepsilon,&|\partial_{1}Z|\leq M^{\frac{1}{2}}e^{- \frac{3}{2}s},\hskip 5.690551pt|\partial_{2}Z|\leq M\varepsilon^{\frac{1}{2}}e^{- \frac{s}{2}}\\ |\partial_{11}Z|\leq M^{\frac{1}{2}}e^{-\frac{3}{2}s},&|\partial_{12}Z|\leq Me ^{-\frac{3}{2}s},\hskip 5.690551pt|\partial_{22}Z|\leq Me^{-s}.\end{cases}\] (B-\(Z\)) and \[\begin{cases}|A|\leq M\varepsilon,&|\partial_{1}A|\leq Me^{-\frac{3}{2}s}\\ |\partial_{2}A|\leq M\varepsilon^{\frac{1}{2}}e^{-\frac{s}{2}},&|\partial_{22 }A|\leq Me^{-s}.\end{cases}\] (B-\(A\))
### Bootstrap procedure
Now we state the improved bootstrap inequality (IB), which supposedly can be deduced from the bootstrap assumptions and the initial conditions:
\[\begin{cases}\frac{3}{4}\kappa_{0}\leq\kappa\leq\frac{5}{4}\kappa_{0},&|\dot{\kappa}|\leq\frac{1}{2}M\\ |\tau|\leq\frac{1}{4}M\varepsilon^{2},&|\dot{\tau}|\leq\frac{1}{4}Me^{-s}\\ |\xi|\leq\frac{1}{2}M^{\frac{1}{4}}\varepsilon,&|\dot{\xi}|\leq\frac{1}{2}M^{\frac{1}{4}}\\ |n_{2}|\leq\frac{1}{2}M^{2}\varepsilon^{\frac{3}{2}},&|\dot{n}_{2}|\leq\frac{1}{2}M^{2}\varepsilon^{\frac{1}{2}}\\ |\phi|\leq\frac{1}{2}M^{2}\varepsilon,&|\dot{\phi}|\leq\frac{1}{10}M^{2},\end{cases}\] (IB-M)
\[\text{supp}(DW,DZ,DA)\subset\frac{7}{8}\mathcal{X}(s),\] (IB-S)
\[\begin{cases}|W|\leq(1+\varepsilon^{\frac{1}{2}})\eta^{\frac{1}{6}},&|\partial_{1}W|\leq\left(1+\varepsilon^{\frac{1}{45}}\right)\eta^{-\frac{1}{3}},&|\partial_{2}W|\leq\frac{5}{6}\\ |\partial_{11}W|\leq\frac{1}{2}M^{\frac{1}{3}}\eta^{-\frac{1}{3}},&|\partial_{12}W|\leq\frac{1}{2}M^{\frac{2}{3}}\eta^{-\frac{1}{3}},&|\partial_{22}W|\leq\frac{1}{2}M\eta^{-\frac{1}{6}},\end{cases}\] (IB-\(W\))
\[\begin{cases}\left|\widetilde{W}\right|\mathbbm{1}_{|y|\leq L}\leq\frac{1}{2} \varepsilon^{\frac{1}{10}}\eta^{\frac{1}{6}}\\ |\partial_{1}\widetilde{W}\right|\mathbbm{1}_{|y|\leq L}\leq\frac{1}{2} \varepsilon^{\frac{1}{12}}\eta^{-\frac{1}{3}}\\ \left|\partial_{2}\widetilde{W}\right|\mathbbm{1}_{|y|\leq L}\leq\frac{1}{2} \varepsilon^{\frac{1}{13}},\end{cases}\] (IB-\(\widetilde{W}\)-1)
\[\left|\partial^{\gamma}\widetilde{W}\right|\mathbb{1}_{|y|\leq l}\leq\frac{1}{2}\log^{4}M\varepsilon^{\frac{1}{10}}|y|^{4-|\gamma|}+\frac{1}{2}M\varepsilon^{\frac{1}{4}}|y|^{3-|\gamma|}\qquad(\forall|\gamma|\leq 3),\] (IB-\(\widetilde{W}\)-2)
\[\left|\partial^{\gamma}\widetilde{W}\right|\mathbb{1}_{|y|\leq l}\leq\frac{1}{4}\log^{|\gamma|}M\varepsilon^{\frac{1}{10}}\qquad(\forall|\gamma|=4),\] (IB-\(\widetilde{W}\)-3)
\[\left|\partial^{\gamma}\widetilde{W}(0,s)\right|\leq\frac{1}{10}\varepsilon^{\frac{1}{4}}\qquad(\forall|\gamma|=3,\forall s\geq s_{0}),\] (IB-\(\widetilde{W}^{0}\))
\[\begin{cases}|Z|\leq\frac{1}{2}M\varepsilon,&|\partial_{1}Z|\leq\frac{1}{2}M^{\frac{1}{2}}e^{-\frac{3}{2}s},\ \ |\partial_{2}Z|\leq\frac{1}{2}M\varepsilon^{\frac{1}{2}}e^{-\frac{s}{2}}\\ |\partial_{11}Z|\leq\frac{1}{2}M^{\frac{1}{2}}e^{-\frac{3}{2}s},&|\partial_{12}Z|\leq\frac{1}{2}Me^{-\frac{3}{2}s},\ \ |\partial_{22}Z|\leq\frac{1}{2}Me^{-s},\end{cases}\] (IB-\(Z\))
\[\begin{cases}|A|\leq M\varepsilon,&|\partial_{1}A|\leq\frac{1}{2}Me^{-\frac{3}{2}s}\\ |\partial_{2}A|\leq\frac{1}{2}M\varepsilon^{\frac{1}{2}}e^{-\frac{s}{2}},&|\partial_{22}A|\leq\frac{1}{2}Me^{-s}.\end{cases}\] (IB-\(A\))
Compared to the 3D case in [13], we close the bootstrap argument for the spatial support carefully in subsection 10.1. To prove that \(W,Z,A\) are constant outside \(\frac{7}{8}\mathcal{X}(s)\), we define two rectangles \(Q_{big}=\{|y_{1}|\leq M^{\prime},|y_{2}|\leq M^{\prime}\}\) and \(Q_{small}(s)\) satisfying
\[\frac{3}{4}\mathcal{X}(s)\subset Q_{small}(s)\subset\frac{7}{8}\mathcal{X}(s) \subset Q_{big},\]
where \(M^{\prime}\) can be chosen arbitrarily large. Then we consider the quantity
\[\int_{Q_{big}\backslash Q_{small}}E(y,s)dy,\]
where \(E(y,s)=\frac{1}{2}\left(e^{-s}(W-W_{\infty})^{2}+(Z-Z_{\infty})^{2}+2(A-A_{\infty})^{2}\right)\). From the equations of \(W,Z,A\) and the bootstrap assumptions, we find that
\[\frac{d}{ds}\int_{Q_{big}\backslash Q_{small}}E\leq C\int_{Q_{big}\backslash Q _{small}}E.\]
By Gronwall's inequality and the initial conditions, we can deduce that \(W,Z,A\) are constant outside \(Q_{small}\).
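For concreteness, here is a minimal sketch of the Gronwall step; it uses that, by (3.32) and the choice of \(Q_{small}\), the initial data coincide with their far-field values \((W_{\infty},Z_{\infty},A_{\infty})\) on \(Q_{big}\backslash Q_{small}(-\log\varepsilon)\). Writing

\[I(s):=\int_{Q_{big}\backslash Q_{small}(s)}E(y,s)dy,\qquad\frac{d}{ds}I(s)\leq CI(s),\qquad I(-\log\varepsilon)=0,\]

Gronwall's inequality gives \(I(s)\leq e^{C(s+\log\varepsilon)}I(-\log\varepsilon)=0\), so \(E\equiv 0\) and hence \(W=W_{\infty}\), \(Z=Z_{\infty}\), \(A=A_{\infty}\) on \(Q_{big}\backslash Q_{small}(s)\).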
## 5. Immediate corollaries of bootstrap assumptions
### Blow-up time
By the definition of \(s\), we have \(t=\tau-e^{-s}\). From the bootstrap assumption of \(\tau\) and \(s\geq-\log\varepsilon\), we can see that if the bootstrap assumptions hold on the interval \([t_{0},t]=[-\varepsilon,t]\), then \(t\) satisfies
\[|t-t_{0}|=|t+\varepsilon|\leq\varepsilon+M\varepsilon^{2}+e^{\log\varepsilon} \leq 3\varepsilon. \tag{5.1}\]
The blow-up time \(T_{*}\) is defined to be \(T_{*}=\tau(T_{*})\).
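In particular, the size of \(T_{*}\) claimed in Theorem 3.1 can be read off from the bootstrap assumption (B-M) on \(\tau\): any fixed point of \(\tau\) satisfies

\[|T_{*}|=|\tau(T_{*})|\leq M\varepsilon^{2},\]

which is the statement \(T_{*}=O(\varepsilon^{2})\).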
### Closure of bootstrap argument for \(W\),\(\widetilde{W}\) near the origin
From estimates (2.42)(2.43) of \(\overline{W}\) and bootstrap assumptions (B-\(\widetilde{W}\)-1), we have
\[\begin{cases}|W|\,\mathbbm{1}_{|y|\leq L}\leq(1+\varepsilon^{\frac{1}{11}}) \eta^{\frac{1}{6}}\\ |\partial_{1}W|\,\mathbbm{1}_{|y|\leq L}\leq(1+\varepsilon^{\frac{1}{12}}) \eta^{-\frac{1}{3}}\\ |\partial_{2}W|\,\mathbbm{1}_{|y|\leq L}\leq\frac{2}{3}+ \varepsilon^{\frac{1}{13}}.\end{cases} \tag{5.2}\]
Thus we closed the bootstrap argument for \(W\) and \(DW\) in the region \(\{|y|\leq L\}\), and by \(D^{2}\widetilde{W}(0,s)=0\), the bootstrap argument for \(D^{2}W\) in \(\{|y|\leq l\}\) is automatically closed.
Note that by (5.2)(B-\(W\)), for \(\varepsilon\) taken small enough, we have
\[|\partial_{1}W|\leq(1+\varepsilon^{\frac{1}{12}})\eta^{-\frac{1}{3}}\mathbbm{1 }_{|y|\leq L}+2\eta^{-\frac{1}{3}}\mathbbm{1}_{|y|>L}\leq 1+\varepsilon^{\frac{1}{12}}. \tag{5.3}\]
This bound will be used in the estimate of the damping terms.
Now we prove (IB-\(\widetilde{W}\)-2) for \(\widetilde{W}\):
\[\begin{split}\left|\partial^{\gamma}\widetilde{W}\right|\mathbb{1}_ {|y|\leq l}&\stackrel{{|\gamma|=3}}{{\leq}}\left| \partial^{\gamma}\widetilde{W}(0,s)\right|+\left\|D\partial^{\gamma} \widetilde{W}\right\|_{L^{\infty}(|y|\leq l)}|y|\\ &\leq&\varepsilon^{\frac{1}{4}}+\frac{1}{2}\log^{4}M \varepsilon^{\frac{1}{10}}|y|;\end{split} \tag{5.4}\]
if \(|\gamma|\leq 2\), we have that
\[\left|\partial^{\gamma}\widetilde{W}\right|\mathbb{1}_{|y|\leq l}\overset{|\gamma|\leq 2}{\leq}\left\|D\partial^{\gamma}\widetilde{W}(\cdot,s)\right\|_{L^{\infty}(|\cdot|\leq|y|)}|y|. \tag{5.5}\]
### Spatial support of unknowns
For the support of unknowns, we have the following lemma.
**Lemma 5.1**.: \(\operatorname{supp}\ (DU,DS)\subset\mathcal{X}(s)\)_._
Proof.: According to the spatial support assumption of \((DW,DZ,DA)\), it suffices to show \(\operatorname{supp}_{x}(D_{x}N,D_{x}T)\subset\{|x_{1}|\leq 2\varepsilon^{\frac{1}{2}},|x_{2}|\leq 2\varepsilon^{\frac{1}{6}}\}\). Now by the expression of \(N,T\), we only need to show that \(\operatorname{supp}_{x}\ f_{x_{2}}\subset\{|x_{1}|\leq 2\varepsilon^{\frac{1}{2}},|x_{2}|\leq 2\varepsilon^{\frac{1}{6}}\}\). Note that \(f_{x_{2}}=\tilde{f}_{\tilde{x}_{2}}(1+\frac{\tilde{f}_{\tilde{x}_{1}}}{1- \tilde{f}_{\tilde{x}_{1}}})\), and \(\operatorname{supp}_{\tilde{x}}\tilde{f}_{\tilde{x}_{2}}\subset\{|\tilde{x}_{1 }|\leq\frac{5}{4}\varepsilon^{\frac{1}{2}},|\tilde{x}_{2}|\leq\frac{5}{4} \varepsilon^{\frac{1}{6}}\}\), thus we have \(\operatorname{supp}_{x}(D_{x}N,D_{x}T)\subset\{|x_{1}|\leq\frac{3}{2} \varepsilon^{\frac{1}{2}},|x_{2}|\leq\frac{3}{2}\varepsilon^{\frac{1}{6}}\}\) by choosing \(\varepsilon\) small enough in terms of \(M\).
From (3.7), we know that in the original x coordinate, we have
\[\lim_{|\mathrm{x}|\to\infty}u(\mathrm{x},-\varepsilon)=\frac{\kappa_{0}}{2}e _{1},\ \lim_{|\mathrm{x}|\to\infty}\sigma(\mathrm{x},-\varepsilon)=\frac{\kappa_{0}}{2}. \tag{5.6}\]
From the finite speed propagation of the Euler equations, we have that for all \(t\in[-\varepsilon,T_{*})\), there hold
\[\lim_{|\mathrm{x}|\to\infty}u(\mathrm{x},t)=\frac{\kappa_{0}}{2}e _{1},\ \lim_{|\mathrm{x}|\to\infty}\sigma(\mathrm{x},t)=\frac{\kappa_{0}}{2}. \tag{5.7}\]
Note that the coordinate transformation is determined by the modulation variables, and from bootstrap assumptions we can deduce that
\[y\notin\mathcal{X}(s)\text{ implies that }\begin{cases}W(y,s)=W_{\infty}(s)\\ Z(y,s)=Z_{\infty}(s)\\ A(y,s)=A_{\infty}(s)\\ S(y,s)=S_{\infty}(s)\\ U(y,s)=U_{\infty}(s),\end{cases} \tag{5.8}\]
where
\[\begin{cases}W_{\infty}(s):=\left[\frac{\kappa_{0}}{2}(n_{1}+1)- \kappa\right]e^{\frac{s}{2}}\\ Z_{\infty}(s):=\frac{\kappa_{0}}{2}(n_{1}-1)\\ A_{\infty}(s):=-\frac{\kappa_{0}}{2}n_{2}\\ S_{\infty}(s):=\frac{e^{-\frac{s}{2}}W_{\infty}+\kappa-Z_{\infty}}{2}=\frac{ \kappa_{0}}{2}\\ U_{\infty}(s):=\frac{e^{-\frac{s}{2}}W_{\infty}+\kappa+Z_{\infty}}{2}\tilde{e}_{1 }+A_{\infty}\tilde{e}_{2}=\frac{\kappa_{0}n_{1}}{2}\tilde{e}_{1}-\frac{\kappa _{0}n_{2}}{2}\tilde{e}_{2}.\end{cases} \tag{5.9}\]
### Estimates related to coordinate transformation
In this section we will estimate the functions \(f\), \(J\), \(N\), \(T\), \(Q\), \(V\), which only depend on modulation variables.
**Lemma 5.2**.: For any multi-index \(\gamma\in\mathbb{Z}_{\geq 0}^{2}\), we have
\[\begin{cases}|\partial_{x}^{\gamma}f|\leq C_{\gamma}M^{2}\varepsilon^{\frac{4}{3}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\\ |\partial_{x}^{\gamma}(J-1)|\leq C_{\gamma}M^{2}\varepsilon^{\frac{5}{6}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\\ |\partial_{x}^{\gamma}(N-\tilde{e}_{1})|\leq C_{\gamma}M^{2}\varepsilon^{\frac{7}{6}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\\ |\partial_{x}^{\gamma}(T-\tilde{e}_{2})|\leq C_{\gamma}M^{2}\varepsilon^{\frac{7}{6}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\\ |\partial_{x}^{\gamma}(JN-\tilde{e}_{1})|\leq C_{\gamma}M^{2}\varepsilon^{\frac{5}{6}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\\ |\partial_{x}^{\gamma}\partial_{t}f|\leq C_{\gamma}M^{2}\varepsilon^{\frac{1}{3}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\\ |\partial_{x}^{\gamma}\partial_{t}N|\leq C_{\gamma}M^{2}\varepsilon^{\frac{1}{6}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\\ |\partial_{x}^{\gamma}\partial_{t}T|\leq C_{\gamma}M^{2}\varepsilon^{\frac{1}{6}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}.\end{cases} \tag{5.10}\]
Proof.: From the expression of \(\tilde{f}\) and the bootstrap assumption for \(\phi\) and \(\dot{\phi}\), it is not hard to see that \(|\partial_{\tilde{x}}^{\gamma}\tilde{f}|\leq C_{\gamma}M^{2}\varepsilon^{ \frac{4}{3}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\), \(|\partial_{\tilde{x}}^{\gamma}\partial_{t}\tilde{f}|\leq C_{\gamma}M^{2} \varepsilon^{\frac{1}{3}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\).
Using chain rule, one can see that
\[\begin{cases}\partial_{x_{1}}=\frac{\partial_{\tilde{x}_{1}}}{1-\tilde{f}_{ \tilde{x}_{1}}}\\ \partial_{x_{2}}=\frac{\tilde{f}_{\tilde{x}_{2}}}{1-\tilde{f}_{\tilde{x}_{1}} }\partial_{\tilde{x}_{1}}+\partial_{\tilde{x}_{2}}.\end{cases} \tag{5.11}\]
By Faà di Bruno's formula, we have
\[\begin{split}\left|\partial_{\tilde{x}}^{\gamma}\left(\frac{1}{1-\tilde{f}_{\tilde{x}_{1}}}\right)\right|&\overset{\gamma>0}{\lesssim}\sum_{|\gamma|}\sum_{\beta\leq\gamma}\sum_{\beta m_{\beta}=\gamma}\left|1-\tilde{f}_{\tilde{x}_{1}}\right|^{-1-\sum\limits_{\beta\leq\gamma}m_{\beta}}\prod_{\beta\leq\gamma}\left|\partial_{\tilde{x}}^{\beta}\tilde{f}_{\tilde{x}_{1}}\right|^{m_{\beta}}\\ &\overset{\varepsilon\ll 1}{\lesssim}\sum_{\beta\leq\gamma}\sum_{\beta m_{\beta}=\gamma}\left(1-\varepsilon^{\frac{1}{2}}\right)^{-\sum\limits_{\beta\leq\gamma}m_{\beta}}\prod_{\beta\leq\gamma}\left(M^{2}\varepsilon^{\frac{4}{3}-\frac{\beta_{1}+1}{2}-\frac{\beta_{2}}{6}}\right)^{m_{\beta}}\\ &\lesssim\varepsilon^{-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\sum_{\beta\leq\gamma}\sum_{\beta m_{\beta}=\gamma}\left((1-\varepsilon^{\frac{1}{2}})M^{2}\varepsilon^{\frac{5}{6}}\right)^{\sum\limits_{\beta\leq\gamma}m_{\beta}}\lesssim M^{2}\varepsilon^{\frac{5}{6}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}.\end{split} \tag{5.12}\]
And by Leibniz rule, we have
\[\left|\partial_{\tilde{x}}^{\gamma}\left(\frac{\tilde{f}_{\tilde{x}_{2}}}{1- \tilde{f}_{\tilde{x}_{1}}}\right)\right|\lesssim\sum_{0\leq\beta\leq\gamma} \left|\partial^{\gamma-\beta}\tilde{f}_{\tilde{x}_{2}}\right|\left|\partial_{ \tilde{x}}^{\beta}\left(\frac{1}{1-\tilde{f}_{\tilde{x}_{1}}}\right)\right|+M^ {2}\varepsilon^{\frac{4}{3}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}+1}{6}} \lesssim M^{2}\varepsilon^{\frac{7}{6}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}. \tag{5.13}\]
Note that
\[\partial_{x_{2}}^{k}=\left(\frac{\tilde{f}_{\tilde{x}_{2}}}{1-\tilde{f}_{ \tilde{x}_{1}}}\partial_{\tilde{x}_{1}}+\partial_{\tilde{x}_{2}}\right)^{k}= \sum\limits_{\begin{subarray}{c}|\beta|\leq k\\ \gamma_{1}+\gamma_{2}+\sum\limits_{|\beta|\leq k}n_{\beta}|\beta|=k\end{subarray}}C (k,\gamma,n_{\beta})\prod\limits_{|\beta|\leq k}\left(\partial_{\tilde{x}}^{ \beta}\left(\frac{\tilde{f}_{\tilde{x}_{2}}}{1-\tilde{f}_{\tilde{x}_{1}}} \right)\right)^{n_{\beta}}\partial_{\tilde{x}}^{\gamma}. \tag{5.14}\]
Thus we have
\[\left|\partial_{\tilde{x}_{1}}^{j}(\partial_{x_{2}}^{k}f)\right| \lesssim\left|\partial_{\tilde{x}_{1}}^{j}\sum\limits_{\begin{subarray}{c }|\beta|\leq k\\ \gamma_{1}+\gamma_{2}+\sum\limits_{|\beta|\leq k}n_{\beta}|\beta|=k\end{subarray}}C (k,\gamma,n_{\beta})\prod\limits_{|\beta|\leq k}\left(\partial_{\tilde{x}}^{ \beta}\left(\frac{\tilde{f}_{\tilde{x}_{2}}}{1-\tilde{f}_{\tilde{x}_{1}}} \right)\right)^{n_{\beta}}\partial_{\tilde{x}}^{\gamma}\tilde{f}\right| \tag{5.15}\] \[\lesssim_{j,k}\sum\limits_{\begin{subarray}{c}|\beta|\leq k+j\\ \gamma_{1}+\gamma_{2}+\sum\limits_{|\beta|\leq k+j}n_{\beta}|\beta|=k+j\end{subarray}} \prod\limits_{|\beta|\leq k+j}\left|\partial_{\tilde{x}}^{\beta}\left(\frac{ \tilde{f}_{\tilde{x}_{2}}}{1-\tilde{f}_{\tilde{x}_{1}}}\right)\right|^{n_{ \beta}}|\partial_{\tilde{x}}^{\gamma}f|\] \[\lesssim\sum\limits_{\begin{subarray}{c}|\beta|\leq k+j\\ \gamma_{1}+\gamma_{2}+\sum\limits_{|\beta|\leq k+j}n_{\beta}|\beta|=k+j\end{subarray}} \prod\limits_{|\beta|\leq k+j}\left(M^{2}\varepsilon^{\frac{7}{6}-\frac{\beta_ {1}}{2}-\frac{\beta_{2}}{6}}\right)^{n_{\beta}}M^{2}\varepsilon^{\frac{4}{3}- \frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\] \[\lesssim M^{2}\varepsilon^{\frac{4}{3}-\frac{j}{2}-\frac{k}{6}} \sum\limits_{\begin{subarray}{c}|\beta|\leq k+j\\ \gamma_{1}+\gamma_{2}+\sum\limits_{|\beta|\leq k+j}n_{\beta}|\beta|=k+j\end{subarray}} \left(M^{2}\varepsilon^{\frac{4}{3}-\frac{j}{2}-\frac{k}{6}}\right)^{\sum \limits_{|\beta|\leq k+j}n_{\beta}}\lesssim M^{2}\varepsilon^{\frac{4}{3}- \frac{j}{2}-\frac{k}{6}}.\]
Finally, we have
\[\left|\partial_{x}^{\gamma}f\right| =\left|\left(\frac{\partial_{\tilde{x}_{1}}}{1-\tilde{f}_{\tilde{ x}_{1}}}\right)^{\gamma_{1}}\partial_{x_{2}}^{\gamma_{2}}f\right| \tag{5.16}\] \[\overset{\gamma_{1}\gtrsim 1}{\lesssim}\sum\limits_{j=1}^{\gamma_{1}} \sum\limits_{\begin{subarray}{c}n_{1}+2n_{2}+\cdots+\gamma_{1}n_{\gamma_{1}}= \gamma_{1}-j\\ n_{0}+n_{1}+\cdots+n_{\gamma_{1}}=\gamma_{1}\end{subarray}}\left|\frac{1}{1- \tilde{f}_{\tilde{x}_{1}}}\right|^{n_{0}}\left|\partial_{\tilde{x}_{1}}\left( \frac{1}{1-\tilde{f}_{\tilde{x}_{1}}}\right)\right|^{n_{1}}\cdots\left| \partial_{\tilde{x}_{1}}^{\gamma_{1}}\left(\frac{1}{1-\tilde{f}_{\tilde{x}_{1} }}\right)\right|^{n_{\gamma_{1}}}\left|\partial_{\tilde{x}_{1}}^{j}\partial_{x _{2}}^{\gamma_{2}}f\right|\] \[\lesssim\sum\limits_{j=1}^{\gamma_{1}}\sum\limits_{\begin{subarray} {c}n_{1}+2n_{2}+\cdots+\gamma_{1}n_{\gamma_{1}}=\gamma_{1}-j\\ n_{0}+n_{1}+\cdots+n_{\gamma_{1}}=\gamma_{1}\end{subarray}}\left(1-\varepsilon^{ \frac{1}{2}}\right)^{-n_{0}}\left(M^{2}\varepsilon^{\frac{5}{6}-\frac{1}{2}} \right)^{n_{1}}\cdots\left(M^{2}\varepsilon^{\frac{5}{6}-\frac{\gamma_{1}}{2}} \right)^{n_{\gamma_{1}}}\left|\partial_{\tilde{x}_{1}}^{j}\partial_{x_{2}}^{ \gamma_{2}}f\right|\] \[\lesssim\sum\limits_{j=1}^{\gamma_{1}}\varepsilon^{-\frac{\gamma_{1 }-j}{2}}\left|\partial_{x_{1}}^{j}\partial_{x_{2}}^{\gamma_{2}}f\right|\sum \limits_{\begin{subarray}{c}n_{1}+2n_{2}+\cdots+\gamma_{1}n_{\gamma_{1}}= \gamma_{1}-j\\ n_{0}+n_{1}+\cdots+n_{\gamma_{1}}=\gamma_{1}\end{subarray}}\left[(1-\varepsilon^{ \frac{1}{2}})M^{2}\varepsilon^{\frac{5}{6}}\right]^{\gamma_{1}-n_{0}}\] \[\lesssim\varepsilon^{-\frac{\gamma_{1}}{2}}\sum\limits_{j=1}^{ \gamma_{1}}\varepsilon^{\frac{j}{4}}M^{2}\varepsilon^{\frac{4}{3}-\frac{i}{2}- \frac{\gamma_{2}}{6}}\lesssim M^{2}\varepsilon^{\frac{4}{3}-\frac{\gamma_{1}}{2} -\frac{\gamma_{2}}{6}}.\]
One can check the same estimate holds when \(\gamma_{1}=0\).
Also from Faà di Bruno's formula one can see that for \(\alpha\in\mathbb{R}\) and \(\gamma>0\), we have \(\left|\partial_{x}^{\gamma}(1+f_{x_{2}}^{2})^{\alpha}\right|\lesssim_{\alpha,\gamma}M^{4}\varepsilon^{\frac{7}{3}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\); this estimate combined with the Leibniz rule gives \(|\partial_{x}^{\gamma}N|\lesssim M^{2}\varepsilon^{\frac{7}{6}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\) for \(\gamma>0\). \(|N-\tilde{e}_{1}|\lesssim M^{4}\varepsilon^{\frac{7}{6}}\) can be checked separately. The estimates of \(N\) imply \(|\partial_{x}^{\gamma}(T-\tilde{e}_{2})|\lesssim M^{2}\varepsilon^{\frac{7}{6}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\) for \(\gamma\geq 0\) since \(T=N^{\perp}\). The estimate of \(JN\) is similar.
As for \(J=\frac{\sqrt{1+f_{x_{2}}^{2}}}{1+f_{x_{1}}}\), we use Leibniz rule to deduce that \(|\partial^{\gamma}J|\lesssim M^{2}\varepsilon^{\frac{5}{6}-\frac{\gamma_{1}}{2 }-\frac{\gamma_{2}}{6}}\) holds for \(\gamma>0\), then one can check \(|J-1|\lesssim M^{2}\varepsilon^{\frac{5}{6}}\).
The estimates of \(\partial_{t}f\) and \(\frac{\partial_{t}f}{1+f_{x_{1}}}\) are much the same and rely on the facts that \(|\partial_{\tilde{x}}^{\gamma}\partial_{t}\tilde{f}|\lesssim M^{2}\varepsilon^{\frac{1}{3}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}\) and \((\partial_{t})_{x}f=\frac{(\partial_{t})_{\tilde{x}}f}{1-\tilde{f}_{\tilde{x}_{1}}}\).
Here we emphasize that \(C_{\gamma}\) in Lemma 5.2 grows at least exponentially since \(f\) is compactly supported and cannot be analytic.
**Lemma 5.3**.: For \(\varepsilon\ll 1\) small enough and \(M\gg 1\) large enough we have
\[|Q|\leq M^{2}\varepsilon^{\frac{1}{2}}. \tag{5.17}\]
Proof.: Since we have
\[Q=\dot{R}^{T}R=\begin{bmatrix}0&-n_{1}\dot{n}_{2}+n_{2}\dot{n}_{1}\\ -n_{2}\dot{n}_{1}+n_{1}\dot{n}_{2}&0\end{bmatrix}, \tag{5.18}\]
the rest follows by appealing to \(n_{1}=\sqrt{1-n_{2}^{2}}\) and the bootstrap assumptions (B-M) for \(n_{2}\) and \(\dot{n}_{2}\).
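To make the last step explicit (a short computation, not needed elsewhere): differentiating \(n_{1}=\sqrt{1-n_{2}^{2}}\) gives

\[\dot{n}_{1}=-\frac{n_{2}\dot{n}_{2}}{\sqrt{1-n_{2}^{2}}},\qquad-n_{1}\dot{n}_{2}+n_{2}\dot{n}_{1}=-\dot{n}_{2}\left(n_{1}+\frac{n_{2}^{2}}{n_{1}}\right)=-\frac{\dot{n}_{2}}{n_{1}},\]

so \(|Q|\leq|\dot{n}_{2}|/n_{1}\); the claim then follows from \(|\dot{n}_{2}|\leq M^{2}\varepsilon^{\frac{1}{2}}\) and \(n_{1}=\sqrt{1-n_{2}^{2}}\geq 1-M^{4}\varepsilon^{3}\) once \(\varepsilon\) is taken small enough.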
**Lemma 5.4**.: For \(y\in 10\mathcal{X}(s)=\{|y_{1}|\leq 20\varepsilon^{\frac{1}{2}}e^{\frac{3}{2}s},|y_{2}|\leq 20\varepsilon^{\frac{1}{6}}e^{\frac{s}{2}}\}\), we have
\[|V|\lesssim M^{\frac{1}{4}} \tag{5.19}\]
and for \(\forall y\in\mathbb{R}^{2}\), it holds that
\[\begin{cases}|\partial_{1}V|\lesssim M^{2}\varepsilon^{\frac{1}{2}}e^{-\frac{3}{2}s}\\ |\partial_{2}V|\lesssim M^{2}\varepsilon^{\frac{1}{2}}e^{-\frac{s}{2}}\\ |\partial_{11}V|\lesssim M^{4}\varepsilon^{\frac{5}{6}}e^{-3s}\\ |\partial_{12}V|\lesssim M^{4}\varepsilon^{\frac{7}{6}}e^{-2s}\\ |\partial_{22}V|\lesssim M^{4}\varepsilon^{\frac{3}{2}}e^{-s}\\ |\partial^{\gamma}V|\lesssim M^{4}\varepsilon^{\frac{11}{6}}e^{-(\gamma_{1}+\gamma_{2}/3)s}\\ |\partial^{\gamma}V|\overset{|\gamma|\geq 1}{\lesssim}M^{4}\varepsilon^{\frac{3}{2}}e^{-(\gamma_{1}+\gamma_{2}/3)s}.\end{cases} \tag{5.20}\]
Proof.: Note that
\[V(y,s)=\frac{1+\alpha}{2}\left(Q\begin{bmatrix}y_{1}e^{-\frac{3}{2}s}+f\\ y_{2}e^{-\frac{s}{2}}\end{bmatrix}-R^{T}\dot{\xi}\right). \tag{5.21}\]
By \(R\in SO(2)\) and (B-M)(5.10), we have the above estimates.
### Estimates for \(U,S\)
**Lemma 5.5**.: For \(U\cdot N\) and \(S\), we have that
\[|\partial^{\gamma}(U\cdot N)|+|\partial^{\gamma}S|\lesssim\begin{cases}M^{\frac{1 }{4}}&\gamma=(0,0)\\ e^{-\frac{s}{2}}\eta^{-\frac{1}{3}}&\gamma=(1,0)\\ e^{-\frac{s}{2}}&\gamma=(0,1)\\ M^{\frac{1}{3}}e^{-\frac{s}{2}}\eta^{-\frac{1}{3}}&\gamma=(2,0)\\ M^{\frac{2}{3}}e^{-\frac{s}{2}}\eta^{-\frac{1}{3}}&\gamma=(1,1)\\ Me^{-\frac{s}{2}}\eta^{-\frac{1}{6}}&\gamma=(0,2).\end{cases} \tag{5.22}\]
Proof.: One can express \(U\cdot N\), \(S\) in terms of \(W\), \(Z\), \(A\) as in (2.33). Then by directly appealing to the bootstrap assumptions we obtain the desired estimates.
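Although (2.33) is not restated here, the pattern of (5.9) suggests that it takes the form \(U\cdot N=\frac{1}{2}(e^{-\frac{s}{2}}W+\kappa+Z)\), \(S=\frac{1}{2}(e^{-\frac{s}{2}}W+\kappa-Z)\), \(A=U\cdot T\); assuming this form, the case \(\gamma=(1,0)\) of (5.22), for instance, follows directly from (B-\(W\)) and (B-\(Z\)):

\[|\partial_{1}(U\cdot N)|\leq\frac{1}{2}\left(e^{-\frac{s}{2}}|\partial_{1}W|+|\partial_{1}Z|\right)\leq e^{-\frac{s}{2}}\eta^{-\frac{1}{3}}+\frac{1}{2}M^{\frac{1}{2}}e^{-\frac{3}{2}s},\]

and both terms are of the size allowed by (5.22) (the second is only relevant on \(\mathcal{X}(s)\), where \(\eta^{\frac{1}{3}}\lesssim\varepsilon^{\frac{1}{3}}e^{s}\)); the bound for \(\partial_{1}S\) is identical.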
**Lemma 5.6**.: By taking \(\varepsilon\) sufficiently small, we have
\[\begin{cases}|U|\lesssim M^{\frac{1}{4}}\\ |\partial_{1}U|\leq\left(1+\varepsilon^{\frac{3}{4}}\right)e^{-\frac{s}{2}} \\ |\partial_{2}U|\leq e^{-\frac{s}{2}}\\ |\partial_{1}S|\leq(1+\varepsilon)e^{-\frac{s}{2}}\\ |\partial_{2}S|\leq\left(\frac{1}{2}+\varepsilon^{\frac{1}{2}}\right)e^{- \frac{s}{2}}.\end{cases} \tag{5.23}\]
Proof.: Express \(U\) in terms of \(W\), \(Z\), \(A\), then use bootstrap assumptions and the estimates (5.10) of \(N\), \(T\).
### Transport estimates
**Lemma 5.7**.: For \(\varepsilon\ll 1\) and \(\forall y\in 10\mathcal{X}(s)\), we have
\[\begin{cases}|\partial_{1}G_{A}|\lesssim M^{2}e^{-\frac{5}{6}s},&|\partial_{2}G_{A}|\lesssim M^{2}\varepsilon^{\frac{1}{6}}\\ |\partial_{11}G_{A}|\lesssim M^{\frac{1}{2}}e^{-s},&|\partial_{12}G_{A}|\lesssim Me^{-s},&|\partial_{22}G_{A}|\lesssim M^{2}e^{-\frac{s}{2}}.\end{cases} \tag{5.24}\]
Proof.: We first deal with \(\partial_{1}G_{A}\). Using the definition (2.26) of \(G_{A}\), the estimates (5.10) for functions of coordinate transformation, estimates (5.19)(5.20) for \(V\), and the bootstrap assumptions, and by Leibniz rule, we have that
\[\begin{split}|\partial_{1}G_{A}|&\lesssim e^{\frac{s} {2}}\left|\partial_{1}\frac{\partial_{t}f}{1+f_{x_{1}}}\right|+e^{\frac{s}{2} }|\partial_{1}J|(\kappa_{0}+|Z|+|V|)+e^{\frac{s}{2}}|\partial_{1}Z|+e^{\frac{s} {2}}|\partial_{1}(V\cdot N)|\\ &\lesssim e^{\frac{s}{2}}M^{2}\varepsilon^{-\frac{1}{6}}e^{-\frac{ 3}{2}s}+e^{\frac{s}{2}}\varepsilon^{\frac{1}{6}}e^{-\frac{3}{2}s}M^{\frac{1} {4}}+e^{\frac{s}{2}}(M^{\frac{1}{2}}e^{-\frac{3}{2}s}+M^{2}\varepsilon^{ \frac{1}{2}}e^{-\frac{3}{2}s}+M^{2+\frac{1}{4}}\varepsilon e^{-\frac{3}{2} s})\\ &\lesssim M^{2}\varepsilon^{-\frac{1}{6}}e^{-s}\lesssim M^{2}e^{- \frac{5}{6}s}.\end{split} \tag{5.25}\]
The other derivatives of \(G_{A}\) are estimated in a similar way.
**Lemma 5.8**.: For \(\varepsilon\ll 1\) and \(\forall y\in\mathcal{X}(s)\), we have
\[\begin{cases}|g_{A}|\lesssim M^{\frac{1}{4}}e^{\frac{s}{2}}\\ |\partial_{1}g_{A}|\leq 3\\ |\partial_{2}g_{A}|\leq 2\\ |D^{2}g_{A}|\lesssim M\eta^{-\frac{1}{6}}+M^{2}e^{-\frac{s}{2}}\\ |\partial_{1}h_{A}|\lesssim e^{-s}\\ |\partial_{2}h_{A}|\lesssim e^{-s}.\end{cases} \tag{5.26}\]
Proof.: Use the definition (2.26) and the estimates (B-\(W\))(5.10)(5.24), and argue similarly as in the proof of (5.24), with more care since there is no room for a universal constant here.
## 6. Energy estimate
To overcome the loss of derivative in the \(L^{\infty}\) estimates of \(W\), \(Z\), and \(A\), we will establish an additional energy estimate to control the \(\dot{H}^{k}\) \((k\gg 1)\) norms of \(W\), \(Z\), and \(A\). It is crucial that in the proof of the energy estimate we only use the bootstrap assumptions, without requiring any information on higher order derivatives.
**Proposition 6.1** (Energy estimate for \(W\), \(Z\), \(A\)).: For an integer \(k\geq 18\), and a constant \(\lambda=\lambda(k)\),
\[\|Z(\cdot,s)\|_{\dot{H}^{k}}^{2}+\|A(\cdot,s)\|_{\dot{H}^{k}}^{2}\leq 2\lambda^{-k}e^{-s}+M^{4k}e^{-s}(1-\varepsilon^{-1}e^{-s})\lesssim M^{4k}e^{-s}, \tag{6.1}\]
\[\|W(\cdot,s)\|_{\dot{H}^{k}}^{2}\leq 2\lambda^{-k}\varepsilon^{-1}e^{-s}+M^{4k}(1-\varepsilon^{-1}e^{-s}). \tag{6.2}\]
We will prove this by using the \(\dot{H}^{k}\) bound for \((U,S)\), and the fact that the \(\dot{H}^{k}\) norm of \((W,Z,A)\) can be controlled by the \(\dot{H}^{k}\) norm of \((U,S)\). More precisely, we have:
**Lemma 6.2**.: The following inequalities hold:
\[\begin{split}\|W\|_{\dot{H}^{k}}\lesssim_{k}e^{\frac{s}{2}}\left( \|U\|_{\dot{H}^{k}}+\|S\|_{\dot{H}^{k}}+M^{\frac{9}{4}}\varepsilon^{\frac{3}{ 2}}e^{-\frac{k-3}{3}s}\right),\\ \|Z\|_{\dot{H}^{k}}+\|A\|_{\dot{H}^{k}}\lesssim_{k}\|U\|_{\dot{H} ^{k}}+\|S\|_{\dot{H}^{k}}+M^{\frac{9}{4}}\varepsilon^{\frac{3}{2}}e^{-\frac{ k-3}{3}s}.\end{split} \tag{6.3}\]
Proof.: We first estimate \(\|W\|_{\dot{H}^{k}}\). Note that by (2.34), \(\operatorname{supp}(DU,DS)\subset\mathcal{X}(s)\), we have
\[\begin{split} e^{-\frac{s}{2}}\|\partial^{\gamma}W\|_{L^{2}(\mathbb{R}^{2})}&\stackrel{{|\gamma|=k}}{{\lesssim}}\|\partial^{\gamma}S\|_{L^{2}}+\sum_{\beta\leq\gamma}\|\partial^{\gamma-\beta}U\cdot\partial^{\beta}N\|_{L^{2}(\mathcal{X}(s))}\\ &\lesssim\|S\|_{\dot{H}^{k}}+\|U\|_{L^{\infty}}\|\partial^{\gamma}N\|_{L^{\infty}}|\mathcal{X}(s)|^{\frac{1}{2}}+\|\partial^{\gamma}U\|_{L^{2}}+\sum_{0<\beta<\gamma}\|\partial^{\gamma-\beta}U\|_{L^{2}}\|\partial^{\beta}N\|_{L^{\infty}}\\ &\stackrel{{\text{Poincar\'e}}}{{\lesssim}}\|S\|_{\dot{H}^{k}}+\|U\|_{\dot{H}^{k}}+M^{\frac{9}{4}}M^{2}\varepsilon^{\frac{7}{6}-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}e^{-(\frac{3}{2}\gamma_{1}+\frac{1}{2}\gamma_{2})s}\varepsilon^{\frac{1}{3}}e^{s}\\ &\quad+\sum_{0<\beta<\gamma}(\varepsilon^{\frac{1}{6}}e^{\frac{s}{2}})^{|\beta|}\|D^{k}U\|_{L^{2}}M^{2}\varepsilon^{\frac{7}{6}-\frac{\beta_{1}}{2}-\frac{\beta_{2}}{6}}e^{-(\frac{3}{2}\beta_{1}+\frac{1}{2}\beta_{2})s}\\ &\lesssim\|S\|_{\dot{H}^{k}}+\|U\|_{\dot{H}^{k}}+M^{\frac{9}{4}}\varepsilon^{\frac{9}{2}}e^{-\frac{|\gamma|-3}{3}s}.\end{split} \tag{6.4}\]
The estimates of \(Z\) and \(A\) are similar.
**Definition 6.3** (Modified \(\dot{H}^{k}\) norm).: We define
\[E_{k}^{2}(s):=\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\left(\|\partial^{\gamma}U( \cdot,s)\|_{L^{2}}^{2}+\|\partial^{\gamma}S(\cdot,s)\|_{L^{2}}^{2}\right), \tag{6.5}\]
where \(\lambda\in(0,1)\) is to be specified below. Clearly we have the norm equivalence:
\[\lambda^{k}\left(\|U\|_{\dot{H}^{k}}^{2}+\|S\|_{\dot{H}^{k}}^{2}\right)\leq E_ {k}^{2}\leq\|U\|_{\dot{H}^{k}}^{2}+\|S\|_{\dot{H}^{k}}^{2}. \tag{6.6}\]
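The norm equivalence (6.6) is immediate: since \(\lambda\in(0,1)\) and \(0\leq\gamma_{2}\leq|\gamma|=k\), we have

\[\lambda^{k}\leq\lambda^{\gamma_{2}}\leq 1\qquad\text{for every }|\gamma|=k,\]

and summing over \(|\gamma|=k\) gives (6.6).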
### Evolution of derivatives of \((U,S)\)
Applying \(\partial^{\gamma}\) to both sides of the \((U,S)\) equation (2.31), we see that
\[\begin{split}\partial_{s}\partial^{\gamma}U_{i}&-\beta_{\tau}e^{-s}Q_{ij}\partial^{\gamma}U_{j}+\mathcal{V}_{A}\cdot\nabla\partial^{\gamma}U_{i}+D_{\gamma}\partial^{\gamma}U_{i}+\beta_{3}\beta_{\tau}(1+\gamma_{1})JN_{i}\partial^{\gamma}S\partial_{1}W\\ &+2\beta_{3}\beta_{\tau}S\left(e^{\frac{s}{2}}JN_{i}\partial_{1}\partial^{\gamma}S+e^{-\frac{s}{2}}\delta_{i2}\partial_{2}\partial^{\gamma}S\right)=F_{U_{i}}^{(\gamma)},\end{split} \tag{6.7a}\]
\[\begin{split}\partial_{s}\partial^{\gamma}S&+\mathcal{V}_{A}\cdot\nabla\partial^{\gamma}S+D_{\gamma}\partial^{\gamma}S+\beta_{\tau}(\beta_{1}+\beta_{3}\gamma_{1})JN\cdot\partial^{\gamma}U\partial_{1}W\\ &+2\beta_{3}\beta_{\tau}S\left(e^{\frac{s}{2}}JN\cdot\partial_{1}\partial^{\gamma}U+e^{-\frac{s}{2}}\partial_{2}\partial^{\gamma}U_{2}\right)=F_{S}^{(\gamma)}\end{split} \tag{6.7b}\]
where \(D_{\gamma}=\frac{1}{2}|\gamma|+\gamma_{1}(1+\partial_{1}g_{U})\), and the forcing terms are \(F_{U_{i}}^{(\gamma)}=F_{U_{i}}^{(\gamma,U)}+F_{U_{i}}^{(\gamma-1,U)}+F_{U_{i} }^{(\gamma,S)}+F_{U_{i}}^{(\gamma-1,S)}\), \(F_{S}^{(\gamma)}=F_{S}^{(\gamma,U)}+F_{S}^{(\gamma-1,U)}+F_{S}^{(\gamma,S)}+F _{S}^{(\gamma-1,S)}\). Here
\[\begin{split} F_{U_{i}}^{(\gamma,U)}=&-2\beta_{1}\beta_{\tau}\left(e^{\frac{s}{2}}JN_{j}\partial^{\gamma}U_{j}\partial_{1}U_{i}+e^{-\frac{s}{2}}\partial^{\gamma}U_{2}\partial_{2}U_{i}\right)\\ &-\gamma_{2}\partial_{2}g_{A}\partial_{1}\partial^{\gamma-e_{2}}U_{i}-\sum_{\begin{subarray}{c}|\beta|=|\gamma|-1\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}h_{A}\partial_{2}\partial^{\beta}U_{i}\\ =&F_{U_{i},(1)}^{(\gamma,U)}+F_{U_{i},(2)}^{(\gamma,U)}+F_{U_{i},(3)}^{(\gamma,U)},\end{split} \tag{6.8a}\]
\[\begin{split} F_{U_{i}}^{(\gamma-1,U)}=&-\sum_{\begin{subarray}{c}1\leq|\beta|\leq|\gamma|-2\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}\left(\partial^{\gamma-\beta}g_{A}\partial_{1}\partial^{\beta}U_{i}+\partial^{\gamma-\beta}h_{A}\partial_{2}\partial^{\beta}U_{i}\right)-2\beta_{1}\beta_{\tau}e^{\frac{s}{2}}[\partial^{\gamma},JN]\cdot U\partial_{1}U_{i}\\ &-\beta_{\tau}e^{\frac{s}{2}}\partial^{\gamma}\left(2\beta_{1}V\cdot JN-\frac{\partial_{t}f}{1+f_{x_{1}}}\right)\partial_{1}U_{i}-2\beta_{1}\beta_{\tau}e^{-\frac{s}{2}}\partial^{\gamma}V_{2}\partial_{2}U_{i}\\ =&F_{U_{i},(1)}^{(\gamma-1,U)}+F_{U_{i},(2)}^{(\gamma-1,U)}+F_{U_{i},(3)}^{(\gamma-1,U)}+F_{U_{i},(4)}^{(\gamma-1,U)},\end{split} \tag{6.8b}\]
\[\begin{split} F_{U_{i}}^{(\gamma,S)}=&-2\beta_{3}\beta_{\tau}\gamma_{2}e^{\frac{s}{2}}\partial_{2}(SJN_{i})\partial_{1}\partial^{\gamma-e_{2}}S-\beta_{3}\beta_{\tau}(1+\gamma_{1})e^{\frac{s}{2}}JN_{i}\partial_{1}Z\partial^{\gamma}S\\ &-2\beta_{3}\beta_{\tau}e^{-\frac{s}{2}}\delta_{i2}\sum_{\begin{subarray}{c}|\beta|=|\gamma|-1\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}S\partial_{2}\partial^{\beta}S-2\beta_{3}\beta_{\tau}\delta_{i2}e^{-\frac{s}{2}}\partial^{\gamma}S\partial_{2}S-2\beta_{3}\beta_{\tau}\gamma_{1}e^{\frac{s}{2}}\partial_{1}(JN_{i})S\partial^{\gamma}S\\ =&F_{U_{i},(1)}^{(\gamma,S)}+F_{U_{i},(2)}^{(\gamma,S)}+F_{U_{i},(3)}^{(\gamma,S)}+F_{U_{i},(4)}^{(\gamma,S)}+F_{U_{i},(5)}^{(\gamma,S)},\end{split} \tag{6.8c}\]
\[\begin{split} F_{U_{i}}^{(\gamma-1,S)}=&-2\beta_{3}\beta_{\tau}\sum_{\begin{subarray}{c}|\beta|\leq|\gamma|-2\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}\left(e^{\frac{s}{2}}\partial^{\gamma-\beta}(SJN_{i})\partial_{1}\partial^{\beta}S+e^{-\frac{s}{2}}\delta_{i2}\partial^{\gamma-\beta}S\partial_{2}\partial^{\beta}S\right)\\ &-2\beta_{3}\beta_{\tau}e^{\frac{s}{2}}[\partial^{\gamma},JN_{i}]S\partial_{1}S\\ =&F_{U_{i},(1)}^{(\gamma-1,S)}+F_{U_{i},(2)}^{(\gamma-1,S)},\end{split} \tag{6.8d}\]
\[\begin{split} F_{S}^{(\gamma,S)}=&-2\beta_{3}\beta_{\tau}\left(e^{\frac{s}{2}}\partial^{\gamma}SJN_{j}\partial_{1}U_{j}+e^{-\frac{s}{2}}\partial^{\gamma}S\partial_{2}U_{2}\right)\\ &-\gamma_{2}\partial_{2}g_{A}\partial_{1}\partial^{\gamma-e_{2}}S-\sum_{\begin{subarray}{c}|\beta|=|\gamma|-1\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}h_{A}\partial_{2}\partial^{\beta}S,\end{split} \tag{6.8e}\]
\[\begin{split} F_{S}^{(\gamma-1,S)}=&-\sum_{\begin{subarray}{c}1\leq|\beta|\leq|\gamma|-2\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}\left(\partial^{\gamma-\beta}g_{A}\partial_{1}\partial^{\beta}S+\partial^{\gamma-\beta}h_{A}\partial_{2}\partial^{\beta}S\right)\\ &-2\beta_{3}\beta_{\tau}\sum_{\begin{subarray}{c}1\leq|\beta|\leq|\gamma|-2\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}\left(e^{\frac{s}{2}}\partial^{\gamma-\beta}(SJN)\cdot\partial_{1}\partial^{\beta}U+e^{-\frac{s}{2}}\partial^{\gamma-\beta}S\partial_{2}\partial^{\beta}U_{2}\right)\\ &-2\beta_{3}\beta_{\tau}e^{\frac{s}{2}}\partial_{1}U_{j}[\partial^{\gamma},JN_{j}]S-\beta_{\tau}e^{\frac{s}{2}}\partial^{\gamma}\left(2\beta_{1}V\cdot JN-\frac{\partial_{t}f}{1+f_{x_{1}}}\right)\partial_{1}S-2\beta_{1}\beta_{\tau}e^{-\frac{s}{2}}\partial^{\gamma}V_{2}\partial_{2}S,\end{split} \tag{6.8f}\]
\[\begin{split} F_{S}^{(\gamma,U)}=&-2\beta_{3}\beta_{\tau}\gamma_{2}e^{\frac{s}{2}}\partial_{2}(SJN)\cdot\partial_{1}\partial^{\gamma-e_{2}}U+\beta_{\tau}(\beta_{1}+\beta_{3}\gamma_{1})e^{\frac{s}{2}}JN\cdot\partial^{\gamma}U\partial_{1}Z\\ &-2\beta_{3}\beta_{\tau}e^{-\frac{s}{2}}\sum_{\begin{subarray}{c}|\beta|=|\gamma|-1\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}S\partial_{2}\partial^{\beta}U_{2}-2\beta_{1}\beta_{\tau}e^{-\frac{s}{2}}\partial^{\gamma}U_{2}\partial_{2}S-2\beta_{3}\beta_{\tau}\gamma_{1}e^{\frac{s}{2}}S\partial^{\gamma}U_{j}\partial_{1}(JN_{j}),\end{split} \tag{6.8g}\]
\[F_{S}^{(\gamma-1,U)}=-2\beta_{1}\beta_{\tau}e^{\frac{s}{2}}\partial_{1}S[\partial^{\gamma},JN_{j}]U_{j}. \tag{6.8h}\]
### Estimates for forcing terms
**Lemma 6.4**.: Let \(k\gg 1\) and \(\delta\in(0,\frac{1}{32}]\), \(\lambda=\frac{\delta^{2}}{12k^{2}}\), then for \(\varepsilon\ll 1\) we have
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int_{\mathbb{R}^{2}}\left|F_{U_{i}}^{(\gamma)}\partial^{\gamma}U_{i}\right|\leq(4+8\delta)E_{k}^{2}+e^{-s}M^{4k-4}, \tag{6.9a}\]
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int_{\mathbb{R}^{2}}\left|F_{S}^{(\gamma)}\partial^{\gamma}S\right|\leq(4+8\delta)E_{k}^{2}+e^{-s}M^{4k-4}. \tag{6.9b}\]
Proof.: We begin with (6.9a).
We first deal with the term \(F_{U_{i}}^{(\gamma,U)}\) involving the top order derivatives of \(U\); this term is decomposed as a sum \(F_{U_{i},(1)}^{(\gamma,U)}+F_{U_{i},(2)}^{(\gamma,U)}+F_{U_{i},(3)}^{(\gamma,U)}\). From (B-M), \(0<\beta_{1},\beta_{\tau}<1\), and (5.10), we have
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int_{\mathbb{R}^{2}}\left|F_{U_{i},(1)}^{(\gamma,U)}\partial^{\gamma}U_{i}\right|\leq(4+\varepsilon^{\frac{1}{2}})E_{k}^{2}. \tag{6.10}\]
By (5.26) and Young's inequality, we can see that
\[\begin{split} 2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int_{\mathbb{R}^{2}}\left|F_{U_{i},(2)}^{(\gamma,U)}\partial^{\gamma}U_{i}\right|&\leq 2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\gamma_{2}\|\partial_{2}g_{A}\|_{L^{\infty}(\mathcal{X}(s))}\|\partial_{1}\partial^{\gamma-e_{2}}U_{i}\|_{L^{2}}\|\partial^{\gamma}U_{i}\|_{L^{2}}\\ &\leq 2\sum_{|\gamma|=k}\left(\frac{\gamma_{2}^{2}}{\delta}\lambda^{\gamma_{2}+1}\|\partial^{\gamma}U\|_{L^{2}}^{2}+\mathbb{1}_{\gamma_{2}>0}\delta\lambda^{\gamma_{2}-1}\|\partial_{1}\partial^{\gamma-e_{2}}U\|_{L^{2}}^{2}\right)\\ &\leq\lambda\frac{2k^{2}}{\delta}E_{k}^{2}+2\delta E_{k}^{2}\overset{\lambda=\frac{\delta^{2}}{12k^{2}}}{\leq}3\delta E_{k}^{2},\end{split} \tag{6.11}\]
and
\[\begin{split} 2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int_{\mathbb{R}^{2}}\left|F_{U_{i},(3)}^{(\gamma,U)}\partial^{\gamma}U_{i}\right|&\lesssim\sum_{|\gamma|=k}\int\sum_{\begin{subarray}{c}|\beta|=|\gamma|-1\\ \beta\leq\gamma\end{subarray}}|\partial^{\gamma-\beta}h_{A}||\partial_{2}\partial^{\beta}U||\partial^{\gamma}U|\\ &\lesssim\varepsilon\sum_{|\gamma|=k}\sum_{\begin{subarray}{c}|\beta|=|\gamma|-1\\ \beta\leq\gamma\end{subarray}}\left(\|\partial^{\gamma}U\|_{L^{2}}^{2}+\|\partial_{2}\partial^{\beta}U\|_{L^{2}}^{2}\right)\leq\varepsilon^{\frac{1}{2}}E_{k}^{2}.\end{split} \tag{6.12}\]
Combining these three estimates, we have
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int_{\mathbb{R}^{2}}\left|F_{U_{i}}^{(\gamma,U)}\partial^{\gamma}U_{i}\right|\leq(4+3\delta+\varepsilon^{\frac{1}{2}})E_{k}^{2}. \tag{6.13}\]
Next we deal with the forcing terms \(F_{U_{i}}^{(\gamma-1,U)}\) involving lower order derivatives of \(U\). We decompose its first part as \(F_{U_{i},(1)}^{(\gamma-1,U)}=I_{i1}+I_{i2}+I_{i3}\) where
\[\begin{split} I_{i1}&=-\sum_{\begin{subarray}{c}1 \leq|\beta|\leq|\gamma|-2\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}g_{ A}\partial^{\beta}\partial_{1}(U\cdot NN_{i}),\\ I_{i2}&=-\sum_{\begin{subarray}{c}1\leq|\beta|\leq| \gamma|-2\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}g_{ A}\partial^{\beta}\partial_{1}(AT_{i}),\\ I_{i3}&=-\sum_{\begin{subarray}{c}1\leq|\beta|\leq| \gamma|-2\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}\partial^{\gamma-\beta}h_{A} \partial^{\beta}\partial_{2}U_{i}.\end{split} \tag{6.14}\]
Since \(D(U\cdot N)\) is supported in \(\mathcal{X}(s)\), we introduce a positive cut-off function \(\tilde{\theta}\in C_{c}(5\mathcal{X}(0))\) such that \(\tilde{\theta}\equiv 1\) on \(\mathcal{X}(0)\). Let \(\tilde{\theta}_{s}(y)=\tilde{\theta}(y_{1}e^{-\frac{3}{2}s},y_{2}e^{-\frac{s}{2}})\), then \(\tilde{\theta}_{s}\in C_{c}^{\infty}(5\mathcal{X}(s))\), \(\tilde{\theta}_{s}\equiv 1\) on \(\mathcal{X}(s)\), and
\[\|\partial^{\gamma}\tilde{\theta}_{s}\|_{L^{\infty}}\lesssim\varepsilon^{-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}e^{-\frac{3}{2}\gamma_{1}s-\frac{\gamma_{2}}{2}s}\lesssim e^{-\frac{|\gamma|}{3}s}. \tag{6.15}\]
By the interpolation inequality (B.3), we have
\[\|I_{i1}\|_{L^{2}(\mathbb{R}^{2})}\lesssim\left\|D^{k}\left(\tilde{\theta}_{s}g _{A}\right)\right\|_{L^{2}_{y}(\mathbb{R}^{2})}^{a}\left\|D^{2}\left(\tilde{ \theta}_{s}g_{A}\right)\right\|_{L^{q}(\mathbb{R}^{2})}^{1-a}\|D^{k}(U\cdot NN )\|_{L^{2}(\mathbb{R}^{2})}^{b}\|D^{2}(U\cdot NN)\|_{L^{q}(\mathbb{R}^{2})}^{ 1-b}. \tag{6.16}\]
We estimate each factor. We first bound the \(D^{2}g_{A}\) term:
\[\begin{split}\left\|D^{2}\left(\tilde{\theta}_{s}g_{A}\right)\right\|_{L^{q}(\mathbb{R}^{2})}&\lesssim M^{\frac{1}{4}}e^{\frac{s}{2}}e^{-\frac{2}{3}s}(\varepsilon^{\frac{2}{3}}e^{2s})^{\frac{1}{q}}+e^{-\frac{s}{3}}(\varepsilon^{\frac{2}{3}}e^{2s})^{\frac{1}{q}}+\|M\eta^{-\frac{1}{6}}+M^{2}e^{-\frac{s}{2}}\|_{L^{q}(5\mathcal{X}(s))}\\ &\lesssim M\|\eta^{-1}\|_{L^{\frac{q}{6}}(\mathbb{R}^{2})}^{\frac{1}{6}}+M^{2}e^{-\frac{s}{6}}\varepsilon^{\frac{2}{3q}}e^{\frac{2}{q}s}\lesssim M.\end{split} \tag{6.17}\]
In the last inequality we require \(q\geq 12\) and use the fact that \((1+|y_{1}|^{\alpha_{1}}+\cdots+|y_{d}|^{\alpha_{d}})^{-1}\in L^{1}(\mathbb{R}^{d})\) as long as \(\sum\alpha_{i}^{-1}<1\). From the estimates (5.22) of \(U\cdot N\) and the estimates (5.10) of \(N\), we have
\[\|D^{2}(U\cdot NN)\|_{L^{q}}\lesssim Me^{-\frac{s}{2}}. \tag{6.18}\]
Then, as we did in the proof of lemma 6.2, we have
\[\|D^{k}(U\cdot JN)\|_{L^{2}(5\mathcal{X}(s))}\lesssim\|D^{k}U\|_{L^{2}(\mathbb{R}^{2})}+M^{2}\varepsilon^{\frac{1}{3}}e^{-\frac{k-3}{3}s}, \tag{6.19}\]
\[\begin{split}\|D^{m}g_{A}\|_{L^{2}(5\mathcal{X}(s))}&\lesssim e^{\frac{s}{2}}\left(\|D^{m}(U\cdot JN)\|_{L^{2}(5\mathcal{X}(s))}+\|D^{m}(V\cdot JN)\|_{L^{2}(5\mathcal{X}(s))}+\left\|D^{m}(\frac{\partial_{t}f}{1+f_{x_{1}}})\right\|_{L^{2}(5\mathcal{X}(s))}\right)\\ &\stackrel{{m>0}}{{\lesssim}}e^{\frac{s}{2}}\left(\|D^{m}U\|_{L^{2}(\mathbb{R}^{2})}+M^{2}\varepsilon^{\frac{1}{3}}e^{-\frac{m-3}{3}s}\right),\end{split} \tag{6.20}\]
\[\begin{split}\left\|\partial^{\gamma}\left(\tilde{\theta}_{s}g_{ A}\right)\right\|_{L^{2}(\mathbb{R}^{2})}&\lesssim_{\gamma} \varepsilon^{-\frac{\gamma_{1}}{2}-\frac{\gamma_{2}}{6}}e^{-\frac{3}{2}\gamma _{1}s-\frac{\gamma_{2}}{2}s}\|g_{A}\|_{L^{\infty}}|5\mathcal{X}(s)|^{1/2}+ \sum_{\beta<\gamma}\varepsilon^{-\frac{\beta_{1}}{2}-\frac{\beta_{2}}{6}}e^{ -\frac{3}{2}\beta_{1}s-\frac{\beta_{2}}{2}s}\|\partial^{\gamma-\beta}g_{A}\| _{L^{2}(5\mathcal{X}(s))}\\ &\lesssim e^{\frac{s}{2}}\left(\|D^{|\gamma|}U\|_{L^{2}(\mathbb{R }^{2})}+M^{2}\varepsilon^{\frac{1}{3}}e^{-\frac{|\gamma|-3}{3}s}\right).\end{split} \tag{6.21}\]
For \(k\geq 5\), we have \(a+b\geq\frac{1}{2}\), \(\frac{2-a-b}{1-a-b}\leq 2k-4\). Hence, by taking \(M\) to be large enough in terms of \(\lambda\) and \(k\), we have
\[\begin{split} 2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int|I_{i1} \partial^{\gamma}U_{i}|&\lesssim\sum_{|\gamma|=k}\lambda^{\gamma_ {2}}\|D^{k}U\|_{L^{2}}\left[\|D^{k}U\|_{L^{2}}^{a+b}+\left(M^{2}e^{\frac{1}{3}} e^{-\frac{k-3}{3}s}\right)^{a+b}\right]M^{2-a-b}e^{\frac{a+b-1}{2}s}\\ &\lesssim\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\left(\lambda^{- \frac{k}{2}}E_{k}\right)^{1+a+b}M^{2-a-b}e^{\frac{a+b-1}{2}s}+\sum_{|\gamma|=k }\lambda^{\gamma_{2}}M^{2+3a+3b}\varepsilon^{\frac{a+b}{3}}e^{-\frac{a+b}{3} ks+\frac{a+b+1}{2}s}\lambda^{-\frac{k}{2}}E_{k}\\ &\stackrel{{ a+b<1}}{{\leq}}2\delta E_{k}^{2}+C(a,b, \delta)e^{-s}M^{\frac{2(2-a-b)}{1-a-b}}\lambda^{-\frac{1+a+b}{1-a-b}k}+C(\delta )M^{10}e^{\frac{2}{3}(a+b)}\lambda^{-k}e^{-\frac{2}{3}(a+b)ks+(a+b+1)s}\\ &\leq 2\delta E_{k}^{2}+C(a,b,\delta)e^{-s}M^{4k-8}\lambda^{-\frac{1+a+ b}{1-a-b}k}\leq 2\delta E_{k}^{2}+e^{-s}M^{4k-6}.\end{split} \tag{6.22}\]
Next, we estimate the \(L^{2}\) norm of \(I_{i2}\):
\[\begin{split}\|I_{i2}\|_{L^{2}}&\lesssim e^{\frac{s}{2} }\sum_{j=1}^{k-2}\|D^{k-j}(U\cdot JN)D^{j}\partial_{1}(AT)\|_{L^{2}}+e^{\frac{s}{ 2}}\sum_{\begin{subarray}{c}1\leq|\beta|\leq|\gamma|-2\\ \beta\leq\gamma\end{subarray}}\left(|\partial^{\gamma-\beta}(V\cdot JN)|+ \left|\partial^{\gamma-\beta}\frac{\partial_{t}f}{1+f_{x_{1}}}\right|\right)\| \partial^{\beta}\partial_{1}(AT)\|_{L^{2}}\\ &=I_{i2,1}+I_{i2,2}.\end{split} \tag{6.23}\]
First, for \(I_{i2,1}\), we have that
\[\begin{split} I_{i2,1}&\overset{\text{H\"older}}{\lesssim}e^{\frac{s}{2}}\sum_{j=1}^{k-2}\|D^{k-j-1}D(\tilde{\theta}_{s}U\cdot JN)\|_{L^{\frac{2(k-1)}{k-1-j}}(\mathbb{R}^{2})}\|D^{j}\partial_{1}(AT)\|_{L^{\frac{2(k-1)}{j}}}\\ &\overset{\text{(B.2)}}{\lesssim}e^{\frac{s}{2}}\sum_{j=1}^{k-2}\|D(\tilde{\theta}_{s}U\cdot JN)\|_{\dot{H}^{k-1}}^{\frac{k-j-1}{k-1}}\|D(\tilde{\theta}_{s}U\cdot JN)\|_{L^{\infty}}^{\frac{j}{k-1}}\|\partial_{1}(AT)\|_{\dot{H}^{k-1}}^{\frac{j}{k-1}}\|\partial_{1}(AT)\|_{L^{\infty}}^{\frac{k-1-j}{k-1}}.\end{split} \tag{6.24}\]
Thus, if we choose \(\varepsilon\) small enough in terms of \(\lambda\) and \(k\), we have
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int\left|F_{U_{i},(2)}^{( \gamma-1,U)}\partial^{\gamma}U_{i}\right| \lesssim\sum_{|\gamma|=k}e^{\frac{\varepsilon}{2}}\left\|[ \partial^{\gamma},JN]U\right\|_{L^{2}}\|\partial^{\gamma}U\|_{L^{2}}\|\partial _{1}U\|_{L^{\infty}} \tag{6.30}\] \[\overset{\eqref{eq:2.1}}{\lesssim}e^{\frac{\varepsilon}{2}} \left(\varepsilon^{\frac{1}{2}}\|D^{k}U\|_{L^{2}}+\varepsilon e^{-\frac{k-3}{3 }s}\right)\|D^{k}U\|_{L^{2}}e^{-\frac{\varepsilon}{2}}\] \[\lesssim\lambda^{-k}\varepsilon^{\frac{1}{2}}E_{k}^{2}+ \varepsilon\lambda^{-\frac{k}{2}}E_{k}e^{-\frac{k-3}{3}s}\] \[\lesssim\lambda^{-k}\varepsilon^{\frac{1}{2}}E_{k}^{2}+ \varepsilon^{\frac{1}{2}}e^{-\frac{2(k-3)}{3}s}\leq\varepsilon^{\frac{1}{4}} E_{k}^{2}+e^{-s}.\]
From the estimates (5.19)(5.10) of \(V\) and \(J\),\(N\), we can see that
\[|\partial^{\gamma}(V\cdot JN)|+\left|\partial^{\gamma}\frac{\partial_{t}f}{1+ f_{x_{1}}}\right|\lesssim M^{2}\varepsilon^{\frac{1}{4}}e^{-\left(\gamma_{1}+ \frac{\gamma_{2}}{3}\right)s}. \tag{6.31}\]
Therefore, we have
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int\left|F_{U_{i},(3)}^{( \gamma-1,U)}\partial^{\gamma}U_{i}\right| \lesssim e^{\frac{\varepsilon}{2}}\sum_{|\gamma|=k}M^{2} \varepsilon^{\frac{1}{3}}e^{-\left(\gamma_{1}+\frac{\gamma_{2}}{3}\right)s} \|\partial_{1}U\|_{L^{\infty}}\|\partial^{\gamma}U\|_{L^{2}}|\mathcal{X}(s)|^{ \frac{1}{2}} \tag{6.32}\] \[\lesssim M^{2}\varepsilon^{\frac{3}{3}}e^{-\frac{k-3}{3}s}\|D^{ k}U\|_{L^{2}}\lesssim\varepsilon^{\frac{3}{3}}\|D^{k}U\|_{L^{2}}^{2}+M^{4} \varepsilon^{\frac{2}{3}}e^{-\frac{2(k-3)}{3}s}\] \[\leq\varepsilon^{\frac{1}{2}}E_{k}^{2}+e^{-s}.\]
The estimate of \(F_{U_{i},(4)}^{(\gamma-1,U)}\) is much the same, and we have
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int\left|F_{U_{i},(4)}^{(\gamma-1,U)} \partial^{\gamma}U_{i}\right|\leq\varepsilon^{\frac{1}{4}}E_{k}^{2}+e^{-s}. \tag{6.33}\]
Combining the above estimates, we arrive at
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int\left|F_{U_{i}}^{(\gamma-1,U)} \partial^{\gamma}U_{i}\right|\leq 2(\delta+\varepsilon^{\frac{1}{4}})E_{k}^{2}+e^{-s}M ^{4k-4}. \tag{6.34}\]
Now we estimate the terms involving \(k\)-th order derivatives of \(S\).
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int\left|\left(F_{U_{i},(2)}^{(\gamma,S)}+F_{U_{i},(4)}^{(\gamma,S)}\right)\partial^{\gamma}U_{i}\right|\lesssim\left(e^{\frac{s}{2}}\|\partial_{1}Z\|_{L^{\infty}}+e^{-\frac{s}{2}}\|\partial_{2}S\|_{L^{\infty}}\right)\lambda^{-k}E_{k}^{2}\leq\varepsilon^{\frac{1}{2}}E_{k}^{2}. \tag{6.35}\]
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int\left|F_{U_{i},(3)}^{( \gamma,S)}\partial^{\gamma}U_{i}\right| \lesssim\sum_{|\gamma|=k}\lambda^{\gamma_{2}}e^{-\frac{\varepsilon }{2}}\sum_{|\beta|=|\gamma|-1|\atop\beta\leq\gamma}\|\nabla S\|_{L^{\infty}} \|\partial_{2}\partial^{\beta}S\|_{L^{2}}\|\partial^{\gamma}U\|_{L^{2}} \tag{6.36}\] \[\lesssim e^{-s}\lambda^{-k}E_{k}^{2}\leq\varepsilon^{\frac{1}{2}}E_ {k}^{2},\] \[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int\left|F_{U_{i},(1)}^{( \gamma,S)}\partial^{\gamma}U_{i}\right| \lesssim\sum_{|\gamma|=k}\lambda^{\frac{\gamma_{2}+1}{2}}\gamma_{2} \|\partial_{2}(SJN)\|_{L^{\infty}}\|\partial^{\gamma}U\|_{L^{2}}\|\partial_{1} \partial^{\gamma-e_{2}}S\|_{L^{2}}\lambda^{\frac{\gamma_{2}-1}{2}}\] (6.37) \[\lesssim\sum_{|\gamma|=k}e^{-\frac{\varepsilon}{2}}\left(\lambda^{ \gamma_{2}+1}\|\partial^{\gamma}U\|_{L^{2}}^{2}+\lambda^{\gamma_{2}-1}\gamma_{2 }^{2}\|\partial_{1}\partial^{\gamma-e_{2}}S\|_{L^{2}}^{2}\right)\leq\varepsilon^ {\frac{1}{4}}E_{k}^{2},\]
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int\left|F^{(\gamma,S)}_{U_{i },(5)}\partial^{\gamma}U_{i}\right| \lesssim e^{\frac{s}{2}}\|\partial_{1}(JN)\|_{L^{\infty}}\|S\|_{L^{ \infty}}\lambda^{-k}E_{k}^{2} \tag{6.38}\] \[\lesssim e^{\frac{s}{2}}M^{2}\varepsilon^{\frac{s}{6}-\frac{1}{2} }e^{-\frac{3}{2}s}M^{\frac{1}{4}}\lambda^{-k}E_{k}^{2}\leq\varepsilon E_{k}^{ 2}.\]
Summing up the above inequalities, we get
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int\left|F^{(\gamma,S)}_{U_{i}}\partial ^{\gamma}U_{i}\right|\leq 2\varepsilon^{\frac{1}{4}}E_{k}^{2}. \tag{6.39}\]
Now we look at the terms involving lower order derivatives of \(S\). We decompose \(F^{(\gamma-1,S)}_{U_{i},(1)}=I_{i1}+I_{i2}+I_{i3}\) where
\[I_{i1} =-2\beta_{3}\beta_{\tau}\sum_{\begin{subarray}{c}1\leq|\beta| \leq|\gamma|-2\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}e^{\frac{s}{2}}\partial^{ \gamma-\beta}((S-S_{\infty})JN_{i})\partial_{1}\partial^{\beta}S, \tag{6.40}\] \[I_{i2} =-2\beta_{3}\beta_{\tau}\sum_{\begin{subarray}{c}1\leq|\beta| \leq|\gamma|-2\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}e^{\frac{s}{2}}S_{\infty} \partial^{\gamma-\beta}(JN_{i})\partial_{1}\partial^{\beta}S,\] \[I_{i3} =-2\beta_{3}\beta_{\tau}\sum_{\begin{subarray}{c}1\leq|\beta| \leq|\gamma|-2\\ \beta\leq\gamma\end{subarray}}\binom{\gamma}{\beta}e^{-\frac{s}{2}}\delta_{i2} \partial^{\gamma-\beta}S\partial_{2}\partial^{\beta}S.\]
For the first part \(I_{i1}\), we have that
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int|I_{i1}\partial^{ \gamma}U_{i}| \lesssim e^{\frac{s}{2}}\|D^{k}U\|_{L^{2}}\sum_{j=1}^{k-2}\left\|D ^{k-1-(j-1)}((S-S_{\infty})JN)D^{j-1}D^{2}S\right\|_{L^{2}} \tag{6.41}\] \[\lesssim e^{\frac{s}{2}}\|D^{k}U\|_{L^{2}}\sum_{j=1}^{k-2}\|D^{k} ((S-S_{\infty})JN)\|_{L^{2}}^{a}\|D^{2}((S-S_{\infty})JN)\|_{L^{q}}^{1-a}\|D^ {k}S\|_{L^{2}}^{b}\|D^{2}S\|_{L^{q}}^{1-b}\]
As before, we use the Leibniz rule, the estimates (5.10) of \(J\), \(N\), and the Poincaré inequality in the \(y_{2}\) direction to deduce that
\[\|D^{k}((S-S_{\infty})JN)\|_{L^{2}(\mathbb{R}^{2})}\lesssim\|D^{k}S\|_{L^{2}},\ |D^{2}(JN)|\lesssim\varepsilon^{\frac{1}{4}}e^{-s},\ \|D^{2}((S-S_{\infty})JN)\|_{L^{q}(\mathbb{R}^{2})} \lesssim Me^{-\frac{s}{2}}. \tag{6.42}\]
In the last inequality we used the fact that \(q>4\Rightarrow\|\eta^{-1}\|_{L^{\frac{q}{6}}(\mathbb{R}^{2})}<\infty\). Thus we have
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int|I_{i1}\partial^{\gamma }U_{i}| \lesssim e^{\frac{s}{2}}\|D^{k}U\|_{L^{2}}\sum_{j=1}^{k-2}\|D^{k}S\| _{L^{2}}^{a+b}\left(Me^{-\frac{s}{2}}\right)^{2-a-b} \tag{6.43}\] \[\lesssim\sum_{j=1}^{k-2}\lambda^{-\frac{k}{2}(1+a+b)}M^{2-a-b}e^{ -\frac{1-a-b}{2}s}E_{k}^{1+a+b}\] \[\leq\sum_{j=1}^{k-2}\left(\delta E_{k}^{2}+C(\delta)\lambda^{- \frac{2k(1+a+b)}{2(1-a-b)}}M^{\frac{2(2-a-b)}{1-a-b}}e^{-s}\right)\leq\delta E _{k}^{2}+e^{-s}M^{4k-6}\]
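The absorption used in the last two steps above (and in (6.22)) appears to be a standard Young-type inequality; we record it here for convenience. Since \(\frac{1+a+b}{2}+\frac{1-a-b}{2}=1\), Young's inequality gives, for any \(\delta>0\),

\[c\,E_{k}^{1+a+b}\leq\delta E_{k}^{2}+C(\delta)\,c^{\frac{2}{1-a-b}},\qquad c:=\lambda^{-\frac{k}{2}(1+a+b)}M^{2-a-b}e^{-\frac{1-a-b}{2}s},\]

and \(c^{\frac{2}{1-a-b}}=\lambda^{-\frac{(1+a+b)k}{1-a-b}}M^{\frac{2(2-a-b)}{1-a-b}}e^{-s}\), which is precisely the second term appearing in the penultimate line of (6.43).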
\(I_{i2}\) is estimated as
\[\|I_{i2}\|_{L^{2}}\lesssim\sum_{\begin{subarray}{c}1\leq|\beta|\leq|\gamma|-2\\ \beta\leq\gamma\end{subarray}}e^{\frac{s}{2}}M^{3}\varepsilon^{\frac{s}{8}-\frac{\gamma_{1}-\beta_{1}}{2}-\frac{\gamma_{2}-\beta_{2}}{6}}e^{-\frac{3}{2}(\gamma_{1}-\beta_{1})s-\frac{1}{2}(\gamma_{2}-\beta_{2})s}\cdot\left(e^{\frac{s}{6}}e^{\frac{s}{2}}\right)^{k-1-|\beta|}\|D^{k}S\|_{L^{2}}\overset{|\gamma|=k}{\lesssim}M^{3}\varepsilon^{\frac{1}{3}}\|D^{k}S\|_{L^{2}}. \tag{6.44}\]
And \(I_{i3}\) is estimated as
\[\begin{split} 2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int|I_{i3}\partial^{\gamma}U_{i}|&\lesssim\sum_{|\gamma|=k}e^{-\frac{s}{2}}\|D^{k}U\|_{L^{2}}\sum_{j=1}^{k-2}\|S\|_{\dot{H}^{k-1}}^{\frac{k-1-j}{k-1}}\|DS\|_{L^{\infty}}^{\frac{j}{k-1}}\|S\|_{\dot{H}^{k}}^{\frac{j}{k-1}}\|\partial_{2}S\|_{L^{\infty}}^{\frac{k-1-j}{k-1}}\\ &\lesssim e^{-\frac{s}{2}}\|U\|_{\dot{H}^{k}}\|S\|_{\dot{H}^{k}}e^{-\frac{s}{2}}\leq\varepsilon^{\frac{1}{2}}E_{k}^{2}\end{split} \tag{6.45}\]
Hence, we have
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int\left|F_{U_{i},(1)}^{(\gamma-1,S)}\partial^{\gamma}U_{i}\right|\leq(\delta+2\varepsilon^{\frac{1}{2}})E_{k}^{2}+e^{-s}M^{4k-6} \tag{6.46}\]
Next, we turn to \(F_{U_{i},(2)}^{(\gamma-1,S)}\). From Leibniz rule we have
\[\|[\partial^{\gamma},JN_{i}]S\|_{L^{2}(\mathcal{X}(s))}\lesssim\varepsilon^{ \frac{1}{2}}\|D^{k}S\|_{L^{2}(\mathbb{R}^{2})}+\varepsilon e^{-\left(\gamma_ {1}+\frac{\gamma_{2}}{3}-1\right)s}, \tag{6.47}\]
and
\[\begin{split} 2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int\left|F_{U_{i},(2)}^{(\gamma-1,S)}\partial^{\gamma}U_{i}\right|&\lesssim\sum_{ |\gamma|=k}e^{\frac{s}{2}}\|[\partial^{\gamma},JN_{i}]S\|_{L^{2}(\mathcal{X}(s ))}\,\|\partial_{1}S\|_{L^{\infty}}\|D^{k}U_{i}\|_{L^{2}}\\ &\lesssim e^{\frac{s}{2}}\left(\varepsilon^{\frac{1}{2}}\|D^{k}S \|_{L^{2}}+\varepsilon e^{-\left(\gamma_{1}+\frac{\gamma_{2}}{3}-1\right)s} \right)e^{-\frac{s}{2}}\|D^{k}U\|_{L^{2}}\leq\varepsilon^{\frac{1}{4}}E_{k}^ {2}+e^{-s}.\end{split} \tag{6.48}\]
Thus we have
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int_{\mathbb{R}^{3}}\left|F_{U_{i}}^{( \gamma-1,S)}\partial^{\gamma}U_{i}\right|\leq(\delta+2\varepsilon^{\frac{1}{ 2}}+\varepsilon^{\frac{1}{4}})E_{k}^{2}+e^{-s}M^{4k-5}. \tag{6.49}\]
Summing all the estimates together leads us to
\[2\sum_{|\gamma|=k}\lambda^{\gamma_{2}}\int_{\mathbb{R}^{3}}\left|F_{U_{i}}^{( \gamma)}\partial^{\gamma}U_{i}\right|\leq\left(4+C\varepsilon^{\frac{1}{4}}+6 \delta\right)E_{k}^{2}+e^{-s}M^{4k-4}. \tag{6.50}\]
The proof of (6.9b) is similar.
Proof of \(\dot{H}^{k}\) estimates of \(U\), \(S\).: We multiply the equations of \(\partial^{\gamma}U_{i}\), \(\partial^{\gamma}S\) by \(\partial^{\gamma}U_{i}\), \(\partial^{\gamma}S\) respectively and sum over \(i\); then we arrive at
\[\begin{split}\frac{1}{2}\frac{d}{ds}\|\partial^{\gamma}U\|_{L^{ 2}}^{2}\leq&\frac{1}{2}\int|\partial^{\gamma}U|^{2}(\operatorname{ div}\mathcal{V}_{A}-2D_{\gamma})+\frac{1}{2}(1+\gamma_{1})\beta_{3}\beta_{ \tau}(1+\varepsilon^{\frac{1}{13}})\left(\|\partial^{\gamma}S\|_{L^{2}}^{2}+ \|\partial^{\gamma}U\|_{L^{2}}^{2}\right)\\ &-2\beta_{3}\beta_{\tau}\int S\left(e^{\frac{s}{2}}JN_{i} \partial_{1}\partial^{\gamma}S+e^{-\frac{s}{2}}\partial_{i2}\partial_{2} \partial^{\gamma}S\right)\partial^{\gamma}U_{i}+\int\left|F_{U_{i}}^{(\gamma)} \partial^{\gamma}U_{i}\right|,\\ \frac{1}{2}\frac{d}{ds}\left\|\partial^{\gamma}S\right\|_{2}^{2}& \leq\frac{1}{2}\int|\partial^{\gamma}S|^{2}\left(\operatorname{ div}\mathcal{V}_{A}-2D_{\gamma}\right)+\frac{1}{2}\beta_{\tau}\left(\beta_{1}+ \beta_{3}\gamma_{1}\right)\left(1+\varepsilon^{\frac{1}{13}}\right)\left(\| \partial^{\gamma}S\|_{2}^{2}+\|\partial^{\gamma}U\|_{2}^{2}\right)\\ &-2\beta_{3}\beta_{\tau}\int S\left(e^{\frac{s}{2}}\partial_{1} \partial^{\gamma}U_{j}JN_{j}+e^{-\frac{s}{2}}\partial_{2}\partial^{\gamma}U_{2} \right)\partial^{\gamma}S+\int\left|F_{S}^{(\gamma)}\partial^{\gamma}S\right|. \end{split} \tag{6.51b}\]
Here we used the fact that \(|JN\partial_{1}W|\leq|JN||\partial_{1}W|\leq(1+\varepsilon^{\frac{3}{2}})(1+\varepsilon^{\frac{1}{12}})\leq 1+\varepsilon^{\frac{1}{13}}\). By summing up the above two inequalities and integrating by parts, we get
\[\begin{split}&\frac{d}{ds}\left(\|\partial^{\gamma}U\|_{L^{2}}^{2}+ \|\partial^{\gamma}S\|_{L^{2}}^{2}\right)+\int\left(2D_{\gamma}-\operatorname{ div}\mathcal{V}_{A}-\beta_{\tau}(1+2\gamma_{1}\beta_{3})(1+\varepsilon^{\frac{1}{13}}) \right)\left(|\partial^{\gamma}U|^{2}+|\partial^{\gamma}S|^{2}\right)\\ &\leq 2\int\left|F_{U_{i}}^{(\gamma)}\partial^{\gamma}U_{i}\right|+2 \int\left|F_{S}^{(\gamma)}\partial^{\gamma}S\right|+4\beta_{3}\beta_{\tau}\int \left[e^{\frac{s}{2}}\partial^{\gamma}S\partial^{\gamma}U\cdot\partial_{1}(SJN )+e^{-\frac{s}{2}}\partial^{\gamma}S\partial_{2}U_{2}\partial_{2}S\right]\\ &\leq 2\int\left|F_{U_{i}}^{(\gamma)}\partial^{\gamma}U_{i}\right|+2 \int\left|F_{S}^{(\gamma)}\partial^{\gamma}S\right|+2\beta_{3}\beta_{\tau}(1+ 2\varepsilon^{\frac{1}{2}})\left(\|\partial^{\gamma}U\|_{L^{2}}^{2}+\| \partial^{\gamma}S\|_{L^{2}}^{2}\right).\end{split} \tag{6.52}\]
In the last inequality we used the facts that \(|\partial_{1}(SJN)|\leq(1+\varepsilon^{\frac{1}{2}})e^{-\frac{s}{2}}\) and the estimate (5.23) of \(S\); the first fact can be obtained from (5.23) and the estimates (5.10) of \(J\), \(N\).
Now we estimate the damping term:
\[\begin{split}& 2D_{\gamma}-\operatorname{div}\mathcal{V}_{A}-\beta_{ \tau}(1+2\gamma_{1}\beta_{3})(1+\varepsilon^{\frac{1}{13}})-2\beta_{3}\beta_{ \tau}(1+2\varepsilon^{\frac{1}{2}})\\ &\geq|\gamma|+2\gamma_{1}\left(1+\beta_{1}\beta_{\tau}\partial_{ 1}(JW)+\partial_{1}G_{A}\right)-2-\beta_{1}\beta_{\tau}\partial_{1}(JW)- \partial_{1}G_{A}-\partial_{2}h_{A}\\ &\qquad\qquad-2\beta_{3}\beta_{\tau}(1+\varepsilon^{\frac{1}{3} })\gamma_{1}-\underbrace{\left[\beta_{\tau}(1+\varepsilon^{\frac{1}{13}})+2 \beta_{3}\beta_{\tau}(1+2\varepsilon^{\frac{1}{2}})\right]}_{\leq 3}\\ &\stackrel{{\eqref{eq:2}}}{{\geq}}|\gamma|+2\gamma_{1} \left(1-\beta_{1}\beta_{\tau}-\beta_{3}\beta_{\tau}\right)-6-C\varepsilon^{ \frac{1}{13}}\geq k-7.\end{split} \tag{6.53}\]
Multiplying (6.52) by \(\lambda^{\gamma_{2}}\) and taking the sum, we have
\[\frac{d}{ds}E_{k}^{2}+(k-7)E_{k}^{2}\leq(8+16\delta)E_{k}^{2}+e^{-s}M^{4k-3}. \tag{6.54}\]
Taking \(k\geq 18\) we have
\[\frac{d}{ds}E_{k}^{2}+2E_{k}^{2}\leq e^{-s}M^{4k-3}, \tag{6.55}\]
which results in
\[E_{k}^{2}(s)\leq e^{-2(s-s_{0})}E_{k}^{2}(s_{0})+(1-e^{-(s-s_{0})})e^{-s}M^{4k -3}. \tag{6.56}\]
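For completeness, (6.56) can be obtained from (6.55) by an elementary integrating-factor computation (recorded here as a sketch):

\[\frac{d}{ds}\left(e^{2s}E_{k}^{2}\right)=e^{2s}\left(\frac{d}{ds}E_{k}^{2}+2E_{k}^{2}\right)\leq e^{s}M^{4k-3},\]

and integrating from \(s_{0}\) to \(s\) gives

\[E_{k}^{2}(s)\leq e^{-2(s-s_{0})}E_{k}^{2}(s_{0})+e^{-2s}\left(e^{s}-e^{s_{0}}\right)M^{4k-3}=e^{-2(s-s_{0})}E_{k}^{2}(s_{0})+\left(1-e^{-(s-s_{0})}\right)e^{-s}M^{4k-3}.\]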
By Leibniz rule, we have
\[\begin{cases}\|WN\|_{H^{k}}\leq(1+C\varepsilon^{\frac{1}{2}})\|W\|_{\dot{H}^{ k}}+CM^{2}\varepsilon^{\frac{5}{3}}e^{-\left(\frac{k}{3}-\frac{3}{2}\right)s}\\ \|AT\|_{\dot{H}^{k}},\|ZN\|_{\dot{H}^{k}}\leq(1+C\varepsilon^{\frac{1}{2}})\|A \text{ or }Z\|_{\dot{H}^{k}}+CM^{3}\varepsilon^{\frac{5}{3}}e^{-\frac{k-3}{3}s},\end{cases} \tag{6.57}\]
and
\[\begin{cases}\|U\|_{\dot{H}^{k}}\leq(1+C\varepsilon^{\frac{1}{2}})\left[ \frac{1}{2}\left(e^{-\frac{s}{2}}\|W\|_{\dot{H}^{k}}+\|Z\|_{\dot{H}^{k}}\right) +\|A\|_{\dot{H}^{k}}\right]+CM^{3}\varepsilon^{\frac{5}{3}}e^{-\frac{k-3}{3} s}\\ \|S\|_{\dot{H}^{k}}\leq\frac{1}{2}(1+C\varepsilon^{\frac{1}{2}})\left(e^{- \frac{s}{2}}\|W\|_{\dot{H}^{k}}+\|Z\|_{\dot{H}^{k}}\right)+CM^{3}\varepsilon^{ \frac{5}{3}}e^{-\frac{k-3}{3}s}.\end{cases} \tag{6.58}\]
According to the assumption (3.40) on the \(\dot{H}^{k}\) norms of \(W\), \(Z\), \(A\), we have
\[E_{k}^{2}(s_{0})\leq(2+C\varepsilon^{\frac{1}{2}})\varepsilon. \tag{6.59}\]
Thus we finally obtain that
\[\lambda^{k}\left(\|U\|_{\dot{H}^{k}}^{2}+\|S\|_{\dot{H}^{k}}^{2}\right)\leq E_{k}^{2}\leq(2+C\varepsilon^{\frac{1}{2}})\varepsilon^{-1}e^{-2s}+M^{4k-3}e^{-s}(1-\varepsilon^{-1}e^{-s}). \tag{6.60}\]
This finishes the proof of the energy estimate.
### Higher order estimates for \(W,Z,A\)
Using the energy estimate, we can further obtain higher order estimates for \(W,Z,A\).
**Lemma 6.5**.: For \(k\gg 1\), we have that
\[|\partial^{\gamma}W| \lesssim\begin{cases}\eta^{-\frac{1}{6}}e^{\frac{s}{2(k-3)}},& \gamma_{1}=0,\ |\gamma|=3\\ \eta^{-\frac{1}{6}}e^{\frac{s}{k-3}},&\gamma_{1}>0,\ |\gamma|=3,\end{cases} \tag{6.61a}\] \[|\partial^{\gamma}Z| \lesssim\begin{cases}e^{-\left(\frac{3}{2}-\frac{1}{2(k-3)}\right) s},&\gamma_{1}\geq 1,\ |\gamma|=3\\ e^{-\left(1-\frac{|\gamma|-1}{2(k-3)}\right)s},&|\gamma|=3,4,5,\end{cases}\] (6.61b) \[|\partial^{\gamma}A| \lesssim\begin{cases}e^{-\left(\frac{3}{2}-\frac{1}{k-2}\right) s},&\gamma_{1}\geq 1,\ |\gamma|=2,3\\ e^{-\left(1-\frac{|\gamma|-1}{2(k-3)}\right)s},&|\gamma|=3,4,5.\end{cases} \tag{6.61c}\]
Proof.: The proof is similar to the interpolation in [13]; nevertheless, for the reader's convenience we recap the proof here.
First we deal with \(A\). For \(\gamma_{1}\geq 1\), \(|\gamma|=2,3\), we have
\[|\partial^{\gamma}A| \lesssim\|\partial_{1}A\|_{\dot{H}^{k-1}}^{\frac{|\gamma|-1}{k- 2}}\|\partial_{1}A\|_{L^{\infty}}^{1-\frac{|\gamma|-1}{k-2}}\lesssim(M^{2k}e^ {-\frac{s}{2}})^{\frac{|\gamma|-1}{k-2}}(Me^{-\frac{3}{2}s})^{1-\frac{|\gamma| -1}{k-2}}\lesssim M^{2k}e^{-\left(\frac{3}{2}-\frac{|\gamma|-1}{k-2}\right)s} \lesssim e^{-\left(\frac{3}{2}-\frac{|\gamma|}{k-2}\right)s}. \tag{6.62}\]
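The interpolation used in (6.62) (and in the analogous estimates below) is, presumably, the two-dimensional Gagliardo–Nirenberg inequality in the form

\[\|D^{m}g\|_{L^{\infty}(\mathbb{R}^{2})}\lesssim\|g\|_{\dot{H}^{k-1}(\mathbb{R}^{2})}^{\frac{m}{k-2}}\,\|g\|_{L^{\infty}(\mathbb{R}^{2})}^{1-\frac{m}{k-2}},\qquad 0<m<k-2,\]

applied with \(g=\partial_{1}A\) and \(m=|\gamma|-1\); the appendix estimate (B.2) invoked earlier appears to be an interpolation of the same kind.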
For \(|\gamma|=3,4,5\), we have
\[|\partial^{\gamma}A| \lesssim\|D^{k}A\|_{L^{2}}^{\frac{|\gamma|-2}{k-3}}\|D^{2}A\|_{L^{\infty}}^{1-\frac{|\gamma|-2}{k-3}}\lesssim(M^{2k}e^{-\frac{s}{2}})^{\frac{|\gamma|-2}{k-3}}(Me^{-s})^{1-\frac{|\gamma|-2}{k-3}}\lesssim M^{2k}e^{-\left(1-\frac{|\gamma|-2}{2(k-3)}\right)s}\lesssim e^{-\left(1-\frac{|\gamma|-1}{2(k-3)}\right)s}. \tag{6.63}\]
Next we estimate \(Z\). For \(\gamma_{1}\geq 1\), \(|\gamma|=3\), we have
\[|\partial^{\gamma}Z| \lesssim\|\partial_{1}\nabla Z\|_{\dot{H}^{k-2}}^{\frac{1}{k-3}}\|\partial_{1}\nabla Z\|_{L^{\infty}}^{1-\frac{1}{k-3}}\lesssim(M^{2k}e^{-\frac{s}{2}})^{\frac{1}{k-3}}(Me^{-\frac{3}{2}s})^{1-\frac{1}{k-3}}\lesssim M^{2k}e^{-\left(\frac{3}{2}-\frac{1}{k-3}\right)s}\lesssim e^{-\left(\frac{3}{2}-\frac{1}{2(k-3)}\right)s}. \tag{6.64}\]
For \(|\gamma|=3,4,5\), the estimates for \(Z\) are the same as \(A\).
Now we turn to \(W\). Since \(|\gamma|=3\), we can split \(\gamma\) as \(\gamma=\gamma^{\prime}+\gamma^{\prime\prime}\), where \(|\gamma^{\prime}|=1\) and \(\gamma_{1}^{\prime\prime}=\min(\gamma_{1},2)\). Then \(\eta^{\mu}\partial^{\gamma}W=\partial^{\gamma^{\prime}}(\eta^{\mu}\partial^{\gamma^{\prime\prime}}W)-\partial^{\gamma^{\prime}}(\eta^{\mu})\partial^{\gamma^{\prime\prime}}W=I_{1}+I_{2}\). Let
\[\mu=\begin{cases}\frac{1}{6},&\gamma_{1}=0\\ \frac{1}{3},&\text{otherwise}.\end{cases} \tag{6.65}\]
Note that \(|\partial_{1}(\eta^{\mu})|\lesssim\eta^{\mu-\frac{1}{3}}\) and \(|\partial_{2}(\eta^{\mu})|\lesssim\eta^{\mu-\frac{1}{6}}\); thus, when \(\gamma_{1}=0\) we have \(|I_{2}|\lesssim\eta^{\mu-\frac{1}{6}}|\partial_{22}W|\lesssim M\), and when \(\gamma_{1}>0\) we have \(|I_{2}|\lesssim M\eta^{-\frac{1}{6}}\lesssim M\). By interpolation and the bootstrap assumptions for \(W\), we have
\[|I_{1}|\lesssim\|D(\eta^{\mu}\partial^{\gamma^{\prime\prime}}W)\|_{L^{\infty} }\lesssim\|\eta^{\mu}\partial^{\gamma^{\prime\prime}}W\|_{\dot{H}^{k-2}}^{ \frac{1}{k-3}}\|\eta^{\mu}\partial^{\gamma^{\prime\prime}}W\|_{L^{\infty}}^{1- \frac{1}{k-3}}\lesssim\ M\|\eta^{\mu}\partial^{\gamma^{\prime\prime}}W\|_{ \dot{H}^{k-2}}^{\frac{1}{k-3}}. \tag{6.66}\]
We estimate the \(\dot{H}^{k-2}\) norm as follows:
\[\begin{split}\|\eta^{\mu}\partial^{\gamma^{\prime\prime}}W\|_{\dot{H}^{k-2}}&\lesssim\sum_{m=0}^{k-2}\|D^{m}\partial^{\gamma^{\prime\prime}}WD^{k-2-m}\eta^{\mu}\|_{L^{2}}\lesssim\sum_{m=0}^{k-2}\|D^{m}\partial^{\gamma^{\prime\prime}}W\|_{L^{\frac{2(k-1)}{m+1}}}\|D^{k-2-m}\eta^{\mu}\|_{L^{\frac{2(k-1)}{k-2-m}}(\mathcal{X}(s))}\\ &\lesssim\sum_{m=0}^{k-2}\|W\|_{\dot{H}^{k}}^{\frac{m+1}{k-1}}\|\nabla W\|_{L^{\infty}}^{1-\frac{m+1}{k-1}}\|D^{k-2-m}\eta^{\mu}\|_{L^{\frac{2(k-1)}{k-2-m}}(\mathcal{X}(s))}\\ &\lesssim\sum_{m=0}^{k-2}(M^{2k})^{\frac{m+1}{k-1}}\|D^{k-2-m}\eta^{\mu}\|_{L^{\frac{2(k-1)}{k-2-m}}(\mathcal{X}(s))}.\end{split} \tag{6.67}\]
Simple calculation yields \(|D^{k}(\eta^{\mu})|\lesssim\eta^{\mu-\frac{k}{6}}\), thus we have that
\[\|D^{k-2-m}\eta^{\mu}\|_{L^{\frac{2(k-1)}{k-2-m}}(\mathcal{X}(s))} \lesssim\|\eta^{\mu-\frac{k-2-m}{6}}\|_{L^{\frac{2(k-1)}{k-2-m}}( \mathcal{X}(s))}\lesssim\|\eta^{\mu}\|_{L^{\infty}(\mathcal{X}(s))}\|\eta^{- \frac{k-2-m}{6}}\|_{L^{\frac{2(k-1)}{k-2-m}}(\mathcal{X}(s))}\] \[\lesssim\varepsilon e^{3\mu s}\times\begin{cases}1,&m=k-2\\ \|\eta^{-1}\|_{L^{\frac{k-1}{3}}(\mathcal{X}(s))},&m<k-2\end{cases} \tag{6.68}\] \[\stackrel{{ k>3}}{{\lesssim}}\varepsilon^{\mu}e^{3 \mu s}.\]
Consequently, we obtain \(|I_{1}|\lesssim M(M^{2k}\varepsilon^{\mu}e^{3\mu s})^{\frac{1}{k-3}}\lesssim e ^{\frac{3\mu s}{k-3}}\), and \(|\eta^{\mu}\partial^{\gamma}W|\lesssim e^{\frac{3\mu s}{k-3}}+M\lesssim e^{ \frac{3\mu s}{k-3}}\).
## 7. Constraints and evolution of modulation variables
In this section we close the bootstrap argument for the modulation variables \(\xi,n,\phi,\tau,\kappa\). The equations for these variables are deduced from the constraints that we impose on \(W\).
### Constraints
We impose constraints on \(W\) and its derivatives up to second order at the origin, i.e.
\[W(0,s)=\overline{W}(0)=0,\quad\nabla W(0,s)=\nabla\overline{W}(0)=(-1,0)^{T}, \quad\nabla^{2}W(0,s)=\nabla^{2}\overline{W}(0)=\begin{pmatrix}0&0\\ 0&0\end{pmatrix}. \tag{7.1}\]
It is possible to impose these constraints. In fact, as long as the initial data \(W(y,-\log\varepsilon)\) satisfies these constraints, we can choose \(6\) modulation variables \(\xi\), \(n_{2}\), \(\phi\), \(\tau\), \(\kappa\) in a continuous manner with respect to time in terms of \(w(x,t)\), ensuring that \(W(y,s)\) still satisfies these constraints.
### The functions \(G_{W}\), \(h_{W}\), \(F_{W}\) and their derivatives, evaluated at \(y=0\)
In a neighborhood of the origin, \(\tilde{f}\) reduces to \(\tilde{f}(\tilde{x},t)=\frac{1}{2}\phi\tilde{x}_{2}^{2}\), and as a consequence, in a neighborhood of \(0\), \(f(x,t)=\frac{1}{2}\phi x_{2}^{2}\). Note that any derivatives with respect to \(x_{1}\) or \(\tilde{x}_{1}\) of those functions vanish at the origin, so we can conveniently evaluate the \(f\)-related functions at the origin:
\[\tilde{f}^{0}=0,\ \ \partial_{\tilde{x}_{2}}\tilde{f}^{0}=0,\ \ \partial_{\tilde{x}_{2}}^{2}\tilde{f}^{0}=0; \tag{7.2a}\] \[(\partial_{t})_{\tilde{x}}\tilde{f}^{0}=0,\ \ \partial_{\tilde{x}_{2}}( \partial_{t})_{\tilde{x}}\tilde{f}^{0}=0,\ \ \partial_{\tilde{x}_{2}}^{2}(\partial_{t})_{\tilde{x}}\tilde{f}^{0}=\dot{\phi};\] (7.2b) \[f^{0}=0,\ \ \partial_{x_{2}}f^{0}=0,\ \ \partial_{x_{2}}^{2}f^{0}=0;\] (7.2c) \[J^{0}=0,\ \ \partial_{x_{2}}J^{0}=0,\ \ \partial_{x_{2}}^{2}J^{0}= \phi^{2},\ \ \partial_{x_{2}}^{3}J^{0}=0;\] (7.2d) \[N^{0}=(1,0)^{T},\ \ \partial_{x_{2}}N^{0}=(0,-\phi)^{T},\ \ \partial_{x_{2}}^{2}N^{0}=(-\phi^{2},0)^{T},\ \ \partial_{x_{2}}^{3}N^{0}=(0,2\phi^{3})^{T};\] (7.2e) \[T^{0}=(0,1)^{T},\ \ \partial_{x_{2}}T^{0}=(\phi,0)^{T},\ \ \partial_{x_{2}}^{2}T^{0}=(0, \phi^{2})^{T},\ \ \partial_{x_{2}}^{3}T^{0}=(-2\phi^{3},0)^{T};\] (7.2f) \[(\partial_{t})_{x}f^{0}=0,\ \ \partial_{x_{2}}(\partial_{t})_{x}f^{0}=0,\ \ \partial_{x_{2}}^{0}( \partial_{t})_{x}f^{0}=\dot{\phi};\] (7.2g) \[\partial_{t}J^{0}=0,\ \ \partial_{x_{2}}\partial_{t}J^{0}=0,\ \ \partial_{x_{2}}^{2} \partial_{t}J^{0}=2\phi\dot{\phi};\] (7.2h) \[\partial_{t}N^{0}=(0,0)^{T},\ \ \partial_{x_{2}}\partial_{t}N^{0}=(0,- \dot{\phi})^{T},\ \ \partial_{x_{2}}^{2}\partial_{t}N^{0}=(-2\phi\dot{\phi},0)^{T};\] (7.2i) \[\partial_{t}T^{0}=(0,0)^{T},\ \ \partial_{x_{2}}\partial_{t}T^{0}=( \dot{\phi},0)^{T},\ \ \partial_{x_{2}}^{2}\partial_{t}T^{0}=(0,-2\phi\dot{\phi})^{T}. \tag{7.2j}\]
By the definition of \(V\), we have
\[V_{i}^{0}=-\frac{1+\alpha}{2}R_{ji}\dot{\xi}_{j}, \tag{7.3a}\] \[\partial_{1}V^{0}=\frac{1+\alpha}{2}e^{-\frac{3}{2}s}(0,Q_{21})^{T},\ \ \partial_{2}V^{0}=\frac{1+\alpha}{2}e^{-\frac{s}{2}}(Q_{12},0)^{T}, \tag{7.3b}\]
\[\partial_{11}V^{0}=\frac{1+\alpha}{2}\phi e^{-3s}(0,Q_{21})^{T},\ \ \partial_{12}V^{0}=0,\ \ \partial_{22}V^{0}=0. \tag{7.3c}\]
From the definition of \(G_{W}\) and (7.2)(7.3), we have
\[\frac{1}{\beta_{\tau}}G_{W}^{0} =e^{\frac{\tau}{2}}[\kappa+\beta_{2}Z^{0}-(1+\alpha)\beta_{1}R_{j 1}\dot{\xi}_{j}], \tag{7.4a}\] \[\frac{1}{\beta_{\tau}}\partial_{1}G_{W}^{0} =\beta_{2}e^{\frac{\tau}{2}}\partial_{1}Z^{0},\] (7.4b) \[\frac{1}{\beta_{\tau}}\partial_{2}G_{W}^{0} =\beta_{2}e^{\frac{\tau}{2}}\partial_{2}Z^{0}+(1+\alpha)\beta_{1 }Q_{12}+\beta_{1}(1+\alpha)\phi R_{j2}\dot{\xi}_{j},\] (7.4c) \[\frac{1}{\beta_{\tau}}\partial_{11}G_{W}^{0} =\beta_{2}e^{\frac{\tau}{2}}\partial_{11}Z^{0},\] (7.4d) \[\frac{1}{\beta_{\tau}}\partial_{12}G_{W}^{0} =\beta_{2}e^{\frac{\tau}{2}}\partial_{12}Z^{0}-(1+\alpha)\beta_{1 }e^{-\frac{3}{2}s}\phi Q_{21},\] (7.4e) \[\frac{1}{\beta_{\tau}}\partial_{22}G_{W}^{0} =-\phi e^{-\frac{\tau}{2}}+\phi^{2}e^{-s}\frac{G_{W}^{0}}{\beta_{ \tau}}+e^{-\frac{\tau}{2}}\beta_{2}\partial_{22}Z^{0}-(1+\alpha)\beta_{1}\phi ^{2}e^{-\frac{\tau}{2}}R_{j1}\dot{\xi}_{j}. \tag{7.4f}\]
Similarly for \(h_{W}\), we have
\[\frac{1}{\beta_{\tau}}h_{W}^{0}=\beta_{1}e^{-\frac{s}{2}}\left(2A^{0}-(1+\alpha)R_{j2}\dot{\xi}_{j}\right). \tag{7.5}\]
As for the forcing terms, we insert the above evaluations into the definition (2.28) and obtain
\[F_{W}^{0}= -\beta_{3}\beta_{\tau}(\kappa-Z^{0})\partial_{2}A^{0}+\beta_{ \tau}e^{-\frac{\tau}{2}}Q_{12}A^{0} \tag{7.6a}\] \[-2\phi\beta_{1}\beta_{\tau}e^{-\frac{\tau}{2}}\left(-\frac{1+ \alpha}{2}R_{j2}\dot{\xi}_{j}+A^{0}\right)A^{0}+\frac{1}{2}\phi\beta_{3}\beta _{\tau}e^{-\frac{\tau}{2}}(\kappa+Z^{0})(Z^{0}-\kappa),\] \[\partial_{1}F_{W}^{0}= \beta_{3}\beta_{\tau}Q_{2}A^{0}(\partial_{1}Z^{0}+e^{-\frac{\tau} {2}})-\beta_{3}\beta_{\tau}\partial_{12}A^{0}(\kappa-Z^{0})+\beta_{\tau}e^{ \frac{\tau}{2}}Q_{12}\partial_{1}A^{0}\] (7.6b) \[-\phi\partial_{1}Ah_{W}^{0}-\phi\beta_{2}\beta_{\tau}e^{-\frac{ \tau}{2}}A^{0}\left((1+\alpha)Q_{21}e^{-\frac{3}{2}s}+2\partial_{1}A^{0}\right)\] \[-\frac{1}{2}\phi\beta_{3}\beta_{\tau}e^{-\frac{\tau}{2}}(e^{- \frac{\tau}{2}}+\partial_{1}Z^{0})(\kappa+Z^{0})+\frac{1}{2}\phi\beta_{3}\beta _{\tau}e^{-\frac{\tau}{2}}(\kappa-Z^{0})(\partial_{1}Z^{0}-e^{-\frac{\tau}{2 }}),\] \[\partial_{2}F_{W}^{0}= -\beta_{3}\beta_{\tau}(\kappa-Z^{0})\partial_{22}A^{0}+\beta_{3} \beta_{\tau}\partial_{2}Z^{0}\partial_{2}A^{0}-\dot{\phi}\beta_{\tau}e^{-s}A^ {0}+\beta_{\tau}e^{-\frac{\tau}{2}}Q_{12}\partial_{2}A^{0}\] (7.6c) \[-\phi\beta_{3}\beta_{\tau}e^{-\frac{\tau}{2}}\partial_{2}Z^{0}Z^{0 }+\phi^{2}\beta_{3}\beta_{\tau}e^{-s}A^{0}(\kappa-Z^{0})\] \[-\phi\beta_{1}\beta_{\tau}e^{-\frac{\tau}{2}}A^{0}\left(2\partial _{2}A^{0}-\phi e^{-\frac{\tau}{2}}(\kappa+Z^{0})\right)-\phi\partial_{2}A^{0} h_{W}^{0},\] \[\partial_{11}F_{W}^{0}= 2\beta_{3}\beta_{\tau}(e^{-\frac{\tau}{2}}+\partial_{12}Z^{0}) \partial_{12}A^{0}-\beta_{3}\beta_{\tau}(\kappa-Z^{0})\partial_{112}A^{0}+ \beta_{\tau}e^{-\frac{\tau}{2}}Q_{12}\partial_{11}A^{0}\] (7.6d) \[-2\phi\beta_{1}\beta_{\tau}e^{-\frac{\tau}{2}}\partial_{11}A^{0} \left(2A^{0}-\frac{1+\alpha}{2}R_{j2}\dot{\xi}_{j}\right)-4\phi\beta_{1}\beta _{\tau}e^{-\frac{\tau}{2}}\partial_{1}A^{0}\left(\frac{1+\alpha}{2}Q_{21}e^{- \frac{3}{2}s}+\partial_{1}A^{0}\right)\] \[-\phi\beta_{3}\beta_{\tau}e^{-\frac{\tau}{2}}\left(\left(\partial _{1}Z^{0}\right)^{2}-e^{-s}+Z^{0}\partial_{11}Z^{0}\right)+\beta_{3}\beta_{\tau} \partial_{11}Z^{0}\partial_{2}A^{0},\]
\[\partial_{12}F^{0}_{W}= -2\beta_{3}\beta_{\tau}(\kappa-Z^{0})\partial_{122}A^{0}+\beta_{3} \beta_{\tau}\partial_{12}Z^{0}\partial_{2}A^{0}+\beta_{3}\beta_{\tau}\partial_{2 }Z^{0}\partial_{12}A^{0} \tag{7.6e}\] \[+\beta_{3}\beta_{\tau}(e^{-\frac{\varepsilon}{2}}+\partial_{1}Z^{ 0})\partial_{22}A^{0}-\beta_{\tau}\dot{\phi}e^{-s}\partial_{1}A^{0}+\beta_{ \tau}e^{-\frac{\varepsilon}{2}}Q_{12}\partial_{12}A^{0}\] \[-\phi\beta_{3}\beta_{\tau}e^{-\frac{\varepsilon}{2}}\left(\partial _{12}Z^{0}Z^{0}+\partial_{1}Z^{0}\partial_{2}Z^{0}\right)+\phi^{2}\beta_{3} \beta_{\tau}e^{-s}\left((\kappa-Z^{0})\partial_{1}A^{0}-(e^{-\frac{\varepsilon }{2}}+\partial_{1}Z^{0})A^{0}\right)\] \[-2\phi\beta_{1}\beta_{\tau}e^{-\frac{\varepsilon}{2}}\left[ \partial_{1}A^{0}\partial_{2}A^{0}+\left(\frac{1+\alpha}{2}Q_{21}e^{-\frac{3} {2}s}+\partial_{1}A^{0}\right)\partial_{2}A^{0}+A^{0}\partial_{12}A^{0}\right]\] \[-\phi\partial_{12}A^{0}h^{0}_{W}+\phi^{2}\beta_{1}\beta_{\tau}e^ {-s}\left[\partial_{1}A^{0}(\kappa+Z^{0})+A^{0}(\partial_{1}Z^{0}-e^{-\frac{ \varepsilon}{2}})\right],\] \[\partial_{22}F^{0}_{W}= \beta_{3}\beta_{\tau}\left[\partial_{22}Z^{0}\partial_{2}A^{0}-( \kappa-Z^{0})\partial_{22}A^{0}+\partial_{2}Z^{0}\partial_{2}A^{0}\right]+\phi ^{2}\beta_{3}\beta_{\tau}e^{-s}(\kappa-Z^{0})\partial_{2}A^{0}\] (7.6f) \[-2\dot{\phi}\beta_{\tau}e^{-s}\partial_{2}A^{0}-\phi\beta_{3} \beta_{\tau}e^{-\frac{\varepsilon}{2}}\partial_{2}Z^{0}\partial_{2}Z^{0}+ \beta_{\tau}e^{-\frac{\varepsilon}{2}}\partial_{22}A^{0}Q_{12}\] \[+2\phi^{2}\beta_{3}\beta_{\tau}e^{-s}\left[(\kappa-Z^{0})\partial _{2}A^{0}-A^{0}\partial_{2}Z^{0}\right]-\phi^{3}\beta_{3}\beta_{\tau}e^{- \frac{3}{2}s}(\kappa-Z^{0})(\kappa+Z^{0})\] \[-2\phi\beta_{1}\beta_{\tau}e^{-\frac{\varepsilon}{2}}[2\partial_{ 2}A^{0}\partial_{2}A^{0}+A^{0}\partial_{22}A^{0}]+2\phi^{3}\beta_{1}\beta_{ \tau}e^{-\frac{3}{2}s}(A^{0})^{2}\] \[-2\phi^{2}\beta_{1}\beta_{\tau}e^{-s}\partial_{2}[A(U\cdot N)]^{0} -\phi h^{0}_{W}\partial_{22}A^{0}+\phi^{3}e^{-s}h^{0}_{W}A^{0}\] \[-(1+\alpha)\phi^{2}\beta_{1}\beta_{\tau}e^{-\frac{3}{2}s}Q_{21}A ^{0}-\phi\beta_{3}\beta_{\tau}e^{-\frac{\varepsilon}{2}}Z^{0}\partial_{22}Z^ {0}.\]
Also note that if \(|\gamma|=1,2\), we have
\[F^{(\gamma),0}_{W}=\partial^{\gamma}F^{0}_{W}+\partial^{\gamma}G^{0}_{W}. \tag{7.7}\]
### Evolution of modulation variables
Setting \(y=0\) in the equation of \(W\), we can see that
\[\dot{\kappa}=\frac{1}{\beta_{\tau}}e^{\frac{s}{2}}(F^{0}_{W}+G^{0}_{W}). \tag{7.8}\]
Setting \(y=0\) in the equation of \(\partial_{1}W\), we have
\[\dot{\tau}=\frac{1}{\beta_{\tau}}(\partial_{1}F^{0}_{W}+\partial_{1}G^{0}_{W}). \tag{7.9}\]
Setting \(y=0\) in the equation of \(\partial_{2}W\), we have
\[0=\partial_{2}F^{0}_{W}+\partial_{2}G^{0}_{W}. \tag{7.10}\]
Combining this with (7.4c), we obtain
\[Q_{12}=-\frac{1}{\beta_{1}\beta_{\tau}(1+\alpha)}\left(\partial_{2}F^{0}_{W}+ \beta_{2}\beta_{\tau}e^{\frac{\varepsilon}{2}}\partial_{2}Z^{0}+\beta_{1}\beta _{\tau}(1+\alpha)e^{\frac{\varepsilon}{2}}\phi R_{j2}\dot{\xi}_{j}\right). \tag{7.11}\]
Setting \(y=0\) in the equations of \(\partial_{11}W\) and \(\partial_{12}W\), we have
\[\begin{pmatrix}\partial_{111}W^{0}&\partial_{112}W^{0}\\ \partial_{112}W^{0}&\partial_{122}W^{0}\end{pmatrix}\begin{pmatrix}G^{0}_{W}\\ h^{0}_{W}\end{pmatrix}=\begin{pmatrix}\partial_{11}F^{0}_{W}+\partial_{11}G^{0} _{W}\\ \partial_{12}F^{0}_{W}+\partial_{12}G^{0}_{W}\end{pmatrix}. \tag{7.12}\]
Denote the matrix \(\partial_{1}\nabla^{2}W^{0}\) by \(H^{0}(s)\), then we have
\[|G^{0}_{W}|+|h^{0}_{W}|\lesssim\left|(H^{0})^{-1}\right|\left(|\partial_{1} \nabla F^{0}_{W}|+|\partial_{1}\nabla G^{0}_{W}|\right), \tag{7.13}\]
which shall be used to establish an upper bound for \(|G^{0}_{W}|\) and \(|h^{0}_{W}|\). Since \(R\in SO(2)\), we have
\[\dot{\xi}_{j}=R_{ji}R_{ki}\dot{\xi}_{k}=R_{j1}R_{k1}\dot{\xi}_{k}+R_{j2}R_{k2} \dot{\xi}_{k}. \tag{7.14}\]
Combining this with (7.4a)(7.5), we have
\[\dot{\xi}_{j}=\frac{R_{j1}}{(1+\alpha)\beta_{1}}\left(\kappa+\beta_{2}Z^{0}-\frac{1}{\beta_{\tau}}e^{-\frac{s}{2}}G^{0}_{W}\right)+\frac{R_{j2}}{1+\alpha}\left(2A^{0}-\frac{e^{\frac{s}{2}}}{\beta_{1}\beta_{\tau}}h^{0}_{W}\right). \tag{7.15}\]
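For clarity, the two projections entering (7.15) are obtained by solving (7.4a) and (7.5) for \(R_{j1}\dot{\xi}_{j}\) and \(R_{j2}\dot{\xi}_{j}\) respectively:

\[R_{j1}\dot{\xi}_{j}=\frac{1}{(1+\alpha)\beta_{1}}\left(\kappa+\beta_{2}Z^{0}-\frac{1}{\beta_{\tau}}e^{-\frac{s}{2}}G_{W}^{0}\right),\qquad R_{j2}\dot{\xi}_{j}=\frac{1}{1+\alpha}\left(2A^{0}-\frac{e^{\frac{s}{2}}}{\beta_{1}\beta_{\tau}}h_{W}^{0}\right);\]

substituting these into (7.14) gives (7.15).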
Setting \(y=0\) in the equation of \(\partial_{22}W\), we have
\[G_{W}^{0}\partial_{122}W^{0}+h_{W}^{0}\partial_{222}W^{0}=\partial_{22}F_{W}^{0 }+\partial_{22}G_{W}^{0}. \tag{7.16}\]
Then from (7.4f), we have
\[\dot{\phi}= \frac{e^{\frac{s}{2}}}{\beta_{\tau}}\left(\partial_{122}W^{0}G_{W}^{0}+\partial_{222}W^{0}h_{W}^{0}-\partial_{22}F_{W}^{0}\right)+\beta_{2}e^{s}\partial_{22}Z^{0}+\phi^{2}\left(\kappa+\beta_{2}Z^{0}-\frac{e^{-\frac{s}{2}}}{\beta_{\tau}}G_{W}^{0}\right)+\frac{\phi^{2}}{\beta_{\tau}}e^{-\frac{s}{2}}G_{W}^{0}. \tag{7.17}\]
## 8. Closure of bootstrap argument for the modulation variables
From (2.45)(B-\(\widetilde{W}^{0}\)), we can see that
\[H^{0}:=\partial_{1}\nabla^{2}W^{0}=\partial_{1}\nabla^{2}\overline{W}^{0}+ \partial_{1}\nabla^{2}\widetilde{W}^{0}=\text{diag}(6,2)+O(\varepsilon^{\frac {1}{4}}). \tag{8.1}\]
As a consequence, we have
\[\left|(H^{0})^{-1}\right|\leq 1. \tag{8.2}\]
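A quick way to see (8.2): writing \(H^{0}=D+\mathcal{E}\) with \(D=\operatorname{diag}(6,2)\) and \(|\mathcal{E}|=O(\varepsilon^{\frac{1}{4}})\), and taking \(|\cdot|\) to be the operator norm (any fixed matrix norm only changes the constants), a Neumann-series expansion gives, for \(\varepsilon\) small enough,

\[\left|(H^{0})^{-1}\right|=\left|\left(I+D^{-1}\mathcal{E}\right)^{-1}D^{-1}\right|\leq\frac{|D^{-1}|}{1-|D^{-1}\mathcal{E}|}\leq\frac{1/2}{1-C\varepsilon^{\frac{1}{4}}}\leq 1.\]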
Next we estimate \(|\partial_{1}\nabla F_{W}^{0}|\). From (7.6d)(7.6e), bootstrap assumptions and (6.61c), we have \(|\partial_{11}F_{W}^{0}|\lesssim e^{-s}\) and \(|\partial_{12}F_{W}^{0}|\lesssim e^{-s}+\varepsilon^{2}|h_{W}^{0}|\). Then by invoking (7.13), one can see that
\[|G_{W}^{0}|+|h_{W}^{0}|\lesssim e^{-s}. \tag{8.3}\]
Now we give a new estimate for \(V_{2}=\frac{1+\alpha}{2}\left[Q_{21}\left(y_{1}e^{-\frac{3}{2}s}+f\right)+ \frac{e^{\frac{\varepsilon}{2}}}{(1+\alpha)\beta_{1}\beta_{\tau}}h_{W}^{0}+ \frac{2}{1+\alpha}A^{0}\right]\). Recall that in (5.19) we already have a bound \(|V_{2}|\lesssim M^{\frac{1}{4}}\), but now with the help of (8.3) one can see that for all \(y\in\mathcal{X}(s)\), there holds that
\[|V_{2}|\lesssim M\varepsilon^{\frac{1}{2}}. \tag{8.4}\]
### The \(\xi\) estimate
From (7.15) we have
\[|\dot{\xi}_{j}|\lesssim \kappa_{0}+M\varepsilon+e^{-\frac{s}{2}}Me^{-s}\leq\frac{1}{10}M^{\frac{1}{4}}. \tag{8.5}\]
From (5.1) and \(\xi(-\varepsilon)=0\), we have
\[|\xi_{j}(t)|\leq\int_{-\varepsilon}^{t}|\dot{\xi}_{j}|dt\leq\frac{1}{10}M^{ \frac{1}{4}}\varepsilon. \tag{8.6}\]
### The \(\kappa\) estimate
From (7.6a) and the bootstrap assumptions, we have \(|F_{W}^{0}|\lesssim\varepsilon^{\frac{1}{4}}e^{-\frac{s}{2}}\); thus, according to (7.8)(8.3), we have that
\[|\dot{\kappa}|\lesssim e^{\frac{s}{2}}\left(Me^{-s}+\varepsilon^{\frac{1}{4}}e^{-\frac{s}{2}}\right)\leq\frac{1}{2}M, \tag{8.7}\]
and
\[|\kappa-\kappa_{0}|\leq\frac{1}{2}M|t+\varepsilon|\lesssim M\varepsilon\leq \frac{1}{4}\kappa_{0}. \tag{8.8}\]
### The \(\phi\) estimate
From (7.6f), the bootstrap assumptions and (6.61c), we have \(|\partial_{22}F_{W}^{0}|\lesssim e^{-\frac{s}{2}}\); thus, via (7.17), we obtain
\[\begin{split}|\dot{\phi}| &\lesssim e^{\frac{s}{2}}\left(\varepsilon^{\frac{1}{4}}Me^{-s}+\varepsilon^{\frac{1}{4}}Me^{-s}+e^{-\frac{s}{2}}\right)+e^{s}Me^{-s}+M^{4}\varepsilon^{2}\left(\kappa_{0}+M\varepsilon+e^{-\frac{s}{2}}Me^{-s}\right)+M^{4}\varepsilon^{2}e^{-\frac{s}{2}}Me^{-s}\\ &\lesssim M\leq\frac{1}{10}M^{2}.\end{split} \tag{8.9}\]
Since \(|\phi(-\varepsilon)|=|\phi_{0}|\leq\varepsilon\), we can further obtain that
\[|\phi|\leq\varepsilon+|\dot{\phi}||t+\varepsilon|\leq\frac{1}{2}M^{2}\varepsilon. \tag{8.10}\]
### The \(\tau\) estimate
Also from (7.6b)(7.4b) and bootstrap assumptions, we have \(|\partial_{1}F_{W}^{0}|\lesssim e^{-s}\) and \(|\partial_{1}G_{W}^{0}|\lesssim M^{\frac{1}{2}}e^{-s}\), thus by (7.9), we have
\[|\dot{\tau}|\lesssim e^{-s}+M^{\frac{1}{2}}e^{-s}\leq\frac{1}{4} Me^{-s}. \tag{8.11}\]
Since \(\tau(-\varepsilon)=0\), we get
\[|\tau(t)|\leq\int_{-\varepsilon}^{t}\frac{1}{4}M\varepsilon dt\leq \frac{1}{4}M\varepsilon^{2}. \tag{8.12}\]
### The \(n_{2}\) estimate
We first estimate \(Q_{12}\). From (7.6c)(8.3) and bootstrap assumptions, \(|\partial_{2}F_{W}^{0}|\lesssim M\kappa_{0}e^{-s}\), thus via (7.11), we can bound \(Q_{12}\) by
\[|Q_{12}|\lesssim M\kappa_{0}e^{-s}+e^{\frac{s}{2}}M\varepsilon^{ \frac{1}{2}}e^{-\frac{s}{2}}\leq 2M\varepsilon^{\frac{1}{2}}. \tag{8.13}\]
From the definition of \(Q\), we have
\[Q_{12}=-\dot{n}_{2}\sqrt{1-n_{2}^{2}}-\frac{n_{2}^{2}\dot{n}_{2}}{\sqrt{1-n_{ 2}^{2}}}, \tag{8.14}\]
thus by bootstrap assumption of \(n_{2}\), we finally can see that
\[|\dot{n}_{2}|=|Q_{12}|\left(\sqrt{1-n_{2}^{2}}+\frac{n_{2}^{2}}{ \sqrt{1-n_{2}^{2}}}\right)^{-1}\leq\left(1+\varepsilon^{\frac{1}{2}}\right)|Q _{12}|\leq\frac{1}{2}M^{2}\varepsilon^{\frac{1}{2}}. \tag{8.15}\]
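The prefactor in (8.15) in fact simplifies; indeed,

\[\sqrt{1-n_{2}^{2}}+\frac{n_{2}^{2}}{\sqrt{1-n_{2}^{2}}}=\frac{1}{\sqrt{1-n_{2}^{2}}},\qquad\text{so that}\qquad|\dot{n}_{2}|=|Q_{12}|\sqrt{1-n_{2}^{2}}\leq|Q_{12}|,\]

which is consistent with (and slightly sharper than) the bound used above.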
Since \(n_{2}(-\varepsilon)=0\), this improves the bootstrap assumption on \(n_{2}\) by a factor of \(\frac{1}{2}\).
## 9. Estimates for transport and forcing terms
To close the bootstrap argument of the Riemann variables \(W,Z,A\), we will estimate each term in the transport-type equations they satisfy.
### Transport estimates
**Lemma 9.1**.: For the transport terms in the equations of \(W,Z,A\), we have the following inequalities:
\[|\partial^{\gamma}G_{W}|\lesssim\begin{cases}Me^{-s}+M^{\frac{1}{2}}e^{-s}|y_{1}|+M^{2}\varepsilon^{\frac{1}{2}}|y_{2}|\lesssim\varepsilon^{\frac{1}{3}}e^{\frac{s}{2}}&\gamma=(0,0)\\ M^{2}e^{-\frac{5}{6}s}&\gamma=(1,0)\\ M^{2}\varepsilon^{\frac{1}{6}}&\gamma=(0,1)\\ M^{2}e^{-\frac{s}{2}}&|\gamma|=2,\end{cases} \tag{9.1}\]
\[\left|\partial^{\gamma}\left(G_{A}+(1-\beta_{1})\kappa_{0}e^{\frac{s}{2}} \right)\right|+\left|\partial^{\gamma}\left(G_{Z}+(1-\beta_{2})\kappa_{0}e^{ \frac{s}{2}}\right)\right|\lesssim\begin{cases}\varepsilon^{\frac{1}{3}}e^{ \frac{s}{2}}&\gamma=(0,0)\\ M^{2}e^{-\frac{5}{6}s}&\gamma=(1,0)\\ M^{2}\varepsilon^{\frac{1}{6}}&\gamma=(0,1)\\ M^{2}e^{-\frac{s}{2}}&|\gamma|=2,\end{cases} \tag{9.2}\]
\[|\partial^{\gamma}h_{W}|+|\partial^{\gamma}h_{Z}|+|\partial^{\gamma}h_{A}|\lesssim \begin{cases}M\varepsilon^{\frac{1}{2}}e^{-\frac{s}{2}}&\gamma=(0,0)\\ M\varepsilon^{\frac{1}{3}}e^{-s}\eta^{-\frac{1}{3}}&\gamma=(1,0)\\ \varepsilon^{\frac{1}{3}}e^{-s}&\gamma=(0,1)\\ \varepsilon^{\frac{1}{6}}e^{-s}\eta^{-\frac{1}{6}}&\gamma=(2,0)\\ \varepsilon^{\frac{1}{6}}e^{-s}\eta^{-\frac{1}{6}}&\gamma=(1,1)\\ e^{-s}\eta^{-\frac{1}{6}}&\gamma=(0,2).\end{cases} \tag{9.3}\]
Furthermore, for \(|\gamma|=3,4\) we have
\[\begin{cases}|\partial^{\gamma}G_{W}|\lesssim e^{-\left(\frac{1}{2}-\frac{|\gamma|-1}{2(k-3)}\right)s}\\ |\partial^{\gamma}h_{W}|\lesssim e^{-s}.\end{cases} \tag{9.4}\]
Proof.: For \(|\gamma|>0\), from the definition (2.26) of \(G_{W}\), we have
\[|\partial^{\gamma}G_{W}|\lesssim e^{\frac{s}{2}}\left|\partial^{\gamma}\frac{ \partial_{t}f}{1+f_{x_{1}}}\right|+e^{\frac{s}{2}}\sum_{\beta\leq\gamma} \left|\partial^{\beta}J\right|\left(\kappa_{0}\mathbb{1}_{\beta=\gamma}+| \partial^{\gamma-\beta}Z|+|\partial^{\gamma-\beta}(V\cdot N)|\right). \tag{9.5}\]
Then appealing to bootstrap assumptions and (5.10)(5.19)(6.61c)(6.61b), we obtain the desired estimates for \(G_{W}\). For the case \(\gamma=0\), we have that
\[\begin{split}|G_{W}| &\leq\left|\left(G_{W}+\beta_{\tau}e^{\frac{s}{2}}\frac{\partial_{t}f}{1+f_{x_{1}}}\right)^{0}\right|+\left\|\partial_{1}\left(G_{W}+\beta_{\tau}e^{\frac{s}{2}}\frac{\partial_{t}f}{1+f_{x_{1}}}\right)\right\|_{L^{\infty}}|y_{1}|+\left\|\partial_{2}\left(G_{W}+\beta_{\tau}e^{\frac{s}{2}}\frac{\partial_{t}f}{1+f_{x_{1}}}\right)\right\|_{L^{\infty}}|y_{2}|\\ &\quad+\left|\beta_{\tau}e^{\frac{s}{2}}\frac{\partial_{t}f}{1+f_{x_{1}}}\right|\\ &\lesssim|G_{W}^{0}|+M^{\frac{1}{2}}\varepsilon^{\frac{1}{2}}e^{-s}|y_{1}|+M^{2}\varepsilon^{\frac{2}{3}}e^{\frac{s}{2}}\\ &\lesssim Me^{-s}+M^{\frac{1}{2}}\varepsilon e^{\frac{s}{2}}+M^{2}\varepsilon^{\frac{2}{3}}e^{\frac{s}{2}}\lesssim\varepsilon^{\frac{1}{3}}e^{\frac{s}{2}}.\end{split} \tag{9.6}\]
Once we have the bounds for \(G_{W}\) and its derivatives, the estimates of \(G_{Z}\) and \(G_{A}\) follow from the identities
\[G_{Z}+(1-\beta_{2})\kappa_{0}e^{\frac{s}{2}} =G_{W}+(1-\beta_{2})e^{\frac{s}{2}}\left[(\kappa_{0}-\kappa)+(1- \beta_{\tau}J)\kappa+\beta_{\tau}J\right], \tag{9.7}\] \[G_{A}+(1-\beta_{1})e^{\frac{s}{2}}\kappa_{0} =G_{W}+(1-\beta_{1})e^{\frac{s}{2}}\left[(\kappa_{0}-\kappa)+(1- \beta_{\tau}J)\kappa\right]+(\beta_{2}-\beta_{1})\beta_{\tau}e^{\frac{s}{2}}JZ.\]
The estimates of \(h_{W}\), \(h_{Z}\), \(h_{A}\) can be obtained from the definitions of these transport terms, the bootstrap assumptions and (5.10)(5.19)(6.61c)(6.61b)(6.61a).
### Forcing estimates
Now we deal with the forcing terms that appear in the equations of \(W,Z,A\).
**Lemma 9.2**.: For derivatives of the forcing terms, we have the following bounds:
\[|\partial^{\gamma}F_{W}|+e^{\frac{s}{2}}|\partial^{\gamma}F_{Z}|\lesssim\begin{cases}e^{-\frac{s}{2}},&\gamma=(0,0)\\ e^{-s}\eta^{-\frac{1}{6}+\frac{2}{3(k-2)}},&\gamma=(1,0)\\ M^{2}e^{-s},&\gamma=(0,1)\\ e^{-s}\eta^{-\frac{1}{6}+\frac{1}{k-2}},&\gamma=(2,0)\\ e^{-s}\eta^{-\frac{1}{6}+\frac{1}{k-2}},&\gamma=(1,1)\\ M^{\frac{1}{4}}e^{-\left(1-\frac{1}{k-2}\right)s},&\gamma=(0,2),\end{cases} \tag{9.8}\]
\[|\partial^{\gamma}F_{W}|\lesssim\begin{cases}e^{-\frac{s}{2}},&|\gamma|=3\\ \varepsilon^{\frac{1}{6}},&|\gamma|=4,|y|\leq l,\end{cases} \tag{9.9}\]
\[|\partial^{\gamma}F_{A}|\lesssim\begin{cases}M^{\frac{1}{2}}e^{-s},&\gamma=(0,0) \\ M^{\frac{1}{2}}e^{-s},&\gamma=(0,1)\\ M^{\frac{1}{2}}e^{-\left(1-\frac{1}{k-3}\right)s}\eta^{-\frac{1}{6}},&\gamma=(0, 2),\end{cases} \tag{9.10}\]
\[|\partial^{\gamma}\widetilde{F}_{W}|\lesssim\begin{cases}M\varepsilon^{\frac{1 }{6}}\eta^{-\frac{1}{6}},&\gamma=(0,0),\ |y|\leq L\\ \varepsilon^{\frac{1}{6}}\eta^{-\frac{1}{2}+\frac{2}{3(k-2)}},&\gamma=(1,0), \ |y|\leq L\\ M^{2}\varepsilon^{\frac{1}{6}}\eta^{-\frac{1}{3}},&\gamma=(0,1),\ |y|\leq L\\ \varepsilon^{\frac{1}{7}},&|\gamma|\leq 4,\ |y|\leq l,\end{cases} \tag{9.11}\]
and
\[\left|(\partial^{\gamma}\widetilde{F}_{W})^{0}\right|\overset{|\gamma|=3}{ \lesssim}e^{-\left(\frac{1}{2}-\frac{1}{k-3}\right)s}. \tag{9.12}\]
Proof.: The proof of (9.8)(9.9)(9.10)(9.11) is just taking derivatives of the forcing terms, then using the bootstrap assumptions and the estimates (5.10)(5.17)(5.19)(5.20)(6.61a)(6.61b)(6.61c)(9.1)(9.2)(9.3)(9.4) to estimate each term therein. Finally we prove (9.12). Since \(\partial^{\gamma}\overline{W}^{0}=0\) when \(|\gamma|\) is even, and \(\partial_{2}G_{W}^{0}+\partial_{2}F_{W}^{0}=0\), we have
\[\begin{split}|(\partial^{\gamma}\widetilde{F}_{W})^{0}| &\lesssim e^{-\frac{s}{2}}+|(1-\beta_{\tau}J)^{0}|+\sum_{m=1}^{3}|\nabla^{m}J^{0}|+|\nabla G_{W}^{0}|+|\nabla^{3}G_{W}^{0}|+|\nabla h_{W}^{0}|+|\nabla^{3}h_{W}^{0}|\\ &\lesssim Me^{-\frac{s}{2}}+M^{2}e^{-\frac{5}{6}s}+e^{-\left(\frac{1}{2}-\frac{1}{k-3}\right)s}\lesssim e^{-\left(\frac{1}{2}-\frac{1}{k-3}\right)s}.\end{split} \tag{9.13}\]
**Lemma 9.3**.: For the forcing terms of \(\partial^{\gamma}W,\partial^{\gamma}Z,\partial^{\gamma}A\), we have that
\[|F_{W}^{(\gamma)}|\lesssim\begin{cases}e^{-\frac{s}{2}},&\gamma=(0,0)\\ \varepsilon^{\frac{1}{4}}\eta^{-\frac{1}{2}+\frac{2}{3(k-2)}},&\gamma=(1,0) \\ M^{2}\varepsilon^{\frac{1}{6}}\eta^{-\frac{1}{3}},&\gamma=(0,1)\\ \eta^{-\frac{1}{2}+\frac{1}{k-2}},&\gamma=(2,0)\\ M^{\frac{3}{2}}\eta^{-\frac{1}{3}},&\gamma=(1,1)\\ M^{\frac{2}{9}}\eta^{-\frac{1}{3}+\frac{1}{3(k-3)}},&\gamma=(0,2),\end{cases} \tag{9.14}\]
\[|F_{Z}^{(\gamma)}|\lesssim\begin{cases}e^{-s},&\gamma=(0,0)\\ e^{-\frac{3}{2}s}\eta^{-\frac{1}{6}+\frac{2}{3(k-2)}},&\gamma=(1,0)\\ M^{2}e^{-\frac{3}{2}s},&\gamma=(0,1)\\ e^{-\frac{3}{2}s}\left(1+M\eta^{-\frac{1}{3}}\right),&\gamma=(2,0)\\ e^{-\frac{3}{2}s}\left(M^{\frac{1}{2}}+M^{2}\eta^{-\frac{1}{3}}\right),&\gamma= (1,1)\\ M^{\frac{1}{4}}e^{-\left(\frac{3}{2}-\frac{1}{k-3}\right)s},&\gamma=(0,2),\end{cases} \tag{9.15}\]
\[|F_{A}^{(\gamma)}|\lesssim\begin{cases}M^{\frac{1}{4}}e^{-s},&\gamma=(0,0)\\ M^{\frac{1}{4}}e^{-s},&\gamma=(0,1)\\ e^{-\left(1-\frac{2}{k-3}\right)s}\eta^{-\frac{1}{6}},&\gamma=(0,2),\end{cases} \tag{9.16}\]
\[|\widetilde{F}_{W}^{(\gamma)}|\lesssim\begin{cases}\varepsilon^{\frac{1}{11}}\eta^{-\frac{1}{2}},&\gamma=(1,0),\ |y|\leq L\\ \varepsilon^{\frac{1}{12}}\eta^{-\frac{1}{3}},&\gamma=(0,1),\ |y|\leq L\\ \varepsilon^{\frac{1}{7}}+\varepsilon^{\frac{1}{10}}\left(\log M\right)^{\gamma_{2}-1},&|\gamma|\leq 4,\ |y|\leq l.\end{cases} \tag{9.17}\]
And for \(y=0\) and \(|\gamma|=3\), we have
\[\left|\widetilde{F}_{W}^{(\gamma),0}\right|\lesssim e^{-\left(\frac{1}{2}- \frac{1}{k-3}\right)s},\quad|\gamma|=3. \tag{9.18}\]
Proof.: First, we have
\[\left|F_{W}^{(0,0)}\right|=|F_{W}|\lesssim e^{-\frac{s}{2}}. \tag{9.19}\]
For the case \(1\leq|\gamma|\leq 2\), we decompose the estimate for forcing term as
\[\begin{split}\left|F_{W}^{(\gamma)}\right|& \overset{1\leq|\gamma|\leq 2}{\lesssim}|\partial^{\gamma}F_{W}|+ \sum_{0\leq\beta<\gamma}\left(|\partial^{\gamma-\beta}G_{W}||\partial_{1} \partial^{\beta}W|+|\partial^{\gamma-\beta}h_{W}||\partial_{2}\partial^{\beta }W|\right)\\ &\qquad+\mathbb{1}_{|\gamma|=2}\gamma_{2}|\partial_{2}(JW)|| \partial_{1}^{\gamma_{1}+1}\partial_{2}^{\gamma_{2}-1}W|+|[\partial^{\gamma},J]W\partial_{1}W|\\ =&|\partial^{\gamma}F_{W}|+I_{1}^{(\gamma)}+I_{2}^{( \gamma)}+I_{3}^{(\gamma)}.\end{split} \tag{9.20}\]
Then one can check that each term does not exceed the proposed bound. \(F_{Z}^{(\gamma)}\), \(F_{A}^{(\gamma)}\) and \(\widetilde{F}_{W}^{(\gamma)}\) can be estimated in a similar fashion.
## 10. Bounds on Lagrangian trajectories
Given a point \(y_{0}\) and an initial time \(s_{0}\geq-\log\varepsilon\), we define the Lagrangian trajectory \(\Phi_{W}^{y_{0}}\) by
\[\begin{cases}\dfrac{d}{ds}\Phi_{W}^{y_{0}}(s)=\mathcal{V}_{W}\circ\Phi_{W}^{y _{0}}(s)\\ \Phi_{W}^{y_{0}}(s_{0})=y_{0}.\end{cases} \tag{10.1}\]
Similarly we define \(\Phi_{Z}^{y_{0}}\) and \(\Phi_{A}^{y_{0}}\) using the transport terms in the equations of \(Z\) and \(A\) respectively.
We now discuss the upper bound and the lower bound of these Lagrangian trajectories, and we will close the bootstrap argument for the spatial support of \(W,Z,A\).
### Upper bound of the trajectories
**Lemma 10.1**.: Let \(\Phi\) denote any of \(\Phi_{W}^{y_{0}}\), \(\Phi_{Z}^{y_{0}}\), \(\Phi_{A}^{y_{0}}\). For any \(y_{0}\in\mathcal{X}_{0}\), we have that
\[\begin{split}|\Phi_{1}(s)|&\leq\frac{3}{2}\varepsilon^{\frac{1}{2}}e^{\frac{3}{2}s},\\ |\Phi_{2}(s)|&\leq\frac{3}{2}\varepsilon^{\frac{1}{6}}e^{\frac{s}{2}}.\end{split} \tag{10.2}\]
Proof.: We first deal with the case \(\Phi=\Phi_{W}^{y_{0}}\). Note that
\[\begin{split}\frac{d}{ds}\left(e^{-\frac{3}{2}s}\Phi_{1}(s)\right)&=e^{-\frac{3}{2}s}g_{W}\circ\Phi,\\ \frac{d}{ds}\left(e^{-\frac{s}{2}}\Phi_{2}(s)\right)&=e^{-\frac{s}{2}}h_{W}\circ\Phi,\\ \Phi(-\log\varepsilon)&=y_{0}.\end{split} \tag{10.3}\]
Then the estimates are direct consequences of \(|g_{W}|\leq e^{\frac{s}{2}}\) and \(|h_{W}|\leq e^{-\frac{s}{2}}\). We omit the details, which are the same as those in [13]. The estimates for \(\Phi_{Z}\) and \(\Phi_{A}\) are similar.
Now we close the bootstrap bound for spatial support. We attempt to show that
\[\operatorname{supp}(DW,DZ,DA)\subset\frac{7}{8}\mathcal{X}(s)=\left\{|y_{1}| \leq\frac{7}{4}\varepsilon^{\frac{1}{2}}e^{\frac{3}{2}s},|y_{2}|\leq\frac{7}{ 4}\varepsilon^{\frac{1}{6}}e^{\frac{s}{2}}\right\}. \tag{10.4}\]
Since \(\operatorname{supp}_{x}(D_{x}N,D_{x}T)\subset\frac{3}{4}\mathcal{X}(s)=\{|x_ {1}|\leq\frac{3}{2}\varepsilon^{\frac{1}{7}},|x_{2}|\leq\frac{3}{2} \varepsilon^{\frac{1}{6}}\}\), in \(\left(\frac{3}{4}\mathcal{X}(s)\right)^{c}\), there hold
\[\begin{cases}g_{W}=\beta_{\tau}JW+\beta_{\tau}e^{\frac{s}{2}}\left[-\frac{ \partial_{t}f}{1+f_{x_{1}}}+J\left(\kappa+\beta_{2}Z+2\beta_{1}V_{1}\right) \right]\\ g_{Z}=\beta_{2}\beta_{\tau}JW+\beta_{\tau}e^{\frac{s}{2}}\left[-\frac{ \partial_{t}f}{1+f_{x_{1}}}+J\left(\beta_{2}\kappa+Z+2\beta_{1}V_{1}\right) \right]\\ g_{A}=\beta_{1}\beta_{\tau}JW+\beta_{\tau}e^{\frac{s}{2}}\left[-\frac{ \partial_{t}f}{1+f_{x_{1}}}+J\left(\beta_{1}\kappa+\beta_{1}Z+2\beta_{1}V_{1} \right)\right],\end{cases} \tag{10.5}\]
\[h_{W}=h_{Z}=h_{A}=2\beta_{1}\beta_{\tau}e^{-\frac{s}{2}}\left(V_{2}+A\right), \tag{10.6}\]
\[\begin{cases}F_{W}=-2\beta_{3}\beta_{\tau}S\partial_{2}A+\beta_{\tau}e^{- \frac{s}{2}}Q_{12}A\\ F_{Z}=2\beta_{3}\beta_{\tau}S\partial_{2}A+\beta_{\tau}e^{-\frac{s}{2}}Q_{12} A\\ F_{A}=-2\beta_{3}\beta_{\tau}S\partial_{2}S-\beta_{\tau}e^{-\frac{s}{2}}Q_{12}U\cdot N.\end{cases} \tag{10.7}\]
We also define
\[\begin{cases}W_{\infty}(t)=\left[\frac{\kappa_{0}}{2}(n_{1}+1)-\kappa\right] e^{\frac{s}{2}}\\ Z_{\infty}(t)=\frac{\kappa_{0}}{2}(n_{1}-1)\\ A_{\infty}(t)=-\frac{\kappa_{0}}{2}n_{2}\\ S_{\infty}(t)=\frac{e^{-\frac{s}{2}}W_{\infty}+\kappa-Z_{\infty}}{2}=\frac{ \kappa_{0}}{2}.\end{cases} \tag{10.8}\]
Then \(W-W_{\infty}\), \(Z-Z_{\infty}\), \(A-A_{\infty}\) satisfy transport-type equations:
\[\begin{split}\left(\partial_{s}-\frac{1}{2}\right)&(W-W_{ \infty})+\mathcal{V}_{W}\cdot\nabla(W-W_{\infty})=F_{W-W_{\infty}},\\ &\partial_{s}(Z-Z_{\infty})+\mathcal{V}_{Z}\cdot\nabla(Z-Z_{\infty})=F_{ Z-Z_{\infty}},\\ &\partial_{s}(A-A_{\infty})+\mathcal{V}_{A}\cdot\nabla(A-A_{\infty})=F_{ A-A_{\infty}}.\end{split} \tag{10.9}\]
where
\[\begin{split} F_{W-W_{\infty}}=&-\beta_{3}\beta_{\tau}e^ {-\frac{\varepsilon}{2}}(W-W_{\infty})\partial_{2}A+\beta_{3}\beta_{\tau}(Z-Z_{ \infty})\partial_{2}A+\beta_{\tau}e^{-\frac{\varepsilon}{2}}Q_{12}(A-A_{ \infty})-2\beta_{3}\beta_{\tau}S_{\infty}\partial_{2}A,\\ F_{Z-Z_{\infty}}=&\beta_{3}\beta_{\tau}e^{-s}(W-W_{ \infty})\partial_{2}A-\beta_{3}\beta_{\tau}e^{-\frac{\varepsilon}{2}}(Z-Z_{ \infty})\partial_{2}A+2\beta_{3}\beta_{\tau}e^{-\frac{\varepsilon}{2}}S_{ \infty}\partial_{2}A+\beta_{\tau}e^{-s}Q_{12}(A-A_{\infty}),\\ F_{A-A_{\infty}}=&-\beta_{3}\beta_{\tau}e^{-s}(W-W_ {\infty})\partial_{2}S+\beta_{3}\beta_{\tau}e^{-\frac{\varepsilon}{2}}(Z-Z_{ \infty})\partial_{2}S-2\beta_{3}\beta_{\tau}e^{-\frac{\varepsilon}{2}}S_{ \infty}\partial_{2}S\\ &-\beta_{\tau}e^{-\frac{\varepsilon}{2}s}Q_{12}(W-W_{\infty})- \beta_{\tau}e^{-s}Q_{12}(Z-Z_{\infty}).\end{split} \tag{10.10}\]
For \(y_{0}\notin\frac{7}{8}\mathcal{X}(s)\), let \(M^{\prime}>|y_{0}|\) be a large enough constant. Define
\[Q_{big}=\left\{|y_{1}|\leq M^{\prime},|y_{2}|\leq M^{\prime}\right\},\ Q_{small}(s)=\left\{|y_{1}|\leq e^{\frac{3}{2}s}\mu_{1}(s),|y_{2}|\leq e^{\frac{s}{2}}\mu_{2}(s)\right\}, \tag{10.11}\]
where
\[\begin{cases}\mu_{1}(s)=\frac{3+\varepsilon}{2}\varepsilon^{\frac{1}{2}}-2CM^ {\frac{1}{4}}e^{-s}\\ \mu_{2}(s)=\frac{3+\varepsilon}{2}\varepsilon^{\frac{1}{6}}-2CM^{ \frac{1}{4}}e^{-s}.\end{cases} \tag{10.12}\]
One can verify that \(\frac{3}{4}\mathcal{X}(s)\subset Q_{small}\subset\frac{7}{8}\mathcal{X}(s) \subset Q_{big}\) if we take \(\varepsilon\) small enough and \(M^{\prime}\) large enough. Define
\[E(y,s)=\frac{1}{2}\left(e^{-s}(W-W_{\infty})^{2}+(Z-Z_{\infty})^{2}+2(A-A_{\infty})^{2}\right), \tag{10.13}\]
then we have
\[\frac{d}{ds}\int_{Q_{big}\backslash Q_{small}}E\leq C\int_{Q_{big}\backslash Q _{small}}E. \tag{10.14}\]
From the initial condition, we see that \(\int_{Q_{big}\backslash Q_{small}}E=0\) at \(s=-\log\varepsilon\); thus, by Gronwall's inequality, \(\int_{Q_{big}\backslash Q_{small}}E\equiv 0\) at each later time. This tells us that as long as \(y_{0}\notin\frac{7}{8}\mathcal{X}(s)\), we have \(W(y_{0},s)=W_{\infty}\), \(Z(y_{0},s)=Z_{\infty}\), \(A(y_{0},s)=A_{\infty}\), and thus we have proved (IB-S).
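The Gronwall step can be spelled out as follows: setting \(y(s)=\int_{Q_{big}\backslash Q_{small}(s)}E\), the inequality (10.14) together with \(y(-\log\varepsilon)=0\) yields

\[y(s)\leq y(-\log\varepsilon)\,e^{C(s+\log\varepsilon)}=0\qquad\text{for all }s\geq-\log\varepsilon,\]

and since \(E\geq 0\), this forces \(E\equiv 0\) on \(Q_{big}\backslash Q_{small}(s)\).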
### Lower bounds for Lagrangian trajectories
**Lemma 10.2**.: Suppose \(|y_{0}|\geq l\), \(s_{0}\geq-\log\varepsilon\), then we have
\[|\Phi_{W}^{y_{0}}(s)|\geq|y_{0}|e^{\frac{s-s_{0}}{5}}\quad\text{ for all }s\geq s_{0}. \tag{10.15}\]
Proof.: It suffices to prove that \(y\cdot\mathcal{V}_{W}\geq\frac{1}{5}|y|^{2}\). Note that by definition of \(\mathcal{V}_{W}\), we can see that
\[y\cdot\mathcal{V}_{W}(y)\geq\frac{1}{2}|y|^{2}+y_{1}^{2}-\beta_{\tau}|y_{1}JW| -|y_{1}G_{W}|-|y_{2}h_{W}|. \tag{10.16}\]
We split the estimate of \(W\) into two cases: \(|y|\leq L\) and \(|y|>L\). If \(|y|\leq L\), by (B-\(\widetilde{W}\)-1)(2.43) we have
\[\begin{split}|W(y)|&\leq|W(y_{1},y_{2})-W(0,y_{2}) |+|W(0,y_{2})-\overline{W}(0,y_{2})|+\underbrace{[\overline{W}(0,y_{2})]}_{=0} \\ &\leq(1+\varepsilon^{\frac{1}{12}})|y_{1}|+\varepsilon^{\frac{1}{ 12}}|y_{2}|.\end{split} \tag{10.17}\]
If \(|y|>L\), from bootstrap assumption we have
\[|W(y)|\leq(1+\varepsilon^{\frac{1}{20}})\eta^{\frac{1}{6}}(y)\leq(1+ \varepsilon^{\frac{1}{20}})^{2}|y|. \tag{10.18}\]
Then appealing to (9.1)(9.3) we have the desired result.
**Lemma 10.3**.: Let \(\Phi\) denote either \(\Phi_{Z}^{y_{0}}\) or \(\Phi_{A}^{y_{0}}\). If
\[\kappa_{0}\geq\frac{3}{1-\max(\beta_{1},\beta_{2})}. \tag{10.19}\]
then for any \(0\leq\sigma_{1}<\frac{1}{2}\) and \(2\sigma_{1}<\sigma_{2}\), we have the bound
\[\int_{-\log\varepsilon}^{\infty}e^{\sigma_{1}s^{\prime}}\left(1+\left|\Phi_{1 }(s^{\prime})\right|\right)^{-\sigma_{2}}ds^{\prime}\leq C(\sigma_{1},\sigma_ {2}). \tag{10.20}\]
Proof.: The proof is the same as that in [13].
**Lemma 10.4**.: Let \(\Phi^{y_{0}}\) denote either \(\Phi_{Z}^{y_{0}}\) or \(\Phi_{A}^{y_{0}}\), then
\[\sup_{y_{0}\in\mathcal{X}_{0}}\int_{-\log\varepsilon}^{\infty}\left|\partial_ {1}W\right|\circ\Phi^{y_{0}}(s^{\prime})ds^{\prime}\lesssim 1. \tag{10.21}\]
Proof.: Using Lemma 10.3 and the bootstrap assumption on \(\partial_{1}W\), we can deduce the above inequality.
## 11. Closure of bootstrap argument for \(\partial_{1}A\)
Since the vorticity is purely transported by \(u\), the bootstrap bound for \(\partial_{1}A\) is easy to close from the bound on the vorticity and the bootstrap assumptions, with no need for the evolution equation of \(\partial_{1}A\).
**Lemma 11.1** (Relating \(A\) and \(\Omega\)).: We have the following identity:
\[Je^{\frac{3}{2}s}\partial_{1}A=-\left(\alpha S\right)^{\frac{1}{\alpha}}\Omega -T_{2}e^{\frac{3}{2}s}\partial_{2}\left(\frac{e^{\frac{3}{2}s}W+\kappa+Z}{2} \right)-N_{2}e^{-\frac{3}{2}s}\partial_{1}A+U\cdot(N_{2}\partial_{x_{2}}T-T_{ 2}\partial_{x_{2}}N+J\partial_{x_{1}}T). \tag{11.1}\]
Proof.: Note that curl \(\hat{u}=\partial_{T}\hat{u}\cdot N-\partial_{N}\hat{u}\cdot T\). We compute each term as follows:
\[\begin{split}\partial_{T}\hat{u}&=T_{1}\partial_{ \hat{x}_{1}}\hat{u}+T_{2}\partial_{\hat{x}_{2}}\hat{u}=T_{1}\frac{1}{1+f_{x_{ 1}}}\partial_{x_{1}}\hat{u}+T_{2}\left(-\frac{f_{x_{2}}}{1+f_{x_{1}}}\partial_ {x_{1}}\hat{u}+\partial_{x_{2}}\hat{u}\right)\\ &=\frac{f_{x_{2}}}{\sqrt{1+f_{x_{2}}^{2}}}\frac{1}{1+f_{x_{1}}} \partial_{x_{1}}\hat{u}-\frac{f_{x_{2}}}{\sqrt{1+f_{x_{2}}^{2}}}\frac{1}{1+f_ {x_{1}}}\partial_{x_{1}}\hat{u}+\frac{\partial_{x_{2}}\hat{u}}{\sqrt{1+f_{x_{ 2}}^{2}}}=T_{2}\partial_{x_{2}}\hat{u},\end{split} \tag{11.2}\]
\[\begin{split}\partial_{N}\hat{u}&=N_{1}\partial_{\hat{x}_{1}}\hat{u}+N_{2}\partial_{\hat{x}_{2}}\hat{u}=\frac{1}{\sqrt{1+f_{x_{2}}^{2}}}\frac{1}{1+f_{x_{1}}}\partial_{x_{1}}\hat{u}-\frac{f_{x_{2}}}{\sqrt{1+f_{x_{2}}^{2}}}\left(-\frac{f_{x_{2}}}{1+f_{x_{1}}}\partial_{x_{1}}\hat{u}+\partial_{x_{2}}\hat{u}\right)\\ &=\frac{\sqrt{1+f_{x_{2}}^{2}}}{1+f_{x_{1}}}\partial_{x_{1}}\hat{u}-\frac{f_{x_{2}}}{\sqrt{1+f_{x_{2}}^{2}}}\partial_{x_{2}}\hat{u}=J\partial_{x_{1}}\hat{u}+N_{2}\partial_{x_{2}}\hat{u}.\end{split} \tag{11.3}\]
Thus, we have
\[\begin{split}\text{curl }\hat{u}&=T_{2}\partial_{x_{2}} \hat{u}\cdot N-\left(J\partial_{x_{1}}\hat{u}+N_{2}\partial_{x_{2}}\hat{u} \right)\cdot T\\ &=T_{2}\partial_{x_{2}}(\hat{u}\cdot N)-T_{2}\hat{u}\partial_{x_{2 }}N-J\partial_{x_{1}}(\hat{u}\cdot T)+J\hat{u}\partial_{x_{1}}T-N_{2}\partial_ {x_{2}}(\hat{u}\cdot T)+N_{2}\hat{u}\cdot\partial_{x_{2}}T\\ &=T_{2}\partial_{x_{2}}\left(\frac{w+z}{2}\right)-T_{2}\hat{u} \partial_{x_{2}}N-J\partial_{x_{1}}a+J\hat{u}\partial_{x_{1}}T-N_{2}\partial_ {x_{2}}a+N_{2}\hat{u}\partial_{x_{2}}T\\ &=T_{2}\partial_{x_{2}}\left(\frac{w+z}{2}\right)-J\partial_{x_{1} }a-N_{2}\partial_{x_{2}}a+\hat{u}\cdot(N_{2}\partial_{x_{2}}T-T_{2}\partial_{x_ {2}}N+J\partial_{x_{1}}T).\end{split} \tag{11.4}\]
On the other hand, curl \(\hat{u}=\tilde{\rho}\tilde{\zeta}=\left(\alpha S\right)^{\frac{1}{\alpha}}\Omega\), thus we get the desired result.
With the help of this identity, we have
\[e^{\frac{3}{2}s}|\partial_{1}A|\lesssim\kappa_{0}^{\frac{1}{2}}+e^{\frac{\varepsilon }{2}}(e^{-\frac{\varepsilon}{2}}+M\varepsilon^{\frac{1}{2}}e^{-\frac{\varepsilon }{2}})+\varepsilon^{\frac{1}{2}}e^{\frac{\varepsilon}{2}}M\varepsilon^{-\frac{ 1}{2}}e^{\frac{\varepsilon}{2}}+M^{\frac{1}{4}}(\varepsilon^{\frac{1}{2}}M^{2 }\varepsilon-M^{2}\varepsilon+M^{2}\varepsilon^{\frac{2}{3}})\leq\frac{1}{2}M. \tag{11.5}\]
This improves the bootstrap bound for \(\partial_{1}A\).
## 12. Closure of bootstrap argument for \(Z\) and \(A\)
In this section we improve the bootstrap bound of \(Z\) and \(A\).
**Lemma 12.1** (Close \(Z\) bootstrap).: For the Riemann variable \(Z\), we have the improved bootstrap bound:
\[\begin{split}|Z\circ\Phi_{Z}^{y_{0}}(s)|&\leq\frac{ 1}{2}M\varepsilon,\\ e^{\frac{3}{2}s}\left|\partial_{1}Z\circ\Phi_{Z}^{y_{0}}(s) \right|&\leq\frac{1}{2}M^{\frac{1}{2}},\\ e^{\frac{\varepsilon}{2}}\left|\partial_{2}Z\circ\Phi_{Z}^{y_{0 }}(s)\right|&\leq\frac{1}{2}M\varepsilon^{\frac{1}{2}},\\ e^{\frac{3}{2}s}\left|\partial_{11}Z\circ\Phi_{Z}^{y_{0}}(s) \right|&\leq\frac{1}{2}M^{\frac{1}{2}},\\ e^{\frac{3}{2}s}\left|\partial_{12}Z\circ\Phi_{Z}^{y_{0}}(s) \right|&\leq\frac{1}{2}M,\\ e^{s}\left|\partial_{22}Z\circ\Phi_{Z}^{y_{0}}(s)\right|& \leq\frac{1}{2}M.\end{split} \tag{12.1}\]
Proof.: Since \(e^{\mu s}\partial^{\gamma}Z\) obeys
\[\partial_{s}(e^{\mu s}\partial^{\gamma}Z)+D_{Z}^{(\gamma,\mu)}(e^{\mu s}\partial^{\gamma}Z)+(\mathcal{V}_{Z}\cdot\nabla)(e^{\mu s}\partial^{\gamma}Z)=e^{\mu s}F_{Z}^{(\gamma)}, \tag{12.2}\]
by Gronwall's inequality we can see that
\[\begin{split} e^{\mu s}\left|\partial^{\gamma}Z\circ\Phi_{Z}^{y _{0}}(s)\right|&\lesssim\varepsilon^{-\mu}\left|\partial^{ \gamma}Z(y_{0},-\log\varepsilon)\right|\exp\left(-\int_{-\log\varepsilon}^{s} D_{Z}^{(\gamma,\mu)}\circ\Phi_{Z}^{y_{0}}(s^{\prime})ds^{\prime}\right)\\ &\quad+\int_{-\log\varepsilon}^{s}e^{\mu s^{\prime}}\left|F_{Z}^{ (\gamma)}\circ\Phi_{Z}^{y_{0}}(s^{\prime})\right|\exp\left(-\int_{s^{\prime}} ^{s}D_{Z}^{(\gamma,\mu)}\circ\Phi_{Z}^{y_{0}}(s^{\prime\prime})ds^{\prime \prime}\right)ds^{\prime},\end{split} \tag{12.3}\]
where
\[D_{Z}^{(\gamma,\mu)}=D_{Z}^{(\gamma)}-\mu=\frac{3}{2}\gamma_{1}+\frac{1}{2} \gamma_{2}+\beta_{2}\beta_{\tau}\gamma_{1}J\partial_{1}W-\mu. \tag{12.4}\]
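For clarity, (12.3) is the Duhamel representation obtained by integrating (12.2) along the trajectory: since composition with \(\Phi_{Z}^{y_{0}}\) turns \(\partial_{s}+\mathcal{V}_{Z}\cdot\nabla\) into \(\frac{d}{ds}\), one has

\[\frac{d}{ds}\left[\left(e^{\mu s}\partial^{\gamma}Z\right)\circ\Phi_{Z}^{y_{0}}(s)\exp\left(\int_{-\log\varepsilon}^{s}D_{Z}^{(\gamma,\mu)}\circ\Phi_{Z}^{y_{0}}(s^{\prime})ds^{\prime}\right)\right]=e^{\mu s}F_{Z}^{(\gamma)}\circ\Phi_{Z}^{y_{0}}(s)\exp\left(\int_{-\log\varepsilon}^{s}D_{Z}^{(\gamma,\mu)}\circ\Phi_{Z}^{y_{0}}(s^{\prime})ds^{\prime}\right),\]

and integrating from \(-\log\varepsilon\) to \(s\) and then dividing by the exponential factor gives (12.3).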
If we require that \(\frac{3}{2}\gamma_{1}+\frac{1}{2}\gamma_{2}\geq\mu\), then we have
\[D_{Z}^{(\gamma,\mu)}\leq\beta_{2}\beta_{\tau}\gamma_{1}|J\partial_{1}W| \overset{|\gamma|\leq 2}{\leq}2|\partial_{1}W|. \tag{12.5}\]
Thus the damping term is bounded by
\[\begin{split}\exp\left(-\int_{s^{\prime}}^{s}D_{Z}^{(\gamma,\mu) }\circ\Phi_{Z}^{y_{0}}(s^{\prime\prime})ds^{\prime\prime}\right)& \lesssim e^{-\left(\frac{3\gamma_{1}+\gamma_{2}}{2}-\mu\right)(s-s^{ \prime})}\exp\left(\int_{s^{\prime}}^{s}2|\partial_{1}W|\circ\Phi_{Z}^{y_{0} }(s^{\prime\prime})ds^{\prime\prime}\right)\\ \overset{\eqref{eq:Bootstrap bound of 2}}{\lesssim}e^{-\left(\frac{3 \gamma_{1}+\gamma_{2}}{2}-\mu\right)(s-s^{\prime})}.\end{split} \tag{12.6}\]
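The boundedness of the exponential factor in the last step follows from Lemma 10.4, since

\[\exp\left(\int_{s^{\prime}}^{s}2|\partial_{1}W|\circ\Phi_{Z}^{y_{0}}(s^{\prime\prime})ds^{\prime\prime}\right)\leq\exp\left(2\int_{-\log\varepsilon}^{\infty}|\partial_{1}W|\circ\Phi_{Z}^{y_{0}}(s^{\prime\prime})ds^{\prime\prime}\right)\lesssim 1.\]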
And finally we have
\[e^{\mu s}\left|\partial^{\gamma}Z\circ\Phi_{Z}^{y_{0}}(s)\right|\lesssim\ \varepsilon^{-\mu}\left|\partial^{\gamma}Z(y_{0},-\log\varepsilon)\right|+\int _{-\log\varepsilon}^{s}e^{\mu s^{\prime}}\left|F_{Z}^{(\gamma)}\circ\Phi_{Z}^{ y_{0}}(s^{\prime})\right|e^{-\left(\frac{3\gamma_{1}+\gamma_{2}}{2}-\mu\right)(s-s^{ \prime})}ds^{\prime}. \tag{12.7}\]
Next, for different multi-index \(\gamma\), we choose different \(\mu\) in the above inequality.
_Case 1._\(\gamma=(0,0)\). We set \(\mu=0\). From (3.37)(9.15), we have
\[|Z\circ\Phi_{Z}^{y_{0}}(s)|\lesssim\varepsilon+\int_{-\log\varepsilon}^{s}e^{-s ^{\prime}}ds^{\prime}\lesssim\varepsilon\leq\frac{1}{2}M\varepsilon. \tag{12.8}\]
_Case 2._\(\gamma=(1,0)\). We set \(\mu=\frac{3}{2}\). Also from (3.37)(9.15), we have
\[\begin{split} e^{\frac{3}{2}s}\,|\partial_{1}Z\circ\Phi_{Z}^{y_{0}}(s)|&\lesssim\varepsilon^{-\frac{3}{2}}\varepsilon^{\frac{3}{2}}+\int_{-\log\varepsilon}^{s}e^{\frac{3}{2}s^{\prime}}e^{-\frac{3}{2}s^{\prime}}\eta^{-\frac{1}{6}+\frac{2}{3(k-2)}}\circ\Phi_{Z}^{y_{0}}(s^{\prime})ds^{\prime}\\ &\lesssim 1+\int_{-\log\varepsilon}^{s}\left(1+|\Phi_{1}(s^{\prime})|^{2}\right)^{-\frac{1}{6}+\frac{2}{3(k-2)}}ds^{\prime}\overset{\eqref{eq:2}}{\lesssim}1\leq\frac{1}{2}M^{\frac{1}{2}}.\end{split} \tag{12.9}\]
_Case 3._\(\gamma=(2,0)\). We set \(\mu=\frac{3}{2}\) and we can deduce that
\[\begin{split} e^{\frac{3}{2}s}\,|\partial_{11}Z\circ\Phi_{Z}^{y_{0 }}(s)|&\lesssim\varepsilon^{-\frac{3}{2}}\varepsilon^{\frac{3}{2 }}+\int_{-\log\varepsilon}^{s}e^{\frac{3}{2}s^{\prime}}e^{-\frac{3}{2}s^{ \prime}}\left(1+M\eta^{-\frac{1}{3}}\circ\Phi(s^{\prime})\right)e^{-\frac{3}{ 2}(s-s^{\prime})}ds^{\prime}\\ &\lesssim 1+M\int_{-\log\varepsilon}^{s}e^{-\frac{1}{6}(s-s^{\prime})} \left(1+|\Phi_{1}(s^{\prime})|\right)^{-\frac{2}{3}}ds^{\prime}\overset{ \eqref{eq:2}}{\lesssim}1+Me^{-\frac{s}{6}}\leq\frac{1}{2}M^{\frac{1}{2}}.\end{split} \tag{12.10}\]
_Case 4._\(\gamma=(1,1)\). We set \(\mu=\frac{3}{2}\) and we can deduce that
\[\begin{split} e^{\frac{3}{2}s}\,|\partial_{12}Z\circ\Phi_{Z}^{y_{0 }}(s)|&\lesssim\varepsilon^{-\frac{3}{2}}\varepsilon^{\frac{3}{2 }}+\int_{-\log\varepsilon}^{s}e^{\frac{3}{2}s^{\prime}}e^{-\frac{3}{2}s^{ \prime}}\left(M^{\frac{1}{2}}+M^{2}\eta^{-\frac{1}{3}}\circ\Phi(s^{\prime}) \right)e^{-\frac{1}{2}(s-s^{\prime})}ds^{\prime}\\ &\lesssim 1+M^{\frac{1}{2}}+M^{2}\int_{-\log\varepsilon}^{s}e^{-\frac{1} {6}(s-s^{\prime})}\left(1+|\Phi_{1}(s^{\prime})|\right)^{-\frac{2}{3}}ds^{ \prime}\\ &\overset{\eqref{eq:2}}{\lesssim}\quad M^{\frac{1}{2}}+M^{2}e^{- \frac{s}{6}}\leq\frac{1}{2}M.\end{split} \tag{12.11}\]
_Case 5._\(\gamma=(0,2)\). We set \(\mu=1\) and we can deduce that
\[e^{s}\,|\partial_{22}Z\circ\Phi_{Z}^{y_{0}}(s)|\lesssim\varepsilon^{-\frac{1}{2}}\varepsilon+\int_{-\log\varepsilon}^{s}e^{s^{\prime}}M^{\frac{1}{4}}e^{-\left(\frac{3}{2}-\frac{1}{k-3}\right)s^{\prime}}ds^{\prime}\lesssim\varepsilon^{\frac{1}{2}}+M^{\frac{1}{4}}\varepsilon^{\frac{1}{2}-\frac{1}{k-3}}\leq\frac{1}{2}M. \tag{12.12}\]
Next we close the bootstrap argument of \(A\) by proving (IB-\(A\)).
**Lemma 12.2** (Close \(A\) bootstrap).: For the Riemann variable \(A\), we have the improved bootstrap bound:
\[\begin{split}|A\circ\Phi_{A}^{y_{0}}(s)|&\leq\frac{ 1}{2}M\varepsilon,\\ e^{\frac{s}{2}}\,|\partial_{2}A\circ\Phi_{A}^{y_{0}}(s)|& \leq\frac{1}{2}M\varepsilon^{\frac{1}{2}},\\ e^{s}\,|\partial_{22}A\circ\Phi_{A}^{y_{0}}(s)|&\leq \frac{1}{2}M.\end{split} \tag{12.13}\]
Proof.: As in the closure of \(Z\) bootstrap, if \(\mu=\frac{3\gamma_{1}+\gamma_{2}}{2}\), we have
\[e^{\mu s}\,|\partial^{\gamma}A\circ\Phi_{A}^{y_{0}}(s)|\lesssim\ \varepsilon^{-\mu}\,| \partial^{\gamma}A(y_{0},-\log\varepsilon)|+\int_{-\log\varepsilon}^{s}e^{ \mu s^{\prime}}\,\Big{|}F_{A}^{(\gamma)}\circ\Phi_{A}^{y_{0}}(s^{\prime}) \Big{|}\,ds^{\prime}. \tag{12.14}\]
For different multi-index \(\gamma\), we set different \(\mu\) in the above inequality.
_Case 1._\(\gamma=(0,0)\). We set \(\mu=0\). From (3.38)(9.16), we have
\[|A\circ\Phi_{A}^{y_{0}}(s)|\lesssim\varepsilon+\int_{-\log\varepsilon}^{s}M^{ \frac{1}{4}}e^{-s^{\prime}}ds^{\prime}\lesssim M^{\frac{1}{4}}\varepsilon\leq \frac{1}{2}M\varepsilon. \tag{12.15}\]
_Case 2._\(\gamma=(0,1)\). We set \(\mu=\frac{1}{2}\) and we can deduce that
\[e^{\frac{s}{2}}\left|\partial_{2}A\circ\Phi_{A}^{y_{0}}(s)\right|\lesssim\varepsilon^{-\frac{1}{2}}\varepsilon+\int_{-\log\varepsilon}^{s}e^{\frac{s^{\prime}}{2}}M^{\frac{1}{4}}e^{-s^{\prime}}ds^{\prime}\lesssim M^{\frac{1}{4}}\varepsilon^{\frac{1}{2}}\leq\frac{1}{2}M\varepsilon^{\frac{1}{2}}. \tag{12.16}\]
_Case 3._\(\gamma=(0,2)\). We set \(\mu=1\) and we can deduce that
\[e^{s}\left|\partial_{22}A\circ\Phi_{A}^{y_{0}}(s)\right| \lesssim\varepsilon^{-1}\varepsilon+\int_{-\log\varepsilon}^{s}e^ {s^{\prime}}e^{-\left(1-\frac{3}{k-2}\right)s^{\prime}}\eta^{-\frac{1}{6}} \circ\Phi_{A}^{y_{0}}(s^{\prime})ds^{\prime} \tag{12.17}\] \[\lesssim 1+M^{\frac{1}{4}}\int_{-\log\varepsilon}^{s}e^{\frac{2}{ k-2}s^{\prime}}\left(1+\left|\Phi_{1}(s)\right|\right)^{-\frac{1}{3}}ds^{ \prime}\stackrel{{\eqref{eq:2.1}}}{{\lesssim}}1\leq\frac{1}{2}M.\]
## 13. Closure of bootstrap argument for \(W\) and \(\widetilde{W}\)
In this section we prove the improved bootstrap bounds (IB-\(\widetilde{W}^{0}\))(IB-\(\widetilde{W}\)-1)(IB-\(\widetilde{W}\)-2)(IB-\(\widetilde{W}\)-3) for \(W\) and \(\widetilde{W}\).
### Closure of bootstrap argument for high order derivatives of \(\widetilde{W}\)
As stated in (2.52), \(\partial^{\gamma}\widetilde{W}\) satisfies the equation
\[\partial_{s}\partial^{\gamma}\widetilde{W}+D_{\widetilde{W}}^{(\gamma)} \partial^{\gamma}\widetilde{W}+(\mathcal{V}_{W}\cdot\nabla)\partial^{\gamma} \widetilde{W}=\widetilde{F}_{W}^{(\gamma)}, \tag{13.1}\]
where the damping term has a lower bound according to (5.10)(2.42)(5.3):
\[D_{\widetilde{W}}^{(\gamma)} =\frac{3\gamma_{1}+\gamma_{2}-1}{2}+\beta_{\tau}J\left(\partial _{1}\overline{W}+\gamma_{1}\partial_{1}W\right) \tag{13.2}\] \[\geq\frac{3}{2}+\gamma_{1}-(1+\varepsilon^{\frac{1}{2}})\left(1+ \gamma_{1}(1+\varepsilon^{\frac{1}{2}})\right)\geq\frac{3}{2}-1+\gamma_{1}- \gamma_{1}-C\varepsilon^{\frac{1}{42}}\geq\frac{1}{3}.\]
From the equation of \(\partial^{\gamma}\widetilde{W}\), we have
\[\frac{d}{ds}\left|\partial^{\gamma}\widetilde{W}\circ\Phi_{W}^{y_{0}}\right|+ \left(D_{\widetilde{W}}^{(\gamma)}\circ\Phi_{W}^{y_{0}}\right)\left|\partial^ {\gamma}\widetilde{W}\circ\Phi_{W}^{y_{0}}\right|\leq\left|\widetilde{F}_{W}^ {(\gamma)}\circ\Phi_{W}^{y_{0}}\right|. \tag{13.3}\]
If \(|\gamma|=4\) and \(|y|\leq l\), from (9.17)(13.2), we have
\[e^{\frac{s}{3}}\left|\partial^{\gamma}\widetilde{W}\circ\Phi_{W}^{y_{0}}(s) \right|\leq\varepsilon^{-\frac{1}{3}}\varepsilon^{\frac{1}{3}}+Ce^{\frac{s}{3 }}\left(\varepsilon^{\frac{1}{7}}+\varepsilon^{\frac{1}{10}}(\log M)^{\gamma_ {2}-1}\right). \tag{13.4}\]
Thus for \(|\gamma|=4\) and \(|y|\leq l\), we have
\[\left|\partial^{\gamma}\widetilde{W}\circ\Phi_{W}^{y_{0}}(s)\right|\leq\frac{ 1}{4}\varepsilon^{\frac{1}{10}}(\log M)^{\gamma_{2}}. \tag{13.5}\]
Now we consider the case \(|\gamma|=3\), \(y=0\). Let \(y=0\) in (2.52), we have
\[\left|\partial_{s}\partial^{\gamma}\widetilde{W}^{0}\right| =\left|\widetilde{F}_{W}^{(\gamma),0}-G_{W}^{0}\partial_{1} \partial^{\gamma}\widetilde{W}^{0}-h_{W}^{0}\partial_{2}\partial^{\gamma} \widetilde{W}^{0}-(1-\beta_{\tau})\left(1+\gamma_{1}\right)\partial^{\gamma} \widetilde{W}^{0}\right| \tag{13.6}\] \[\lesssim e^{-\left(\frac{1}{2}-\frac{1}{k-3}\right)s}+Me^{-s} \varepsilon^{\frac{1}{10}}(\log M)^{4}+Me^{-s}\varepsilon^{\frac{1}{4}} \lesssim e^{-\left(\frac{1}{2}-\frac{1}{k-3}\right)s}.\]
Thus from (3.34)
\[|\partial^{\gamma}\widetilde{W}^{0}(s)|\leq|\partial^{\gamma}\widetilde{W}(- \log\varepsilon)|+Ce^{-\left(\frac{1}{2}-\frac{1}{k-3}\right)s}\leq\frac{1}{10 }\varepsilon^{\frac{1}{4}}. \tag{13.7}\]
Next, we consider the case \(|\gamma|\leq 3\), \(|y|\leq l\). For \(|\gamma|=3\), by (13.5)(13.7), we have
\[|\partial^{\gamma}\widetilde{W}|\leq\varepsilon^{\frac{1}{4}}+\frac{1}{2} \varepsilon^{\frac{1}{10}}(\log M)^{\gamma_{2}+1}|y|\leq\frac{1}{2}(\log M)^{4 }\varepsilon^{\frac{1}{10}}|y|+\frac{1}{2}M\varepsilon^{\frac{1}{4}}. \tag{13.8}\]
Now by induction and \(\partial^{\gamma}\widetilde{W}^{0}=0\) for \(|\gamma|\leq 2\), we can close the bootstrap argument of \(\partial^{\gamma}\widetilde{W}\) as in the case \(|\gamma|=3\).
### A general discussion of weighted estimates
In order to close the bootstrap argument for \(W\) and the remaining cases for \(\widetilde{W}\), we consider the evolution of \(q=\eta^{\mu}R\), where \(R\) denotes \(W\), \(\widetilde{W}\), or one of their derivatives, and \(|\mu|\leq\frac{1}{2}\). Suppose \(R\) satisfies
\[\partial_{s}R+D_{R}R+\mathcal{V}_{W}\cdot\nabla R=F_{R}, \tag{13.9}\]
then \(q\) satisfies
\[\partial_{s}q+D_{q}q+\mathcal{V}_{W}\cdot\nabla q=\eta^{\mu}F_{R}, \tag{13.10}\]
where
\[D_{q}=D_{R}-\mu\eta^{-1}\mathcal{V}_{W}\cdot\nabla\eta=D_{R}-3\mu+3\mu\eta^{-1}-2\mu\eta^{-1}\left(y_{1}g_{W}+3y_{2}^{5}h_{W}\right). \tag{13.11}\]
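To spell out the last equality (this intermediate computation is implicit in the text), assuming, consistently with the weights used elsewhere in the paper, that \(\eta=1+y_{1}^{2}+y_{2}^{6}\) and \(\mathcal{V}_{W}=\left(\frac{3}{2}y_{1}+g_{W},\,\frac{1}{2}y_{2}+h_{W}\right)\), we have
\[\mathcal{V}_{W}\cdot\nabla\eta=3y_{1}^{2}+3y_{2}^{6}+2y_{1}g_{W}+6y_{2}^{5}h_{W}=3(\eta-1)+2\left(y_{1}g_{W}+3y_{2}^{5}h_{W}\right),\]
so \(-\mu\eta^{-1}\mathcal{V}_{W}\cdot\nabla\eta=-3\mu+3\mu\eta^{-1}-2\mu\eta^{-1}\left(y_{1}g_{W}+3y_{2}^{5}h_{W}\right)\), which is the right-hand side of (13.11).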
By (5.10)(9.1)(9.3) and the bootstrap assumption for \(W\), one can see that \(|D_{\eta}|\leq 3\eta^{-\frac{1}{3}}\). Thus \(D_{q}\geq D_{R}-3\mu+3\mu\eta^{-1}-6|\mu|\eta^{-\frac{1}{3}}\).
By composing \(q\) with the trajectory of \(\mathcal{V}_{W}\), we have
\[|q\circ\Phi_{W}^{y_{0}}(s)|\leq\,|q(y_{0},s_{0})|\exp\left(-\int_{s_{0}}^{s}D_ {q}\circ\Phi_{W}^{y_{0}}(s^{\prime})ds^{\prime}\right)+\int_{s_{0}}^{s}\left| F_{q}^{(\gamma)}\circ\Phi_{W}^{y_{0}}(s^{\prime})\right|\exp\left(-\int_{s^{ \prime}}^{s}D_{q}\circ\Phi_{W}^{y_{0}}(s^{\prime\prime})ds^{\prime\prime} \right)ds^{\prime}, \tag{13.12}\]
where \((y_{0},s_{0})\) denotes the starting position and starting time of the trajectory. Note that \(s_{0}\) need not be \(-\log\varepsilon\).
If \(|y_{0}|\geq l\), we have that
\[2\mu\int_{s^{\prime}}^{s}D_{\eta}\circ\Phi_{W}^{y_{0}}(s^{\prime \prime})ds^{\prime\prime} \stackrel{{|\mu|\leq\frac{1}{2}}}{{\leq}}\int_{s_{0}}^{s}3 \eta^{-\frac{1}{3}}\circ\Phi_{W}^{y_{0}}(s^{\prime})ds^{\prime} \tag{13.13}\] \[\leq 3\cdot 2^{\frac{1}{3}}\int_{s_{0}}^{\infty}\left(1+l^{2}e^{ \frac{2}{3}(s^{\prime}-s_{0})}\right)^{-\frac{1}{3}}ds^{\prime}\leq-30\log l,\]
consequently, we can bound \(q\) by
\[|q\circ\Phi_{W}^{y_{0}}| \leq l^{-30}|q(y_{0},s_{0})|\exp\left[-\int_{s_{0}}^{s}\left(D_{R}-3 \mu+3\mu\eta^{-1}\right)\circ\Phi_{W}^{y_{0}}(s^{\prime})ds^{\prime}\right] \tag{13.14}\] \[\quad+l^{-30}\int_{s_{0}}^{s}\left|F_{q}^{(\gamma)}\circ\Phi_{W} ^{y_{0}}(s^{\prime})\right|\exp\left[-\int_{s^{\prime}}^{s}\left(D_{R}-3\mu+3 \mu\eta^{-1}\right)\circ\Phi_{W}^{y_{0}}(s^{\prime\prime})ds^{\prime\prime} \right]ds^{\prime}.\]
We remark that as long as \(|y_{0}|\geq l\) and \(p>0\), one can verify that
\[\int_{s_{0}}^{\infty}\eta^{-p}\circ\Phi_{W}^{y_{0}}(s)ds\lesssim_{p}-\log l \tag{13.15}\]
If \(|y_{0}|\geq L\), we have another inequality:
\[2\mu\int_{s^{\prime}}^{s}D_{\eta}\circ\Phi_{W}^{y_{0}}(s^{\prime \prime})ds^{\prime\prime} \stackrel{{|\mu|\leq\frac{1}{2}}}{{\leq}}\int_{s_{0}}^{s}3 \eta^{-\frac{1}{3}}\circ\Phi_{W}^{y_{0}}(s^{\prime})ds^{\prime} \tag{13.16}\] \[\leq 3\cdot 2^{\frac{1}{3}}\int_{s_{0}}^{\infty}\left(1+L^{2}e^{ \frac{2}{3}(s^{\prime}-s_{0})}\right)^{-\frac{1}{3}}ds^{\prime}\leq CL^{-\frac{ 20}{3}}.\]
And \(q\) is bound by
\[\begin{split}|q\circ\Phi_{W}^{y_{0}}|&\leq e^{\varepsilon} |q(y_{0},s_{0})|\exp\left[-\int_{s_{0}}^{s}\left(D_{R}-3\mu+3\mu\eta^{-1}\right) \circ\Phi_{W}^{y_{0}}(s^{\prime})ds^{\prime}\right]\\ &\quad+e^{\varepsilon}\int_{s_{0}}^{s}\left|F_{q}^{(\gamma)} \circ\Phi_{W}^{y_{0}}(s^{\prime})\right|\exp\left[-\int_{s^{\prime}}^{s}\left( D_{R}-3\mu+3\mu\eta^{-1}\right)\circ\Phi_{W}^{y_{0}}(s^{\prime\prime})ds^{ \prime\prime}\right]ds^{\prime}.\end{split} \tag{13.17}\]
### Closure of bootstrap argument for \(\widetilde{W}\)
For different multi-index \(\gamma\), we choose different \(\mu\), and we will use (13.14) or (13.17), depending on the location of \(y\). We establish the estimates case by case.
_Case 1._\(|\gamma|=0\), \(l\leq|y|\leq L\). In this case we set \(\mu=-\frac{1}{6}\), thus we have \(q=\eta^{-\frac{1}{6}}\widetilde{W}\) and \(D_{R}-3\mu+3\mu\eta^{-1}=-\frac{1}{2}\eta^{-1}+\beta_{\tau}J\partial_{1} \overline{W}\). We estimate the damping term and the forcing term.
\[\begin{split}-\int_{s^{\prime}}^{s}\left(\beta_{\tau}J\partial_{1} \overline{W}-\frac{1}{2}\eta^{-1}\right)\circ\Phi_{W}^{y_{0}}(s^{\prime\prime })ds^{\prime\prime}&\leq(1+\varepsilon^{\frac{1}{2}})\int_{s_{0 }}^{s}\left|\partial_{1}\overline{W}\circ\Phi_{W}^{y_{0}}(s^{\prime\prime}) \right|ds^{\prime\prime}+\frac{1}{2}\int_{s_{0}}^{s}\eta^{-1}\circ\Phi_{W}^{y _{0}}(s^{\prime\prime})ds^{\prime\prime}\\ &\leq 2\int_{s_{0}}^{s}\eta^{-\frac{1}{3}}\circ\Phi_{W}^{y_{0}}(s^{ \prime\prime})ds^{\prime\prime}\leq-20\log l,\end{split} \tag{13.18}\]
\[\int_{s_{0}}^{s}\left|\left(\eta^{-\frac{1}{6}}\widetilde{F}_{W}\right)\circ \Phi_{W}^{y_{0}}(s^{\prime})\right|ds^{\prime}\lesssim\int_{s_{0}}^{s}M \varepsilon^{\frac{1}{6}}\eta^{-\frac{1}{3}}\circ\Phi_{W}^{y_{0}}(s^{\prime}) ds^{\prime}\leq-\varepsilon^{\frac{1}{6}}\log l. \tag{13.19}\]
According to lemma 10.2, it is possible to require that either \(|y_{0}|=l\) or \(s_{0}=-\log\varepsilon\), thus we can use the initial condition or bootstrap assumptions to bound \(|q(y_{0},s_{0})|\). From (13.14)(3.34)(B-\(\widetilde{W}\)-1), we have
\[\begin{split}\left|\eta^{-\frac{1}{6}}\widetilde{W}\circ\Phi_{W} ^{y_{0}}(s)\right|&\leq l^{-30}\left|\widetilde{W}(y_{0},s_{0}) \right|\eta^{-\frac{1}{6}}(y_{0})l^{-20}+l^{-30}l^{-20}(-\varepsilon^{\frac{1 }{6}})\log l\\ &\leq l^{-50}\eta^{-\frac{1}{6}}(y_{0})\max\left(\varepsilon^{ \frac{1}{10}}\eta^{-\frac{1}{6}}(y_{0}),2(\log M)^{4}\varepsilon^{\frac{1}{10} }l^{4}\right)-l^{-50}\varepsilon^{\frac{1}{6}}\log l\leq\frac{1}{2} \varepsilon^{\frac{1}{11}}.\end{split} \tag{13.20}\]
_Case 2._\(\gamma=(1,0)\), \(l\leq|y|\leq L\). Let \(\mu=\frac{1}{3}\), then we have \(D_{R}-3\mu+3\mu\eta^{-1}\geq\beta_{\tau}J(\partial_{1}\overline{W}+\partial_{1 }W)\), and
\[-\int_{s^{\prime}}^{s}\left(D_{R}-3\mu+3\mu\eta^{-1}\right)\circ\Phi_{W}^{y_{0 }}(s^{\prime\prime})ds^{\prime\prime}\leq 4\int_{s_{0}}^{\infty}\eta^{-\frac{1}{3}} \circ\Phi_{W}^{y_{0}}(s^{\prime\prime})ds^{\prime\prime}\leq-40\log l, \tag{13.21}\]
\[\int_{s_{0}}^{s}\left|F_{q}\circ\Phi_{W}^{y_{0}}(s^{\prime})\right|ds^{\prime} \lesssim\varepsilon^{\frac{1}{11}}\int_{s_{0}}^{s}\left(\eta^{\frac{1}{3}}\eta ^{-\frac{1}{2}}\right)\circ\Phi_{W}^{y_{0}}(s^{\prime})ds^{\prime}\lesssim- \varepsilon^{\frac{1}{11}}\log l. \tag{13.22}\]
Now we can bound \(q\) by
\[\begin{split}\left|\eta^{\frac{1}{3}}\partial_{1}\widetilde{W}\circ\Phi_{W}^{y_{0}}(s)\right|&\leq l^{-30}\left|\partial_{1}\widetilde{W}(y_{0},s_{0})\right|\eta^{\frac{1}{3}}(y_{0})l^{-40}+l^{-30}l^{-40}(-\varepsilon^{\frac{1}{11}})\log l\\ &\leq l^{-70}\eta^{\frac{1}{3}}(y_{0})\max\left(\varepsilon^{\frac{1}{11}}\eta^{-\frac{1}{3}}(y_{0}),2(\log M)^{4}\varepsilon^{\frac{1}{10}}l^{3}\right)-l^{-70}\varepsilon^{\frac{1}{11}}\log l\\ &\leq\frac{1}{2}\varepsilon^{\frac{1}{12}}.\end{split} \tag{13.23}\]
_Case 3._\(\gamma=(0,1)\), \(l\leq|y|\leq L\). Let \(\mu=0\), then we have \(D_{R}-3\mu+3\mu\eta^{-1}=\beta_{\tau}J\partial_{1}\overline{W}\), and \(|F_{q}|\lesssim\varepsilon^{\frac{1}{12}}\eta^{-\frac{1}{3}}\). The rest is almost the same as Case 2.
### Closure of bootstrap for \(W\)
Similarly, for different \(\gamma\) we choose different \(\mu\), and we will use (13.14) or (13.17), depending on the location of \(y\).
_Case 1._\(|\gamma|=2\), \(|y|\geq l\). Now \(R=\partial^{\gamma}W\), and we let
\[\mu=\begin{cases}\dfrac{1}{3},&\gamma=(2,0),(1,1)\\ \dfrac{1}{6},&\gamma=(0,2).\end{cases} \tag{13.24}\]
The damping term becomes
\[3\mu-D_{R}=\begin{cases}-\gamma_{1}+\dfrac{1}{2}-\beta_{\tau}\left(1+\gamma_{1 }\mathbbm{1}_{\gamma_{1}\geq 2}\right)J\partial_{1}W,&\gamma_{1}\geq 1\\ -\beta_{\tau}J\partial_{1}W,&\gamma_{1}=0.\end{cases} \tag{13.25}\]
When \(\gamma_{1}=0\), we have
\[\int_{s^{\prime}}^{s}\left(3\mu-D_{R}\right)\circ\Phi_{W}^{y_{0}}(s^{\prime \prime})ds^{\prime\prime}\leq 2\int_{s_{0}}^{\infty}|\partial_{1}W|\circ\Phi_{W}^ {y_{0}}(s^{\prime\prime})ds^{\prime\prime}\leq-20\log l, \tag{13.26}\]
and the forcing term is bound by
\[\int_{s_{0}}^{s}\left|\eta^{\frac{1}{6}}F_{W}^{(0,2)}\right|\circ\Phi_{W}^{y_{ 0}}(s^{\prime})ds^{\prime}\lesssim M^{\frac{2}{3}}\int_{s_{0}}^{s}\left(\eta^ {\frac{1}{6}}\eta^{-\frac{1}{3}+\frac{1}{3(4-3)}}\right)\circ\Phi_{W}^{y_{0}}( s^{\prime})ds^{\prime}\leq-M^{\frac{5}{6}}\log l. \tag{13.27}\]
Thus, we have that
\[\begin{split}\left|\eta^{\frac{1}{6}}\partial_{22}W\circ\Phi_{W} ^{y_{0}}(s)\right|&\leq l^{-30}\eta^{\frac{1}{6}}(y_{0})\left| \partial_{22}W(y_{0},s_{0})\right|l^{-20}-l^{-30}M^{\frac{5}{6}}\log l\\ &\leq l^{-50}\eta^{\frac{1}{6}}(y_{0})\max\left(\eta^{-\frac{1}{6 }}(y_{0}),\frac{6}{7}\eta^{-\frac{1}{6}}(y_{0})+2(\log M)^{4}e^{\frac{1}{10}} l^{2}\eta^{-\frac{1}{6}}(y_{0})\|\eta^{\frac{1}{6}}\|_{L^{\infty}(|y|\leq l)} \right)-l^{-50}M^{\frac{5}{6}}\log l\\ &\overset{\varepsilon\text{ small}}{\leq}-2l^{-50}M^{\frac{5}{6}} \log l\overset{M\text{ large}}{\leq}\frac{1}{2}M.\end{split} \tag{13.28}\]
When \(\gamma_{1}>0\), we have that
\[\begin{split}\exp\left(\int_{s^{\prime}}^{s}\left(3\mu-D_{R} \right)\circ\Phi_{W}^{y_{0}}(s^{\prime\prime})ds^{\prime\prime}\right)& \leq\exp\left\{3\int_{s^{\prime}}^{s}|\partial_{1}W|\circ\Phi_{W }^{y_{0}}(s^{\prime\prime})ds^{\prime\prime}+\int_{s^{\prime}}^{s}\left(\frac {1}{2}-1\right)ds^{\prime\prime}\right\}\\ &\leq\exp\left\{4\int_{s^{\prime}}^{s}\eta^{-\frac{1}{3}}\circ \Phi_{W}^{y_{0}}(s^{\prime\prime})ds^{\prime\prime}-\frac{1}{2}(s-s^{\prime}) \right\}\leq l^{-80}e^{-\frac{1}{2}(s-s^{\prime})},\end{split} \tag{13.29}\]
and \(|F_{q}|=\left|\eta^{\frac{1}{3}}F_{W}^{(\gamma)}\right|\lesssim\eta^{\frac{1}{ 3}}M^{\frac{1}{3}\gamma_{2}}\eta^{-\frac{1}{3}}\leq M^{\frac{1}{3}\gamma_{2}+ \frac{1}{6}}\). Thus, we have the bound for \(\partial^{\gamma}W\):
\[\begin{split}\left|\eta^{\frac{1}{3}}\partial^{\gamma}W\right| \circ\Phi_{W}^{y_{0}}(s)&\leq l^{-20}\eta^{\frac{1}{3}}(y_{0})| \partial^{\gamma}W(y_{0},s_{0})|l^{-80}e^{-\frac{1}{2}(s-s_{0})}+L^{-20}\int_{s_ {0}}^{s}M^{\frac{1}{3}\gamma_{2}+\frac{1}{6}}l^{-80}e^{-\frac{1}{2}(s-s^{\prime })}ds^{\prime}\\ &\leq l^{-100}\eta^{\frac{1}{3}}(y_{0})\max\left(\eta^{-\frac{1}{3 }}(y_{0}),C\eta^{-\frac{1}{2}}(y_{0})+2(\log M)^{4}e^{\frac{1}{10}}l^{2}\eta^{- \frac{1}{3}}(y_{0})\|\eta^{\frac{1}{3}}\|_{L^{\infty}(|y|\leq l)}\right)e^{- \frac{1}{2}(s-s_{0})}\\ &+l^{-101}M^{\frac{1}{3}\gamma_{2}+\frac{1}{6}}\\ &\leq l^{-100}\max\left(1,C+3(\log M)^{4}e^{\frac{1}{10}}l^{2} \right)+l^{-101}M^{\frac{1}{3}\gamma_{2}+\frac{1}{6}}\\ &\leq M^{\frac{1+\gamma_{2}}{3}}\underbrace{\left(CM^{-\frac{1}{3} }+l^{-101}M^{-\frac{1}{6}}\right)}_{<\frac{1}{2}\text{ when $M$ large}}\leq\frac{1}{2}M^{\frac{1+\gamma_{2}}{3}}.\end{split} \tag{13.30}\]
_Case 2._\(|\gamma|=0\) and \(|y|\geq L\). Let \(\mu=-\frac{1}{6}\). Now we have \(3\mu-D_{R}-3\mu\eta^{-1}=\frac{1}{2}\eta^{-1}\) and \(F_{q}=\eta^{-\frac{1}{6}}\left(F_{W}-e^{-\frac{\varepsilon}{2}}\beta_{\tau} \dot{\kappa}\right)\). And we bound the damping term and the forcing term by
\[\int_{s^{\prime}}^{s}\frac{1}{2}\eta^{-1}\circ\Phi_{W}^{y_{0}}(s^{\prime\prime}) ds^{\prime\prime}\leq\int_{s_{0}}^{\infty}\left(1+L^{2}e^{\frac{\delta}{8}(s^{ \prime\prime}-s_{0})}\right)^{-1}ds^{\prime\prime}\leq L^{-2}\int_{s_{0}}^{ \infty}e^{-\frac{\delta}{8}(s^{\prime\prime}-s_{0})}ds^{\prime\prime}\leq L^{- 1}=\varepsilon^{\frac{1}{10}}, \tag{13.31}\]
\[\int_{s_{0}}^{s}|F_{q}\circ\Phi_{W}^{y_{0}}(s^{\prime})|\,ds^{\prime}\lesssim \int_{s_{0}}^{s}\left(e^{-\frac{s^{\prime}}{2}}+Me^{-\frac{s^{\prime}}{2}} \right)\eta^{-\frac{1}{6}}\circ\Phi_{W}^{y_{0}}(s^{\prime})ds^{\prime}\lesssim M \int_{s_{0}}^{s}e^{-\frac{s^{\prime}}{2}}ds^{\prime}\leq\varepsilon^{\frac{1} {3}}. \tag{13.32}\]
Thus, we have that
\[\begin{split}\left|\eta^{-\frac{1}{6}}W\right|\circ\Phi_{W}^{y_{0 }}(s)&\leq e^{\varepsilon}\eta^{-\frac{1}{6}}(y_{0})|W(y_{0},s_{ 0})|e^{\varepsilon^{\frac{1}{10}}}+e^{\varepsilon}\varepsilon^{\frac{1}{3}}e ^{\varepsilon^{\frac{1}{10}}}\\ &\leq e^{\varepsilon}e^{\varepsilon^{\frac{1}{10}}}\eta^{-\frac{ 1}{6}}(y_{0})\max\left(\eta^{\frac{1}{6}}(y_{0})(1+\varepsilon^{\frac{1}{11}}),\eta^{\frac{1}{6}}(y_{0})+\varepsilon^{\frac{1}{12}}\eta^{-\frac{1}{3}}(y_{ 0})\right)+e^{\varepsilon}\varepsilon^{\frac{1}{3}}e^{\varepsilon^{\frac{1}{1 0}}}\\ &\leq 1+\varepsilon^{\frac{1}{19}}.\end{split} \tag{13.33}\]
_Case 3._\(\gamma=(1,0)\) and \(|y|\geq L\). In this case, we can see that \(q=\eta^{\frac{1}{3}}\partial_{1}W\), \(3\mu-D_{R}-3\mu\eta^{-1}=-\beta_{\tau}J\partial_{1}W-\eta^{-1}\leq-\beta_{ \tau}J\partial_{1}W\), and
\[\begin{split}\int_{s^{\prime}}^{s}\left(3\mu-D_{R}-3\mu\eta^{-1} \right)\circ\Phi_{W}^{y_{0}}(s^{\prime\prime})ds^{\prime\prime}& \leq 2\int_{s^{\prime}}^{s}|\partial_{1}W|\circ\Phi_{W}^{y_{0}}(s^{ \prime\prime})ds^{\prime\prime}\\ &\lesssim\int_{s_{0}}^{\infty}\left(1+L^{2}e^{\frac{\delta}{8}(s ^{\prime\prime}-s_{0})}\right)^{-\frac{1}{3}}ds^{\prime\prime}\lesssim L^{- \frac{2}{3}}\leq\varepsilon.\end{split} \tag{13.34}\]
The forcing term is bound by
\[\int_{s_{0}}^{s}|F_{q}\circ\Phi_{W}^{y_{0}}(s^{\prime})|\,ds^{\prime}\lesssim \int_{s_{0}}^{s}\varepsilon^{\frac{1}{4}}\left|\eta^{\frac{1}{3}}\eta^{-\frac{ 1}{2}+\frac{3}{3(k-2)}}\right|\circ\Phi_{W}^{y_{0}}(s^{\prime})ds^{\prime} \lesssim\varepsilon^{\frac{1}{4}}\int_{s_{0}}^{s}\eta^{-\frac{1}{12}}\circ \Phi_{W}^{y_{0}}(s^{\prime})ds^{\prime}\lesssim\varepsilon^{\frac{1}{12}}. \tag{13.35}\]
Thus we have the bound
\[\begin{split}\left|\eta^{\frac{1}{3}}\partial_{1}W\right|\circ \Phi_{W}^{y_{0}}(s)&\leq e^{\varepsilon}\eta^{\frac{1}{3}}(y_{0 })|\partial_{1}W(y_{0},s_{0})|e^{\varepsilon}+Ce^{\varepsilon}\varepsilon^{ \frac{1}{4}}e^{\varepsilon}\\ &\leq e^{\varepsilon}e^{2\varepsilon}\eta^{\frac{1}{3}}(y_{0})\max \left(\eta^{-\frac{1}{3}}(y_{0})(1+\varepsilon^{\frac{1}{12}}),\eta^{-\frac{ 1}{3}}(y_{0})+\varepsilon^{\frac{1}{12}}\eta^{-\frac{1}{3}}(y_{0})\right)+Ce ^{2\varepsilon}\varepsilon^{\frac{1}{4}}\\ &\leq 1+\varepsilon^{\frac{1}{13}}.\end{split} \tag{13.36}\]
_Case 4._\(\gamma=(0,1)\) and \(|y|\geq L\). Let \(\mu=0\), we have \(q=R=\partial_{2}W\), and \(3\mu-D_{R}-3\mu\eta^{-1}=-\beta_{\tau}J\partial_{1}W\). Thus we have the bound for damping term:
\[\int_{s^{\prime}}^{s}\left(3\mu-D_{R}-3\mu\eta^{-1}\right)\circ\Phi_{W}^{y_{0} }(s^{\prime\prime})ds^{\prime\prime}\leq\varepsilon. \tag{13.37}\]
The forcing term is bound by
\[\int_{s_{0}}^{s}|F_{q}\circ\Phi_{W}^{y_{0}}(s^{\prime})|\,ds^{\prime}\lesssim \int_{s_{0}}^{s}M^{2}\varepsilon^{\frac{1}{6}}\eta^{-\frac{1}{3}}\circ\Phi_{W}^{y _{0}}(s^{\prime})ds^{\prime}\leq\varepsilon^{\frac{1}{8}}. \tag{13.38}\]
Finally we have that
\[|\partial_{2}W|\circ\Phi_{W}^{y_{0}}(s)\leq e^{\varepsilon}|\partial_{2}W(y_{0},s _{0})|e^{\varepsilon}+e^{\varepsilon}\varepsilon^{\frac{1}{8}}e^{\varepsilon} \leq e^{2\varepsilon}\max\left(\frac{3}{4},\frac{2}{3}+\varepsilon^{\frac{1}{13}} \right)+e^{2\varepsilon}\varepsilon^{\frac{1}{8}}\leq\frac{5}{6}. \tag{13.39}\]
## 14. Proof of the main theorem
In this section we prove the main theorem, discuss the Hölder regularity of \(w\), and deduce a lower bound on the vorticity.
Proof of the main theorem.: The local well-posedness of \((u,\sigma)\) in physical variables implies the local well-posedness of \((W,Z,A,\kappa,\tau,\xi,n,\phi)\) in self-similar variables, and the global existence of \((W,Z,A,\kappa,\tau,\xi,n,\phi)\) in self-similar variables is obtained via the bootstrap bound.
Now we prove the solution has the desired blow-up behavior. From the bootstrap assumptions and \(\tau(t)-t=\int_{t}^{T_{*}}(1-\dot{\tau}(t^{\prime}))dt^{\prime}\) we can see that \(c(T_{*}-t)\leq\tau-t=e^{-s}\leq C(T_{*}-t)\). Since \(R(t)\in SO(2)\), using (5.10) and (5.23), we have that
\[|[(R(t)N)\cdot\nabla_{\mathrm{x}}]u|=|N\cdot\nabla_{\tilde{x}}\tilde{u}|=\left| \left(\frac{\sqrt{1+f_{x_{2}}^{2}}}{1+f_{x_{1}}}\partial_{x_{1}}-\frac{f_{x_{ 2}}}{\sqrt{1+f_{x_{2}}^{2}}}\partial_{x_{2}}\right)\ddot{u}\right|\leq(1+ \varepsilon^{\frac{2}{3}})(1+\varepsilon^{\frac{3}{4}})e^{s}+\varepsilon\leq \frac{C}{T_{*}-t}. \tag{14.1}\]
Similarly, we can see that the derivative of \(u\) that is aligned to the shock is bounded:
\[|[(R(t)T)\cdot\nabla_{\mathrm{x}}]u|=|T\cdot\nabla_{\tilde{x}}\tilde{u}|= \left|\frac{1}{\sqrt{1+f_{x_{2}}^{2}}}\partial_{x_{2}}\ddot{u}\right|\leq 1+ \varepsilon^{\frac{1}{2}}. \tag{14.2}\]
In the same way, we can prove that \(|[(R(t)N)\cdot\nabla_{\mathrm{x}}]\sigma|\leq\frac{C}{T_{*}-t}\) and \(|[(R(t)T)\cdot\nabla_{\mathrm{x}}]\sigma|\leq C\). Consequently, we have that
\[|\nabla_{\mathrm{x}}u(t)|\leq|[(R(t)N)\cdot\nabla_{\mathrm{x}}]u|+|[(R(t)T) \cdot\nabla_{\mathrm{x}}]u|\leq\frac{C}{T_{*}-t}, \tag{14.3}\]
\[|\nabla_{\mathrm{x}}\sigma(t)|\leq|[(R(t)N)\cdot\nabla_{\mathrm{x}}]\sigma|+| [(R(t)T)\cdot\nabla_{\mathrm{x}}]\sigma|\leq\frac{C}{T_{*}-t}. \tag{14.4}\]
From the bootstrap assumptions \(|\dot{\xi}|\leq M^{\frac{1}{4}}\) and \(|\dot{n}_{2}|\leq M^{2}\varepsilon^{\frac{1}{2}}\), we know that both \(\xi\) and \(n\) have limits as \(t\to T_{*}\).
Next, by the definition of \(n\) and \(N\), and the coordinate transformations, we have that \(n(t)=R(t)N(0,t)\). Also we can see that
\[\begin{split}|[(R(t)N)\cdot\nabla_{\mathrm{x}}]u(\xi(t),t)|& =\left|\left(\frac{\sqrt{1+f_{x_{2}}^{2}}}{1+f_{x_{1}}}\partial_{ x_{1}}-\frac{f_{x_{2}}}{\sqrt{1+f_{x_{2}}^{2}}}\partial_{x_{2}}\right) \dot{u}(0,t)\right|\\ &=\left|\frac{-e^{s}+\partial_{x_{1}}z(0,t)}{2}\tilde{e}_{1}+ \partial_{x_{1}}a(0,t)\tilde{e}_{2}\right|\geq(\frac{1}{2}-\varepsilon^{\frac{ 1}{2}})e^{s}.\end{split} \tag{14.5}\]
Similarly, we have that
\[|[(R(t)N)\cdot\nabla_{\mathrm{x}}]\sigma(\xi(t),t)|=|\partial_{x_{1}}\hat{ \sigma}(0,t)|=\left|\frac{-e^{s}-\partial_{x_{1}}z(0,t)}{2}\right|\geq(\frac{ 1}{2}-\varepsilon^{\frac{1}{2}})e^{s}. \tag{14.6}\]
Thus, we can conclude that \(\|\nabla_{\mathrm{x}}u\|_{L^{\infty}}\geq|[(R(t)N)\cdot\nabla_{\mathrm{x}}]u( \xi(t),t)|\geq\frac{c}{T_{*}-t}\), and \(\|\nabla_{\mathrm{x}}\sigma\|_{L^{\infty}}\geq|[(R(t)N)\cdot\nabla_{\mathrm{x} }]\sigma(\xi(t),t)|\geq\frac{c}{T_{*}-t}\).
Next, we prove (3.25). This follows from the fact that \(\|\partial_{x_{1}}w\|_{L^{\infty}(B_{x}(0,\delta)^{c})}\leq C(\delta)\). From (IB-\(W\)), we have that
\[\begin{split}\|\partial_{x_{1}}w\|_{L^{\infty}(B_{x}(0,\delta)^{c})}&\leq(1+\varepsilon^{\frac{1}{13}})e^{s}\left\|\frac{1}{(1+y_{1}^{2}+y_{2}^{6})^{1/3}}\right\|_{L^{\infty}_{y}(\{e^{-3s}y_{1}^{2}+e^{-s}y_{2}^{2}\leq\delta^{2}\}^{c})}\\ &\leq 2\delta^{-2}(1+\varepsilon^{\frac{1}{13}})\frac{e^{s}}{(1+e^{3s})^{\frac{1}{3}}}\leq 3\delta^{-2}.\end{split} \tag{14.7}\]
Now we have completed the proof of the main shock formation result and (3.23)-(3.30). The Hölder bound is left to the next subsection.
### Hölder regularity for \(w\)
We now prove that the Riemann invariant \(w\) possesses a uniform \(1/3\)-Hölder bound up to the blow-up time.
**Proposition 14.1**.: For the Riemann variable \(w\), we have that \(w\in L^{\infty}([-\varepsilon,T_{*});C^{1/3})\).
Proof.: The proof of this proposition is the same as that in [13], and for the reader's convenience we outline the proof here.
Using the bootstrap assumptions we directly compute the \(C^{1/3}\) norm:
\[\begin{split}\frac{|w(x_{1},x_{2},t)-w(x_{1}^{\prime},x_{2}^{\prime},t)|}{|x-x^{\prime}|^{1/3}}&=\frac{e^{-\frac{s}{2}}|W(y,s)-W(y^{\prime},s)|}{[e^{-3s}(y_{1}-y_{1}^{\prime})^{2}+e^{-s}(y_{2}-y_{2}^{\prime})^{2}]^{1/6}}\\ &\leq\frac{|W(y_{1},y_{2},s)-W(y_{1}^{\prime},y_{2},s)|}{|y_{1}-y_{1}^{\prime}|^{1/3}}+e^{-\frac{s}{3}}\frac{|W(y_{1}^{\prime},y_{2},s)-W(y_{1}^{\prime},y_{2}^{\prime},s)|}{|y_{2}-y_{2}^{\prime}|^{1/3}}\\ &\lesssim\frac{\int_{y_{1}^{\prime}}^{y_{1}}(1+z^{2})^{-1/3}dz}{|y_{1}-y_{1}^{\prime}|^{1/3}}+e^{-\frac{s}{3}}|y_{2}-y_{2}^{\prime}|^{2/3}\stackrel{{ y\in\mathcal{X}(s)}}{{\lesssim}}1.\end{split} \tag{14.8}\]
Now we have proved that \(w\) is uniformly Hölder-\(1/3\) continuous with respect to \(x\), and one can check that the transformations \(\tilde{x}\mapsto x\) and \(\mathrm{x}\mapsto\tilde{x}\) do not affect the Hölder-\(1/3\) continuity of \(w\).
### Discussion of the vorticity
From (2.8), we know that in the \(\tilde{x}\)-coordinates, the specific vorticity \(\tilde{\zeta}\) is purely transported by \(\tilde{u}+\tilde{v}\). From (5.19)(5.23) and the estimate (5.10) of \(|f|\), we can deduce that \(|\tilde{u}+\tilde{v}|\lesssim M^{\frac{1}{4}}\) on \(\{|\tilde{x}_{1}|\leq 10\varepsilon^{\frac{1}{2}},|\tilde{x}_{2}|\leq 10\varepsilon^{\frac{1}{4}}\}\supset B_{\tilde{x}}(0,\varepsilon^{\frac{3}{4}})\). Since \(|T_{*}-t_{0}|=|T_{*}+\varepsilon|\lesssim\varepsilon\), it follows that if \(\tilde{\zeta}(\tilde{x},t_{0})\geq c_{0}\) for some \(c_{0}>0\) on \(B_{\tilde{x}}(0,\varepsilon^{\frac{3}{4}})\), then \(\tilde{\zeta}(\tilde{x},t)\geq c_{0}/2\) on \(B_{\tilde{x}}(0,\varepsilon^{\frac{3}{4}}/2)\).
From the bootstrap assumptions and (8.8) we have that
\[\left|S-\frac{\kappa_{0}}{2}\right|\lesssim|\kappa-\kappa_{0}|+e^{-\frac{s}{2}}|W|+|Z|\lesssim M\varepsilon+\varepsilon^{\frac{1}{6}}\lesssim\varepsilon^{\frac{1}{6}}.\]
Thus the sound speed \(\tilde{\sigma}\geq\frac{\kappa_{0}}{4}\), and \(|\tilde{\omega}|=|\tilde{\zeta}||\tilde{\rho}|=|\zeta|(\alpha|\sigma|)^{1/ \alpha}\geq\frac{c_{0}}{2}\cdot(\frac{\alpha\kappa_{0}}{4})^{1/\alpha}\) on \(B_{\tilde{x}}(0,\varepsilon^{\frac{3}{4}}/2)\).
The initial conditions stated in subsection 3.1 cannot rule out the possibility that \(\tilde{\zeta}(\tilde{x},t_{0})\) has a positive lower bound on \(B_{\tilde{x}}(0,\varepsilon^{\frac{3}{4}})\); thus there do exist solutions that satisfy the listed initial conditions and exhibit non-zero vorticity at the blow-up point.
## Data availability statement
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
## Acknowledgements
The author is supported by the China Scholarship Council (File No. 202106100096) and thanks the Department of Mathematics of the National University of Singapore for its warm hospitality. The author is grateful to Prof. Xinliang An and Dr. Haoyang Chen for valuable instruction, discussions, and suggestions; the author would also like to thank Prof. Lifeng Zhao and Yiya Qiu for helpful correspondence.
## Appendix A Toy model of 1D Burgers profile
Consider the following Cauchy problem for the 1D Burgers equation:
\[\begin{cases}u_{t}+uu_{x}=0\\ u(x,0)=u_{0}(x):=-xe^{-x^{2}}.\end{cases}\] (A.1)
It is well-known for the Burgers equation that the blow-up time is \(T=-\frac{1}{\inf_{x\in\mathbb{R}}\partial_{x}u_{0}}=1\), and the blow-up point is \((x,t)=(0,1)\), and
\[\|\partial_{x}u(\cdot,t)\|_{L^{\infty}}\leq\frac{1}{1-t}.\] (A.2)
Now we claim that \(\frac{1}{\sqrt{1-t}}u\left((1-t)^{\frac{3}{2}}y,t\right)\) converges uniformly to a profile (a fixed stationary function) \(\overline{U}(y)\) on any compact set as \(t\to 1\). This fact characterizes the blow-up behavior of \(u\). We can formally write this fact as
\[u(x,t)\sim(1-t)^{\frac{1}{2}}\overline{U}\left((1-t)^{-\frac{3}{2}}x\right), \quad\text{as $t\to 1$}.\] (A.3)
To investigate this fact more closely, we use the "self-similar transformation" \(y=(1-t)^{-\frac{3}{2}}x\). Here \(y\) is a "zoomed-in" version of \(x\) in the sense that any compact set in \(y\) corresponds to a set of \(x\) values shrinking to \(0\); thus in the \(y\)-coordinate we can observe the behavior of \(u\) near the blow-up point in detail as \(t\to 1\).
For convenience we introduce the self-similar time \(s=-\log(1-t)\), so that \(1-t=e^{-s}\). This "self-similar time" has the advantage that \(t\to 1\) is equivalent to \(s\to\infty\). Now the self-similar transformation becomes \(y=e^{\frac{3}{2}s}x\), and we can rewrite \(\frac{1}{\sqrt{1-t}}u\left((1-t)^{\frac{3}{2}}y,t\right)\) in the self-similar coordinates as
\[U(y,s):=\frac{1}{\sqrt{1-t}}u\left((1-t)^{\frac{3}{2}}y,t\right)=e^{\frac{s}{ 2}}u(e^{-\frac{3}{2}s}y,1-e^{-s}).\] (A.4)
In this coordinate, the proposition we claimed becomes
\[\boxed{U(y,s)\overset{s\to\infty}{\rightrightarrows}\overline{U}(y)\quad \text{ $y\in K$, for all compact $K$}.}\] (A.5)
The graphs of \(U\) for increasing \(s\) illustrate how \(U\) converges to \(\overline{U}\).
We now prove the convergence. Firstly, from (A.2) and the self-similar transformation we have that
\[\|\partial_{y}U(\cdot,s)\|_{L^{\infty}}\leq 1.\] (A.6)
From the chain rule we can deduce that \(U(y,s)\) satisfies
\[\begin{cases}\left(\partial_{s}-\frac{1}{2}\right)U+\left(\frac{3}{2}y+U \right)\partial_{y}U=0\\ U(y,0)=U_{0}(y)=u_{0}(y)=-ye^{-y^{2}}.\end{cases}\] (A.7)
Dropping the \(\partial_{s}U\) term in the above equation, we have
\[-\frac{1}{2}W+\left(\frac{3}{2}y+W\right)\partial_{y}W=0.\] (A.8)
which is called the self-similar Burgers equation. Using ODE techniques we can find a first integral of this equation: \(y=-W_{C}(y)-CW_{C}(y)^{3}\). If we impose the constraint \(W^{\prime\prime\prime}_{C}(0)=6=u^{\prime\prime\prime}_{0}(0)\), we have \(C=1\). Thus we select \(\overline{U}\) to be the function implicitly determined by the identity \(y=-\overline{U}(y)-\overline{U}(y)^{3}\); the solution of this cubic equation is
\[\overline{U}(y)=\left(-\frac{y}{2}+\left(\frac{1}{27}+\frac{y^{2}}{4}\right)^ {\frac{1}{2}}\right)^{\frac{1}{3}}-\left(\frac{y}{2}+\left(\frac{1}{27}+\frac {y^{2}}{4}\right)^{\frac{1}{2}}\right)^{\frac{1}{3}}.\] (A.9)
One can verify that \(\overline{U}(0)=U_{0}(0)\), \(\overline{U}^{\prime}(0)=U^{\prime}_{0}(0)\), \(\overline{U}^{\prime\prime}(0)=U^{\prime\prime}_{0}(0)\), and \(\overline{U}^{\prime\prime\prime}(0)=U^{\prime\prime\prime}_{0}(0)\). Thus we can check by the above explicit expression of \(\overline{U}\) that
\[\left|\overline{U}(y)-U_{0}(y)\right|=\left|\overline{U}(y)+ye^{-y^{2}}\right| \leq My^{4}.\] (A.10)
holds for some \(M>0\), and \(-1\leq\overline{U}_{y}\leq 0\).
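As an illustration (not part of the original argument), the convergence claim (A.5) can also be checked numerically. The Python/NumPy sketch below solves (A.1) by the method of characteristics, rescales the solution to the self-similar variables, and compares it with the explicit profile (A.9) on a compact set; the grid sizes and the choice \(|y|\leq 5\) are arbitrary.

```python
import numpy as np

def u0(x):
    # Initial datum of (A.1).
    return -x * np.exp(-x ** 2)

def Ubar(y):
    # Explicit self-similar profile (A.9), the root of y = -U - U^3.
    r = np.sqrt(1.0 / 27.0 + y ** 2 / 4.0)
    return np.cbrt(-y / 2.0 + r) - np.cbrt(y / 2.0 + r)

for s in [2.0, 4.0, 6.0, 8.0]:
    t = 1.0 - np.exp(-s)                 # self-similar time s = -log(1 - t)
    x0 = np.linspace(-5.0, 5.0, 400001)  # characteristic starting points
    x = x0 + t * u0(x0)                  # characteristics: u(x0 + t*u0(x0), t) = u0(x0)
    y = np.exp(1.5 * s) * x              # y = e^{3s/2} x
    U = np.exp(0.5 * s) * u0(x0)         # U(y, s) = e^{s/2} u(x, t)
    mask = np.abs(y) <= 5.0              # compare on a compact set
    err = np.max(np.abs(U[mask] - Ubar(y[mask])))
    print(f"s = {s:.0f}:  sup over |y|<=5 of |U - Ubar| = {err:.2e}")
```

The printed suprema decrease as \(s\) grows, in agreement with the uniform convergence on compact sets asserted in (A.5).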
Now we are ready to prove (A.5). Define \(\widetilde{U}(y,s)=U(y,s)-\overline{U}(y)\); subtracting (A.8) from (A.7), we have
\[\begin{cases}\partial_{s}\widetilde{U}-\widetilde{D}\widetilde{U}+\left(\frac{3} {2}y+U\right)\widetilde{U}_{y}=0\\ \widetilde{D}=\frac{1}{2}-\overline{U}_{y}\\ \widetilde{U}(y,0)=\widetilde{U}_{0}(y):=u_{0}(y)-\overline{U}(y).\end{cases}\] (A.11)
Notice that (A.7) is a transport equation. We define its Lagrange trajectories by
\[\begin{cases}\frac{d}{ds}\Phi_{y_{0}}(s)=\left(\frac{3}{2}y+U\right)\circ\Phi_ {y_{0}}(s)\\ \Phi_{y_{0}}(0)=y_{0}.\end{cases}\] (A.12)
From (A.6) we have \(|U(y)|\leq|y|\), and \(\left(\frac{3}{2}y+U\right)\cdot y\geq\frac{1}{2}y^{2}\), thus
\[\frac{1}{2}\frac{d}{ds}\left|\Phi_{y_{0}}(s)\right|^{2}=\Phi_{y_{0}}(s)\frac{ d}{ds}\Phi_{y_{0}}(s)=\left[\left(\frac{3}{2}y+U\right)\cdot y\right]\circ\Phi_ {y_{0}}(s)\geq\frac{1}{2}\left|\Phi_{y_{0}}(s)\right|^{2}.\] (A.13)
If \(\Phi_{y_{0}}(s)=y\), from the above inequality we have \(e^{-s}|y|^{2}\geq|y_{0}|^{2}\). Rewriting (A.11) in terms of the Lagrange trajectories, we have
\[\frac{d}{ds}\widetilde{U}\circ\Phi_{y_{0}}(s)=\left(\frac{1}{2}-\overline{U}_ {y}\right)\circ\Phi_{y_{0}}(s)\cdot\widetilde{U}\circ\Phi_{y_{0}}(s).\] (A.14)
From \(-1\leq\overline{U}_{y}\leq 0\) we have
\[\frac{d}{ds}\left|\widetilde{U}\circ\Phi_{y_{0}}(s)\right|\leq\frac{3}{2} \left|\widetilde{U}\circ\Phi_{y_{0}}(s)\right|.\] (A.15)
Thus we can conclude that
\[\begin{split}\left|\widetilde{U}(y,s)\right|&=\left|\widetilde{U}\circ\Phi_{y_{0}}(s)\right|\\ &\stackrel{{\text{(A.15)}}}{{\leq}}e^{\frac{3}{2}s}\left|\widetilde{U}\circ\Phi_{y_{0}}(0)\right|\\ &=e^{\frac{3}{2}s}\left|\widetilde{U}(y_{0},0)\right|\\ &\leq e^{\frac{3}{2}s}My_{0}^{4}\\ &\stackrel{{\text{(A.13)}}}{{\leq}}Me^{-\frac{s}{2}}y^{4}.\end{split}\] (A.16)
From this inequality we know that \(\widetilde{U}\) converges to \(0\) uniformly on any compact set, or equivalently it holds that \(U\rightrightarrows\overline{U}\) on any compact set.
Though we prove the convergence only for a specific initial datum, the proof can be modified to apply to almost all initial data. In fact, for generic \(u_{0}\in C_{c}^{\infty}\) there exist a point \(x_{0}\in\mathbb{R}\) and an integer \(k\geq 1\) such that \(u_{0}^{\prime}(x_{0})=\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)\), \(u_{0}^{(j)}(x_{0})=0\) for \(2\leq j\leq 2k\), and \(u_{0}^{(2k+1)}(x_{0})>0\). In this case, a rescaled version of the solution \(u\) will eventually converge to a solution \(\overline{U}\) of the self-similar Burgers equation \(-\frac{1}{2k}\overline{U}(y)+\left[(1+\frac{1}{2k})y+\overline{U}(y)\right]\overline{U}_{y}(y)=0\). In this sense, the self-similar Burgers equation plays a universal role in the blow-up of the Burgers equation.
## Appendix B Interpolation
Here we state the interpolation inequalities that are used in this paper.
**Lemma B.1** (Gagliardo-Nirenberg inequalities).: Suppose \(1\leq q,r\leq\infty\), \(1\leq p<\infty\), \(j<m\) are non-negative integers, \(\theta\in[0,1]\), and they satisfy the relations
\[\frac{1}{p}=\frac{j}{n}+\theta\left(\frac{1}{r}-\frac{m}{n}\right)+\frac{1- \theta}{q},\ \ \ \ \ \frac{j}{m}\leq\theta\leq 1.\] (B.1)
Then \(\|D^{j}u\|_{L^{p}(\mathbb{R}^{n})}\leq C\|D^{m}u\|_{L^{r}(\mathbb{R}^{n})}^{ \theta}\|u\|_{L^{q}(\mathbb{R}^{n})}^{1-\theta}\) holds for any \(u\in L^{q}(\mathbb{R}^{n})\) such that \(D^{m}u\in L^{r}(\mathbb{R}^{n})\), with two exceptional cases:
(1) if \(j=0\), \(q=\infty\) and \(rm<n\), then an additional assumption is needed: either \(u\) tends to \(0\) at infinity or \(u\in L^{s}(\mathbb{R}^{n})\) for some finite value of \(s\);
(2) if \(r>1\) and \(m-j-\frac{n}{r}\) is a non-negative integer, then the additional assumption \(\frac{j}{m}\leq\theta<1\) is needed.
A frequently used special case in this paper is that
\[\|D^{j}\varphi\|_{L^{\frac{2m}{j}}(\mathbb{R}^{n})}\lesssim\|\varphi\|_{\dot{H}^{m}(\mathbb{R}^{n})}^{\frac{j}{m}}\|\varphi\|_{L^{\infty}(\mathbb{R}^{n})}^{1-\frac{j}{m}}.\] (B.2)
holds for any \(\varphi\in\dot{H}^{m}(\mathbb{R}^{n})\cap L^{\infty}(\mathbb{R}^{n})\).
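As a quick check (this derivation is not spelled out in the text), (B.2) follows from Lemma B.1 by taking \(r=2\), \(q=\infty\) and \(\theta=\frac{j}{m}\): condition (B.1) then reads
\[\frac{1}{p}=\frac{j}{n}+\frac{j}{m}\left(\frac{1}{2}-\frac{m}{n}\right)+0=\frac{j}{2m},\]
so \(p=\frac{2m}{j}\) and the interpolation inequality of Lemma B.1 becomes (B.2).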
**Lemma B.2**.: Suppose \(k\geq 4\), \(0\leq l\leq k-3\) are integers, \(q\in(4,2(k-1)]\), then
\[\left\|D^{2+l}\phi D^{k-1-l}\varphi\right\|_{L^{2}(\mathbb{R}^{2})}\lesssim_{k,q}\|D^{k}\phi\|_{L^{2}(\mathbb{R}^{2})}^{a}\|D^{2}\phi\|_{L^{q}(\mathbb{R}^{2})}^{1-a}\|D^{k}\varphi\|_{L^{2}(\mathbb{R}^{2})}^{b}\|D^{2}\varphi\|_{L^{q}(\mathbb{R}^{2})}^{1-b},\] (B.3)
holds for any \(\phi,\varphi\in\dot{H}^{k}(\mathbb{R}^{2})\cap\dot{W}^{2,q}(\mathbb{R}^{2})\), where \(a\), \(b\) are given by
\[a=\frac{\frac{1}{q}-\frac{1}{p}+\frac{l}{2}}{\frac{k}{2}+\frac{1}{q}-\frac{3}{2}},\ b=\frac{\frac{1}{q}-\frac{1}{2}+\frac{1}{p}+\frac{k-3-l}{2}}{\frac{k}{2}+\frac{1}{q}-\frac{3}{2}},\] (B.4)
and \(p=\frac{2q(k-3)}{(q-3)l+2(k-3)}\). Moreover, we have that \(a+b=1-\frac{\frac{1}{q}-\frac{1}{2}}{\frac{k-3}{2}+\frac{1}{q}}\in(0,1)\) is independent of \(l\).
|
2309.16672 | Learning to Transform for Generalizable Instance-wise Invariance | Computer vision research has long aimed to build systems that are robust to
spatial transformations found in natural data. Traditionally, this is done
using data augmentation or hard-coding invariances into the architecture.
However, too much or too little invariance can hurt, and the correct amount is
unknown a priori and dependent on the instance. Ideally, the appropriate
invariance would be learned from data and inferred at test-time.
We treat invariance as a prediction problem. Given any image, we use a
normalizing flow to predict a distribution over transformations and average the
predictions over them. Since this distribution only depends on the instance, we
can align instances before classifying them and generalize invariance across
classes. The same distribution can also be used to adapt to out-of-distribution
poses. This normalizing flow is trained end-to-end and can learn a much larger
range of transformations than Augerino and InstaAug. When used as data
augmentation, our method shows accuracy and robustness gains on CIFAR 10,
CIFAR10-LT, and TinyImageNet. | Utkarsh Singhal, Carlos Esteves, Ameesh Makadia, Stella X. Yu | 2023-09-28T17:59:58Z | http://arxiv.org/abs/2309.16672v3 | # Learning to Transform for Generalizable Instance-wise Invariance
###### Abstract
Computer vision research has long aimed to build systems that are robust to spatial transformations found in natural data. Traditionally, this is done using data augmentation or hard-coding invariances into the architecture. However, too much or too little invariance can hurt, and the correct amount is unknown a priori and dependent on the instance. Ideally, the appropriate invariance would be learned from data and inferred at test-time.
We treat invariance as a prediction problem. Given any image, we use a normalizing flow to predict a distribution over transformations and average the predictions over them. Since this distribution only depends on the instance, we can align instances before classifying them and generalize invariance across classes. The same distribution can also be used to adapt to out-of-distribution poses. This normalizing flow is trained end-to-end and can learn a much larger range of transformations than Augerino and InstaAug. When used as data augmentation, our method shows accuracy and robustness gains on CIFAR 10, CIFAR10-LT, and TinyImageNet.
## 1 Introduction
One of the most impressive abilities of the human visual system is its robustness to geometric transformations. Objects in the visual world often undergo rotation, translation, etc., producing many variations in the observed image. Nonetheless, we classify them reliably and efficiently.
Any robust classifier must encode information about the expected geometric transformations, either explicitly (e.g., through architecture) or implicitly (e.g., invariant features). What would this knowledge look like for humans?
Scientists have extensively investigated this question [1]. We know that it generalizes to novel (but similar) categories, e.g., we can instantly recognize a new symbol from many poses after seeing it just once [2]. For unfamiliar categories or poses, we can learn the invariance over time [3]. Finally, while we quickly recognize objects in typical poses, we can also adapt to "out-of-distribution" poses with processes like mental rotation [4]. These properties help us robustly handle novel categories and novel poses (Figure 1).
In contrast, modern classifiers based on deep learning are brittle [5]. While these methods have achieved super-human accuracy on curated datasets like ImageNet [6], they are unreliable in the real world [7], showing poor generalization and even causing fatal outcomes in systems relying on computer vision [8]. Thus, robust classification has long been an aim of computer vision research [5, 9]. This paper asks:
_Can we replicate this flexible, generalizable, and adaptive invariance in artificial neural networks?_
For some transformations (e.g., translation), the invariance can be hard-coded into the architecture. This insight has led to important approaches like Convolutional Neural Networks [10, 11]. However, this approach imposes severe architecture restrictions and thus has limited applicability.
An alternative approach to robustness is data augmentation [12]. Input data is transformed through a predefined set of transformations, and the neural network learns to perform the task reliably despite these transformations. Its success and wide applicability have made it ubiquitous in deep learning. However, data augmentation is unreliable since the learned invariance breaks under distribution shifts and fails to transfer from head classes to tail classes in imbalanced classification settings [13].
Figure 1: Our goal is to build flexible, adaptive, and generalizable invariances. **Flexible:** The ideal invariance is flexible and instance-dependent. Different objects in different poses require different degrees of invariance. Too much hurts accuracy, and too little hurts robustness. **Adaptive**: The model should adapt to unexpected (out-of-distribution) poses. The figure above shows mental rotation, a process by which humans align unfamiliar objects in unexpected poses to classify them. **Generalizable**: Knowledge of invariances should generalize from previous experience, e.g., learning bilateral symmetry for horses and transferring it to zebras.
Both these approaches _prescribe_ the invariances while assuming a known set of transformations. However, the correct set of invariances is often unknown _a priori_, and a mismatch can be harmful [14, 15, 12]. For instance, in fine-grained visual recognition, rotation invariance can help with flower categories but hurt animal recognition [16].
A recent line of methods [14, 17, 15] aims to _learn_ the useful invariances. Augerino [14] learns a range of transformations shared across the entire dataset, producing better generalizing models. However, these methods use a fixed range of transformations for all inputs, thus failing to be flexible. InstaAug [15] learns an instance-specific augmentation range for each transformation, achieving higher accuracy on datasets such as TinyImageNet due to its flexibility. However, since InstaAug learns a range for each parameter separately, it cannot represent multi-modal or joint distributions (e.g., it cannot discover rotations from the set of all affine matrices). Additionally, these approaches don't explore generalization across classes and adaptation to unexpected poses (Figure 1).
We take inspiration from Learned-Miller _et al_. [2] and model the relationship between the observed image and its class as a graphical model (Figures 2 and 5). We also represent the instance-wise distribution of transformations using a normalizing flow and apply it to robust classification. Our experiments show that the properties like adaptability and generalizability emerge naturally in this framework.
**Contributions: (1)** We propose a normalizing flow model to learn the image-conditional transformation distribution. **(2)** Our model can represent multi-modal and joint distributions over transformations, being able to model more complex invariances, and **(3)** helps achieve higher test accuracy on datasets such as CIFAR10, CIFAR10-LongTail (Figure 3), and TinyImageNet. Finally, **(4)** combined with our graphical model, this model forms a flexible, generalizable, and adaptive form of invariance. It can be used to **(a)** align the dataset and discover prototypes like congealing [2], **(b)** adapt to unexpected poses like mental rotation [3], and **(c)** transfer invariance across classes like GAN-based methods [13].
## 2 Related Work
**Mental rotation in humans**: Shepard and Metzler [4] were among the first to measure the amount of time taken by humans to recognize a rotated object. They found that the response time increased linearly with rotation, suggesting a dynamic process like mental rotation for recognizing objects in unfamiliar poses. Tarr and Pinker [1] further study mental rotation as a theory of invariant object recognition, contrasting it against invariant features and a multiple-view theory. Cooper and Shepard [18] found that revealing identity and orientation information beforehand helped the subjects make constant-time predictions. Hock and Tromley [19] found that the recognition time is nearly constant for characters perceived as "upright" over a large range of rotations. However, outside that range (and for characters with narrow "upright" ranges), the recognition time follows the same linear relationship, indicating mental rotation is needed when the object is detected as "not upright." Koriat and Norman [3] investigated mental rotation as a function of familiarity, finding that humans adapt to unfamiliar objects with practice, gaining robustness to small rotations around the upright pose. The response curve thus becomes flatter around the upright pose. These works suggest a flexible, adaptive, and general form of robustness in the human vision.
**Invariance in Neural Networks**: Neural networks invariant to natural transformations have long been a central goal in deep learning research [9]. Bouchacourt _et al_. [12] and Madan _et al_. [5] studied the invariances present in modern models. Early successes include architectures like Convolutional Neural Networks [10, 11], and more recently, applications such as medical image analysis [20, 21, 22], cosmology [23, 24], and physics/chemistry [25; 26; 27]. Kondor and Trivedi [28] and Cohen _et al_. [29] established a general theory of equivariant neural networks based on representation theory. Finzi _et al_. [30] combined equivariant and non-equivariant blocks through a residual connection.
Figure 3: Our method delivers strong gains for imbalanced classification. On CIFAR10-LT with 5000 to 500 instances per class from head to tail (black curve), our class-agnostic instance-wise transform distribution helps boost the classification accuracy by large margins (red bars) over the standard softmax baseline (blue bars) especially for the tail classes.
Figure 2: Our image classification pipeline. The normalizing flow model predicts a distribution over image transformations. Samples from this distribution are passed to a differentiable augmenter, which transforms the input image into a set of augmented images. The images are passed to a classifier, and predictions are averaged. Crucially, the transform distribution \(g_{\phi}\) can generalize across classes and datasets.
The dominant way to add invariance into neural networks is data augmentation. Dao _et al_. [31] shows that to a first-order approximation, data augmentation is equivalent to averaging features over transformations. Bouchacourt _et al_. [12] found data augmentation to be crucial for invariance in many modern computer vision architectures. Zhou _et al_. [13] demonstrated a key failing of data augmentation in imbalanced classification and used a GAN to generate a broad set of variants for every instance. Our method is complementary to theirs and can be combined in future work. We also note that the experiments in this paper only use affine image transformations and yet achieve comparable accuracy to theirs on CIFAR10LT. Congealing [2] aligns all the images in a class, simultaneously producing a prototype and inferring the relative pose of each example. The aligned dataset can be used for robust recognition, and the learned pose distribution can be used for new classes. However, this method assumes the transformation distribution is class-wise, whereas we model it for every instance. Learned canonicalization [32] learns an energy function that is minimized at test time to align the input to a canonical orientation. Spatial Transformer Networks [33] predict a transformation from the input image in an attempt to rectify it and improve classification accuracy. However, STNs cannot represent a distribution of transformations. Probabilistic Spatial Transformer Networks [34] model the conditional distribution using a Gaussian distribution with mean and variance predicted by a neural network. In contrast, we use a normalizing flow model. We also study the generalizability as well as adaptation to unexpected poses.
**Augerino**: [14] aims to learn the ideal range of invariances for any given dataset. It uses the reparametrization trick and learns the range of uniform distribution over each transformation parameter separately (e.g., range of translations, rotations, etc.). This ability allows Augerino to learn the useful range of augmentations (and thus invariances) directly and produce more robust models with higher generalization. However, Augerino is sensitive to the regularization amount and the parametrization of the augmentation range (Table 3). LILA [17] tackles this problem using marginal likelihood methods. However, for both Augerino and LILA, the resulting invariance is shared among all classes, even though different classes (such as \(0\) and \(6\) in a digit classification setting) may have entirely different ideal augmentation distributions. Figure 4 illustrates how these limitations lead Augerino to learn an overly restricted augmentation range.
**InstaAug:**[15] fixes the inflexibility of Augerino by predicting the augmentation ranges for every instance and provides a theoretical argument connecting it to generalization error. In our knowledge, it is the first work to do so. This allows for larger effective ranges and, thus, impressive generalization gains in image classification and contrastive learning settings. However, while InstaAug is instance-wise, it models the range of each parameter separately (the _mean-field_ assumption). Thus, it cannot represent multi-modal or joint distributions (Figure 4). Like Augerino, the representational limitations greatly limit the set of learnable transformations, especially for complex augmentation classes like image cropping [15], necessitating tricks like selecting among a pre-defined set of crops. Furthermore, InstaAug is sensitive to parametrization (see Figure 7 and Table 3).
Figure 4: Our normalizing flow model can represent input-dependent, multi-modal, and joint distributions over augmentation parameters. **(top)** We illustrate three samples, each with a different set of correct augmentations. Augerino learns a range shared between all samples, so the learned range is too restrictive. InstaAug learns an instance-wise range but cannot handle a non-axis-aligned augmentation set (middle). In contrast, our model can adapt to the loss landscape and produce the largest possible set. **(middle)** Augerino [14] fails to learn augmentations in challenging settings. Learned rotation range for a version of Mario-Iggy with \(\pm 90^{\circ}\) rotation range. The class boundaries touch each other, so some instances lie close to the boundary, and thus, global augmentation schemes like [14; 17] are forced to learn a range of \(0\). Our method learns the correct range. **(bottom)** InstaAug fails to capture the distribution for a multi-modal version of the Mario-Iggy dataset.
## 3 Methods
We begin by describing our probabilistic model. We derive its inference equation and training loss and compare it to existing methods. We then construct a normalizing flow model to represent the conditional transform distribution. We also derive an analytical expression for the model's approximate invariance. Finally, we describe the mean-shift algorithm for adapting to out-of-distribution poses.
**Graphical model**: We follow the model described in Figure 5. Here, \(C\) refers to the class, \(I\) refers to the observed image, \(L\) refers to the latent image (equivalent to the prototype in [2]), and \(T\) refers to the unobserved transformation parameters connecting the latent image and the observed image. The latent image is produced by passing the pair \((I,T)\) through a differentiable augmenter \(\mathcal{A}\), which applies the transform to the observed image, i.e., \(L=\mathcal{A}_{T}(I)\).
One notable difference from Miller _et al_. [2] is that our distribution is instance-wise (similar to [15]), not class-wise. This allows for a more general conditional distribution model.
Given the values \(C,L,T,I\), the model defines a joint probability distribution \(P(C,L,T,I)\):
\[P(C,L,T,I)=P(C|L)P(L|T,I)P(T|I)P(I) \tag{1}\]
and the conditional class probability \(P(C|I)\) as:
\[P(C|I)=\int_{L,T}P(T|I)P(L|T,I)P(C|L)dLdT \tag{2}\]
Since \(L=\mathcal{A}_{T}(I)\), this can be further simplified to:
\[P(C|I) =\int_{T}P(T|I)P(C|L=\mathcal{A}_{T}(I))dT \tag{3}\] \[=\mathbb{E}_{T\sim P(T|I)}\big{[}P(C|L=\mathcal{A}_{T}(I))\big{]} \tag{4}\]
Thus, the predicted class probability is averaged over transformations sampled from the conditional transform distribution \(P(T|I)\). This is analogous to the idea of "test-time augmentations" used in image classification literature. Augerino assumes that the transformation \(T\) is independent of \(I\). InstaAug models \(T\) as a uniform distribution conditioned on \(I\). PSTN [34] arrives at the same expression and uses a Gaussian distribution. All these frameworks can be viewed as different approximations in this formulation. However, we also analyze the invariance properties of this formulation and applications of \(P(T|I)\).
**Neural network approximation**: We approximate each of the key distributions \(P(C|L)\) and \(P(T|I)\) with neural networks. Our \(f_{\theta}(C;L)\) is a simple classifier, and \(g_{\phi}(T;I)\) is a normalizing flow model [35] which takes in the image \(I\):
\[f_{\theta}(C;L)\approx P(C|L),\quad g_{\phi}(T;I)\approx P(T|I) \tag{5}\]
Since \(L=\mathcal{A}_{T}(I)\), we use \(f_{\theta}(C;L)\), \(f_{\theta}(C;T,I)\) and \(f_{\theta}(C;\mathcal{A}_{T}(I))\) interchangeably.
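To make this approximation concrete, the following PyTorch-style sketch shows one possible way to instantiate \(g_{\phi}(T;I)\) as a conditional normalizing flow over a small vector of transformation parameters. It is an illustrative assumption, not the paper's actual architecture: the layer sizes, the number of coupling layers, and the feature extractor producing the conditioning vector `ctx` are all placeholders.

```python
import math
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """Affine coupling layer whose scale/shift are conditioned on image features."""
    def __init__(self, dim, ctx_dim, hidden=64, flip=False):
        super().__init__()
        self.flip = flip
        half = dim // 2  # assumes an even number of transform parameters
        self.net = nn.Sequential(
            nn.Linear(half + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - half)))

    def forward(self, z, ctx):
        half = z.shape[1] // 2
        keep, move = (z[:, half:], z[:, :half]) if self.flip else (z[:, :half], z[:, half:])
        scale, shift = self.net(torch.cat([keep, ctx], dim=1)).chunk(2, dim=1)
        scale = torch.tanh(scale)                   # keep scales bounded for stability
        move = move * torch.exp(scale) + shift
        log_det = scale.sum(dim=1)                  # log|det Jacobian| of this layer
        out = torch.cat([move, keep], 1) if self.flip else torch.cat([keep, move], 1)
        return out, log_det

class TransformFlow(nn.Module):
    """Samples transformation parameters T and returns log g_phi(T; I)."""
    def __init__(self, dim=6, ctx_dim=128, n_layers=4):
        super().__init__()
        self.dim = dim
        self.layers = nn.ModuleList(
            [ConditionalCoupling(dim, ctx_dim, flip=(i % 2 == 1)) for i in range(n_layers)])

    def sample(self, ctx):
        z = torch.randn(ctx.shape[0], self.dim, device=ctx.device)
        log_prob = -0.5 * (z ** 2).sum(dim=1) - 0.5 * self.dim * math.log(2 * math.pi)
        for layer in self.layers:                   # push base noise through the flow
            z, log_det = layer(z, ctx)
            log_prob = log_prob - log_det           # change-of-variables correction
        return z, log_prob                          # T (e.g., 6 affine parameters) and its log-density
```

Because each coupling layer conditions on the image features, the resulting density can be multi-modal and need not factorize across the transformation parameters, in contrast to the uniform or per-parameter distributions discussed above.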
**Inference:** The expression for \(P(C|I)\) then becomes:
\[p_{\theta,\phi}(C|I) =\int_{T}g_{\phi}(T;I)f_{\theta}(C;\mathcal{A}_{T}(I))dT \tag{6}\] \[=\mathbb{E}_{T\sim g_{\phi}(T;I)}\big{[}f_{\theta}(C;\mathcal{A}_ {T}(I))\big{]} \tag{7}\]
This equation describes the act of sampling transformations from the normalizing flow model and averaging the classifier predictions over the sampled transformations.
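A minimal sketch of this Monte Carlo inference step is shown below (PyTorch-style). It assumes a `flow` object with the `sample` interface sketched above, a standard `classifier` network, image features `feats` produced by some encoder, and affine parameters applied with `grid_sample`; none of these names come from the paper's code, and mapping the raw flow output to a small perturbation of the identity is an illustrative choice.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict(image, feats, flow, classifier, num_samples=8):
    """Estimate p(C|I) by averaging predictions over T ~ g_phi(T; I), as in Eq. (7)."""
    identity = torch.tensor([[1., 0., 0.], [0., 1., 0.]], device=image.device)
    probs = 0.0
    for _ in range(num_samples):
        T, _ = flow.sample(feats)                       # (B, 6) transformation parameters
        theta = identity + 0.1 * T.view(-1, 2, 3)       # small perturbation of the identity (illustrative)
        grid = F.affine_grid(theta, image.shape, align_corners=False)
        augmented = F.grid_sample(image, grid, align_corners=False)
        probs = probs + classifier(augmented).softmax(dim=-1)
    return probs / num_samples                          # Monte Carlo average of class probabilities
```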
**Classifier loss**: During training, we observe \((I,C)\) pairs. We train the classifier \(f_{\theta}\) by maximizing a lower bound to the average \(\log p_{\theta,\phi}(C|I)\). It is common to use Jensen's inequality to make this tractable:
\[\log p_{\theta,\phi}(C|I)\geq\mathbb{E}_{T\sim g_{\phi}(T;I)}\big{[}\log f_{ \theta}(C;\mathcal{A}_{T}(I))\big{]} \tag{8}\]
and maximize the resulting lower bound instead. This further reduces to the loss function \(\mathcal{L}_{\mathrm{classifier}}\):
\[\mathcal{L}_{\mathrm{classifier}}=\mathbb{E}_{T\sim g_{\phi}(T;I)}\big{[}- \log f_{\theta}(C;\mathcal{A}_{T}(I))\big{]} \tag{9}\]
which is simply the cross-entropy loss averaged over sampled augmentations.
**Augmenter loss**: Intuitively, we would like the transform distribution \(g_{\phi}\) to have a large diversity of augmentations and minimal classification loss (see Figure 6). However, in practice, minimizing the classification loss leads to \(g_{\phi}\) collapsing to a single peak (\(0\)-variance distribution) as the model overfits to the training data (as observed in Augerino [14] without regularization).
Since our normalizing flow model already produces the log probability for each generated sample, _entropy regularization_
Figure 5: Our graphical model inspired by Miller _et al_. [2]. Shaded nodes represent variables observed in data \((C,I)\). In contrast to Miller _et al_., we only model the inference process and assume that \(T\) is instance-wise, not classwise. Our flow model \(g_{\phi}\) predicts image-conditional transform, and the classifier \(f_{\theta}\) classifies the resulting image \(L\).
Figure 6: The ideal learned distribution maximizes the range while minimizing the overall classification loss
is a natural match for our method. We penalize the average \(\log g_{\phi}\) for sampled transformations:
\[\mathcal{L}_{\mathrm{augmenter}}=\mathcal{L}_{\mathrm{classifier}}+\alpha\mathbb{E}_{T\sim g_{\phi}}\big{[}\log g_{\phi}(T;I)\big{]} \tag{10}\]
This regularization is a generalization of the one used by Augerino, since for uniform distributions, \(\log p\propto-\log(\mathrm{width})\). InstaAug derives a similar expression as a Lagrange relaxation of entropy constraints and applies it to simple distributions like uniform and categorical.
We apply it to normalizing flows, which can model more general distributions, and our graphical model helps us understand this loss and connect it to variational inference.
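A compact sketch of the two training objectives in Equations (9) and (10), under the same placeholder interfaces as the inference sketch above; `alpha` is the entropy-regularization weight and its value here is illustrative, not one reported by the authors.

```python
import torch
import torch.nn.functional as F

def training_losses(f_theta, g_phi, augment, image, label, alpha=0.05, n_samples=8):
    """Classifier loss (Eq. 9) and augmenter loss (Eq. 10) for a single example."""
    transforms, log_probs = g_phi(image, n_samples)        # T and log g_phi(T; I)
    augmented = torch.stack([augment(image, t) for t in transforms])
    logits = f_theta(augmented)                            # (n_samples, n_classes)
    labels = torch.full((n_samples,), int(label), dtype=torch.long)
    # Cross-entropy averaged over the sampled augmentations (Eq. 9).
    classifier_loss = F.cross_entropy(logits, labels)
    # Entropy regularization: penalize the expected log-density of the samples (Eq. 10).
    augmenter_loss = classifier_loss + alpha * log_probs.mean()
    return classifier_loss, augmenter_loss
```

In this sketch the flow samples are reparametrized, so gradients of both losses reach \(g_{\phi}\) through the sampled transforms as well as through `log_probs`.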
**Understanding entropy regularization**: Here, we analyze the form of the distribution learned through entropy regularization. Consider the following loss:
\[\mathcal{L}_{\mathrm{augmenter}}[g_{\phi}]=\mathcal{L}_{\mathrm{classifier}}[g_{\phi}]-\alpha\mathbb{H}[g_{\phi}] \tag{11}\]
where \(\alpha\in\mathbb{R}^{+}\) is a regularization constant and \(\mathbb{H}[g_{\phi}]\) is the entropy of the distribution \(g_{\phi}\). This expression reduces to:
\[=\mathbb{E}_{T\sim g_{\phi}(T;I)}\big{[}\alpha\log g_{\phi}(T;I)-\log f_{ \theta}(C;\mathcal{A}_{T}(I))\big{]}\]
We rescale this loss by \(\lambda=\frac{1}{\alpha}\) to simplify:
\[\equiv\mathbb{E}_{T\sim g_{\phi}(T;I)}\big{[}\log g_{\phi}(T;I)-\lambda\log f _{\theta}(C;\mathcal{A}_{T}(I))\big{]}\]
Note that this loss is equivalent to a KL-divergence between \(g_{\phi}\) and a special target distribution \(\tilde{p}^{\lambda}_{\theta}(T|C,I)\):
\[\mathcal{L}_{\mathrm{augmenter}}[g_{\phi}]=\mathrm{KL}\left[g_{\phi}(T;I)|| \;\tilde{p}^{\lambda}_{\theta}(T|C,I)\right]\]
where the target distribution \(\tilde{p}^{\lambda}_{\theta}(T|C,I)\) is defined as:
\[\tilde{p}^{\lambda}_{\theta}(T|C,I)=\frac{1}{Z(\lambda)}f_{\theta}(C;T,I)^{\lambda}\]
where \(Z\in\mathbb{R}^{+}\) is a normalization constant and \(\lambda\in\mathbb{R}^{+}\) is a temperature constant. This distribution is formed by computing \(p_{\theta,\phi}(C|T,I)^{\lambda}\) over transforms \(T\) and normalizing them. Thus, it assigns a higher probability to the transformations with lower classification loss. \(\lambda\) here is analogous to the temperature parameter in softmax, and large values of \(\lambda\) make the distribution highly peaked. In contrast, small values suppress peaks and make the distribution less ill-behaved as a target. \(\lambda\to 0\) corresponds to a uniform distribution, whereas \(\lambda\rightarrow\infty\) collapses the distribution to the single transformation that minimizes the classification loss.
We also note that when \(\lambda=1\), the target distribution \(\frac{1}{Z}p_{\theta}(C|T,I)\) is exactly the posterior \(p_{\theta}(T|C,I)\), assuming a uniform prior for the unknown \(p_{\theta}(T|I)\). Different choices of this prior lead to other loss functions, like a Gaussian prior penalizing the transformation norm. However, we stick to the uniform prior for simplicity.
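To make the role of the temperature concrete, here is a small numerical illustration over a discrete grid of candidate transforms; the probabilities are made up for the example.

```python
import numpy as np

def target_distribution(correct_class_probs, lam):
    """Discrete version of the target p~^lambda(T|C,I): raise the classifier's
    probability of the correct class under each candidate transform to the
    power lambda, then renormalize."""
    w = np.asarray(correct_class_probs) ** lam
    return w / w.sum()

p = [0.9, 0.6, 0.2, 0.05]                 # classifier probability under four candidate transforms
print(target_distribution(p, 0.1))        # small lambda: nearly uniform target
print(target_distribution(p, 10.0))       # large lambda: peaked on the best transform
```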
**Representing the conditional distribution**: Our approach uses parametrized differentiable augmentations similar to Augerino. However, instead of learning the global range of transformations, we predict a distribution over the transformations conditioned on the input image. We use an input-conditional normalizing flow model [35].
A normalizing flow model starts with a simple pre-defined probability distribution \(p_{0}\), e.g., a Normal distribution. For a sample \(z_{0}\sim p_{0}\), it successively applies transformations \(f_{1},f_{2},\ldots,f_{K}\), producing a more complicated distribution by the end. The log probability density of the final sample is given by \(\log p(z_{K})=\log p_{0}(z_{0})-\log\left|\det\frac{dz_{K}}{dz_{0}}\right|\), and the architecture is designed to allow efficient sampling and computation of \(\log p\). We use the samples to augment the input (Figure 2) and the \(\log p\) term in the loss. Our model is based on RealNVP [36], using a mixture of Gaussians as the base \(p_{0}\).
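The sketch below shows a small conditional flow of this kind, with RealNVP-style affine coupling layers whose scale and shift also read an image embedding; for brevity it uses a standard Normal base distribution instead of the mixture of Gaussians mentioned above, and all module names are illustrative.

```python
import math
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One RealNVP-style coupling layer; scale/shift depend on half of z and on
    an embedding e of the input image."""
    def __init__(self, dim, embed_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, z, e):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(torch.cat([z1, e], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                                   # keep scales well-behaved
        z2 = z2 * torch.exp(s) + t
        return torch.cat([z1, z2], dim=1), s.sum(dim=1)     # (output, log|det|)

class ConditionalFlow(nn.Module):
    """Samples transform parameters T and log g_phi(T; I), given an image embedding."""
    def __init__(self, dim=2, embed_dim=16, n_layers=4):
        super().__init__()
        self.dim = dim
        self.layers = nn.ModuleList(
            [ConditionalAffineCoupling(dim, embed_dim) for _ in range(n_layers)])

    def forward(self, e, n_samples):
        z = torch.randn(n_samples, self.dim)                         # z_0 ~ p_0
        log_p = -0.5 * (z ** 2 + math.log(2 * math.pi)).sum(dim=1)   # log p_0(z_0)
        for layer in self.layers:
            z, log_det = layer(z, e.expand(n_samples, -1))
            log_p = log_p - log_det           # change-of-variables correction
            z = torch.flip(z, dims=[1])       # swap coordinates between layers (|det| = 1)
        return z, log_p
```

In the earlier sketches, `g_phi(image, n_samples)` would first compute the embedding `e` with a small convolutional feature extractor (as described next) and then call this flow.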
Given any input image \(I\), we use a convolutional feature extractor to extract an embedding vector \(e\). This embedding vector is then projected down to a scale and bias used by each layer of the normalizing flow and the base distribution. This normalizing flow model outputs samples \(s\) from the augmentation distribution and their corresponding log-probabilities \(\log p(s)\). These samples are passed to the differentiable augmentation, which transforms the input image to be processed by the model (Figure 2) using PyTorch's _grid_sample_. While we use affine image transformations for our experiments, our method generalizes to any differentiable transformation.
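Below is a minimal differentiable affine augmenter of the kind described here, specialized to rotations for brevity; the angle can come straight from a flow sample, so the whole pipeline stays differentiable. The function name and interface are illustrative.

```python
import torch
import torch.nn.functional as F

def rotate(images, angles):
    """Differentiably rotate a batch of images (B, C, H, W) by `angles` (radians, shape (B,))."""
    cos, sin, zero = torch.cos(angles), torch.sin(angles), torch.zeros_like(angles)
    # Batch of 2x3 affine matrices for affine_grid.
    theta = torch.stack([
        torch.stack([cos, -sin, zero], dim=1),
        torch.stack([sin,  cos, zero], dim=1)], dim=1)           # (B, 2, 3)
    grid = F.affine_grid(theta, list(images.shape), align_corners=False)
    return F.grid_sample(images, grid, align_corners=False)
```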
**Approximate invariance**: Here, we formalize the notion of approximate invariance and connect it to our classifier and flow model. Intuitively, the approximate invariance in our method comes from both the augmenter and the classifier. Their contributions can be divided into (1) the classifier's inherent insensitivity to transformations, (2) the width of the transform distribution being used for averaging, and (3) the canonicalization effect of the transform distribution. Each of these properties corresponds to a different theory of object recognition explained by Tarr and Pinker [1] and connected to deep neural networks by Kaba _et al_. [32]. We formalize this intuitive argument as follows: Given an input image \(I\), our model's output is the classifier prediction averaged over \(g_{\phi}(T;I)\), i.e. \(p_{\theta,\phi}(C|I)=\mathbb{E}_{T\sim g_{\phi}(T;I)}\big{[}f_{\theta}(C;\mathcal{A}_{T}(I))\big{]}\) (see Equation 7). Let a new image \(I^{\prime}\) be formed by transforming the original image by a transformation \(\Delta T\), i.e. \(I^{\prime}=\mathcal{A}_{\Delta T}(I)\). Then:
\[p_{\theta,\phi}(C|I^{\prime})=\int_{T}g_{\phi}(T;\mathcal{A}_{\Delta T}(I))f_{ \theta}(C;\mathcal{A}_{T+\Delta T}(I))dT\]
\[=\int_{T}g_{\phi}(T-\Delta T;I^{\prime})f_{\theta}(C;\mathcal{A}_{T}(I))dT\]
Where the last step substitutes \(T\) for \(T+\Delta T\). Then, the change in prediction, denoted as \(\mathrm{err}(C;I,I^{\prime})\), is:
\[=\Big{|}\int_{T}\big{[}g_{\phi}(T-\Delta T;I^{\prime})-g_{\phi}(T;I)\big{]}f_{ \theta}(C;\mathcal{A}_{T}(I))dT\Big{|}\]
Next, we derive bounds on this quantity based on \(g_{\phi}\) and \(f_{\theta}\). Let \(S=\operatorname{supp}(g_{\phi}(\cdot\,;I))\cup\operatorname{supp}(g_{\phi}(\cdot\,;I^{\prime}))\) be the union of the supports of the two transform distributions, i.e., all the samples for \(I\) and \(I^{\prime}\) are inside \(S\). We can thus limit the integration to \(S\):
\[=\Big{|}\int_{T\in S}\big{[}g_{\phi}(T-\Delta T;I^{\prime})-g_{\phi}(T;I)\big{]} f_{\theta}(C;\mathcal{A}_{T}(I))dT\Big{|}\]
Let's now quantify the behavior of \(f_{\theta}\) on \(S\). Let \(M\) be the maximum and \(m\) be the minimum of \(f_{\theta}\) on this set, i.e.
\[M=\max_{T\in S}f_{\theta}(C;\mathcal{A}_{T}(I)),\quad m=\min_{T\in S}f_{\theta}(C;\mathcal{A}_{T}(I)).\]
Note that the first term \(g_{\phi}(T-\Delta T;I^{\prime})-g_{\phi}(T;I)\) is the difference of two probability density functions and so integrates to \(0\). Thus, if we add a constant value to \(f_{\theta}\), it doesn't change the whole integral. Subtracting \(m\), we get:
\[\Big{|}\int_{T\in S}\big{[}g_{\phi}(T-\Delta T;I^{\prime})-g_{\phi}(T;I)\big{]} (f_{\theta}(C;T,I)-m)dT\Big{|}\]
Using \(|\!\int f(x)dx|\leq\int\!|f(x)|dx\) and \(|xy|=|x||y|\) we have:
\[\leq \int_{T\in S}\Big{|}g_{\phi}(T-\Delta T;I^{\prime})-g_{\phi}(T;I) \Big{|}\Big{|}f_{\theta}(C;T,I)-m\Big{|}dT\] \[\leq (M-m)\int_{T\in S}\Big{|}g_{\phi}(T-\Delta T;I^{\prime})-g_{\phi }(T;I)\Big{|}dT\] \[= 2(M-m)\operatorname{TV}[g_{\phi}(T-\Delta T;I^{\prime})\|\;g_{ \phi}(T;I)]\]
where \(\operatorname{TV}\) refers to the Total Variation Distance defined as \(\operatorname{TV}[\operatorname{p}\|\operatorname{q}]=\frac{1}{2}\int|p(x)-q (x)|dx\). In summary:
\[\operatorname{err}(C;I,I^{\prime})\leq 2(M-m)\operatorname{TV}[g_{\phi}(T- \Delta T;I^{\prime})\|g_{\phi}(T;I)]\]
Thus, the prediction change (\(\operatorname{err}(C;I,I^{\prime})\)) is upper bounded by two factors: **(1)**\(M-m\), which measures how much the classifier predictions change over the relevant range, and **(2)** the total variation distance between the original transform distribution \(g_{\phi}(T;I)\) and the new version \(g_{\phi}(T-\Delta T;I^{\prime})\). This result explains how the method achieves approximate invariance. If the classifier features are invariant to the input transformations, we get \(M-m\approx 0\), and thus \(\operatorname{err}\approx 0\). The same is true if the transform distribution is approximately equivariant, i.e. \(g_{\phi}(T-\Delta T;I^{\prime})\approx g_{\phi}(T;I)\).
**Mean-shift for handling out-of-distribution poses**: While the conditional transformation distribution \(g_{\phi}(T;I)\) can adjust to in-distribution pose variation, this approach does not work for out-of-distribution poses (see Figure 10). We use a modified version of the well-known _mean-shift algorithm_. Instead of sampling points from a dataset and weighting them with a kernel, we directly use \(g_{\phi}\) samples.
The core idea is to push the image closer to a local mode where our models may work better. We start with image \(I_{0}\) and the transform parameter \(T_{0}=0\). Then, at every step:
\[T_{k}:=T_{k-1}+\gamma\mathbb{E}_{T\sim g_{\phi}(T;I_{k-1})}[T],\quad I_{k}:= \mathcal{A}_{T_{k}}(I_{0})\]
where \(\gamma\in\mathbb{R}^{+}\) is the step size. In summary, the algorithm repeatedly samples from the conditional distribution, computes the mean, and accumulates the result into \(T\).
Since our method learns an input-conditional probability distribution, the mean of the augmentation transformation \(\mathbb{E}_{T\sim g_{\phi}(T;I)}[T]\) for any given image is an estimate of the difference between the local mode and the current transform \(T\). Thus, each step moves the image closer to the local mode, which is the fixed point for this process.
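A minimal sketch of this procedure, reusing the placeholder `g_phi` and `augment` interfaces from the sketches above; `transform_dim` is whatever dimensionality the transform parameters have, and the default hyperparameters mirror the values reported for the CIFAR10 robustness experiment in Section 4.

```python
import torch

def mean_shift_align(g_phi, augment, image, transform_dim, gamma=0.1, n_iters=10, n_samples=100):
    """T_k = T_{k-1} + gamma * E_{T ~ g_phi(T; I_{k-1})}[T],   I_k = A_{T_k}(I_0)."""
    I0 = image
    T = torch.zeros(transform_dim)
    for _ in range(n_iters):
        samples, _ = g_phi(image, n_samples)     # draw transforms conditioned on I_{k-1}
        T = T + gamma * samples.mean(dim=0)      # accumulate the mean step into T_k
        image = augment(I0, T)                   # always warp the original image I_0
    return image, T
```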
## 4 Experiments
We benchmark accuracy on datasets such as CIFAR10 and TinyImageNet, and plot the learned transformation distribution for toy examples on Mario-Iggy [14] and MNIST. Finally, we test applications of the learned distribution. The code and scripts to reproduce all the results can be found at [https://github.com/sutkarsh/flow_inv/](https://github.com/sutkarsh/flow_inv/)
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & **CIFAR10** & **FMNIST** & **MNIST** & **CIFAR10-LT** \\ \hline Baseline & \(74.1\pm 0.5\) & \(89.6\pm 0.2\) & \(99.1\pm 0.02\) & \(70.8\pm 0.8\) \\ Augerino & \(79.0\pm 1\) & \(90.1\pm 0.1\) & \(98.3\pm 0.1\) & \(63.6\pm 1.3\) \\ LILA & \(84.2\pm 0.8\) & \(91.9\pm 0.2\) & \(\textbf{99.4}\pm 0.02\) & \(76.4\pm 0.9\) \\ Ours & \(\textbf{86.8}\pm\textbf{0.4}\) & \(\textbf{92.3}\pm\textbf{1.4}\) & \(\textbf{99.2}\pm 0.1\) & \(\textbf{78.1}\pm\textbf{1}\) \\ \hline Gain & (+2.6) & (+0.4) & (-0.2) & (+1.7) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Classification accuracy on the modified ResNet used by LILA [17]. Numbers for baselines reproduced from [17]. Our method helps the classifier achieve the highest test accuracy on CIFAR10 and CIFAR10-LT (\(\rho=10\)). Imbalanced classification is particularly challenging since invariances learned through augmentations do not transfer from head classes to tail classes [13]. We note that our method is complementary to LILA and can be combined in future work.
**CIFAR10 Classification**: On the modified ResNet used by LILA, we achieve a \(7.8\%\) test accuracy gain compared to Augerino and \(2.6\%\) against LILA. We note that our method is still based on maximum likelihood; thus, LILA's marginal likelihood method is complementary to ours. These methods may be combined for even higher accuracy in future work. We also report the accuracies for MNIST and FashionMNIST.
**Imbalanced CIFAR-10 Classification**: Imbalanced classification is a challenging setting for invariance learning. As shown by [13], invariances learned through data augmentation do not transfer from head classes to tail classes. This is especially harmful since the tail classes, due to a small number of examples, benefit the most from the invariance. CIFAR10-LT is an imbalanced version of CIFAR10 where the smallest class is \(10\)x smaller than the largest. Here, we outperform Augerino by \(14.5\%\) and LILA by \(1.7\%\).
**Augerino 13-layer CIFAR10**: We also evaluate our method on Augerino's 13-layer network, re-using the same hyperparameters as in the LILA experiments above. Our method achieves \(94.3\%\) test accuracy (\(0.5\%\) gain).
**TinyImageNet Classification**: We evaluate our method against InstaAug on the TinyImageNet dataset. This 64x64 dataset contains 200 classes. The goal of this task is to learn cropping augmentations. A crop can be parametrized with four parameters: \((\text{center}_{x},\text{center}_{y},\text{width},\text{height})\), so we represent it with a 4-dimensional distribution. Please see the supplementary material for more details.
Cropping is a challenging augmentation to learn since the crop location and size are correlated. InstaAug's mean-field representation cannot represent this, so it achieves low accuracy without the location-related parameterization (LRP). LRP consists of \(321\) pre-defined crops and predicts the probability of each crop. This approach does not scale to high-dimensional distributions (e.g. specifying more transformations). In contrast, our method can achieve high accuracy without LRP, beating InstaAug by nearly \(11\%\) (Table 3).
**Learned invariance visualization**: Mario-Iggy is a toy dataset from [14] consisting of rotated versions of two images. Upright and upside-down images are classified as different classes, and each sample lies within \(\pm 45^{\circ}\) of its class prototype. As the total range of rotations can be easily varied, this dataset is useful for studying learned invariance. We consider two variations: \(\pm 90^{\circ}\) **rotation range**, and **Multi-modal dataset with \(3\) modes**.
The ideal augmentation distribution for Mario-Iggy dataset is \(\pm 90^{\circ}\) around the class prototype. As the input image rotates, the augmentation distribution shifts such that the resulting augmented image distribution is constant. Our model trained on Mario-Iggy can reliably learn an invariant augmentation distribution (Figure 4). In the challenging multimodal distribution setting, our model can represent the three modes, whereas InstaAug fails.
**Representing joint distributions**: We test the ability of our normalizing flow to represent joint distributions by
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & No Aug. & Fast AA & Augerino & Ours \\ \hline Acc & 90.6 & 92.65 & 93.8 & \(\mathbf{94.3\pm 0.08}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Test accuracies for Augerino’s 13-layer model. Baseline numbers quoted from [14].
Figure 7: Our model learns the rotation constraint from data, while InstaAug fails to represent non-axis-aligned distributions. The goal of the “rotation discovery” task is to learn the joint distribution of affine matrix parameters such that the result is rotation. \((w_{1},w_{2})\) pairs on diagonal (i.e., \(w_{1}=-w_{2}\)) correspond to exact rotations and thus incur a small classification loss. **(a)** Relative error to the nearest rotation matrix. The ideal distribution of augmentations is in the form of a diagonal strip. **(b)** InstaAug produces a small square as its mean-field parametrization is unable to represent correlations between two parameters. **(c)** Our model learns to produce samples on the diagonal and learns a much larger range than InstaAug. **(d)** We plot the histogram of relative errors of the produced samples to the nearest rotation matrix. It is much smaller than the random affine baseline. Our model learns the joint distribution and discovers rotations from the full set of affine parameters, while InstaAug fails.
intentionally sampling from a larger set of transformations and letting the model learn the useful subset. Specifically, we start from the Lie algebra parametrization of affine transforms (used by Augerino). For rotation by \(r\) radians, the transformation matrix is:
\[T_{\mathrm{Augerino}}(r)=\exp\left(\left[\begin{matrix}0&r&0\\ -r&0&0\\ 0&0&1\end{matrix}\right]\right) \tag{12}\]
For this experiment, we generalize this formulation as:
\[T_{\mathrm{Decoupled}}(a,b,c,d,e,f)=\exp\left(\left[\begin{matrix}a&b&c\\ d&e&f\\ 0&0&1\end{matrix}\right]\right) \tag{13}\]
This matrix represents a rotation if \(b=-d\). Since the Mario-Iggy dataset only contains rotations, the goal is to produce samples such that \(b=-d\). Samples that do not follow this constraint will be out-of-distribution. Figure 7 shows that, unlike our model, InstaAug [15] fails to learn rotation transforms for Mario-Iggy, even though skewed samples incur a higher loss. This is due to InstaAug's mean-field model, which predicts the range for each parameter separately, thus preventing it from following the \(b=-d\) constraint. In contrast, our model learns to represent this joint distribution. We further test our model's ability to learn the rotation constraint on all \(6\) affine parameters. Figure 7 also shows the deviation of sampled transformations from a true rotation matrix. Our learned distribution is concentrated close to the rotation transformations, showing that our method can start from a large group of transformations and learn to constrain it to only what is useful for the dataset and task.
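The sketch below shows one way to build the decoupled affine transform of Equation (13) and to measure how far a sample is from an exact rotation, similar in spirit to the relative-error plot in Figure 7. The generator uses a zero bottom row so that the matrix exponential remains a valid homogeneous affine matrix, and the error measure (distance to the orthogonal polar factor) is our choice for the illustration, not necessarily the one used for the figure.

```python
import torch

def decoupled_affine(a, b, c, d, e, f):
    """Matrix exponential of a 6-parameter affine generator; b = -d (with a = e = 0)
    yields a pure rotation."""
    G = torch.tensor([[a, b, c],
                      [d, e, f],
                      [0., 0., 0.]])
    return torch.linalg.matrix_exp(G)

def rotation_error(M):
    """Relative Frobenius distance of the 2x2 linear part to its nearest rotation,
    via the orthogonal factor of the polar decomposition (SVD)."""
    A = M[:2, :2]
    U, _, Vh = torch.linalg.svd(A)
    return (torch.linalg.norm(A - U @ Vh) / torch.linalg.norm(A)).item()

print(rotation_error(decoupled_affine(0., 0.3, 0., -0.3, 0., 0.)))  # ~0: exact rotation
print(rotation_error(decoupled_affine(0., 0.3, 0., 0.3, 0., 0.)))   # clearly > 0: not a rotation
```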
**Learning selective invariance for MNIST**: We test our model's selective invariance ability on the MNIST dataset (specifically \(0\),\(1\),\(5\),\(6\),\(9\)) and visualize the augmentation range for a few examples as well as class averages (see Figure 8). For digits \(0\), \(1\), and \(5\), which can be recognized from any rotation, the learned rotation range corresponds to the entire \(360^{\circ}\), whereas for \(6\) and \(9\), which may be confused with each other, the range is only \(180^{\circ}\). In contrast, Augerino learns a constant range. We find the same trend at the class level.
**Generalizing invariance across classes**: Zhou _et al_. [13] shows that invariances learned from head classes fail to transfer to tail classes. This is a major drawback of traditional data augmentation. We test generalization across classes by plotting the same metric as [13] (expected KL divergence) across a range of rotations for CIFAR10-LT and RotMNIST-LT classifiers.
Figure 8: Our method learns flexible instance-wise augmentation distributions. We illustrate learned invariance for a subset of MNIST digits (0,1,5,6,9). The classes \(0\),\(1\),\(5\) can be learned with full invariance, whereas \(6\) and \(9\) require partial invariance (\(\pm 90^{\circ}\)). Our model (top) can learn the correct instance-dependent range, whereas Augerino (middle) instead learns a much narrower shared invariance for all classes. (bottom) A plot of the classwise learned rotational invariance for our model over time. Classes 0, 1, and 5 achieve close to full rotational invariance, whereas 6 and 9 achieve close to \(\pm 90^{\circ}\) rotational invariance.
Figure 9: Invariance transfer from head classes to tail classes in imbalanced classification. We follow Zhou _et al_. [13] (Fig 3) and plot the expected KL-divergence under image rotations for RotMNIST-LT and CIFAR10-LT (lower is better). RotMNIST-LT is a long-tail version of the MNIST dataset where each image has been randomly rotated. As Zhou _et al_. [13] shows, neural networks learn rotational invariance for head classes (indicated by low eKLD) but fail to transfer this invariance to tail classes. This problem persists for Augerino to a lesser extent. In contrast, our method successfully transfers invariance across classes. This effect is even more pronounced for CIFAR10-LT (\(\pm 10^{\circ}\) rotations)
Since RotMNIST-LT is a rotationally invariant dataset, we rotate all the images randomly in the \(\pm 180^{\circ}\) range, whereas for CIFAR10-LT we use a \(\pm 10^{\circ}\) range. Our model achieves significantly lower eKLD, especially for tail classes (Figure 9), indicating higher robustness.
**Aligning image datasets like in Congealing [2]**: We apply the mean-shift algorithm using the augmentation distribution trained on the Mario-Iggy (\(45^{\circ}\)) dataset. The Mario-Iggy dataset contains rotated versions of the Mario image with one unknown prototype, making it ideal for this test.
For each image, we apply the mean-shift algorithm. Each step moves the image closer to the local mode. We apply this procedure for \(50\) iterations for every image separately. This process results in all the images in a small neighborhood agglomerating to the local prototype (Figure 10).
We also tested this approach on MNIST, an out-of-distribution dataset for the Mario-Iggy model, and added \(\pm 45^{\circ}\) rotations for an additional challenge. Surprisingly, the method still aligns images and discovers prototypes (Figure 10) despite not being trained on any MNIST images.
**Robustness to out-of-distribution poses**: We benchmark our model's ability to handle out-of-distribution poses on CIFAR10 and measure how the mean-shift method helps the model adapt to unexpected poses. We plot the classification accuracy curves in Figure 10 as the inputs rotate. For the modified mean-shift method, we use \(100\) transform samples, \(\gamma=0.1\), and \(10\) iterations. The fully-invariant baseline is robust but inaccurate. Augerino, which induces invariance to a small range of rotations, fails for large rotations. Our model without mean-shift also fails under large rotations. However, with mean-shift, it is accurate and robust.
**Summary**: We propose using normalizing flows to learn instance-wise distributions of image transformations. This helps us build robust, better-generalizing classifiers, perform test-time alignment, discover prototypes, transfer invariance, and achieve higher test accuracy. These results highlight the potential of flexible, adaptive, and general invariance in computer vision.
**Acknowledgements**: This work was supported, in part, by the BAIR/Google fund.
Figure 10: The conditional augmentation distribution can be used to align an image dataset, discover prototypes similar to congealing [2], and adapt to out-of-distribution poses. **(a)** Example of the mean shift algorithm aligning a digit belonging to an unseen class. **(b-d)** Figures showing the modified mean-shift algorithm. For a given input, we repeatedly compute the mean of the conditional transform distribution and perturb the input in that direction, pushing the input close to a local mode. As a result, the mean of the transformation distribution slowly shifts to \(0\) while the estimated pose gets closer to the true pose. **(e)** Modified mean-shift algorithm can add robustness against unexpected poses without reducing accuracy. We plot each CIFAR10 model’s accuracy as images rotate at test time. Augerino is susceptible to large rotations since they are out-of-distribution for CIFAR10. The baseline trained with augmentations is robust but inaccurate. Our method with mean-shift achieves high accuracy for both in-distribution and out-of-distribution rotations. **(f)** Demonstration of an augmentation distribution aligning rotated (\(\pm 90^{\circ}\)) versions of a single image. We separately apply mean-shift to each rotated image and observe that they converge to the same mode. Unlike [2], there is no joint optimization, and each image is “aligned” separately. This alignment also works for MNIST images even though the model has only trained on Mario-Iggy. **(g)** We apply the model trained on Mario-Iggy to align each class in the MNIST test set, and we make the task more challenging by adding \(\pm 45^{\circ}\) rotations to each image. The top row shows the average class image before alignment, and the bottom row shows images after alignment. We successfully discover prototypes for \(0,1,3,8,9\), whereas for classes like \(4,6\), the model fails due to multiple possible modes. |
2305.19694 | Hypothesis Transfer Learning with Surrogate Classification Losses:
Generalization Bounds through Algorithmic Stability | Hypothesis transfer learning (HTL) contrasts domain adaptation by allowing
for a previous task leverage, named the source, into a new one, the target,
without requiring access to the source data. Indeed, HTL relies only on a
hypothesis learnt from such source data, relieving the hurdle of expansive data
storage and providing great practical benefits. Hence, HTL is highly beneficial
for real-world applications relying on big data. The analysis of such a method
from a theoretical perspective faces multiple challenges, particularly in
classification tasks. This paper deals with this problem by studying the
learning theory of HTL through algorithmic stability, an attractive theoretical
framework for machine learning algorithms analysis. In particular, we are
interested in the statistical behaviour of the regularized empirical risk
minimizers in the case of binary classification. Our stability analysis
provides learning guarantees under mild assumptions. Consequently, we derive
several complexity-free generalization bounds for essential statistical
quantities like the training error, the excess risk and cross-validation
estimates. These refined bounds allow understanding the benefits of transfer
learning and comparing the behaviour of standard losses in different scenarios,
leading to valuable insights for practitioners. | Anass Aghbalou, Guillaume Staerman | 2023-05-31T09:38:21Z | http://arxiv.org/abs/2305.19694v2 | # Hypothesis Transfer Learning with Surrogate Classification Losses:
###### Abstract
_Hypothesis transfer learning_ (HTL) contrasts domain adaptation by allowing for a previous task leverage, named the source, into a new one, the target, without requiring access to the source data. Indeed, HTL relies only on a hypothesis learnt from such source data, relieving the hurdle of expansive data storage and providing great practical benefits. Hence, HTL is highly beneficial for real-world applications relying on big data. The analysis of such a method from a theoretical perspective faces multiple challenges, particularly in classification tasks. This paper deals with this problem by studying the learning theory of HTL through algorithmic stability, an attractive theoretical framework for machine learning algorithms analysis. In particular, we are interested in the statistical behaviour of the regularized empirical risk minimizers in the case of binary classification. Our stability analysis provides learning guarantees under mild assumptions. Consequently, we derive several complexity-free generalization bounds for essential statistical quantities like the training error, the excess risk and cross-validation estimates. These refined bounds allow understanding the benefits of transfer learning and comparing the behaviour of standard losses in different scenarios, leading to valuable insights for practitioners.
## 1 Introduction
Traditional supervised machine learning methods share the common assumption that training data and test data are drawn from the same underlying distribution. However, this assumption is often too restrictive to hold in practice. In many real-world applications, a hypothesis is learnt and deployed in different environments that exhibit a distributional shift. A more realistic assumption is that the marginal distributions of training (_source_) and testing (_target_) domains are different but related. This is the framework of _domain adaptation_ (DA), where the learner is provided little or no labeled data from the target domain but a large amount of data from the source domain. This problem arises in various real-world applications like natural language processing (Dredze et al., 2007; Ruder et al., 2019), sentiment analysis (Blitzer et al., 2007; Liu et al., 2019), robotics (Zhang et al., 2012; Bousmalis et al., 2018) and many other areas.
Several works shed light on the theory of DA (Blitzer et al., 2007; Mansour et al., 2009; Ben-David et al., 2010; Zhang et al., 2012; Cortes et al., 2015; Zhang et al., 2019) and suggest schemes that generally rely on minimizing some similarity distances between the source and the target domains. However, the theoretical analysis shows that a DA procedure needs many unlabeled data from both domains to be efficient. Besides, even when unlabeled data are abundant, minimizing a similarity distance can be time-consuming in many scenarios.
To tackle this practical limitation, a new framework that relies only on the source hypothesis was introduced, the so-called _hypothesis transfer learning_ (HTL) (Li & Bilmes, 2007; Orabona et al., 2009; Kuzborskij & Orabona, 2013; Perrot & Habrard, 2015; Kuzborskij & Orabona, 2017; Du et al., 2017). HTL is tailored to the scenarios where the user has no direct access to the source domain nor to the relatedness between the source and target environments. As a direct consequence, HTL does not introduce any assumptions about the similarity between the source and target distributions. It has the advantage of not storing abundant source data in practice.
In this work, we analyze HTL through Regularized Empirical Risk Minimization (RERM) in the binary classification framework. Our working assumptions encompass many widely used _surrogate_ losses, such as the exponential loss used by several boosting algorithms like AdaBoost (Freund & Schapire, 1997), the logistic loss, the softplus loss, which serves as a smooth approximation of the hinge loss
(Dugas et al., 2000), the mean squared error (MSE) and the squared hinge that represents the default losses for least squares/modified least squares algorithms (Rifkin et al., 2003). The attractive quality of these surrogate losses is that they are _classification calibrated_(Zhang, 2004; Bartlett et al., 2006). In other words, they represent a convex upper bound for the classification error and minimizing the expected risk regarding a surrogate loss yields a predictor with sound accuracy.
This paper's theoretical analysis uses the notion of _algorithmic stability_. Formally, assuming that one has access to a small labeled set, we derive many complexity-free generalisation bounds that depend only on the source hypothesis's quality. In particular, such an analysis allows us to compare the behavior of different losses in different scenarios and to answer some practical questions such as: _which surrogate loss is recommended when the source and target domains are related? Which surrogate loss is robust to heavy distribution shift?_
The notion of algorithmic stability and its consequences in learning theory has received much attention since its introduction in (Devroye and Wagner, 1979). It allows obtaining complexity-free generalization bounds for a large class of learning algorithms such as k-nearest-neighbours (Devroye and Wagner, 1979), empirical risk minimizers (Kearns and Ron, 1999), Support Vector Machine (Bousquet and Elisseeff, 2002), Bagging (Elisseeff et al., 2005), RERM (Zhang, 2004; Wibisono et al., 2009), stochastic gradient descent (Hardt et al., 2016), neural networks with a simple architecture (Charles and Papailiopoulos, 2018), to name but a few. For an exhaustive review of the different notions of _stability_ and their consequences on the generalization risk of a learning algorithm, the reader is referred to (Kutin and Niyogi, 2002).
Only a few works derive theoretical guarantees for RERM in the HTL framework, and they are all formalized in a regression setting. A stability analysis has been provided for the HTL algorithm in the case of RLS for regression in Kuzborskij and Orabona (2013), limited to the least-squares loss. Later, Kuzborskij and Orabona (2017) considered the class of smooth losses and obtained statistical rates on the empirical risk, being a particular case of the stability guarantees. However, this smoothness assumption may be considered strong since it is not satisfied for hypotheses learnt from the exponential loss or vacuously satisfied for hypotheses learnt from the softplus loss. Besides, Du et al. (2017) proposed a novel algorithm to adapt the source hypothesis to the target domain. Nonetheless, the theoretical guarantees they derived are obtained with several strong assumptions, unverifiable in practice. The obtained bounds depend on many unknown parameters (for further details, see Section 3, where all these assumptions are explicitly listed and discussed). Other theoretical results studying HTL outside the framework of RERM can be found (Li and Bilmes, 2007; Morvant et al., 2012; Perrot and Habrard, 2015; Dhouib and Redko, 2018). However, most of these theoretical results depend on a complexity/distance measure and/or are valid in a different framework than classification. For example, Perrot and Habrard (2015) explores the notion of algorithmic stability in _metric learning_ with Lipschitz loss functions to study the excess risk of some algorithms. The obtained bounds are not intuitive as they depend on the Lipschitz constant and cannot be easily extended to many usual classification losses. Furthermore, the proof techniques in the latter work are far from ours.
On the other hand, when the source is known, many theoretical guarantees can be found in the domain adaptation literature, see e.g. Mansour et al. (2009); Ben-David et al. (2010); Zhang et al. (2012); Cortes et al. (2015) and Zhang et al. (2019), among others. Their rates involve the complexity of the hypothesis class and the distance between the source and the target distribution that may be unknown in practice and drastically deteriorate the rates.
Another related subject is _meta learning_, broadly described as leveraging data from pre-existing tasks to derive algorithms or representations that yield superior results on unencountered tasks. Many theoretical works such as (Khodak et al., 2019; Balcan et al., 2019; Denevi et al., 2019) or (Denevi et al., 2020) have studied this problem. Yet, the theoretical guarantees obtained in the latter works depend on the smoothness parameters of the loss function and the regularizers. The proof techniques from the present paper can be incorporated into the proofs of the latter references to obtain sharper and more intuitive learning bounds, that is, bounds exclusively depending on the quality of the source hypothesis.
**Contributions** In this paper, we investigate the statistical risk of the hypothesis transfer learning procedure dedicated to the binary classification task. To that end, we adopt the angle of algorithmic stability, which offers an appealing theoretical framework to analyze such a method. This is the first work exploring algorithmic stability for HTL with the usual classification loss functions. Specifically, we provide a (pointwise) hypothesis stability analysis of HTL in the classification framework for any losses satisfying mild conditions. Furthermore, we show that our main assumptions are valid for the most popular classification losses and derive their associated constants. Based on these stability results, we investigate the statistical behavior of the generalization gap and the excess risk of the HTL procedure. We provide an intuitive finite-sample analysis of these quantities and highlight the statistical behavior of common losses.
## 2 Background and Preliminaries
In this section, we start by recalling the framework of Hypothesis transfer learning and describe the concept of stability.
### Hypothesis Transfer Learning
Considering the source and target domains, hypothesis transfer learning leverages the learnt hypothesis with the source dataset, without having access to the raw source data or any information between source and target domains, to solve a machine learning task on the target domain. Formally, we denote by \(\mathcal{Z}_{S}\) and \(\mathcal{Z}_{T}\) the source and target domains and assume that we have access to \(n\in\mathbb{N},n\geq 1\) i.i.d. observations \(\mathcal{D}_{T}=Z_{1},\dots,Z_{n}\in\mathcal{Z}_{T}\) with a distribution \(P_{T}\) lying in the target domain and a source hypothesis \(h_{S}\) learnt from \(m\in\mathbb{N},m\geq 1\) i.i.d. observations \(\mathcal{D}_{S}=Z_{1}^{S},\dots,Z_{m}^{S}\in\mathcal{Z}_{S}\) drawn from the source distribution \(P_{S}\). In the HTL framework, we do not have access to the source observations but only to the resulting source hypothesis \(h_{S}\). It is worth noting that \(n\ll m\) in many practical scenarios. In this paper, we focus on the binary classification task. Therefore, our domains consist of a Cartesian product of a source/target covariate space \(\mathcal{X}_{S}/\mathcal{X}_{T}\) and the set \(\{-1,1\}\), i.e. \(\mathcal{Z}_{S}=\mathcal{X}_{S}\times\{-1,1\}\) and \(\mathcal{Z}_{T}=\mathcal{X}_{T}\times\{-1,1\}\). In addition, we assume that \(\mathcal{X}_{T}\subset\mathcal{X}_{S}\subset\mathbb{R}^{d}\). Consider two classes of hypotheses \(\mathcal{H}_{S}\) and \(\mathcal{H}_{T}\), an HTL algorithm aims to use a source hypothesis \(h_{S}\in\mathcal{H}_{S}\) learnt on \(\mathcal{D}_{S}\) to improve the performance of a classification algorithm over \(\mathcal{D}_{T}\). Precisely, it is defined as a map
\[\mathcal{A}:(\mathcal{Z}_{T})^{n}\times\mathcal{H}_{S} \rightarrow\mathcal{H}_{T}\] \[(\mathcal{D}_{T},h_{S}) \mapsto h_{T}.\]
Throughout the paper, we assume that \(h_{S}\) is given and fixed, and we use the shorthand notation \(\mathcal{A}(\mathcal{D}_{T})\) instead of \(\mathcal{A}(\mathcal{D}_{T},h_{S})\) for the sake of clarity.
Let \(\ell:\mathcal{H}_{T}\times\mathcal{Z}_{T}\mapsto\mathbb{R}_{+}\) denote a loss function so that \(\ell(h_{T},Z)\) is the error of \(h_{T}\in\mathcal{H}_{T}\) on the observation \(Z=(X,Y)\in\mathcal{Z}_{T}\). In this work, we assume that \(\ell(h_{T},Z)=\phi\left(h_{T}(X)Y\right)\) for some non negative convex function \(\phi\). The generalization risk of the predictor \(\mathcal{A}(\mathcal{D}_{T})\) is denoted by
\[\mathcal{R}\big{[}\mathcal{A}\left(\mathcal{D}_{T}\right)\big{]} =\mathbb{E}_{Z\sim P_{T}}\left[\ell\left(\mathcal{A}\left(\mathcal{ D}_{T}\right),Z\right)\right]\] \[=\mathbb{E}\left[\ell\left(\mathcal{A}\left(\mathcal{D}_{T}\right), Z\right)\mid\mathcal{D}_{T}\right].\]
Notice that the randomness in the latter expectation stems from the novel observation \(Z\) only while the trained algorithm \(\mathcal{A}(\mathcal{D}_{T})\) is fixed. Its empirical counterpart, the _training_ error of \(\mathcal{A}\left(\mathcal{D}_{T}\right)\) writes as
\[\widehat{\mathcal{R}}\big{[}\mathcal{A}(\mathcal{D}_{T})\big{]}=\frac{1}{n} \sum_{i=1}^{n}\ell(\mathcal{A}(\mathcal{D}_{T}),Z_{i}).\]
The latter estimate is known to be optimistic since most learning algorithms are conceived to minimize the training loss. Thus, a more reliable estimate would be the _deleted_ estimate or the so-called leave-one-out (_l.o.o._) estimate:
\[\widehat{\mathcal{R}}_{\mathrm{loo}}\big{[}\mathcal{A}(\mathcal{D}_{T})\big{]} =\frac{1}{n}\sum_{i=1}^{n}\ell\left(\mathcal{A}(\mathcal{D}_{T}^{\setminus i }),Z_{i}\right), \tag{2.1}\]
where \(\mathcal{D}_{T}^{\setminus i}=\mathcal{D}_{T}\setminus\{Z_{i}\}\) denotes the dataset \(\mathcal{D}_{T}\) with the \(i\)'th element removed.
**Remark 2.1** (accelerated _l.o.o._).: _At first sight, one can notice that computing the l.o.o. risk measure is a heavy task in practice since one needs to train the algorithm \(n\) times. However, in our case, one can use the closed form formula of the l.o.o. estimate for RERM algorithms derived in Wang et al. (2018)._
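For illustration, a naive version of the leave-one-out estimate of Equation (2.1), which simply retrains \(n\) times, is sketched below; the closed-form accelerated variant mentioned in the remark is not reproduced here. `train` and `loss` are placeholder callables.

```python
import numpy as np

def loo_estimate(train, loss, D):
    """Naive leave-one-out estimate: retrain on D without Z_i, evaluate on Z_i.

    train : maps a list of observations to a hypothesis
    loss  : maps (hypothesis, observation) to a real number
    D     : list of observations Z_1, ..., Z_n
    """
    errors = [loss(train(D[:i] + D[i + 1:]), D[i]) for i in range(len(D))]
    return float(np.mean(errors))
```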
### Algorithmic Stability
In this part, we briefly recall important notions of stability that will be used in the paper. The notion of _stability_ was first introduced in Devroye and Wagner (1979) to derive non-asymptotic guarantees for the leave-one-out estimate. Let denote by \([n]\) the set of indices \(\{1,\dots,n\}\). The algorithm \(\mathcal{A}\) is called stable if removing a training point \(Z_{i}\), \(i\in[n]\), from the \(\mathcal{D}_{T}\) or replacing \(Z_{i}\) with an independent observation \(Z^{\prime}\) drawn from the same distribution does not alter the risk of the output. Later, Bousquet and Elisseeff (2002) introduced the strongest notion of stability, namely _uniform stability_, an assumption used to derive probability upper bounds for the training error and the _l.o.o._ estimate (Bousquet and Elisseeff, 2002; Elisseeff et al., 2005; Hardt et al., 2016; Bousquet et al., 2020; Klochkov and Zhivotovskiy, 2021). Equipped with the above notations, _uniform_ stability, also called _leave-one-out_ stability, can be defined as follows.
**Definition 2.1**.: The algorithm \(\mathcal{A}\) is said to be \(\beta(n)\)_-uniformly_ stable with respect to a loss function \(\ell\) if, for any \(i\in[n]\) and \(Z\in\mathcal{Z}_{T}\), it holds:
\[\left|\ell\left(\mathcal{A}(\mathcal{D}_{T}),Z\right)-\ell\left(\mathcal{A}( \mathcal{D}_{T}^{\setminus i}),Z\right)\right|\leq\beta(n).\]
In practice, uniform stability may be too restrictive since the bound above must hold for all \(Z\), irrespective of its marginal distribution. While weaker, the following notion of stability is still enough to control the leave-one-out deviations (Devroye and Wagner, 1979; Bousquet and Elisseeff, 2002; Elisseeff et al., 2005; Kuzborskij and Orabona, 2013).
**Definition 2.2**.: The algorithm \(\mathcal{A}\) has a _hypothesis_ stability \(\beta(n)\) with respect to a loss function \(\ell\) if, for any \(i\in[n]\), it holds:
\[\left\|\ell\left(\mathcal{A}(\mathcal{D}_{T}),Z\right)-\ell\left(\mathcal{A}( \mathcal{D}_{T}^{\setminus i}),Z\right)\right\|_{1}\leq\beta(n),\]
where \(\left\|X\right\|_{q}=\left(\mathbb{E}\left[\left|X\right|^{q}\right]\right)^{1/q}\) is the \(L_{q}\) norm of \(X\).
We now recall a direct analogue of hypothesis stability: the _pointwise hypothesis stability_. The latter property is used to derive PAC learning bounds for the training error (Bousquet & Elisseeff, 2002; Elisseeff et al., 2005; Charles & Papailiopoulos, 2018).
**Definition 2.3**.: The algorithm \(\mathcal{A}\) has a _pointwise hypothesis_ stability \(\gamma(n)\) with respect to a loss function \(\ell\) if, for any \(i\in[n]\), it holds:
\[\left\|\ell\left(\mathcal{A}(\mathcal{D}_{T}),Z_{i}\right)-\ell\left( \mathcal{A}(\mathcal{D}_{T}^{\setminus i}),Z_{i}\right)\right\|_{1}\leq\gamma (n).\]
Note that the approach based on stability does not refer to a complexity measure like the VC dimension or the Rademacher complexity. There is no need to prove uniform convergence, and the generalization error (cf. Equation 4.1 below) depends directly on the stability parameter. Our work aims to use the notion of algorithmic stability to derive sharper bounds for the HTL problem. More precisely, the magnitude of the obtained bounds is directly related to the quality of \(h_{S}\) on the target domain (represented by \(\mathcal{R}[h_{S}]\)) instead of the complexity of the hypothesis class (Ben-David et al., 2010; Zhang et al., 2012; Cortes et al., 2015; Zhang et al., 2019).
### Working Framework
This paper analyses hypothesis transfer learning through regularised empirical risk minimization (RERM). In particular, it includes the popular Regularized Least Squares (RLS) with biased regularization (Orabona et al., 2009) that has been analyzed in Kuzborskij & Orabona (2013) and Kuzborskij & Orabona (2017). Formally, we consider the following algorithm \(\mathcal{A}\) such that:
\[\mathcal{A}(\mathcal{D}_{T},h_{S})=\hat{h}(\cdot\,;\mathcal{D}_{T})+h_{S}( \cdot), \tag{2.2}\]
where the function \(\hat{h}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is obtained from the target set of data via the minimization problem:
\[\hat{h} =\operatorname*{arg\,min}_{h\in\mathcal{H}}\frac{1}{n}\sum_{i=1}^ {n}\phi\left(\left(h\left(X_{i}\right)+h_{S}\left(X_{i}\right)\right)Y_{i} \right)+\lambda\|h\|_{k}^{2}\] \[=\operatorname*{arg\,min}_{h\in\mathcal{H}}\widehat{\mathcal{R}} (h+h_{S})+\lambda\|h\|_{k}^{2}, \tag{2.3}\]
with the family of hypotheses \(\mathcal{H}\) being a reproducing kernel Hilbert space (RKHS) endowed with a kernel \(k\), an inner product \(\langle\cdot,\cdot\rangle\) and a norm \(\|\cdot\|_{k}\). The resulting map arising from the HTL is the sum of the source hypothesis \(h_{S}\) and the target hypothesis \(\hat{h}\) where \(\hat{h}\) is learnt involving the source map.
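By the representer theorem, the correction \(\hat{h}\) in (2.3) can be written as \(\hat{h}(x)=\sum_{j}c_{j}k(X_{j},x)\), so only the coefficient vector needs to be learned. The following gradient-descent sketch illustrates this, assuming the Gram matrix, the source scores \(h_{S}(X_{i})\), and the labels are given; it is not the authors' implementation, and the hyperparameters are placeholders.

```python
import torch

def fit_target_correction(K, s, y, phi, lam=0.1, lr=0.05, n_steps=500):
    """Solve (2.3) over coefficients c with h(x) = sum_j c_j k(X_j, x).

    K   : (n, n) kernel Gram matrix on the target sample
    s   : (n,) source scores h_S(X_i)
    y   : (n,) labels in {-1, +1}
    phi : elementwise surrogate loss applied to the margins
    """
    n = K.shape[0]
    c = torch.zeros(n, requires_grad=True)
    opt = torch.optim.SGD([c], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        margins = (K @ c + s) * y                            # (h(X_i) + h_S(X_i)) Y_i
        objective = phi(margins).mean() + lam * (c @ K @ c)  # empirical risk + lambda ||h||_k^2
        objective.backward()
        opt.step()
    return c.detach()

# Example with the logistic surrogate phi(x) = log(1 + exp(-x)):
# c = fit_target_correction(K, s, y, lambda m: torch.log1p(torch.exp(-m)))
```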
It is worth noting that our analysis encompasses the least square with biased regularization (Scholkopf et al., 2001; Orabona et al., 2009) commonly studied in transfer learning (Kuzborskij & Orabona, 2013, 2017), briefly recalled below.
**Remark 2.2** (link with RLS).: _The RLS with biased regularization is a particular case of the proposed algorithm 2.2. Indeed, by choosing \(k\) as the linear kernel \(k(x_{1},x_{2})=x_{1}^{\top}x_{2}\) and the loss \(\phi(x)=\left(1-x\right)^{2}\), it is equivalent to_
\[\mathcal{A}=\hat{h}+h_{S},\]
_with \(\hat{h}(x)=\hat{u}^{\top}x\) and_
\[\hat{u}=\operatorname*{arg\,min}_{u\in\mathbb{R}^{d}}\frac{1}{n}\sum_{i=1}^{n }\left(u^{\top}X_{i}+h_{S}(X_{i})-Y_{i}\right)^{2}+\lambda\|u\|_{2}^{2}. \tag{2.4}\]
_Furthermore, if \(h_{S}(x)=v^{\top}x\) is a linear classifier with \(v\in\mathbb{R}^{d}\), then_
\[\hat{u}=\operatorname*{arg\,min}_{u\in\mathbb{R}^{d}}\frac{1}{n}\sum_{i=1}^{ n}\left(u^{\top}X_{i}-Y_{i}\right)^{2}+\lambda\|u-v\|_{2}^{2},\]
_which is the original form of biased regularisation algorithms (Scholkopf et al., 2001; Orabona et al., 2009). See Appendix A.1 for technical details._
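For the RLS case of Remark 2.2, the minimizer of (2.4) has a closed form: setting the gradient to zero gives \((X^{\top}X/n+\lambda I)\hat{u}=X^{\top}(Y-h_{S}(X))/n\). A short sketch:

```python
import numpy as np

def rls_biased(X, y, source_scores, lam=0.1):
    """Closed-form solution of (2.4): ridge regression on the residuals y - h_S(X)."""
    n, d = X.shape
    A = X.T @ X / n + lam * np.eye(d)
    b = X.T @ (y - source_scores) / n
    return np.linalg.solve(A, b)
```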
## 3 Stability Analysis
The subsequent analysis requires technical assumptions, listed below. We assume that the source hypothesis and the kernel \(k\) are bounded, as stated in the following assumptions.
**Assumption 1**.: _The source hypothesis is bounded on the target space:_
\[\left\|h_{S}\right\|_{\infty}=\sup_{x\in\mathcal{X}_{T}}|h_{S}(x)|<\infty.\]
**Assumption 2**.: _The kernel \(k\) is bounded:_
\[\sup_{x_{1},x_{2}\in\mathcal{X}_{T}}k(x_{1},x_{2})\leq\kappa.\]
The boundedness of the kernel is a common and mild assumption (see e.g. Bousquet & Elisseeff, 2002; Zhang, 2004; Wibisono et al., 2009). It is satisfied by many usual kernels like the Gaussian kernel and the sigmoid kernel. Furthermore, when \(\mathcal{X}_{T}\) is bounded, then polynomial kernels are also bounded.
We now investigate the accuracy of the HTL proposed framework and provide general stability results under slight assumptions. Furthermore, we show that these assumptions are satisfied by most of the popular ML surrogate losses used in practice and derive precisely the associated constants involved in our theoretical results.
### Hypothesis Stability
This section analyzes the hypothesis stability of general surrogate ML losses for the proposed HTL framework. To study the stability of Algorithm 2.2, we start by showing that the solution of the optimization problem 2.3 lies in a ball with a data-driven radius, as stated in the following lemma.
**Lemma 3.1**.: _Suppose that Assumptions 1 and 2 are satisfied. Then the solution of Equation (2.3) lies in the set \(\{h\in\mathcal{H},\;\|h\|_{\infty}\leq\hat{r}_{\lambda}\}\) with_
\[\hat{r}_{\lambda}=\kappa\sqrt{\alpha\hat{\mathcal{R}}\left[h_{S} \right]},\]
_where \(\alpha=\kappa/\lambda\)._
Proof.: The proof is postponed to the Appendix B.1.
This lemma ensures that the norm of the solution of the optimisation problem 2.3 decreases when the quality of \(h_{S}\) increases. In the rest of the paper, for a given index \(i\in[n]\), we denote by \(\hat{r}_{\lambda}^{i}=\kappa\sqrt{\alpha\hat{\mathcal{R}}^{\setminus i}\left[h _{S}\right]}\), \(\hat{\mathcal{R}}^{\setminus i}\) the training error with the \(i\)'th sample removed and \(\hat{\rho}_{\lambda}^{i}=\max\left(\hat{r}_{\lambda},\hat{r}_{\lambda}^{i}\right)\).
Before stating our main theorem, we first require an additional assumption involving the empirical radius obtained in Lemma 3.1.
**Assumption 3**.: _The function \(\phi\) is differentiable and convex. Furthermore, \(\forall i\in[n]\), it holds:_
\[\mathbb{E}\left[\sup_{|y^{\prime}|,|y|\leq\hat{\rho}_{\lambda}^{ i}}|\phi^{\prime}(h_{S}(X^{\prime})Y^{\prime}+y^{\prime})\phi^{\prime}(h_{S}(X)Y+ y)|\right]\\ \leq\Psi_{1}\left(\mathcal{R}\left[h_{S}\right]\right),\]
_where \(Z=(X,Y)\), \(Z^{\prime}=(X^{\prime},Y^{\prime})\) are two samples drawn from \(P_{T}\) independent of \(\mathcal{D}^{\setminus i}\) and \(\Psi_{1}\) is a decreasing function verifying \(\Psi_{1}(0)=0\)._
The bound stated in the theorem below reveals the generalisation properties of the presented HTL procedure through the stability framework.
**Proposition 3.1**.: _Suppose that Assumptions 1, 2 and 3 are satisfied. Then the algorithm \(\mathcal{A}\) (cf. Equation (2.2)) is hypothesis stable with parameter_
\[\beta(n)=\frac{\alpha\left(\Psi_{1}\left(\mathcal{R}\left[h_{S} \right]\right)\wedge\left\|\phi^{\prime}\right\|_{\infty}^{2}\right)}{n}.\]
Proof.: The proof is postponed to the Appendix B.2.
We obtain a stability rate of order \(\mathcal{O}\left(\frac{\Psi_{1}\left(\mathcal{R}\left[h_{S}\right]\right)\alpha}{n}\right)\) for any loss satisfying Assumption 3. It naturally depends on the risk of the source classifier, where the expectation is taken over the target data distribution. Therefore, the source task directly influences the rate of the HTL classifier. The standard stability rate of RERM without transfer learning (without source) is of order \(\mathcal{O}(\alpha/n)\), see Theorem 4.3 in Zhang (2004) or Theorem 3.5 in Wibisono et al. (2009). A relevant source hypothesis therefore allows us to obtain faster rates than in standard RERM. Thus, one can directly notice the benefits of using a _good_ source hypothesis on the stability of RERM. Negative transfer, i.e., when the source hypothesis has a negative effect and deteriorates the target learner, is analyzed and discussed in Section 4.1.
**Remark 3.1** (Related Work).: _The only existing result studying hypothesis stability in HTL is in Kuzborskij & Orabona (2013). However, the analysis is only in a regression framework with the mean squared error loss. The proof techniques in Kuzborskij & Orabona (2013) rely heavily on the closed-form formulas of the ordinary least square estimate, which does not hold in a general setting like ours. Furthermore, we obtain equivalent (up to constants) stability rates as in Kuzborskij & Orabona (2013). More details are given in Section 3.3 where we explicit constants \(\Psi_{1}\) for most of popular losses._
**Existing assumptions in DA and HTL literature** Statistical guarantees obtained in these fields generally assume that the loss function verifies a smoothness condition. For example, in Mansour et al. (2009) and Cortes et al. (2015), the analysis supposes that \(\ell\) verifies the triangle inequality, which holds only for the MSE and squared hinge. Moreover, the obtained upper bounds in these works depend on the complexity of \(\mathcal{H}\) and some discrepancy distances between the source and target distributions \(P_{S}\) and \(P_{T}\), which deteriorates the statistical rates. In Kuzborskij & Orabona (2017), they suppose that the derivative of the loss is Lipschitz, which is not the case for the exponential loss. Furthermore, even if the loss satisfies this smoothness assumption, their constants depend heavily on the smoothness parameter, and it would yield vacuous bounds in many practical situations. For example, the softplus function \(\psi_{s}(x)=s\log(1+e^{\frac{1-x}{s}})\) with small values of \(s\) serves as an approximation of the hinge loss \(\max(0,1-x)\) and is \(1/s\) Lipschitz. This function converges to the hinge loss when \(s\to 0\), and typical choices of \(s\) are close to \(0\). Therefore, the Lipschitz constant of the derivative \(1/s\) verifies \(1/s\gg 1\), and the bounds from Kuzborskij & Orabona (2017) become vacuous. Besides, Du et al. (2017) made several assumptions about the true regression function of both the source and target domains. To clarify, by the true regression function, \(f\), we refer to the actual model denoted by \(Y=f(X)\). However, these assumptions are challenging to empirically confirm due to their reliance on the real source and target distributions, which generally remain unknown. Moreover, the theoretical
guarantees achieved depend on several constants, also derived from the true distribution, which makes quantifying the bounds' magnitude a complex task.
To our best knowledge, the vast majority of existing theoretical results from the HTL literature have similar assumptions to those discussed above. However, in this work, our assumptions are flexible: we only require the differentiability of the loss and a _local_ majorant of the derivative, which will make the analysis more flexible and more suited for the usual classification losses.
To understand the intuition behind Assumption 3, notice that, when \(\mathcal{R}[h_{S}]\to 0\), \(\phi(h_{S}(X)Y)\) approaches its minimum and hence \(\phi^{\prime}(h_{S}(X)Y)\) approaches \(0\) (in expectation). Thus, the function \(\Psi_{1}\) can be seen as a function that dictates the rate of convergence of the derivative to \(0\) as \(h_{S}\) approaches the optimal hypothesis. One must note that the latter assumption is verified for many loss functions, namely any loss satisfying the inequality \(|\phi^{\prime}(x)|\leq\Psi(\phi(x))\) for some concave function \(\Psi\). The function \(\Psi\) effectively mediates between \(\phi\) and \(\phi^{\prime}\). As an example, in the context of the Mean Squared Error (MSE) loss, it is straightforwardly observable that \(|\phi^{\prime}(x)|\leq 2\sqrt{\phi(x)}\). Thus, \(\phi^{\prime}\) is directly linked to \(\phi(x)\) via the square root function.
**Remark 3.2** (score scaling).: _RERM for regression (cf. Equation 2.4) is equivalent to fitting a predictor on the residuals \(Y_{i}-h_{S}(X_{i})\). However, in the classification case, if we follow the standard approach that \(h_{S}:\mathcal{X}\mapsto\mathcal{Y}=\{-1,1\}\) is a binary classifier (Mansour et al., 2009; Cortes et al., 2015), then the latter residuals can only take the values \(0\) or \(\pm 2\). Thus, this will not provide enough information for many losses to improve the training. To see this, consider the example of the logistic loss and notice that \(\phi(1)=\log(1+e^{-1})\) and \(\phi(-1)=\log(1+e^{1})\). Therefore, in the best case scenario, \(\mathcal{R}[h_{S}]=\log(1+e^{-1})\), which is far from the minimum (that is zero). To tackle this problem, we suggest taking the score learned on the source, which is more informative, especially when the loss function used to train the algorithm on the source has the same minimum as the loss used to train on the target. Note that one can also think of transforming the score: for example, if \(\phi\) is the logistic loss \(\phi(x)=\log(1+e^{-x})\) and \(h_{S}\in]-1,1[\), we can use an increasing transformation to an interval \(]-C,C[\) with \(C\gg 1\) in order to adapt to the target loss, which is nearly \(0\) for large values of \(x\)._
### Pointwise Hypothesis Stability
To go further than the widely used hypothesis stability, we analyze our HTL problem through the lens of pointwise hypothesis stability. Results presented in this part will be the cornerstone of those shown in Section 4. To analyze the pointwise hypothesis stability of Algorithm 2.2, we require a direct analogue of Assumption 3, involving the data-driven radius provided in Lemma 3.1.
**Assumption 4**.: _The function \(\phi\) is differentiable and convex. Furthermore, \(\forall i\in[n]\), it holds:_
\[\mathbb{E}\left[\sup_{|y^{\prime}|,|y|\leq\beta_{\lambda}^{i}}| \phi^{\prime}(h_{S}(X)Y+y^{\prime})\phi^{\prime}(h_{S}(X)Y+y)|\right]\\ \leq\Psi_{2}\left(\mathcal{R}\left[h_{S}\right]\right).\]
_where \(Z=(X,Y)\) is a sample drawn from \(P_{T}\) independent of \(\mathcal{D}^{\setminus i}\) and \(\Psi_{2}\) is a decreasing function verifying \(\Psi_{2}(0)=0\)._
Under the latter assumption, the following proposition is obtained in a similar manner to Proposition 3.1.
**Proposition 3.2**.: _Suppose that Assumptions 1, 2 and 4 are satisfied. Then the algorithm \(\mathcal{A}\) (cf. Equation (2.2)) is pointwise hypothesis stable with parameter_
\[\gamma(n)=\frac{\alpha\left(\Psi_{2}\left(\mathcal{R}\left[h_{S}\right]\right) \wedge\left\|\phi^{\prime}\right\|_{\infty}^{2}\right)}{n}.\]
Proof.: The proof is postponed to the Appendix B.3.
Again, this result shows the benefits of using a good hypothesis on the pointwise hypothesis stability of RERM. This stability result, combined with that of Proposition 3.1, can be leveraged to propose new convergence results on the generalisation gap and the excess risk of this HTL problem for a wide class of losses, as shown in Section 4. In the sequel, we explicitly compute the functions \(\Psi_{1}\) and \(\Psi_{2}\) for many widely used classification losses.
### Deriving Constants for Popular Losses
As the results of Propositions 3.1 and 3.2 are general and stated for any losses satisfying Assumptions 3 and 4, the purpose of this part is to investigate our results with widespread machine learning losses. To that end, we first show that these assumptions are satisfied for the most popular losses. Second, we derive the constants involved in these two statistical rates. In particular, we focus on the following five losses (a short code sketch collecting them is given after the list):
* Exponential: \(\phi(x)=e^{-x}\).
* Logistic: \(\phi(x)=\log\left(1+e^{-x}\right)\).
* Mean Squared Error: \(\phi(x)=(1-x)^{2}\).
* Squared Hinge: \(\phi(x)=\max(0,1-x)^{2}\).
* Softplus: \(\phi_{s}(x)=s\log\left(1+e^{\frac{1-x}{s}}\right)\), for some \(s>0\).
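The snippet below is a minimal sketch, not the authors' code, that collects these five losses and their derivatives \(\phi^{\prime}\); the small numerical check illustrates that \(|\phi^{\prime}|\) stays bounded by \(1\) for the logistic and softplus losses, while it grows without bound for the exponential loss on strongly misclassified margins. The softplus parameter \(s=0.1\) is an example value.

```python
import numpy as np

# Margin-based surrogate losses phi(x) and their derivatives phi'(x)
# (illustrative sketch; s = 0.1 for the softplus is an example value).
losses = {
    "exponential": (lambda x: np.exp(-x),                    lambda x: -np.exp(-x)),
    "logistic":    (lambda x: np.log1p(np.exp(-x)),          lambda x: -1.0 / (1.0 + np.exp(x))),
    "mse":         (lambda x: (1.0 - x) ** 2,                lambda x: -2.0 * (1.0 - x)),
    "sq_hinge":    (lambda x: np.maximum(0.0, 1.0 - x) ** 2, lambda x: -2.0 * np.maximum(0.0, 1.0 - x)),
    "softplus":    (lambda x, s=0.1: s * np.log1p(np.exp((1.0 - x) / s)),
                    lambda x, s=0.1: -1.0 / (1.0 + np.exp((x - 1.0) / s))),
}

margins = np.linspace(-3.0, 3.0, 601)   # values of h(X) * Y
for name, (phi, dphi) in losses.items():
    print(f"{name:11s}  sup |phi'| on [-3, 3] = {np.abs(dphi(margins)).max():7.3f}")
# Logistic and softplus: sup |phi'| <= 1, so a poor source hypothesis cannot inflate
# the stability parameter. Exponential: |phi'(x)| = phi(x) = e^{-x}, which blows up
# for misclassified points with large negative margins.
```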
In the next proposition, we show that most classical losses verify Assumptions 3 and 4, and we detail their associated functions \(\Psi_{1}\) and \(\Psi_{2}\).
**Proposition 3.3**.: _The exponential, logistic, squared hinge, MSE and softplus losses satisfy Assumptions 3 and 4 with corresponding functions \(\Psi_{1}\) and \(\Psi_{2}\) listed in Table 1._
Proof.: The proof is postponed to the Appendix B.4.
This result shows that the bounds derived in Propositions 3.1 and 3.2 are therefore valid under mild assumptions. Indeed, our results only require the kernel and the source hypothesis to be bounded, which is classical in the HTL framework. Thus, we obtain the first stability result in HTL without limiting assumptions, which remains valid in a practical setting.
As shown in Table 1, the functions \(\Psi_{1}\) and \(\Psi_{2}\) are linear for the squared hinge and the MSE losses. Besides, for the softplus and logistic losses, we have \(\left\|\phi^{\prime}\right\|_{\infty}=1\) and their stability parameters are capped by \(\alpha/n\). Thus, the impact of an irrelevant source hypothesis \(h_{S}\) with large \(\mathcal{R}[h_{S}]\) on the stability of RERM remains negligible when using these losses. In contrast, for the exponential loss, the functions \(\Psi_{1}\) and \(\Psi_{2}\) are roughly exponential, and the corresponding convergence rate deteriorates quickly as \(\mathcal{R}[h_{S}]\) increases. This is indeed not surprising since a prediction in the wrong direction (\(\mathrm{sign}(h_{S}(X))\neq Y\)) would increase the loss \(e^{-h_{S}(X)Y}\) exponentially fast. In the particular case of the MSE, we obtain the same stability rate \(\mathcal{O}\left(\frac{\alpha\mathcal{R}[h_{S}]}{n}\right)\) as in the regression framework (Kuzborskij & Orabona, 2013). In the next section, we shall discuss the implications of these stability rates on the _generalization gap_ (Hardt et al., 2016; Charles & Papailiopoulos, 2018), cross-validation schemes and the excess risk of Algorithm 2.2.
## 4 Generalisation Guarantees for HTL with Surrogate Losses
In this part, we leverage the stability results of Section 3 to bound several commonly used statistical errors.
### Generalization Gap
Here we investigate the accuracy of the algorithm \(\mathcal{A}\) through the generalization gap. Precisely, this gap is defined as the expected error between the empirical risk and the theoretical risk of the algorithm \(\mathcal{A}\):
\[\mathcal{E}_{\text{gen}}=\left|\mathbb{E}\left[\widehat{\mathcal{R}}\left[ \mathcal{A}(\mathcal{D}_{T})\right]-\mathcal{R}\left[\mathcal{A}(\mathcal{D}_ {T})\right]\right]\right|.\]
To discuss the impact of \(h_{S}\) on the generalization gap, it suffices to analyse the stability parameters \(\beta(n)\) and \(\gamma(n)\). Indeed, \(\mathcal{E}_{\text{gen}}\) is directly linked to these quantities, as stated in the following theorem.
**Theorem 4.1**.: _Suppose that \(\mathcal{A}\) has a hypothesis stability \(\beta(n)\) and a pointwise hypothesis stability \(\gamma(n)\). Then, it holds:_
\[\mathcal{E}_{\text{gen}}\leq\beta(n)+\gamma(n).\]
_Furthermore, suppose that Assumptions 1, 2, 3 and 4 are satisfied. Thus, \(\beta(n)\) and \(\gamma(n)\) are given by Propositions 3.1 and 3.2 and the generalization gap of \(\mathcal{A}\) (cf. Equation (2.2)) is upper-bounded as:_
\[\mathcal{E}_{\text{gen}}\leq\alpha\frac{\left(\Psi_{1}\left(\mathcal{R}\left[ h_{S}\right]\right)+\Psi_{2}\left(\mathcal{R}\left[h_{S}\right]\right)\right) \wedge\left(2\left\|\phi^{\prime}\right\|_{\infty}^{2}\right)}{n}.\]
Proof.: The proof is postponed to the Appendix B.5.
When the source hypothesis is relevant, the risk \(\mathcal{R}[h_{S}]\) is close to zero so that \(e^{\mathcal{R}[h_{S}]}-1\approx\mathcal{R}[h_{S}]\) and \(e^{\alpha\mathcal{R}[h_{S}]}\approx 1\). Equipped with Table 1, this theorem yields the following upper bounds for \(\mathcal{E}_{\text{gen}}\):
* MSE, Sq. hinge: \(\mathcal{E}_{\text{gen}}=\mathcal{O}\left(\frac{\alpha\mathcal{R}[h_{S}]}{n}\right)\).
* Logistic: \(\mathcal{E}_{\text{gen}}=\mathcal{O}\left(\alpha\frac{\sqrt{\mathcal{R}[h_{S }]}\wedge 2}{n}\right)\).
* Softplus: \(\mathcal{E}_{\text{gen}}=\mathcal{O}\left(\alpha\frac{\left(\sqrt{\mathcal{R}[h _{S}]}/s\right)\wedge 2}{n}\right)\).
* Exponential: \(\mathcal{E}_{\text{gen}}=\mathcal{O}\left(\frac{\alpha M_{S}\mathcal{R}[h_ {S}]}{n}\right)\).
Thus, if \(\mathcal{R}[h_{S}]\) is small, the exponential, the squared hinge and the MSE losses have the fastest generalization gap rates. Therefore, our analysis suggests preferring these losses when a good source hypothesis \(h_{S}\) is available.
**Negative learning** The phenomenon of negative transfer occurs when the hypothesis \(h_{S}\) learned from the source domain has a detrimental effect on the target learner. In such a case, training without using \(h_{S}\) on the target domain would yield a better learner. We refer the reader to
Weiss et al. (2016) and Wang et al. (2019) for further details about this topic. For the softplus and the logistic losses, the generalization gap remains bounded by \(\mathcal{O}(\alpha/n)\) even if \(\mathcal{R}[h_{S}]\to\infty\). As a consequence, Algorithm 2.2 with the softplus and logistic losses is robust to negative learning since the generalization gap still achieves the same rate of convergence \(\mathcal{O}(\alpha/n)\) as a standard RERM algorithm with no source information _i.e._\(h_{S}=0\) (see _e.g._ Zhang, 2004; Wibisono et al., 2009). Finally, we must highlight that one should avoid using the exponential loss when the source and target domains are unrelated due to the presence of the term \(e^{\alpha\mathcal{R}[h_{S}]}\) in the corresponding upper bound.
**Remark 4.1** (cross validation procedures).: _The notion of stability has many attractive qualities. In particular, it yields complexity-free bounds for cross-validation methods. (see e.g. Bousquet and Elisseeff, 2002; Kumar et al., 2013; Celisse and Mary-Huard, 2018). For example, one can easily show that_
\[\mathbb{E}\left[\left|\widehat{\mathcal{R}}_{\mathrm{loo}}\left[\mathcal{A} \left(\mathcal{D}_{T}\right)\right]-\mathcal{R}\left[\mathcal{A}\left( \mathcal{D}_{T}\right)\right]\right|\right]\leq\beta(n).\]
_Proposition 3.1 shows that the quality of risk estimation with \(\mathrm{l.o.o.}\) depends directly on the quality of the source predictor \(h_{S}\). Note that the same conclusion holds for model selection with \(\mathrm{l.o.o.}\) cross-validation: Given a family of source hypotheses, the quality of the model selection procedure depends directly on the quality of the provided learners independently of the complexity of \(\mathcal{H}_{T}\). Besides, using the same proof techniques, we can show that Algorithm 2.2 is \(L_{2}\) stable with stability parameter depending on \(\Psi\left(\mathcal{R}\left[h_{S}\right]\right)\). \(L_{2}\) stability is similar to hypothesis stability, where the \(L_{1}\) moment is replaced by the \(L_{2}\) moment in Definition 2.2. The latter notion allows obtaining theoretical guarantees regarding the K-fold and the \(\mathrm{l.o.o.}\) schemes. It also derives asymptotic confidence intervals for cross-validation procedures in risk estimation and model selection (Rayle et al., 2020; Austern and Zhou, 2020). In our particular case, Proposition 3.1 implies that the tightness of the confidence intervals of cross-validation methods depends only on the quality of \(h_{S}\)._
### Excess Risk
In this section we analyse the excess risk of Algorithm 2.2 defined as:
\[\mathcal{E}_{\text{ex}}=\mathbb{E}\left[\mathcal{R}\left[\mathcal{A}\right] -\mathcal{R}\left[h^{*}+h_{S}\right]\right],\]
where \(h^{*}=\arg\min_{h\in\mathcal{H}}\mathcal{R}\left[h_{S}+h\right]\). To this end, we start by showing that \(\mathcal{E}_{\text{ex}}\) depends on the upper bounds of the _(pointwise) hypothesis stability_ and the regularization parameter \(\lambda\). Further, we derive precise finite-sample rates for the surrogate losses introduced in Section 3.3.
**Theorem 4.2**.: _Suppose that \(\left\|h^{*}\right\|_{k}<\infty\). Then, the excess risk of algorithm 2.2 verifies,_
\[\mathcal{E}_{\text{ex}}\leq\gamma(n)+\beta(n)+\lambda\left\|h^{*}\right\|_{k} ^{2}.\]
_Letting \(\lambda\) vary with the sample size \(n\), we obtain various consistent bounds for different losses. In the sequel, we assume that \(\kappa\leq 1\) and \(M_{S}\leq 1\) to avoid notational burden. When \(\phi\) is either the MSE or the squared hinge and \(\lambda=\sqrt{\frac{\mathcal{R}[h_{S}]}{\sqrt{n}}}\), it holds:_
\[\mathcal{E}_{\text{ex}}\leq\mathcal{O}\left(\sqrt{\frac{\mathcal{R}[h_{S}]}{ \sqrt{n}}}\right).\]
_Furthermore, if \(\phi\) is the exponential loss and \(n\geq\frac{M_{S}^{2}\ln(n)^{2}}{\mathcal{R}[h_{S}]}\), picking \(\lambda=4\frac{\sqrt{\mathcal{R}[h_{S}]}\wedge 1}{\ln(n)}\) yields:_
\[\mathcal{E}_{\text{ex}}\leq\mathcal{O}\left(\frac{\sqrt{\mathcal{R}[h_{S}]} \wedge 1}{\ln(n)}\right),\]
_otherwise picking \(\lambda=\frac{\ln(n)^{2}}{\sqrt{n}}\) gives:_
\[\mathcal{E}_{\text{ex}}\leq\mathcal{O}\left(\frac{\ln(n)^{2}}{\sqrt{n}}\right).\]
_Suppose that the function \(\phi\) is the logistic loss or the softplus. Then the choice \(\lambda=\frac{1}{\sqrt{n}}\) yields:_
\[\mathcal{E}_{\text{ex}}\leq\mathcal{O}\left(\frac{1}{\sqrt{n}}\right).\]
In particular, Theorem 4.2 yields the consistency of RERM. Furthermore, the observations made for the generalization gap in Section 4.1 still hold for the excess risk. First, when \(\mathcal{R}[h_{S}]\) is small, Algorithm 2.3 with the MSE or squared hinge would have the fastest convergence rate. Second, when \(\mathcal{R}[h_{S}]\) is large compared to the sample size \(n\), the safest option is to use the logistic or the softplus losses with \(\lambda=\frac{1}{\sqrt{n}}\). Note that, if \(\mathcal{R}[h_{S}]\) is small, an improved convergence rate \(\left(1/\sqrt{-n\ln\left(\mathcal{R}[h_{S}]\right)}\right)\) can be achieved for the latter losses (see Appendix B.6 for further details). Finally, Algorithm 2.2 with the exponential loss is likely to suffer from negative learning. Indeed, if \(\mathcal{R}[h_{S}]\) is large, one needs a large amount of data to ensure the non-triviality of the rate \(\mathcal{R}[h_{S}]/\ln(n)\). It is worth noting that the rate of convergence with the exponential loss is naturally logarithmic even without a source hypothesis; see, for instance, Corollary 4.1 and Theorem 4.4 in Zhang (2004). To conclude, using a good source hypothesis improves the convergence rates of RERM compared to those derived without transfer (Zhang, 2004).
**Remark 4.2** (on the universal consistency).: _If we assume that the kernel \(k\) is non-polynomial, \(h_{S}\) is continuous and the distribution of \(X\in\mathcal{X}_{T}\) is regular (see e.g. Definition 4.2 in Zhang, 2004). Then, one can use any universal approximation theorem (see for instance Theorem 4.1 in Zhang, 2004) to obtain_
\[h^{*}=\operatorname*{arg\,min}_{h\in\mathcal{H}}\mathcal{R}\left[h_{S}+h \right]=\operatorname*{arg\,min}_{h\in\mathcal{L}(\mathcal{X}_{T},\mathbb{R}) }\mathcal{R}\left[h_{S}+h\right],\]
_where \(\mathcal{L}(\mathcal{X}_{T},\mathbb{R})\) is the space of real-valued functions defined on \(\mathcal{X}_{T}\). The universal consistency of \(\mathcal{A}\) follows immediately from Theorem 4.2. Further, all the losses presented in this paper are_ classification calibrated (_Bartlett et al._, 2006) meaning that:_
\[\operatorname*{arg\,min}_{h\in\mathcal{L}(\mathcal{X}_{T},\mathbb{R})} \mathcal{R}\left[h\right]=\operatorname*{arg\,min}_{h\in\mathcal{L}( \mathcal{X}_{T},\mathbb{R})}\mathcal{R}^{\text{0-1}}\left[h\right],\]
_where \(\mathcal{R}^{\text{0-1}}\left[h\right]=P_{T}(\operatorname{sign}\left(h(X) \right)\neq Y)\) is the usual classification accuracy. Thus, minimizing the excess risk would likely yield a classifier with good accuracy._
## 5 Numerical experiments
We illustrate our analysis by providing some results using simulated data that aim to underscore the robustness of each loss to negative learning scenarios. The experiment is conducted as follows. A source domain is considered with random variables \((X_{S},Y_{S})\in\mathbb{R}^{2}\times\{-1,1\}\), where the positive and negative classes are respectively drawn from two multivariate \(t\)-distributions \(\mathcal{T}((r,0),3I_{2},2.5)\) and \(\mathcal{T}((-r,0),3I_{2},2.5)\). We train a linear classifier \(h_{S}\) on a source dataset of size \(10000\) using the SVM algorithm.
To emphasize the impact of negative learning on each loss, we generate a smaller target dataset of size \(100\). The distributions for positive and negative classes are given by \(\mathcal{T}(((r+d)\cos(\theta),(r+d)\sin(\theta)),I_{2},2.5)\) and \(\mathcal{T}((-(r+d)\cos(\theta),-(r+d)\sin(\theta)),I_{2},2.5)\), respectively. For different values of \(\theta\), the target risk \(\mathcal{R}\left[\hat{h}+h_{S}\right]\) of the analyzed RERM algorithm (with \(\lambda=1\)) trained on this small dataset is estimated using a test set of size \(10000\).
It is important to note that \(\theta=0\) corresponds to the scenario of positive learning, since the decision boundaries of both domains are similar. On the other hand, the case \(\theta=\pi\) corresponds to negative learning, since the true decision functions of the source and the target domains point in opposite directions.
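The following is a minimal sketch of this simulation protocol (ours, not the original experimental code). It assumes SciPy's `multivariate_t` sampler and scikit-learn's `LinearSVC` for the source classifier, uses the logistic loss with plain gradient descent as a stand-in for the regularized target step, and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_t
from scipy.special import expit
from sklearn.svm import LinearSVC

def sample_domain(n, center, scale, df=2.5):
    """Two t-distributed classes centered at +/-center with shape matrix scale*I_2."""
    pos = multivariate_t(loc=center, shape=scale * np.eye(2), df=df).rvs(n // 2)
    neg = multivariate_t(loc=-np.asarray(center), shape=scale * np.eye(2), df=df).rvs(n // 2)
    return np.vstack([pos, neg]), np.hstack([np.ones(n // 2), -np.ones(n // 2)])

r, d, lam, theta = 5.0, 5.0, 1.0, np.pi          # theta = pi: negative-transfer case

# Source hypothesis h_S: a linear SVM trained on a large source sample.
Xs, ys = sample_domain(10_000, np.array([r, 0.0]), 3.0)
svm = LinearSVC(C=1.0, max_iter=5_000).fit(Xs, ys)

# Small target sample with rotated class centers.
c = (r + d) * np.array([np.cos(theta), np.sin(theta)])
Xt, yt = sample_domain(100, c, 1.0)

# Biased-regularization RERM: minimize the mean logistic loss of (h_S + w.x + b) + lam*||w||^2.
offset, w, b = svm.decision_function(Xt), np.zeros(2), 0.0
for _ in range(2_000):
    score = offset + Xt @ w + b
    g = -yt * expit(-yt * score)                 # d/d(score) of log(1 + exp(-y * score))
    w -= 0.05 * (Xt.T @ g / len(yt) + 2.0 * lam * w)
    b -= 0.05 * g.mean()

# Estimate the target risk of h_S + h on a large held-out test set.
Xte, yte = sample_domain(10_000, c, 1.0)
scores = svm.decision_function(Xte) + Xte @ w + b
print("estimated target logistic risk:", np.logaddexp(0.0, -yte * scores).mean())
```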
Figure 1 presents the median true risk of the HTL algorithm (cf. Equation 2.3) as a function of \(\theta\) for \((r,d)=(5,5)\) computed over \(1000\) simulations. The parameter \(s\) of the softplus loss is set to \(0.1\). Consistent with our theoretical analysis, the softplus and logistic functions exhibit significant robustness to negative transfer.
## 6 Conclusion
In this paper, we study hypothesis transfer learning through the lens of algorithmic stability. Following the work of Kuzborskij & Orabona (2013), where hypothesis stability is shown for the MSE in the regression setting, we derive similar hypothesis stability rates in classification with general losses under mild assumptions. Furthermore, we show that our assumptions are satisfied for the most popular machine learning losses, making our work valuable for practitioners. Moreover, we leverage our stability results to provide a finite-sample analysis of the generalization gap and the excess risk. We show that the HTL framework is efficient and derive explicit (fast) rates for these popular losses. Our theoretical analysis will help practitioners better understand the benefits of HTL and give insight into the choice of loss.
The proposed work is general and may fit with many other domains. Future work may involve our analysis for different Machine Learning tasks where transfer learning procedures can be beneficial such as robust learning (Shafahi et al., 2020; Laforgue et al., 2021; Staerman et al., 2021), anomaly detection (Andrews et al., 2016; Chandola et al., 2009; Staerman et al., 2020, 2022), speech (Campi et al., 2021; 2023), automatic language generation (Staerman et al., 2021; Golovanov et al., 2019), knowledge distillation (Cho and Hariharan, 2019), events-based modelling (Staerman et al., 2022), fairness (Colombo et al., 2022) or general neural-networks based tasks (Colombo et al., 2022; Picot et al., 2023; Darrin et al., 2023).
|
2301.07175 | Scaffold-Based Multi-Objective Drug Candidate Optimization | In therapeutic design, balancing various physiochemical properties is crucial
for molecule development, similar to how Multiparameter Optimization (MPO)
evaluates multiple variables to meet a primary goal. While many molecular
features can now be predicted using \textit{in silico} methods, aiding early
drug development, the vast data generated from high throughput virtual
screening challenges the practicality of traditional MPO approaches. Addressing
this, we introduce a scaffold focused graph-based Markov chain Monte Carlo
framework (ScaMARS) built to generate molecules with optimal properties. This
innovative framework is capable of self-training and handling a wider array of
properties, sampling different chemical spaces according to the starting
scaffold. The benchmark analysis on several properties shows that ScaMARS has a
diversity score of 84.6\% and has a much higher success rate of 99.5\% compared
to conditional models. The integration of new features into MPO significantly
enhances its adaptability and effectiveness in therapeutic design, facilitating
the discovery of candidates that efficiently optimize multiple properties. | Agustin Kruel, Andrew D. McNaughton, Neeraj Kumar | 2022-12-15T21:42:17Z | http://arxiv.org/abs/2301.07175v2 | # Scaffold-Based Multi-Objective Drug Candidate Optimization
###### Abstract
Multiparameter optimization (MPO) provides a means to assess and balance several variables based on their importance to the overall objective. However, using MPO methods in therapeutic discovery is challenging due to the number of cheminformatics properties required to find an optimal solution. High throughput virtual screening to identify hit candidates produces a large amount of data with conflicting properties. For instance, toxicity and binding affinity can contradict each other and cause improbable levels of toxicity that can lead to adverse effects. Instead of using the exhaustive method of treating each property, multiple properties can be combined into a single MPO score, with weights assigned to each property. This desirability score also lends itself well to ML applications that can use the score in the loss function. In this work, we discuss a scaffold-focused, graph-based Markov chain Monte Carlo framework built to generate molecules with optimal properties. This framework trains itself on-the-fly with the MPO score of each iteration of molecules, and is able to work on a greater number of properties and sample the chemical space around a starting scaffold. Results are compared to the chemical Transformer model molGCT to judge performance between graph and natural language processing approaches.
## 1 Introduction
Machine learning (ML) has become increasingly useful for medicinal chemistry, including in the area of drug design. Lo et al. (2018) A molecule's structure determines its activity towards biological targets, physiochemical properties, even ease of synthesis. It follows that all of these properties must be balanced when designing drug candidates at the risk of becoming toxic or ineffective. The challenge lies in predicting which portions of a molecular structure contribute to property values closer to the desired goal. ML streamlines this process, with two techniques highlighted in this paper: optimization and conditional models.
In optimization, the model navigates through chemical space using iterative changes to a molecule as movement. The model seeks paths that lead to desirable molecules while avoiding paths that end in toxic or otherwise ineffective solutions. How the model ranks molecules according to its desirability requires one of two main multi-parameter optimization (MPO) approaches: Pareto optimization or desirability functions. Given the choice between the two, Pareto optimization becomes infeasible when analyzing the overwhelming quantity of chemical properties. D. Segall (2012) Without reliable
## 2 Methods
Figure 1 displays a summary of the ScaMARS architecture. ScaMARS's expanded equation allows for many more properties calculated through RDKit Landrum et al. (2022) as well as alternative desirability functions, while the original MARS paper focused on optimizing two chemical and two ML-predicted biological properties. The user may easily add properties to the equation through a single change in the script or limit the focus when calling the model by supplying a list of desired properties.
### Objectives
Desirable ranges for a molecule's properties depend on the application. The desirability function in ScaMARS is flexible enough to account for user choice in which properties to use, as well as custom formulae for calculation and normalization of properties. This work follows the ranges for
Figure 1: ScaMARS workflow for proposing a new molecule generation. First, the initial scaffold or molecule from the previous generation is fed into the MPNN. A new molecule is then proposed through edits (addition, subtraction) by the MPNN. The scores for prior and proposal are used in the annealed MCMC to choose whether the model accepts the proposal. If so, the proposal is added to the generation and the cycle repeats. If not, the prior molecule is kept unchanged for the next generation. Once it reaches the desired number of molecules, MPNN loss is calculated on the success of the entire generation to favor beneficial edits.
assessments of absorption, distribution, metabolism, and excretion (ADME) used in SwissADME Daina et al. (2017) and summarised in Appendix A. Whether the function linearly minimizes or maximizes a property within its range was determined from the trends of each property. Including the original GSK3\(\beta\), JNK3, QED, and SA, proposed objectives for ScaMARS to optimize included: Calculated Partition Coefficient (cLogP), Number of Rotatable Bonds (nRotat), Fraction of sp\({}^{3}\) hybridized carbons (fCsp3), Molecular Weight (MW), Topological Polar Surface Area (TPSA), and Tanimoto similarity to the starting scaffold. Bickerton et al. (2012), Ertl and Schuffenhauer (2009)
The default desirability function is a Derringer function for the additive mean of all normalized properties as follows: \((\sum_{i=1}^{n}d_{i}Y_{i})/n\), where \(n\) is the number of properties and \(d_{i}\) is the weight given to the normalized property \(Y_{i}\). Each \(Y_{i}\) was normalized with different user-defined functions according to the SwissADME property ranges and whether the value must be maximized or minimized. Most were linear functions between the maximum and minimum, defaulting to 0 for values outside the SwissADME range. For this application, all property weights were kept at 1. The full equation for all ten properties would therefore be an average of raw values, normalized scores, and conditionals such as QED, SA, and cLogP respectively: \(\frac{1}{10}\sum\text{objectives}\). A geometric mean was also implemented as an alternative: \((\prod_{i=1}^{n}d_{i}Y_{i})^{1/n}\). To mitigate unintended effects on the model, a hybrid of both additive and geometric was added where the model resorts to comparing the additive mean when no molecules in the new generation show an increase in the geometric. Linear normalization used a form already present in MARS, \(\frac{\max-x}{\max-\min}\), which would invert the property for minimization. Those that instead needed maximization were inherently on a scale of 0 to 1. Values beyond the extremes caused \(Y_{i}\) to be set at 0, as that would imply toxicity or ineffectiveness.
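As a concrete illustration of this scoring, the sketch below implements a linear normalization and the additive, geometric, and hybrid aggregations; the property ranges, weights, and the per-molecule hybrid fallback are simplified placeholders rather than the exact SwissADME cutoffs or the generation-level rule used by ScaMARS.

```python
import numpy as np

def linear_desirability(x, lo, hi, maximize=True):
    """Map a raw property value into [0, 1]; values outside [lo, hi] score 0."""
    if x < lo or x > hi:
        return 0.0
    t = (hi - x) / (hi - lo)               # MARS-style (max - x) / (max - min)
    return 1.0 - t if maximize else t

# Placeholder ranges (lo, hi, maximize) and unit weights (not the SwissADME limits).
RANGES = {"TPSA": (20.0, 130.0, False), "cLogP": (-0.7, 5.0, False), "MW": (150.0, 500.0, False)}
WEIGHTS = {k: 1.0 for k in RANGES}

def mpo_score(props, mode="additive"):
    scores = np.array([WEIGHTS[k] * linear_desirability(props[k], RANGES[k][0], RANGES[k][1], RANGES[k][2])
                       for k in RANGES])
    additive = float(scores.mean())
    geometric = float(np.prod(scores) ** (1.0 / len(scores)))
    if mode == "additive":
        return additive
    if mode == "geometric":
        return geometric
    return geometric if geometric > 0.0 else additive   # simplified hybrid fallback

print(mpo_score({"TPSA": 75.0, "cLogP": 2.1, "MW": 320.0}, mode="hybrid"))
```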
### Scaffold
Added flexibility and edits provide ScaMARS the choice to optimize according to a probability, or return to the scaffold if further proposals produce invalid molecules. When this occurs, there is a 50% chance the path will propose the original scaffold instead of a modification. While the scaffold proposal must still be accepted through the MCMC sampler, acceptance is more likely to occur if the path is at a score closer to the original scaffold (little to no optimization in properties) or early in the run (higher annealing temperature). In the event more than one scaffold is input at the start, there is still a 50% chance to return to a scaffold, then the specific scaffold chosen at random.
### Computation
All ScaMARS trials were run for 5 hours on an RTX 2080 Ti GPU for 600 steps, each with a generation size of 1,000 molecules. Only the molecules produced at the final step were chosen for comparison, corresponding to the optima once the annealing temperature reached zero. MolGCT was also trained on a single RTX 2080 Ti GPU, for 48 hours over 9 epochs. 2,000 molecules were generated using the trained molGCT with inputs logP=0.05, TPSA=20, and QED=0.9, but only 890 remained valid and unique. Inputs were chosen to maximize properties while confined to the recommended ranges Kim et al. (2021) set (0.03-4.97 LogP, 17.92-112.83 TPSA, 0.58-0.95 QED). T-SNE visualizations were created using the openTSNE, seaborn, and RDKit packages. Policar et al. (2019), Waskom (2021), Landrum et al. (2022) First, a multiscale affinity matrix was calculated between perplexities 50 and 500 using cosine distances. The affinities were then passed to a PCA-initialized FIt-SNE Linderman et al. (2019) that used the Jaccard-Tanimoto metric to optimize the space, displayed in kernel density estimation (KDE) plots.
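For readers who want to reproduce the chemical-space embedding, the sketch below shows one way to build 2048-bit topological fingerprints and a Tanimoto (Jaccard) distance matrix with standard RDKit calls; the SMILES strings are arbitrary examples and the snippet is our illustration rather than the exact Figure 2 pipeline.

```python
import numpy as np
from rdkit import Chem, DataStructs

smiles = ["c1ccoc1",                                  # furan scaffold
          "CC(=O)Oc1ccccc1C(=O)O",                    # arbitrary example molecule
          "CCN(CC)CCCC(C)Nc1ccnc2cc(Cl)ccc12"]        # arbitrary example molecule
mols = [Chem.MolFromSmiles(s) for s in smiles]

# Daylight-like topological fingerprints, 2048 bits (as in the Figure 2 caption).
fps = [Chem.RDKFingerprint(m, fpSize=2048) for m in mols]

n = len(fps)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        sim = DataStructs.TanimotoSimilarity(fps[i], fps[j])
        dist[i, j] = dist[j, i] = 1.0 - sim           # Jaccard-Tanimoto distance

print(np.round(dist, 3))
# The fingerprints (or this distance matrix) can then be handed to openTSNE or any
# other t-SNE implementation for the multiscale embedding described above.
```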
## 3 Discussion
Optimizing for a greater number of properties implies more balanced molecules, but adding properties to the additive mean has the unintended effect of decreasing the influence each has on the score. The alternative to this is a geometric mean. Every property has a strong effect on the score through multiplication and, if any property reaches zero, the entire score becomes zero and the molecule is ignored. This is closer to reality as well, as a toxic molecule will not be considered during drug development regardless of how desirable its other properties are. However, since both MARS and ScaMARS only calculate loss using improved molecules, using the geometric mean exclusively returned a score of 0 and never allowed the model to learn.
The hybrid solution was intended to bridge the gap between allowing the model to learn and rejecting molecules more strictly, but it instead increases computation time for each step and resorts to the additive method the majority of the time. We did not observe an improved ability to reach higher scores or to raise scores more quickly. The geometric mean is better suited for filtering solutions, scoring the output molecules more strictly in post-processing. Nonetheless, the desirability function still serves as an efficient way of lowering the dimensionality of many properties. Whether optimizing 2 properties or 7, ScaMARS still completed 600 steps within the five-hour runs.
On the other hand, conditional models are limited by the training data, and generated molecules will be less explainable. While it is beneficial to have control over exact property values, an optimization model can run quicker, show why a given structure was created, and propose molecules in different regions of chemical space according to a starting scaffold. According to metrics outlined by Xie et al. (2021) to compare MARS, molGCT performs similarly but lacks the ability to generate molecules that pass the SwissADME checks for Success Rate. In total, ScaMARS has a Diversity (Div) of 84.6% while molGCT has 85.0%. ScaMARS Success Rate (SR) totals 99.5% while molGCT totals 52%. The difference in SR is a result of TPSA values of molGCT generated molecules following a normal distribution between -1 and 2, potentially due to insufficient training or the model's inability to balance TPSA with other properties.
The flexibility ScaMARS provides in the choice of starting scaffolds, properties, and fragments is essential to drug design but is hindered by the focus on generating a diverse library. Paths will quickly diverge from scaffolds in order to satisfy the novelty requirements. Figure 2 illustrates this with three ScaMARS runs of differing scaffolds and properties. Instead of clustering around the given scaffolds, all runs overlap a shared region of chemical space, with the scaffolds S-Adenosyl methionine (SAM) and furan on the outside. Runs within the shared space have slightly varied distributions, though. SAM with 6 features mostly occupies the bottom right (+x, -y), furan the bottom left (-x, -y), and SAM with 3 features the middle (x \(\approx\) 0, y \(\approx\) 0), reflecting the fact that it shares a scaffold with one run and the properties of the other. Proposed changes that force the paths back to the scaffold only affect the first 250 steps; after that point, molecules plateau at a score too far above that of the initial molecule for a return to be accepted through MCMC at that temperature. Even with these flaws, the explored chemical space still varied according to the initial scaffolds and properties.
## 4 Conclusion & Future Work
In this contribution, we present the ScaMARS model, a flexible architecture for multiparameter optimization. This includes prioritization of the initial scaffold, support for a greater number of properties, and variants of the desirability function. Comparison to the conditional Transformer model molGCT shows that ScaMARS remains better suited for the optimization of molecules for drug design. Future work could introduce more explainability to the model, remove the redundancy in the fragment-based approach, and allow the model to explore chemical space closer to the chosen scaffold.
Figure 2: T-distributed stochastic neighbor embedding (T-SNE) of the final molecules for each model run. SAM is marked by a blue “X” and furan by orange. (A) ScaMARS was run to optimize 6 features (QED, TPSA, cLogP, nRotat, fCsp3, SA) and start from SAM as the scaffold. (B) ScaMARS optimized 3 features (QED, TPSA, cLogP) with furan as the scaffold. (C) ScaMARS optimized 3 features (QED, TPSA, cLogP) with SAM as the scaffold. Daylight fingerprints were compared as a 2048-bit vector for each molecule.
## Acknowledgements
This research was supported by the I3T Investment, under the Laboratory Directed Research and Development (LDRD) Program at Pacific Northwest National Laboratory (PNNL). PNNL is a multi-program national laboratory operated for the U.S. Department of Energy (DOE) by Battelle Memorial Institute under Contract No. DE-AC05-76RL01830. The computational work was performed using PNNL's research computing at Pacific Northwest National Laboratory.
|
2303.00063 | New chondritic bodies identified in eight oxygen-bearing white dwarfs | We present observations and analyses of eight white dwarf stars that have
accreted rocky material from their surrounding planetary systems. The spectra
of these helium-atmosphere white dwarfs contain detectable optical lines of all
four major rock-forming elements (O, Mg, Si, Fe). This work increases the
sample of oxygen-bearing white dwarfs with parent body composition analyses by
roughly thirty-three percent. To first order, the parent bodies that have been
accreted by the eight white dwarfs are similar to those of chondritic
meteorites in relative elemental abundances and oxidation states. Seventy-five
percent of the white dwarfs in this study have observed oxygen excesses
implying volatiles in the parent bodies with abundances similar to those of
chondritic meteorites. Three white dwarfs have oxidation states that imply more
reduced material than found in CI chondrites, indicating the possible detection
of Mercury-like parent bodies, but are less constrained. These results
contribute to the recurring conclusion that extrasolar rocky bodies closely
resemble those in our solar system, and do not, as a whole, yield unusual or
unique compositions. | Alexandra E. Doyle, Beth L. Klein, Patrick Dufour, Carl Melis, B. Zuckerman, Siyi Xu, Alycia J. Weinberger, Isabella L. Trierweiler, Nathaniel N. Monson, Michael A. Jura, Edward D. Young | 2023-02-28T20:06:31Z | http://arxiv.org/abs/2303.00063v1 | # New chondritic bodies identified in eight oxygen-bearing white dwarfs
###### Abstract
We present observations and analyses of eight white dwarf stars that have accreted rocky material from their surrounding planetary systems. The spectra of these helium-atmosphere white dwarfs contain detectable optical lines of all four major rock-forming elements (O, Mg, Si, Fe). This work increases the sample of oxygen-bearing white dwarfs with parent body composition analyses by roughly thirty-three percent. To first order, the parent bodies that have been accreted by the eight white dwarfs are similar to those of chondritic meteorites in relative elemental abundances and oxidation states. Seventy-five percent of the white dwarfs in this study have observed oxygen excesses implying volatiles in the parent bodies with abundances similar to those of chondritic meteorites. Three white dwarfs have oxidation states that imply more reduced material than found in CI chondrites, indicating the possible detection of Mercury-like parent bodies, but are less constrained. These results contribute to the recurring conclusion that extrasolar rocky bodies closely resemble those in our solar system, and do not, as a whole, yield unusual or unique compositions.
## 1 Introduction
Categorization of the compositions of rocky exoplanets, and evaluation of their similarities to or differences from rocky bodies in our solar system, is a challenging and flourishing area of study. To this end, many studies have characterized exoplanet compositions using stellar spectroscopy of FGK, or Sun-like, stars (e.g., Unterborn and Panero, 2019; Adibekyan et al., 2021; Kolecki and Wang, 2022) in combination with planetary mass-radius relations. An alternative approach is to use white dwarf stars (WDs) - stars in the last stage of stellar evolution - that have been "externally-polluted" by accretion of rocky bodies from their surrounding planetary systems. Owing to their strong gravitational acceleration, the atmospheres of WDs are typically devoid of elements heavier than helium. The heavy elements sink out of the observable atmosphere on timescales of days to millions of years (Koester, 2009), depending on the atmospheric temperature and dominant constituent (H or He). Because of the relatively short settling timescales of heavy elements, externally-polluted WDs must have acquired their heavy elements relatively recently compared to their lifetimes. Radiative levitation as a mechanism to maintain heavy elements in a white dwarf atmosphere (e.g., Chayer et al., 1995) is not effective for the white dwarfs presented herein (helium atmosphere WDs with effective temperatures cooler than 20,000K).
WDs for which hydrogen presents the strongest spectral line are referred to as 'DAs' and neutral helium as 'DBs.' If a spectrum displays both H i and He i lines, the spectral type can be either DAB or DBA depending on whether H or He, respectively, has the strongest optical absorption line. White dwarfs are deemed polluted if any element heavier than He is detected in their atmosphere; following Sion et al. (1983) and Wesemael et al. (1993), we denote external-pollution with a 'Z' in the spectral classifications.
We now understand that these polluted WDs, constituting 25-50% of all WDs, accrete material from the
planets, asteroids, and comets that orbited the host star and were subsequently scattered toward the star by the post-main sequence evolution (Debes and Sigurdsson, 2002; Jura, 2003; Zuckerman et al., 2003, 2010; Koester et al., 2014; Veras, 2016). Observations of transiting debris from planetary material that has been tidally disrupted by the WD (Vanderburg et al., 2015; Xu et al., 2016; Vanderbosch et al., 2020; Guidry et al., 2021; Vanderbosch et al., 2021) suggest the presence of a body in the process of being pulverized and accreted by the WD, thus substantiating our understanding of the source of pollution. Analyses of polluted WDs to evaluate the compositions of extrasolar rocky bodies have proliferated in the last decade (e.g., Zuckerman et al., 2007; Klein et al., 2010; Vennes et al., 2010; Melis et al., 2011; Farihi et al., 2011; Zuckerman et al., 2011; Jura et al., 2012; Dufour et al., 2012; Gansicke et al., 2012; Jura and Young, 2014; Xu et al., 2017; Harrison et al., 2018; Hollands et al., 2018; Doyle et al., 2019; Swan et al., 2019; Bonsor et al., 2020; Buchan et al., 2022).
To date, the parent bodies being accreted by polluted WDs mostly resemble dry, rocky bodies similar in size and general composition to asteroids in the solar system. However, a few water-rich bodies (Farihi et al., 2011; Farihi et al., 2013; Raddi et al., 2015; Hoskin et al., 2020; Klein et al., 2021), including a Kuiper Belt analog (Xu et al., 2017), have been discovered. Additionally, parent bodies that resemble giant planets (Gansicke et al., 2019) and icy moons (Doyle et al., 2021) have been argued. While just a few dozen WDs are 'heavily' polluted, with more than a few rock-forming elements detected, taken together, 23 distinct elements have been detected in polluted WDs (see Table 1 of Klein et al., 2021). Compositional variations due to igneous differentiation- with compositions that range from crust-like to core-like - have been identified (e.g. Zuckerman et al., 2011; Melis et al., 2011; Gansicke et al., 2012; Jura and Young, 2014; Melis and Dufour, 2017; Putirka and Xu, 2021; Hollands et al., 2021; Johnson et al., 2022).
In this work we present new observations of eight heavily polluted DB WDs and examine the compositions of the accreting rocky parent bodies. We focus on evaluating these bodies through bulk composition and oxidation state. In addition to Ca and the four major rock-forming elements (O, Mg, Si, and Fe), instances of additional elements (e.g., Al, Cr, and Ti) have been detected in some of the WDs. These new data increase the sample of oxygen-bearing WDs with parent body composition analyses by \(\sim 33\%\). This paper is organized as follows: in Section 2 we list our target selection and observations for the WDs described. Our atmosphere models are described in Section 3 along with spectra of the detected major rock-forming elements. Section 4 provides an analysis of the parent body compositions and in Section 5 we summarize our findings.
## 2 Observations
### Target Selection
In this paper we focus on eight DB WDs (Table 1). In each of these WDs, all four major rock-forming elements (O, Mg, Si, Fe) are detected.
Three out of eight WDs in this work have been observed over the years by members of our team. In particular we obtained HIRES spectra of WD1244+498 and SDSSJ1248+1005 because they were previously identified as DBZs in SDSS spectra (Kleinman et al., 2013; Koester and Kepler, 2015), and WD1415+234 was followed up at high resolution due to the possible appearance of a Ca ii K line in Limoges and Bergeron (2010).
The other five WDs were identified in a search for heavily polluted WDs (Melis et al., 2018). We compiled our list of targets by utilizing the sample of probable WDs from Gentile Fusillo et al. (2019), which calculates stellar parameters and the probability of an object being a WD based on fits to Gaia DR2 data.
To focus on finding DB WDs, we compared GALEX colors (Bianchi et al., 2017) to effective temperature (\(T_{\rm eff}\)) (Figure 1). Differences in opacity of DA and DB WDs have a salient effect on emergent fluxes, particularly at UV wavelengths as observed with GALEX. These colors reveal a distinct dichotomy between DA and DB WDs (e.g. Bergeron et al., 2019). We constrained Gaia WD candidates from Gentile Fusillo et al. (2019) to include only those where \(\rm{Gmag}<17.0\), distance \(<300\) pc, and far-UV (FUV) and near-UV (NUV) GALEX data exist, (see Figure 1). Known characterizations of each WD are labeled as either green squares (DAs) or blue triangles (DBs), and unconfirmed WD candidates are labeled as gray circles. The polluted DBs analyzed in this paper are represented as red circles.
To process these data for our purposes, we constructed a "cut function" (red curve in Figure 1) with the equation
\[T_{\rm eff,cut}=28000\exp\left(-\left(\frac{\rm FUV-NUV+0.24}{2}\right)^{1/3} \right), \tag{1}\]
that applies for \(12000<T_{\rm eff}<24000\). We used equation 1 to flag points as "likely DBs" where \(T_{\rm eff}>T_{\rm eff,cut}\) (above the red curve in Figure 1) and those where \(T_{\rm eff}<T_{\rm eff,cut}\) as "likely DAs" (below the red curve, Figure 1). This allowed us to specifically target WDs that fell within the range of known DBs. This particular selection method for observing WDs led to the discovery
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline Name & UT Date & Instrument & Coverage (Å) & Int. Time (sec) & SNR\({}^{a}\) \\ \hline GaiaJ0218+3625 & 2021/08/31 & HIRES (blue) & 3115\(-\)5950 & 3600 & 43 \\ & 2020/10/08 & HIRES (blue) & 4000\(-\)5950 & 2000\(\times\)2 & 7\({}^{bc}\) \\ & 2019/07/16 & HIRES (red) & 4715\(-\)8995 & 3300 & 33 \\ & 2018/12/30 & Kast & 3420\(-\)5485, 5590\(-\)7840 & 3300 & 62 \\ WD1244+498 & 2018/05/18 & HIRES (blue) & 3115\(-\)5950 & 2000 & 24 \\ & 2015/04/09 & HIRES (red) & 4715\(-\)8995 & 1800\(\times\)2 & 39\({}^{b}\) \\ & 2010/03/28 & HIRES (blue) & 3115\(-\)5950 & 3000 & 25 \\ SDSSJ1248+1005 & 2015/04/09 & HIRES (red) & 4715\(-\)8995 & 3000\(\times\)2 & 24\({}^{b}\) \\ & 2014/05/22 & HIRES (blue) & 3115\(-\)5950 & 3000\(\times\)3 & 32\({}^{b}\) \\ WD1415+234 & 2019/07/16 & HIRES (red) & 4715\(-\)8995 & 3300 & 43 \\ & 2016/04/01 & HIRES (blue) & 3115\(-\)5950 & 2400\(\times\)2 & 40\({}^{b}\) \\ & 2015/04/25 & ESI & 3900\(-\)10900 & 1180\(\times\)2 & 25\({}^{b}\) \\ SDSSJ1734+6052 & 2019/09/07 & HIRES (red) & 4715\(-\)8995 & 3600 & 29 \\ & 2019/07/16 & HIRES (red) & 4715\(-\)8995 & 3300 & 34 \\ & 2019/07/07 & HIRES (blue) & 3115\(-\)5950 & 3300 & 27 \\ & 2019/05/29 & Kast & 3415\(-\)5480, 6420\(-\)8790 & 3900 & 28 \\ GaiaJ1922+4709 & 2020/10/07 & HIRES (red) & 4715\(-\)8995 & 3600 & 37 \\ & 2020/06/14 & HIRES (blue) & 3115\(-\)5950 & 3000 & 28 \\ & 2019/12/09 & HIRES (red) & 4715\(-\)8995 & 3600 & 26 \\ & 2019/10/12 & Kast & 3420\(-\)5485, 6400\(-\)8800 & 3000 & 42 \\ EC22211\(-\)2525 & 2021/08/31 & HIRES (blue) & 3115-5950 & 3300 & 40 \\ & 2020/10/07 & HIRES (red) & 4715\(-\)8995 & 3600 & 46 \\ & 2019/07/07 & HIRES (blue) & 3115\(-\)5950 & 3300 & 38 \\ & 2019/07/03 & MagE & 3065\(-\)9470 & 1200\(\times\)2 & 78\({}^{b}\) \\ & 2018/12/12 & Kast & 3450\(-\)5475, 5590\(-\)7840 & 2700 & 17 \\ SDSSJ2248+2632 & 2019/09/07 & HIRES (red) & 4715\(-\)8995 & 3300 & 38 \\ & 2019/07/16 & HIRES (red) & 4715\(-\)8995 & 3300 & 43 \\ & 2019/07/07 & HIRES (blue) & 3115\(-\)5950 & 3000 & 36 \\ & 2017/12/11 & Kast & 3430\(-\)5500, 5625\(-\)7820 & 3600 & 62 \\ \hline \end{tabular} \({}^{a}\) Signal-to-noise-ratio (SNR) measured at 3445Å for HIRES (blue), 5195Å for HIRES (red), 5160Å for MagE, 5100Å for Kast, and 6000Å for ESI
\({}^{b}\) SNR for combined exposures
\({}^{c}\) Only CCDs 2 and 3 were used in our analysis
\end{table}
Table 1: WD Observation Data
of many of the polluted DB WDs in this study, as well as others to be published in future studies.
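A short sketch of how Equation 1 can be applied to a candidate list is given below; the function and the sample values are illustrative, and np.cbrt is used so the expression stays real-valued for blue (FUV - NUV < -0.24) colors.

```python
import numpy as np

def teff_cut(fuv_minus_nuv):
    """Equation 1: the empirical Teff dividing line between DA and DB WDs."""
    return 28000.0 * np.exp(-np.cbrt((fuv_minus_nuv + 0.24) / 2.0))

def likely_type(teff, fuv, nuv):
    """Flag a candidate as 'likely DB' above the cut and 'likely DA' below it."""
    if not (12000.0 < teff < 24000.0):
        return "outside the cut-function range"
    return "likely DB" if teff > teff_cut(fuv - nuv) else "likely DA"

# Illustrative candidate, not a real Gaia/GALEX measurement:
print(likely_type(teff=15000.0, fuv=16.8, nuv=16.3))
```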
### Instrument Setup
Table 1 lists our target WDs along with their observation dates, instruments, and resulting data properties. We describe each instrument and observational setup in more detail below.
#### 2.2.1 Kast
Our large-scale survey to search for heavily polluted WDs from Gaia DR2 WD candidates (described in Section 2.1 and Melis et al., 2018) utilized the KAST Spectrograph on the 3m Shane telescope at Lick Observatory. Our standard setup implemented the d57 dichroic, which split blue light through the 600/4310 grism and red light through the 830/8460 grating. This setup provides a resolving power (R = \(\lambda/\Delta\lambda\)) for a 2'' slit in blue and red of R = 950 and 1,500, respectively, and wavelength coverage from 3450-7800 A. Where indicated in Table 1, we implemented another version of our setup which tilted the 830/8460 grating to cover redder wavelengths and specifically the Ca infrared triplet (\(\lambda\) 8498/8542/8662 A) resulting in red arm wavelength coverage from 6440\(-\)8750 A. For both setups, we used slit widths of 1, 1.5, or 2'' and integration times from 45\(-\)60 minutes depending on observing conditions and target brightness. The data were reduced using standard IRAF routines, including bias subtraction, flat-fielding, wavelength calibration using arc lamps, and instrumental response calibration using observations of standard stars (Tody, 1986). Signal-to-noise-ratios (SNRs) for the resulting spectra are measured at 5100A and reported in Table 1.
#### 2.2.2 MagE
Moderate resolution optical spectra of EC22211\(-\)2525 were acquired with the Magellan Echellette (MagE) spectrograph on the Magellan 1 (Baade) telescope at Las Campanas Observatory on 2019 July 03. EC22211\(-\)2525 was observed through the 0.5'' slit providing a resolving power of R \(\simeq\) 7,500. Data reduction was performed with the Carnegie Python pipeline (Kelson et al., 2000; Kelson, 2003) and SNR measurements were made at 5160 A.
#### 2.2.3 Esi
We used the Echellette Spectrograph and Imager (ESI) on the Keck II Telescope at Maunakea Observatory (Sheinis et al., 2002) to obtain a spectrum for WD1415+234. ESI data were taken with a 0.3''slit providing a resolving power of R \(\simeq\) 13,000. Data were reduced using MAKEE and IRAF, similar to the HIRES reduction process described in Klein et al. (2010). SNR for the resulting combined spectrum was \(\thicksim\) 25, measured at 6000A.
#### 2.2.4 Hires
We used HIRES on the Keck I Telescope at Maunakea Observatory (Vogt et al., 1994) to obtain higher resolution spectra for each of the eight WDs in this sample. HIRES data were taken with the C5 decker (slit width 1.148'') for a resolving power of R \(\simeq\) 37,000 and resulting in wavelength coverage of 3115-5950 A with the blue collimator and 4715-8995 A with the red collimator. Exposure times ranged from 30\(-\)60 minutes and depended on observing conditions and target brightness. Data were reduced using either the MAKEE software package with IRAF continuum normalization or IRAF reduction routines (see Klein et al., 2010 for more details on the methods and routines used). The SNR for the resulting spectra were measured at 3445A for HIRES blue and 5195A for HIRES red, and are displayed in Table 1.
## 3 Data Analysis
### Spectral Typing
WD spectral types are established according to the appearance of their optical spectra and do not always reflect the dominant atmospheric composition (e.g. GD 16 and GD 362, Koester et al., 2005; Zuckerman et al., 2007).
Figure 1: \(T_{\rm eff}\) as a function of GALEX colors. Here we show the Gaia WD candidates from Gentile Fusillo et al. (2019) that have both far-UV (FUV) and near-UV (NUV) GALEX data, which reveals a distinct dichotomy between DA and DB WDs. The subset of polluted helium-dominated atmospheres from this work is represented as red circles. The red curve is our constructed “cut function,” which we use to assign likely dominant elements based on the location of the WD parameters on this figure (see also Equation 1).
A colleague prudently pointed out, "Annie Jump Cannon was prophetic when she made it clear that stellar spectral types should never have physical interpretations, because she realized models would change but spectral morphology would be static for a given type" (J. Farihi, 2022, private communication).
Three stars in our sample (WD1244+498, SDSSJ1248+1005, WD1415+234) were previously known WDs; the other five are newly identified in this work. In all cases, as of the date of this publication, the spectral types in SIMBAD are either absent or need updating.
In trying to determine the appropriate spectral types for this set of WDs, we ran into a matter that requires some clarification. In all these spectra, the He i lines are clearly the dominant optical features: He i \(\lambda\)5876 A equivalent widths (EWs) range from 5-14 A, and line depths (as defined in Table 2 note) range from 0.34-0.48, with little depth difference between low and high resolution spectra. Thus the primary spectral type begins with 'DB' in each case (Table 2). However, since each WD also displays H\(\alpha\) and high-Z lines, the question is how to distinguish whether the secondary type should be DBZA or DBAZ? The paradigm established in Sion et al. (1983) and Wesemael et al. (1993) states that the spectral type is defined in order of the "strongest" optical spectral features, but no further definition is given as to what exactly that means. It is ambiguous whether "strongest" refers to the **equivalent width** or the line **depth**. These comparisons can be substantially different depending on the instrument spectral resolution, especially for Ca ii \(\lambda\)3933.663 A (CaK), which is typically the high-Z line with the largest EW in our temperature range (T\({}_{\rm eff}\)\(<\) 18,000 K). To illustrate this point, we list the CaK and H\(\alpha\) line depths measured at both higher resolution (R \(\sim\) 37,000) and lower resolution (R \(\sim\) 1000), as well as their EWs in Table 2.
If all we had were low resolution spectra, and if we chose to assign secondary spectral types by line depth, then four of the WDs would be DBZA and four DBAZ. But then when those same WDs are observed at high resolution, according to line depth, the four previous DBAZs would all change to DBZAs. Instead, we decided to assign the spectral type according to EW: DBAZ if EW(H\(\alpha\)) \(>\) EW(CaK), and DBZA if EW(CaK) \(>\) EW(H\(\alpha\)). As long as spectra have sufficient signal-to-noise to detect a given line, EW measurements are essentially independent of the instrument resolution, and thus our choice of spectral type should be enduring.
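The resulting rule is easy to automate; the snippet below encodes it directly as an illustrative sketch, with arbitrary equivalent widths rather than values from Table 2.

```python
def secondary_type(ew_halpha_mA, ew_cak_mA):
    """Secondary spectral type for a DB white dwarf showing both H-alpha and Ca II K,
    assigned by equivalent width so the result is independent of spectral resolution."""
    return "DBAZ" if ew_halpha_mA > ew_cak_mA else "DBZA"

# Illustrative equivalent widths in milli-Angstroms:
print(secondary_type(ew_halpha_mA=1500.0, ew_cak_mA=700.0))   # -> DBAZ
print(secondary_type(ew_halpha_mA=500.0,  ew_cak_mA=600.0))   # -> DBZA
```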
### Stellar Parameters
The effects of additional opacity from the presence of hydrogen and heavier elements in the atmospheres of He-dominated WDs with effective temperatures (\(T_{\rm eff}\)) \(<\) 20,000 K have been well described (Dufour et al., 2007, 2010; Coutu et al., 2019).
We follow an iterative procedure to obtain atmospheric parameters for each target. First, we get a rough estimate for \(T_{\rm eff}\) and gravity ( log \(g\)) by fitting photometry (typically Sloan Digital Sky Survey (SDSS), but PanSTARRS was used for EC22211\(-\)2525). We then fit the Ca ii K (CaK) region and H\(\alpha\) from low resolution spectra concurrently with SDSS _ugriz_ photometry (Alam et al., 2015) or PanSTARRS _grizy_ photometry (Flewelling et al., 2020) and Gaia parallax (Gaia Collaboration et al., 2016, 2021). Where available (Table 1) we use KAST spectra, otherwise we use SDSS spectra. Atmospheric structure calculations are then informed by the hydrogen abundance by number, n, (log \(n\)(H)/\(n\)(He)) and heavy element presence when scaling
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline WD & RA & Dec & G & D & Teff & log\(g\) & H\(\alpha\) & H\(\alpha\) & H\(\alpha\) & CaK & CaK & CaK & Spectral \\ Name & (J2000) & (J2000) & (mag) & (pc) & (K) & (cgs) & EW & depth & depth & EW & depth & depth & Type \\ & & & & & & & & (mÅ) & HIRES & lowres & (mÅ) & HIRES & lowres & \\ \hline Gaia\(\lambda\)0218+3625 & 02 18 16.64 & +36 25 07.6 & 16.4 & 116 & 14700 & 7.86 & 475 & 0.10 & 0.06 & 595 & 0.65 & 0.13 & DBZA \\ WD1244+498 & 12 47 03.28 & +49 34 23.5 & 16.6 & 120 & 15150 & 7.97 & 1600 & 0.26 & 0.16 & 664 & 0.67 & 0.20 & DBAZ \\ SDSSJ1248+1005 & 12 48 10.23 & +10 05 41.2 & 17.4 & 164 & 15180 & 8.11 & 11750 & 0.28 & 0.17 & 1245 & 0.66 & 0.39 & DBZA \\ WD1415+234 & 14 17 55.37 & +23 11 36.7 & 16.6 & 127 & 17300 & 8.17 & 1150 & 0.23 & 0.15 & 274 & 0.63 & 0.07 & DBAZ \\ SDSSJ1734+6052 & 17 34 35.75 & +60 52 03.2 & 16.9 & 150 & 16340 & 8.04 & 2000 & 0.25 & 0.21 & 256 & 0.67 & 0.08 & DBAZ \\ Gaia\(\lambda\)1922+4709 & 19 22 23.41 & +47 09 45.4 & 16.6 & 127 & 15500 & 7.95 & 510 & 0.16 & 0.08 & 528 & 0.57 & 0.18 & DBZA \\ EC22211\(-\)2525 & 22 23 58.39 & \(-\)25 10 43.6 & 16.3 & 109 & 14740 & 7.89 & 1500 & 0.24 & 0.22 & 710 & 0.68 & 0.17 & DBAZ \\ SDSSJ2248+2632 & 22 48 40.93 & +26 32 51.6 & 16.4 & 123 & 17370 & 8.02 & 750 & 0.18 & 0.15 & 169 & 0.55 & 0.07 & DBAZ \\ \hline \end{tabular} Note. –G\({}_{\rm mag}\) and distances (calculated from parallaxes) are from Gaia EDR3 (Gaia Collaboration et al., 2016, 2021). \(T_{\rm eff}\) and log\(g\) are fit as described in Section 3.2. Typical uncertainties for \(T_{\rm eff}\) and log\(g\) are \(\pm\)500K and \(\pm\)0.05, respectively. “lowres” refers to either SDSS or Kast spectra. Line “depth” is the position of the line center between the continuum and zero, measured as the fractional distance below the continuum. Spectral Type assignments are based on equivalent widths (EWs) of Ca II K (CaK) and H\(\alpha\) as described in Section 3.1.
\end{table}
Table 2: WD Parameters
elements to the number abundance of Ca in a CI chondrite (Lodders, 2019).
We compared our fits to Gaia and GALEX photometry to confirm good agreement (see Figure A1); standard de-reddening corrections were applied as described in Coutu et al. (2019). Our best-fit parameters are given in Table 2. We use these parameters to calculate the model atmospheres from which we produce synthetic spectra for each WD.
### Abundance Measurements
Over multiple iterations, we fit these synthetic spectra to the HIRES data until we find a best-fit abundance solution for each element detected (Table 3). We show a sample of WD spectral lines for detections of O, Mg, Si, Fe, and Ca (Figure 2). In each panel our spectra are shown in black, our best-fit model is overlain in red, and the numerical average abundance is given at the bottom of the panel. Our sample of eight WDs has clear detections of O (7772 A, multiplet), Mg (4481A, multiplet), Si (6347A), Fe (5169A), and Ca (3933A and 8542A), as well as other detected lines. Measured radial velocities (RVs) and a full listing of all detected lines with their EWs are given in the Appendix Tables A1 and A2, respectively. We also discuss some detections of non-photospheric lines in the Appendix and Table A1.
Abundances are reported by number, n, relative to He along with uncertainties for each of the WDs in Table 3. Where elements are detected through multiple lines, we take the average abundance. Uncertainties are measured as the standard deviation where there are multiple lines of the same element. Systematic uncertainties, such as from uncertain atomic data (Vennes et al., 2011; Gansicke et al., 2012), or other missing physics in atmosphere models (e.g. Klein et al., 2020; Cukanovaite et al., 2021) are difficult to quantify. Therefore, where only one line of an element is observed or where uncertainties are smaller than 0.15 dex, we conservatively set them to 0.15 dex.
## 4 Discussion
### Accretion and Diffusion
Three phases of accretion and diffusion of planetary debris onto a WD are commonly recognized in the literature: the buildup phase, sometimes referred to as an "increasing" phase, the steady-state phase, and the settling, or "decreasing" phase (e.g., Dupuis et al., 1992; Koester, 2009). Though the specific nomenclature varies, the idea remains the same: as a single parent body accretes onto a WD, the observed pollution will first increase as material accumulates in the WD atmosphere. Then, as material begins to sink through the atmosphere, a steady state is eventually reached between accretion and diffusive settling. Steady state is achieved on a timescale comparable to a few e-folding times for settling. Once the parent body source is depleted, material ceases to accrete, and the observed pollution decreases commensurate with the settling times of the individual elements.
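These three phases follow from the standard one-box model in which the convection-zone mass of each element obeys dM/dt = Ṁ − M/τ. The sketch below is a minimal illustration of that model (ours, not code from the paper); the function name and any numerical inputs are ours.

```python
import numpy as np

SEC_PER_MYR = 3.156e13

def cvz_metal_mass(t_myr, mdot_g_per_s, tau_myr, t_accretion_stops_myr=np.inf):
    """Convection-zone mass of one element (grams) in the one-box model
    dM/dt = Mdot - M/tau: buildup toward steady state while accretion lasts,
    then exponential settling once the parent-body supply is exhausted."""
    tau_s = tau_myr * SEC_PER_MYR
    t_s = t_myr * SEC_PER_MYR
    t_stop_s = t_accretion_stops_myr * SEC_PER_MYR
    if t_s <= t_stop_s:                       # buildup / steady-state phases
        return mdot_g_per_s * tau_s * (1.0 - np.exp(-t_s / tau_s))
    m_stop = mdot_g_per_s * tau_s * (1.0 - np.exp(-t_stop_s / tau_s))
    return m_stop * np.exp(-(t_s - t_stop_s) / tau_s)   # decreasing phase
```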
The correction for this effect during steady-state accretion is straightforward \(-\) element ratios are multiplied by the inverse ratio of settling timescales; see Equation 7 in Koester (2009) and settling timescales in Table A3.
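As a concrete illustration of that correction, the following minimal sketch (ours, not from the paper) converts an observed atmospheric number ratio into the inferred parent-body ratio; the example values are placeholders rather than entries from the tables in this work.

```python
def steady_state_parent_ratio(obs_ratio_a_over_b, tau_a, tau_b):
    """Steady-state correction (Koester 2009, Eq. 7): the parent-body ratio
    n_A/n_B equals the observed atmospheric ratio multiplied by the inverse
    ratio of the settling timescales, tau_B/tau_A."""
    return obs_ratio_a_over_b * (tau_b / tau_a)

# Placeholder values: observed Si/Mg of 0.8 with settling times in Myr.
print(steady_state_parent_ratio(0.8, tau_a=0.87, tau_b=0.90))  # ~0.83
```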
While it is not always clear which accretion state a given WD is in, ongoing accretion can be assumed for WDs with observed infrared excess, which arises where circumstellar debris disks thermally reprocess the light from the star (Jura, 2003). EC22211\(-\)2525 is the only WD in the sample with detected infrared excess (Lai et al., 2021), as can be seen in Figure A1.
### Abundance Pattern
For each of the WDs in this study we compared the observed abundances of rock-forming elements (Mg, Al, Si, Ca, Ti, Cr, and Fe) to those of typical rocky compositions in the solar system (CI chondrite, bulk silicate Earth, and continental crust). In general, the best fit is to CI chondrite. In Figure 3 we illustrate this result using the composition of the parent body polluting WD1244+498 as an example. The parent body is comparable to CI chondrite, as indicated by the close agreement of chondritic abundances (orange symbols) to the 1:1 line in Figure 3. Indeed, each element agrees with chondritic compositions within a factor of 2.
Motivated by Figure 3, we statistically evaluate the hypothesis that the parent bodies being accreted by these eight WDs were approximately chondritic in composition. Similar to Xu et al. (2013), Swan et al. (2019), and Doyle et al. (2021), we compare the goodness-of-fit for rock-forming elemental abundances observed in each WD to the known composition of CI chondrites using the reduced chi-square statistic, \(\chi^{2}_{\nu}\) (Figure 4). We calculate \(\chi^{2}_{\nu}\) using the elements Al, Si, Ca, Ti, Cr, and Fe, where available for each WD. Oxygen is excluded due to its correlation with the other rock-forming elements (see Section 4.4). Additionally, because we ratio elements to Mg, Mg is not an independent observation for this calculation and is therefore excluded. The data points and their uncertainties shown in Figure 4 represent propagated uncertainties using a Monte Carlo approach with a bootstrap of n=1.
The parameter \(\alpha\) represents a probability of obtaining \(\chi^{2}_{\nu}\) values greater than the observed value by chance.
Figure 2: Selected lines for each of the WDs in this study, displaying the detected O triplet and example lines for Mg, Si, Ca and Fe. Wavelengths are in air and shifted to the laboratory frame of rest. The red line is our best-fit model.
Figure 2: **(cont.).** Selected lines for each of the WDs in this study, displaying the detected O triplet and example lines for Mg, Si, Ca and Fe. Wavelengths are in air and shifted to the laboratory frame of rest. The red line is our best-fit model.
Convention suggests that the threshold to reject the hypothesis that the data are consistent with a CI composition is 5% or better, or \(\alpha<0.05\) (\(\alpha\sim 0.4\) for \(\chi^{2}_{\nu}=1\)). Due to the relatively small number of data points per star, and their uncertainties, the value of \(\chi^{2}_{\nu}\) is also uncertain, which can be accounted for using the approach of Andrae et al. (2010) in which the uncertainty in \(\chi^{2}_{\nu}\) is \(\sigma\sim\sqrt{2/N}\) for \(N\) data points. Based on a threshold for \(\alpha=0.05\) and a \(2\sigma\) error for \(\chi^{2}_{\nu}\), we define a critical value, \(\chi^{2}_{\nu,\rm crit}\), as the reduced chi-square value corresponding to \(\alpha=0.05+2\sigma\). Based on these critical values, ranging from 3.5 to 5.0, depending on the number of elements involved, the relative elemental abundances for the polluted WDs examined here are in good agreement with CI chondrites, with 5 of the 8 WDs having values for \(\chi^{2}_{\nu}\) less than the associated critical values. The remaining WDs have values for \(\chi^{2}_{\nu}\) of 5.08, 7.3, and 4.9, making their fits to CI tentative. For context, we also calculate \(\chi^{2}_{\nu}\) for the bulk Earth, bulk silicate Earth, and terrestrial crustal rocks compared to CI chondrite, where we assume errors equal to the average WD error for each element ratioed to Mg, \(n_{z}/n_{\rm M_{g}}\). Note that bulk Earth and bulk silicate Earth (BSE) are indistinguishable from CI chondrite in this analysis using uncertainties associated with the WD observations of Mg, Al, Si, Ca, Ti, Cr, and Fe. The compositions of continental and oceanic crust, the latter represented by Mid-Ocean Ridge Basalt (MORB), are readily distinguished from CI chondrite in major elements using WD uncertainties (Figure 4). We
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Name & \(\log(n({\rm H})/n({\rm He}))\) & \(\log(n({\rm Be})/n({\rm He}))\) & \(\log(n({\rm O})/n({\rm He}))\) & \(\log(n({\rm Na})/n({\rm He}))\) & \(\log(n({\rm Mg})/n({\rm He}))\) \\ \hline GaiaJ0218+3625 & \(-6.03\pm 0.15\) & \(<-11.0\) & \(-5.53\pm 0.15\) & \(-7.11\pm 0.15\) & \(-6.64\pm 0.15\) \\ WD1244+498 & \(-5.12\pm 0.15\) & \(<-11.0\) & \(-5.77\pm 0.15\) & & \(-6.79\pm 0.15\) \\ SDSSJ1248+1005 & \(-5.18\pm 0.15\) & \(<-11.0\) & \(-5.44\pm 0.15\) & & \(-6.40\pm 0.15\) \\ WD1415+234 & \(-4.92\pm 0.15\) & \(<-11.0\) & \(-5.59\pm 0.15\) & & \(-5.82\pm 0.17\) \\ SDSSJ1734+6052 & \(-4.76\pm 0.15\) & \(<-10.3\) & \(-5.93\pm 0.15\) & & \(-6.62\pm 0.15\) \\ GaiaJ1922+4709 & \(-5.66\pm 0.15\) & \(<-10.4\) & \(-5.51\pm 0.15\) & & \(-6.14\pm 0.15\) \\ EC22211\(-\)2525 & \(-5.56\pm 0.15\) & \(<-11.0\) & \(-5.76\pm 0.15\) & & \(-6.52\pm 0.15\) \\ SDSSJ2248+2632 & \(-5.12\pm 0.15\) & \(<-10.5\) & \(-5.94\pm 0.15\) & & \(-6.52\pm 0.15\) \\ \hline Name & \(\log(n({\rm Al})/n({\rm He}))\) & \(\log(n({\rm Si})/n({\rm He}))\) & \(\log(n({\rm Ca})/n({\rm He}))\) & \(\log(n({\rm Ti})/n({\rm He}))\) & \(\log(n({\rm Cr})/n({\rm He}))\) \\ \hline GaiaJ0218+3625 & \(-7.3\pm 0.2\) & \(-6.50\pm 0.15\) & \(-7.81\pm 0.21\) & \(-9.43\pm 0.15\) & \(-8.68\pm 0.15\) \\ WD1244+498 & & \(-6.92\pm 0.15\) & \(-7.79\pm 0.17\) & \(-9.34\pm 0.15\) & \(-8.78\pm 0.16\) \\ SDSSJ1248+1005 & & \(-6.65\pm 0.15\) & \(-7.22\pm 0.17\) & \(-8.80\pm 0.15\) & \(-8.41\pm 0.15\) \\ WD1415+234 & & \(-6.25\pm 0.18\) & \(-7.40\pm 0.15\) & & \(-7.81\pm 0.15\) \\ SDSSJ1734+6052 & & \(-6.93\pm 0.15\) & \(-7.83\pm 0.17\) & & \\ GaiaJ1922+4709 & \(-6.9\pm 0.2\) & \(-6.02\pm 0.15\) & \(-7.53\pm 0.15\) & \(-9.05\pm 0.15\) & \(-8.30\pm 0.15\) \\ EC22211\(-\)2525 & \(-7.7\pm 0.3\) & \(-6.67\pm 0.15\) & \(-7.85\pm 0.20\) & \(-9.60\pm 0.15\) & \(-8.79\pm 0.15\) \\ SDSSJ2248+2632 & & \(-6.83\pm 0.15\) & \(-7.45\pm 0.23\) & & \\ \hline Name & \(\log(n({\rm Mn})/n({\rm He}))\) & \(\log(n({\rm Fe})/n({\rm He}))\) & & & \\ \hline GaiaJ0218+3625 & \(-8.84\pm 0.15\) & \(-6.85\pm 0.15\) & & & \\ WD1244+498 & & \(-6.58\pm 0.15\) & & & \\ SDSSJ1248+1005 & & \(-6.63\pm 0.15\) & & & \\ WD1415+234 & & \(-5.89\pm 0.15\) & & & \\ SDSSJ1734+6052 & & \(-6.85\pm 0.15\) & & & \\ GaiaJ1922+4709 & & \(-5.88\pm 0.15\) & & & \\ EC22211\(-\)2525 & & \(-6.84\pm 0.15\) & & & \\ SDSSJ2248+2632 & & \(-7.10\pm 0.27\) & & & \\ \hline \end{tabular} Note. – Abundances by number, n, relative to He and uncertainties for each of the WDs in this work. Where statistical uncertainties are small (\(<\)0.15 dex), we conservatively set them to 0.15 dex. We have included upper limits on Be abundances, which demonstrate that Be is not detected at the greatly elevated levels seen in two WDs in Klein et al. (2021). We list observed lines used for these abundance determinations in Table 2.
\end{table}
Table 3: Observed Atmospheric Elemental Abundances
see no evidence for crust-like compositions among the eight polluted WDs considered here.
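The goodness-of-fit test described above is straightforward to reproduce. The sketch below (ours, not from the paper) computes the reduced chi-square of the log element/Mg ratios against CI chondrite, together with one plausible reading of the critical-value definition; the exact convention for \(\chi^{2}_{\nu,\rm crit}\) and the input arrays are assumptions made for illustration.

```python
import numpy as np
from scipy import stats

def reduced_chi_square(log_zmg_wd, log_zmg_ci, sigma_wd):
    """Reduced chi-square of the WD log10(z/Mg) ratios against CI chondrite,
    over the elements used (e.g. Al, Si, Ca, Ti, Cr, Fe where available)."""
    log_zmg_wd, log_zmg_ci, sigma_wd = map(np.asarray, (log_zmg_wd, log_zmg_ci, sigma_wd))
    chi2 = np.sum(((log_zmg_wd - log_zmg_ci) / sigma_wd) ** 2)
    return chi2 / log_zmg_wd.size

def chi2_nu_critical(n_elements, alpha=0.05):
    """One reading of the critical value: the reduced chi-square at the alpha
    rejection threshold, inflated by 2*sigma with sigma = sqrt(2/N)
    (Andrae et al. 2010)."""
    chi2_nu_alpha = stats.chi2.ppf(1.0 - alpha, df=n_elements) / n_elements
    return chi2_nu_alpha + 2.0 * np.sqrt(2.0 / n_elements)

# Hypothetical four-element comparison:
print(reduced_chi_square([-1.0, -0.6, -1.1, -0.2], [-0.9, -0.7, -1.0, -0.3], [0.2] * 4))
print(chi2_nu_critical(4))   # ~3.8, within the 3.5-5.0 range quoted in the text
```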
In the examples presented above, we used the observed elemental ratios with no corrections for settling times. This tacitly assumes that the parent body accretion is in the buildup phase. We calculate the same \(\chi^{2}_{\nu}\) statistic to assess the goodness-of-fit for these WDs relative to the CI elemental ratios assuming the WDs are accreting material in a steady-state phase (Figure 5). Steady state is often assumed for WDs in which heavy element settling times are relatively short. Under this assumption, we find that for 3 of the 8 WDs, the \(\chi^{2}_{\nu}\) values relative to CI chondrite indicate better agreement with CI chondrite than for the buildup-phase assumption. However, with the steady-state assumption, still 5 of the 8 WDs are indistinguishable from CI chondrites (\(\chi^{2}_{\nu}<\chi^{2}_{\nu,crit.}\)). Therefore, regardless of whether these polluted WDs are assumed to be in the buildup phase or in steady state, they appear to be accreting bodies that are chondritic, or approximately chondritic, in composition. We note that for GaiaJ0218+3625 (irrespective of accretion phase) the abundance of Na/Mg is \(\simeq\)6\(\times\) the chondritic ratio. There is likely more work to be done in future analysis of GaiaJ0218+3625, but this particular enhanced relative abundance is not sufficient alone to reject the assessment that overall, the accreted bodies of this sample are broadly chondritic.
### Parent Body Size
In order to estimate parent body sizes, we calculate the minimum masses of the parent bodies accreting onto these eight WDs as the sum of the masses of all heavy elements in the convection zone (CVZ). We convert number abundance ratios from Table 3 to mass ratios and multiply by the mass of the convection zone, computed from evolution models from the Montreal White Dwarf Database (MWDD; Dufour et al., 2007)1. We find minimum masses that range from 2.8 \(\times\) 10\({}^{21}\)\(-\) 9.0 \(\times\) 10\({}^{22}\) g. These masses are consistent with some of the most massive asteroids in the solar system (\(\thicksim\)8 Flora \(-\) 10 Hygiea) and some of the mid-sized moons in the solar system (\(\thicksim\) Neptune's Larissa \(-\) Saturn's Enceladus). The immensity of these minima for parent body masses supports the conclusion that only the most massive of polluting objects will be observable in WDs (Trierweiler et al., 2022). Mass fluxes onto the WD atmosphere can be obtained by assuming steady state between accretion and settling. For this we use the CVZ pollution masses and settling times from Table A3. The derived fluxes range from 1.4 \(\times\) 10\({}^{8}\)\(-\) 8.5 \(\times\) 10\({}^{9}\) g s\({}^{-1}\), typical for polluted WDs under similar assumptions (e.g. Rafikov, 2011; Farihi et al., 2012; Wyatt et al., 2014; Xu et al., 2019) and would result in parent body masses that range from 2.1\(\times\) 10\({}^{21}\)\(-\) 1.4 \(\times\) 10\({}^{23}\) g, assuming accretion from a disk is sustained for roughly 5\(\times\)10\({}^{5}\) yr (Girven et al., 2012).
Footnote 1: [http://dev.montrealwhitedwarfdatabase.org/evolution.html](http://dev.montrealwhitedwarfdatabase.org/evolution.html)
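The bookkeeping behind the minimum masses and fluxes quoted above can be sketched as follows (our illustration, not the paper's code): each element's mass in the convection zone is its number ratio to He times its atomic-mass ratio times the CVZ mass (here assumed to be essentially pure He), and the steady-state flux divides each element's mass by its settling timescale. All numerical inputs in the example are placeholders.

```python
A_HE = 4.0026
ATOMIC_MASS = {"O": 15.999, "Mg": 24.305, "Si": 28.085, "Ca": 40.078, "Fe": 55.845}
SEC_PER_MYR = 3.156e13

def pollution_mass_and_flux(log_n_over_he, tau_myr, m_cvz_grams):
    """Minimum parent-body mass (sum of metal masses in the convection zone)
    and the total steady-state accretion flux in g/s."""
    masses = {z: 10.0 ** log_n_over_he[z] * ATOMIC_MASS[z] / A_HE * m_cvz_grams
              for z in log_n_over_he}
    total_mass = sum(masses.values())
    flux = sum(m / (tau_myr[z] * SEC_PER_MYR) for z, m in masses.items())
    return total_mass, flux

# Placeholder abundances and settling times for a single illustrative WD:
log_n = {"O": -5.8, "Mg": -6.5, "Si": -6.7, "Ca": -7.8, "Fe": -6.6}
tau = {"O": 0.9, "Mg": 0.9, "Si": 0.87, "Ca": 0.6, "Fe": 0.59}
print(pollution_mass_and_flux(log_n, tau, m_cvz_grams=2.3e27))
```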
### Oxygen and Oxidation State
We evaluate the oxidation state of the parent bodies accreting onto each WD by following the prescription introduced by Doyle et al. (2019) and improved in Doyle et al. (2020). We use the ratio of O\({}_{\rm rem}\)/Fe, where O\({}_{\rm rem}\) is the O remaining after assigning O to Mg, Si, Ca, and Al to form the oxides MgO, SiO\({}_{2}\), CaO, and Al\({}_{2}\)O\({}_{3}\), as an indicator for whether a WD will yield a recoverable oxygen fugacity (\(\Delta\)IW, see discussion below for complete definition for this parameter) value and error bounds. We calculate O\({}_{\rm rem}\) relative to Fe as:
\[\frac{\rm O_{rem}}{\rm Fe}=\frac{\rm O}{\rm Fe}-\frac{\rm Mg}{\rm Fe}-2\frac{\rm Si}{\rm Fe}-\frac{3}{2}\frac{\rm Al}{\rm Fe}-\frac{\rm Ca}{\rm Fe}. \tag{2}\]
For an ideal rock, in which Fe exists as ferrous iron (effective charge of 2+), the value of O\({}_{\rm rem}\)/Fe should be unity. Where O\({}_{\rm rem}\)/Fe \(>\) 1, an oxygen excess exists, suggesting an additional source for oxygen, often due to accretion of oxygen-bearing volatiles such as H\({}_{2}\)O from the parent body (we exclude the effect of Fe\({}^{3+}\) here, present as the oxide Fe\({}_{2}\)O\({}_{3}\), under the assumption that the ferric iron
Figure 3: Element/magnesium atomic ratios, \(z\)/Mg, for the parent body accreted by WD1244+498, assuming an increasing phase, relative to \(z\)/Mg in various rocks found in our solar system. We compare the calculated parent body elemental abundances accreted by WD1244+498 to CI chondrite (orange, Lodders, 2019), bulk silicate Earth (BSE) (red, McDonough, 2003), and the Earth’s continental crust (blue, Rudnick and Gao, 2014). The best match compositionally for the parent body accreting onto WD1244+498 is CI chondrite.
Figure 4: One-to-one comparison of major and minor rock forming elements (n\({}_{z}\)), ratioed to Mg (n\({}_{\rm Mg}\)) and CI chondrite (Lodders, 2019) for eight WDs. Abundances are from Table 3, representative of an increasing phase. Errors for WDs are propagated from model abundances and uncertainties using a Monte Carlo approach with a bootstrap of n=1. We report the goodness of fit using a reduced chi-square statistic, \(\chi^{2}_{\nu}\), using the elements Si, Fe, Ca, Al, Cr, and Ti, where available for each WD (see text), displayed in the bottom right corner of each plot. Generally, the elemental abundances from WD data show good agreement with CI chondrites (\(\chi^{2}_{\nu,{\rm crit.}}<\) 3.5-5.0, depending on which elements are used in the analysis, see text). For comparison, we calculate \(\chi^{2}_{\nu}\) statistics for known compositions of Earth rocks (bulk Earth (McDonough, 2003), bulk silicate Earth (BSE, McDonough, 2003), Mid-Ocean Ridge Basalt (MORB, Gale et al., 2013), and the Earth’s continental crust (Rudnick & Gao, 2014)) compared to CI chondrite. Bulk Earth and bulk silicate Earth are in good agreement with CI chondrite, revealing that WD-sized errors in the elements used (Ti, Cr, Ca, Al, Fe, and Si) are unable to distinguish between the two compositions in the data.
Figure 5: As in Figure 4, but assuming steady-state phase (SS) compositions for the eight WDs presented.
will be relatively minor, \(<10\%\) of all Fe, as it is in most solar-system rocks). Six of the eight WDs in this study have observed oxygen excesses implying water-rich bodies (\(\mathrm{O_{rem}/Fe>1}\); Table 4). Of the six WDs with oxygen excesses, five have an observed amount of H that can account for the excess oxygen assuming a buildup phase. Large abundances of H in helium-dominated WDs are either from primordial H (prior to the DA-to-DB evolution, Rolland et al., 2020) or due to the accumulation of H throughout accretion events, as H floats on the atmospheric surface (Gentile Fusillo et al., 2017; Izquierdo et al., 2021). Notably, a steady-state approximation decreases, but does not entirely remove, the oxygen excesses (Table 4).
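A minimal numerical version of Equation 2 is sketched below (ours, not from the paper); abundances are passed as linear number abundances on any common scale, and where Al is unobserved a chondritic Al/Ca ratio can be substituted, as described in Section 4.4. The example input is hypothetical.

```python
def o_rem_over_fe(n):
    """Oxygen left after forming MgO, SiO2, Al2O3 and CaO, relative to Fe
    (Eq. 2).  `n` maps element name -> linear number abundance, e.g. n_Z/n_He."""
    o_rem = n["O"] - n["Mg"] - 2.0 * n["Si"] - 1.5 * n.get("Al", 0.0) - n["Ca"]
    return o_rem / n["Fe"]

# Hypothetical abundances (linear n_Z/n_He):
n = {"O": 3.0e-6, "Mg": 2.3e-7, "Si": 2.1e-7, "Al": 2.0e-8, "Ca": 1.6e-8, "Fe": 2.4e-7}
print(o_rem_over_fe(n))   # > 1 would indicate an oxygen excess (e.g. accreted water ice)
```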
The level of oxidation in a geochemical system is described as the non-ideal partial pressure of \(\mathrm{O_{2}}\), or oxygen fugacity (\(f_{\mathrm{O_{2}}}\)), and has implications for the geochemistry and geophysics of rocky bodies. In the planet formation regime, oxygen fugacities are often compared with that defined by the equilibrium reaction between metallic iron (Fe) and FeO, which in mineral form is wistite (FeO):
\[\mathrm{Fe}+\frac{1}{2}\mathrm{O_{2}}\rightleftharpoons\mathrm{FeO}. \tag{3}\]
This iron-wistite (IW) reference reaction assumes pure Fe metal and FeO oxide. By reporting \(f_{\mathrm{O_{2}}}\) of a rock to a reference reaction such as Equation 3, the thermodynamics simplifies to a ratio of activities, or mole fractions (see Appendix in Doyle et al. (2019) for a full derivation). The intrinsic oxygen fugacity of a rock or rocky body can thus be described relative to that for the IW reference, such that
\[\Delta\mathrm{IW}\equiv\log\left(f_{\mathrm{O_{2}}}\right)_{\mathrm{rock}}- \log\left(f_{\mathrm{O_{2}}}\right)_{\mathrm{IW}}=2\mathrm{log}\left(\frac{x _{\mathrm{FeO}}^{\mathrm{rock}}}{x_{\mathrm{Fe}}^{\mathrm{metal}}}\right). \tag{4}\]
This simplification results in an equation for \(\Delta\mathrm{IW}\) that depends solely on the mole fraction of FeO in the rock (\(x_{\mathrm{FeO}}^{\mathrm{rock}}\)) and the mole fraction of Fe in the metal (\(x_{\mathrm{Fe}}^{\mathrm{metal}}\)).
Where \(\mathrm{O_{rem}/Fe<1}\), a dearth of oxygen exists, suggesting iron is present in the form of Fe metal. Of the eight WDs reported in Table 4, three have lower bounds with values for \(\mathrm{O_{rem}/Fe<1}\) (WD1415+234, GaiaJ1922+4709, and SDSSJ2248+2632). In such cases, lower bounds on the level of oxidation, measured as oxygen fugacity, cannot be obtained.
As in Doyle et al. (2019) and Doyle et al. (2020), we use the oxides \(\mathrm{SiO_{2}}\), MgO, FeO, CaO and \(\mathrm{Al_{2}O_{3}}\) to characterize the chemical composition of the accreting rocks. Where Al is not observed, we assume a chondritic Al/Ca ratio and set uncertainties equal to 0.3 dex. Using oxides ensures charge balance and provides a means of tracking oxygen that was in the form of rock. We first assign oxygen to Mg, Si, and Ca to form these oxides, and then we assign the remaining oxygen, \(\mathrm{O_{rem}}\), to Fe to form FeO. In this way, we can assess what portion of Fe can be paired with O and is presumed to have existed as FeO in the rock (\(x_{\mathrm{FeO}}^{\mathrm{rock}}\)) versus what portion of Fe existed as Fe metal (i.e. where there is a deficit of O). For application of Equation 4 we set \(x_{\mathrm{Fe}}^{\mathrm{metal}}=0.85\), consistent with estimates for Fe metal in the core of differentiated bodies from our solar system. We propagate measurement uncertainties for the polluted WDs using a Monte Carlo approach with a bootstrap of n=1. We report our calculated \(\Delta\mathrm{IW}\) values in Table 4.
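The sketch below illustrates one way to implement this \(\Delta\)IW calculation (our construction following the description above, not the paper's code): oxygen is assigned to MgO, SiO2, Al2O3 and CaO, the remainder sets the FeO moles, and Equation 4 is evaluated with \(x_{\rm Fe}^{\rm metal}=0.85\). Monte Carlo error propagation over the log abundances is omitted here, and the example abundances are hypothetical.

```python
import numpy as np

def delta_iw(n, x_fe_metal=0.85):
    """Delta IW = 2 log10(x_FeO^rock / x_Fe^metal), with FeO set by the oxygen
    remaining after MgO, SiO2, Al2O3 and CaO are formed (cf. Doyle et al. 2019)."""
    al = n.get("Al", 0.0)
    n_feo = n["O"] - n["Mg"] - 2.0 * n["Si"] - 1.5 * al - n["Ca"]
    if n_feo <= 0.0:
        return -np.inf                     # oxygen deficit: no recoverable value
    n_feo = min(n_feo, n["Fe"])            # FeO cannot exceed the available Fe
    oxide_moles = n["Mg"] + n["Si"] + n["Ca"] + 0.5 * al + n_feo  # MgO, SiO2, CaO, Al2O3, FeO
    x_feo = n_feo / oxide_moles
    return 2.0 * np.log10(x_feo / x_fe_metal)

n = {"O": 3.0e-6, "Mg": 2.3e-7, "Si": 2.1e-7, "Al": 2.0e-8, "Ca": 1.6e-8, "Fe": 2.4e-7}
print(delta_iw(n))   # ~ -0.8 for these placeholder values
```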
In our solar system, most rocky bodies are oxidized relative to a hydrogen-rich solar gas (\(\Delta\mathrm{IW}=-6\)), with \(\Delta\mathrm{IW}\) values greater than \(-3\), corresponding to \(x_{\mathrm{FeO}}^{\mathrm{rock}}>0.025\). Only Mercury and enstatite chondrites are "reduced" (\(\Delta\mathrm{IW}<-3\); \(x_{\mathrm{FeO}}^{\mathrm{rock}}<0.025\)). In general, the WDs in this study have \(\Delta\mathrm{IW}\) values similar to chondrites, consistent with their chondritic bulk chemistry (Figures 4 and 5). However, there are two WDs in this study for which lower bounds on \(\Delta\mathrm{IW}\) cannot be obtained (GaiaJ1922+4709 and SDSSJ2248+2632), and one for which neither a median nor a lower bound can
Figure 6: \(\Delta\mathrm{IW}\) vs \(\mathrm{O_{rem}/Fe}\). The value of \(\mathrm{O_{rem}/Fe}\) should be unity for ideal rocks, represented as the dotted silicate line. Where \(\mathrm{O_{rem}/Fe>1}\), an oxygen excess exists, and where \(\mathrm{O_{rem}/Fe<1}\), a dearth of oxygen exists. Where errors allow \(\mathrm{O_{rem}/Fe<1}\), lower bounds in \(\Delta\mathrm{IW}\) cannot be obtained. Such is the case for three WDs in this study (GaiaJ1922+4709, SDSSJ2248+2632, and WD1415+234). These three WDs are those in the Figure where lower error bounds in \(\mathrm{O_{rem}/Fe}\) plot below the ideal value for silicates.
be obtained (WD1415+234). Situations like these arise where negative \(x_{\rm FeO}^{\rm rock}\) values are a significant fraction of the Monte Carlo draws for error propagation. This in turn comes about where there is either a relative scarcity of oxygen relative to the propagated errors or abundance uncertainties are large (refer to Section 2.3 and Figure 3 in Doyle et al. (2020) for a more detailed discussion).
Therefore, the calculation of \(\rm O_{rem}/Fe\) is a good indicator for whether a WD will yield a recoverable \(\Delta\)IW value and error bounds. Indeed, the same three WDs that have lower bounds with negative values for \(\rm O_{rem}/Fe\) have unrecoverable lower error bounds for \(\Delta\)IW (Figure 6). It is worth noting that one of these WDs, GaiaJ1922+4709, has the poorest fit to CI chondrite, based on the \(\chi_{\nu}^{2}\) statistics presented in Figure 4. It is also worth noting that another of these WDs, SDSSJ2248+2632, has a median value for \(\rm O_{rem}/Fe\) that indicates excess oxygen, but large uncertainties for Fe (Table 3). Indeed, it is possible that the parent bodies accreting onto these WDs had less FeO in the rocky portion of the body and were more reduced than CI chondrite. While these WDs have oxidation states that are less constrained, the median values for \(\Delta\)IW calculated for this subset of polluted WDs generally add to the growing number of chondrite-like parent bodies accreting onto WDs in both bulk composition and degree of oxidation.
## 5 Conclusions
In this work we present observations for eight heavily polluted DB white dwarfs and relative elemental abundances for the rocky parent bodies that accreted onto them. All of the WDs in this data set required new designations or updates of spectral types. In a step toward some needed clarification of the spectral classification system, we measured and ordered the "strongest" spectral features according to equivalent widths (not line depths), which determined our assignment of spectral types as DBAZ or DBZA.
We assembled our dataset by comparing GALEX colors to \(T_{\rm eff}\) for white dwarf candidates presented in Gentile Fusillo et al. (2019), as well as from known polluted DB white dwarfs. This comparison reveals a distinct dichotomy between DA and DB white dwarfs, which we used to target DB white dwarfs to search for those that are heavily polluted. The white dwarfs presented here were chosen due to their detections of all four major rock-forming elements (O, Mg, Si, Fe). Through this work, we have increased the sample of known oxygen-bearing white dwarfs polluted by rocky parent bodies by \(\thicksim 33\%\)2.
Footnote 2: see also note added in proof
We assessed the bulk compositions and oxidation states of the accreting bodies, and find that they are indistinguishable from chondritic in composition. This adds to the growing body of evidence suggesting that extrasolar rocky bodies closely resemble those in our solar system, and do not, as a whole, yield unusual or unique compositions. This result is not dependent on assumptions of an increasing phase versus a steady-state phase of accretion.
Six of the eight white dwarfs in this study have observed oxygen excesses implying volatiles, in various abundances, in the parent bodies (a trait shared by CI chondrites). Generally, the oxidation states of these parent bodies also corroborate the conclusion that the
\begin{table}
\begin{tabular}{l c c c} \hline \hline Name & \(\Delta\)IW & \(\rm O_{rem}/Fe\) & \(\rm O_{rem}/Fe\) (steady) \\ \hline GaiaJ0218+3625 & \(-1.29^{+0.27}_{-0.37}\) & \(14.16^{+11.02}_{-7.52}\) & \(9.28^{+8.68}_{-6.68}\) \\ WD1244+498 & \(-0.54^{+0.17}_{-0.26}\) & \(4.69^{+3.53}_{-2.34}\) & \(3.01^{+2.65}_{-2.03}\) \\ SDSSJ1248+1005 & \(-1.09^{+0.24}_{-0.34}\) & \(11.23^{+8.58}_{-5.68}\) & \(7.11^{+6.40}_{-4.90}\) \\ WD1415+234 & \(<-0.87\) & \(-0.18^{+0.91}_{-0.94}\) & \(-0.30^{+0.71}_{-0.79}\) \\ SDSSJ1734+6052 & \(-1.04^{+0.26}_{-0.43}\) & \(4.62^{+4.13}_{-2.97}\) & \(2.72^{+3.24}_{-2.64}\) \\ GaiaJ1922+4709 & \(-1.78^{+1.17}_{-1.17}\) & \(0.17^{+1.01}_{-0.97}\) & \(0.03^{+0.89}_{-0.88}\) \\ EC22211\(-\)2525 & \(-1.25^{+0.28}_{-0.44}\) & \(6.73^{+6.07}_{-4.36}\) & \(4.27^{+4.82}_{-3.89}\) \\ SDSSJ2248+2632 & \(-1.74^{+0.49}_{-0.49}\) & \(5.01^{+8.83}_{-5.48}\) & \(2.44^{+5.77}_{-4.73}\) \\ \hline \end{tabular} Note. – Calculated \(\Delta\)IW and remaining O relative to Fe, along with error bounds for the WDs in this study. \(\rm O_{rem}/Fe\) for an ideal rock should be unity, and variations from this value are due to oxygen either in excess or shortage of that required to form MgO, SiO\({}_{2}\), CaO, and FeO. Measurement uncertainties are propagated using a Monte Carlo approach with a bootstrap of n=1; see Section 4.4 for discussion about absent lower error bounds for \(\Delta\)IW. Generally, a steady-state assumption reduces the remaining oxygen, but does not entirely remove the excess, implying that the 6 WDs with oxygen excesses in the steady-state calculation have some amount of oxygen-bearing volatiles, such as \(\rm H_{2}O\) ice, in the parent body.
\end{table}
Table 4: Oxidation States Determined from WD Data in this Study
accreting bodies are chondritic. Three exceptions exist in which oxidation states are less constrained and could be more reduced than chondritic (lower oxygen fugacity values), and one of these white dwarfs (GaiaJ1922+4709) is the same WD that shows the poorest fit to CI chondrite. This result is in accordance with the assessment that perhaps 1/4 of polluted white dwarfs may be consistent with more reduced parent bodies that cannot be identified by use of this method (Doyle et al., 2020). Overall, our results are consistent with the emerging view that extrasolar rocks across the solar neighborhood are broadly similar to rocky bodies in our solar system.
This work was supported by NASA 2XRP grant No. 80NSSC20K0270 to EDY. C.M. and B.Z. acknowledge support from NSF grants SPG-1826583 and SPG-1826550. S. Xu is supported by the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation, on behalf of the Gemini partnership of Argentina, Brazil, Canada, Chile, the Republic of Korea, and the United States of America.
The authors thank Simon Blouin (University of Victoria) for helpful discussions about abundance modeling, and Jay Farihi (University College London) for helpful discussions regarding WD spectral type classifications. We also thank the anonymous reviewer for their comments, which improved the manuscript.
Much of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
Similarly, we acknowledge that Lick Observatory resides on land traditionally inhabited by the Muwekma Ohlone Tribe of Native Americans. Research at Lick Observatory is partially supported by a generous gift from Google.
This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile.
This work has made use of data from the European Space Agency (ESA) mission Gaia ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This research has made use of NASA's Astrophysics Data System, the SIMBAD database, and the VizieR service. This research has made use of IRAF. IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
The following atomic spectral line databases were consulted: Vienna Atomic Line Database (VALD), Kurucz (1995, R. L. Kurucz and B. Bell, CD-ROM No. 23, Cambridge, MA: Smithsonian Astrophysical Observatory), NIST Standard Reference Database 78, and van Hoof (2018).
Facilities: Shane (Kast), Keck I (HIRES), Keck II (ESI), Magellan (MagE)
_Note added in proof._ Contemporaneous with this paper, Izquierdo et al. (2023) reported oxygen detections in ten polluted WDs, though parent body composition analyses have not yet been carried out.
## Appendix
This appendix presents details of spectral line measurements, broadband spectral energy distributions (SEDs), diffusion timescales and accretion rates. Radial velocities (RVs) are given in Table A1, equivalent widths (EWs) are given in Table A2, SEDs are displayed in Figure A1, and accretion-diffusion data are reported in Table A3. Equivalent widths were measured by profile fitting using IRAF's _splot_ task, and RVs were calculated as Doppler shifts of the measured line centers relative to laboratory wavelengths (see Klein et al., 2021).
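As a minimal illustration of that conversion (ours, not the paper's code), the non-relativistic Doppler relation turns a measured line-center shift into a radial velocity; the example wavelengths are placeholders.

```python
C_KM_S = 299_792.458   # speed of light in km/s

def radial_velocity(lambda_obs, lambda_lab):
    """Radial velocity (km/s) from the Doppler shift of a measured line center
    relative to its laboratory wavelength (both in the same units, e.g. Angstroms in air)."""
    return C_KM_S * (lambda_obs - lambda_lab) / lambda_lab

# Placeholder: a Ca II K line center measured 0.5 A redward of 3933.663 A.
print(f"{radial_velocity(3934.163, 3933.663):.1f} km/s")   # ~38.1 km/s
```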
Half of the stars in this sample (WD1415+234, SDSSJ1734+6052, GaiaJ1922+4709, SDSSJ2248+2632) display absorption lines of the Na i resonance doublet \(\lambda\)5889.951/5895.924 Å (NaD) with RVs that are significantly blue-shifted from the photospheric averages based on many photospheric lines (see Table A1). In some stars non-photospheric Ca ii \(\lambda\)3933.663 Å (CaK) features are also observed. Based on results from Redfield & Linsky (2008, [http://lism.wesleyan.edu/LISMdynamics.html](http://lism.wesleyan.edu/LISMdynamics.html)) and Welsh et al. (2010), it is probably the case that WD1415+234, SDSSJ1734+6052, and GaiaJ1922+4709 host interstellar medium (ISM) features.
On the other hand, if the non-photospheric RV is blue-shifted from the photospheric RV by an amount equal to or somewhat less than the gravitational redshift of the WD, then it could be that the non-photospheric absorption is occurring in circumstellar (CS) gas (co-moving with the WD, but not fully in its photospheric gravitational well). Referring to Table A1, and considering an uncertainty range of 3 km s\({}^{-1}\) in gravitational redshift plus 2 km s\({}^{-1}\) in photospheric RV, a CS origin is reasonable for only two WDs: SDSSJ1734+6052 and SDSSJ2248+2632. However, we cannot rule out the possibility that absorption may be due to ISM material (especially at distances \(>\) 80 pc; e.g., see Figure 7 of Welsh et al., 2010) or could even possibly have some association with accretion-related outflows.
Unlike the four aforementioned WDs, the NaD RV in GaiaJ0218+3625 agrees exactly with the average photospheric RV. There may be a slight chance that an ISM cloud has the unusually high RV of 39 km s\({}^{-1}\)(e.g., Redfield & Linsky, 2008 and Welsh et al., 2010) and is coincidentally the same as the WD RV. We think this unlikely, and deem the Na line in GaiaJ0218+3625 to originate in the WD photosphere and be associated with the polluting parent body.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline WD & & 0218+3625 & 22211\(-\)2525 & 1244+498 & 1248+1005 & 1922+4709 & 1734+6052 & 1415+234 & 2248+2632 \\ T\({}_{\rm eff}\) & & 14700 K & 14740 K & 15150 K & 15180 K & 15500 K & 16340 K & 17300 K & 17370 K \\ \hline Ion & \(\lambda\) (Å) & & & & Equivalent Width (mÅ) & & & \\ \hline O i & 7771.944 & 200 (27) & 134 (15) & 138 (22) & 266 (59) & 152 (13) & 93 (22) & 80 (11) & 43 (18) \\ O i & 7774.166 & 129 (30) & 88 (16) & 109 (26) & 106 (28) & 124 (15) & 31 (27) & 35 (9) & 31 (13) \\ O i & 7775.388 & 100 (19) & 81 (19) & 55 (33) & 71 (21) & 82 (12) & 19 (7) & 29 (10) & \\ O i & 8446.359 & 149 (57) & 167 (33) & 75 (22) & 395 (141) & 134 (26) & & 63 (34) & \\ Na i & 5889.951 & 45 (10) & & & & & & \\ Mg i & 3829.355 & & 11 (3) & & & & & \\ Mg i & 3832.304 & 31 (5) & 42 (3) & 15 (5) & 43 (8) & 39 (6) & & 11 (3) & \\ Mg i & 3838.292 & 58 (4) & 74 (3) & 65 (8) & 76 (16) & 73 (7) & & 47 (7) & \\ Mg i & 5172.684 & & 9 (2) & & & 15 (4) & & \\ Mg i & 5183.604 & 17 (3) & 31 (4) & & & 32 (10) & & \\ Mg ii & 4481\({}^{\dag}\) & 321 (11) & 400 (8) & 216 (11) & 368 (42) & 374 (25) & 160 (15) & 276 (8) & 97 (13) \\ Mg ii & 7877.054 & & 208 (64) & & 179 (68) & 366 (49) & & 91 (41) & \\ Mg ii & 7896.366 & 309 (72) & 337 (51) & & 230 (82) & 473 (40) & & 195 (41) & 104 (36) \\ Al ii & 3587\({}^{\dag}\) & 163 (25) & 39 (14) & & & & 241 (45) & & \\ Si ii & 3853.665 & 19 (4) & 14 (3) & & & 35 (6) & 8 (4) & 11 (4) & \\ Si ii & 3856.018 & 105 (3) & 83 (5) & 54 (6) & 72 (5) & 186 (6) & 33 (5) & 82 (3) & 30 (6) \\ Si ii & 3862.595 & 70 (2) & 48 (4) & 19 (4) & 46 (5) & 128 (8) & 21 (4) & 58 (3) & 20 (3) \\ Si ii & 4128.054 & 59 (3) & 36 (5) & 13 (5) & 25 (7) & 133 (13) & 13 (4) & 39 (5) & \\ Si ii & 4130.894 & 92 (3) & 56 (4) & 35 (5) & 80 (13) & 192 (12) & 21 (4) & 58 (5) & \\ Si ii & 5041.024 & 41 (8) & 28 (9) & & & 152 (13) & & 36 (7) & \\ Si ii & 5055.984 & 85 (6) & 58 (12) & 30 (8) & 114 (17) & 203 (27) & 27 (8) & 54 (8) & \\ Si ii & 5957.559 & & & & & & 27 (7) & & \\ Si ii & 5978.930 & & & & & 81 (13) & & \\ Si ii & 6347.109 & 232 (14) & 200 (16) & 104 (10) & 217 (37) & 413 (13) & 126 (37) & 186 (12) & 97 (7) \\ Si ii & 6371.371 & 153 (12) & 103 (12) & 46 (7) & 158 (29) & 255 (10) & 63 (7) & 89 (11) & 50 (6) \\ Ca ii & 3158.869 & 158 (7) & 136 (8) & 128 (8) & 255 (12) & 187 (11) & 54 (7) & 77 (7) & 26 (5) \\ Ca ii & 3179.331 & 175 (15) & 181 (10) & 154 (13) & 379 (15) & 246 (22) & 82 (15) & 93 (8) & 47 (6) \\ Ca ii & 3181.275 & 52 (6) & 43 (9) & & 41 (7) & 52 (13) & 6 (3) & & \\ Ca ii & 3706.024 & 39 (12) & 31 (4) & & 87 (8) & 66 (8) & & 9 (2) & \\ Ca ii & 3736.902 & 86 (3) & 86 (3) & 93 (6) & 160 (6) & 137 (10) & 19 (4) & 27 (3) & \\ Ca ii & 3933.663 & 595 (20) & 710 (11) & 664 (33) & 1245 (38) & 528 (23) & 256 (8) & 274 (4) & 169 (7) \\ Ca ii & 3968.469 & 338 (15) & 391 (13) & 430 (59) & 747 (50) & 284 (17) & 149 (3) & 154 (4) & 111 (4) \\ Ca ii & 8498.023 & & & & 173 (53) & & & & \\ Ca ii & 8542.091 & 305 (61) & 305 (40) & 228 (24) & 528 (57) & 255 (18) & 120 (17) & 60 (26) & 88 (31) \\ Ca ii & 8662.141 & 192 (50) & 193 (27) & 175 (24) & 386 (52) & 96 (16) & 53 (16) & 45 (12) & \\ Ti ii & 3168.518 & & & & & & & & \\ Ti ii & 3234.520 & 20 (2) & 15 (2) & & 45 (8) & & & \\ \hline \end{tabular}
\end{table}
Table A2: Photospheric Absorption Line Measurements
**Table A2 (continued)**
\begin{tabular}{l c c c c c c c c c} \hline \hline WD & & 0218\(+\)3625 & 22211\(-\)2525 & 1244\(+\)498 & 1248\(+\)1005 & 1922\(+\)4709 & 1734\(+\)6052 & 1415\(+\)234 & 2248\(+\)2632 \\ T\({}_{\rm eff}\) & & 14700 K & 14740 K & 15150 K & 15180 K & 15500 K & 16340 K & 17300 K & 17370 K \\ \hline Ion & \(\lambda\) (Å) & & & & & Equivalent Width (mÅ) & & & \\ \hline Ti ii & 3236.578 & 16 (3) & 10 (2) & & 40 (7) & & & & \\ Ti ii & 3239.044 & & 11 (3) & & 34 (7) & & & & \\ Ti ii & 3241.994 & 9 (2) & 9 (2) & & 18 (4) & & & \\ Ti ii & 3248.598 & & & & 16 (7) & & & & \\ Ti ii & 3322.941 & & & & 35 (7) & & & & \\ Ti ii & 3341.880 & 16 (3) & 8 (3) & & 36 (4) & & & \\ Ti ii & 3349.037 & 13 (2) & 8 (2) & 14 (9) & 47 (11) & 26 (5) & & \\ Ti ii & 3349.408 & 36 (3) & 26 (2) & 19 (4) & 56 (6) & 48 (7) & & \\ Ti ii & 3361.218 & 17 (3) & 13 (2) & & 48 (4) & & & \\ Ti ii & 3372.800 & 16 (22) & 11 (2) & & 30 (3) & & & \\ Ti ii & 3383.768 & 12 (4) & 11 (2) & & 29 (4) & & & \\ Ti ii & 3387.846 & & & & 34 (13) & & & \\ Ti ii & 3685.189 & 17 (6) & 14 (4) & & 29 (5) & & & \\ Ti ii & 3759.296 & 11 (2) & 6.3 (1.6) & & 16 (3) & & & \\ Ti ii & 3761.323 & 6 (1) & 6.1 (1.3) & & 13 (3) & & & \\ Cr ii & 3118.646 & 13 (4) & 18 (4) & & 31 (8) & 19 (6) & 12 (3) & \\ Cr ii & 3120.359 & 28 (5) & 30 (4) & & 31 (7) & 38 (9) & 12 (2) & \\ Cr ii & 3124.973 & 32 (5) & 37 (4) & 27 (6) & 43 (9) & 40 (11) & 27 (6) & \\ Cr ii & 3132.053 & 40 (5) & 44 (4) & 28 (5) & 53 (7) & 65 (9) & 32 (3) & \\ Cr ii & 3147.220 & & 9 (3) & & 14 (4) & & & & \\ Cr ii & 3180.693 & 18 (4) & 16 (4) & & & & & & \\ Cr ii & 3197.075 & 8 (3) & 8 (3) & & & & & & \\ Cr ii & 3368.041 & 19 (2) & 15 (2) & & 22 (3) & 28 (7) & 12 (3) & \\ Cr ii & 3408.757 & 15 (3) & 13 (2) & & & & & & 9 (2) \\ Cr ii & 3422.732 & 13 (2) & & & & & & & \\ Cr ii & 3433.295 & 9 (2) & & & & & & & \\ Mn ii & 3441.988 & 17 (2) & & & & & & & \\ Mn ii & 3460.316 & 15 (3) & & & & & & & \\ Fe i & 3570.097 & & & & & 25 (7) & & & \\ Fe i & 3581.195 & 7 (2) & & & & & 34 (5) & & \\ Fe i & 3734.864 & 5 (2) & & & & & 25 (5) & & \\ Fe i & 3749.485 & & & & & & 17 (5) & & \\ Fe ii & 3135.360 & & 25 (4) & & & & 54 (9) & & 16 (3) \\ Fe ii & 3144.752 & & 10 (3) & & & & 23 (8) & & \\ Fe ii & 3154.202 & 44 (4) & 61 (5) & 67 (17) & 62 (10) & 79 (10) & 9 (4) & 40 (3) \\ Fe ii & 3162.798 & & 11 (3) & & & 40 (6) & & \\ Fe ii & 3167.857 & 36 (7) & 23 (2) & 37 (14) & 20 (4) & 77 (9) & & 32 (4) \\ Fe ii & 3170.337 & & & & & & 24 (6) & & \\ Fe ii & 3177.532 & 22 (6) & 17 (3) & 17 (4) & & 56 (8) & & 34 (6) \\ \hline \end{tabular}
**Table A2**_continued_
\begin{tabular}{l c c c c c c c c c} \multicolumn{10}{c}{**Table A2** _(continued)_} \\ \hline \hline WD & & 0218+3625 & 22211\(-\)2525 & 1244+498 & 1248+1005 & 1922+4709 & 1734+6052 & 1415+234 & 2248+2632 \\ T\({}_{\rm eff}\) & & 14700 K & 14740 K & 15150 K & 15180 K & 15500 K & 16340 K & 17300 K & 17370 K \\ \hline Ion & \(\lambda\) (Å) & & & & & Equivalent Width (mÅ) & & & \\ \hline Fe ii & 3180.149 & & & & & 27 (6) & & & \\ Fe ii & 3183.111 & 15 (4) & 16 (5) & & & 38 (7) & & 10 (3) & \\ Fe ii & 3186.737 & 16 (4) & 21 (5) & 23 (5) & & 44 (9) & & 14 (4) & \\ Fe ii & 3192.909 & 19 (4) & 15 (3) & 12 (3) & & 45 (9) & & 12 (2) & \\ Fe ii & 3193.799 & 42 (6) & 32 (4) & 35 (5) & 44 (8) & 79 (7) & & 34 (4) & \\ Fe ii & 3196.070 & 18 (3) & 24 (3) & 17 (3) & 27 (6) & 62 (15) & & 14 (3) & \\ Fe ii & 3210.444 & 28 (4) & 39 (4) & 29 (5) & 38 (6) & 78 (9) & & 27 (3) & \\ Fe ii & 3212.017 & & 8 (3) & & & 41 (8) & & \\ Fe ii & 3213.309 & 55 (4) & 68 (3) & 52 (4) & 41 (4) & 104 (15) & 17 (4) & 39 (3) & 9 (3) \\ Fe ii & 3227.742 & 69 (6) & 84 (4) & 97 (5) & 95 (9) & 151 (7) & 26 (5) & 60 (7) & 16 (5) \\ Fe ii & 3231.706 & & & & & & 24 (5) & & \\ Fe ii & 3232.785 & 8 (2) & 8 (2) & 17 (4) & & 20 (6) & & 10 (3) & \\ Fe ii & 3237.399 & & & & & & 30 (9) & & \\ Fe ii & 3237.820 & 10 (3) & 7 (2) & & 16 (6) & 32 (7) & & 7 (2) & \\ Fe ii & 3243.723 & 11 (5) & 12 (2) & 16 (4) & & 35 (6) & & 9 (3) & \\ Fe ii & 3247.175 & 27 (4) & 15 (3) & 28 (6) & 17 (4) & 75 (8) & & 18 (2) & \\ Fe ii & 3255.887 & 10 (2) & 11 (2) & & & 37 (6) & & 13 (2) & \\ Fe ii & 3258.771 & 12 (2) & 20 (2) & 10 (3) & 18 (5) & 45 (6) & & 21 (4) & \\ Fe ii & 3259.051 & 24 (3) & 19 (2) & 20 (4) & 22 (5) & 68 (8) & & 24 (5) & \\ Fe ii & 3276.604 & & & & & & 20 (5) & & \\ Fe ii & 3277.348 & 13 (2) & 13 (2) & & & 30 (5) & & 9 (2) & \\ Fe ii & 3281.292 & & & & & & 19 (5) & & \\ Fe ii & 3289.354 & & & & & & 28 (5) & & \\ Fe ii & 3323.063 & & 9 (3) & & & 36 (8) & & \\ Fe ii & 3468.678 & & & & & 30 (4) & & \\ Fe ii & 3493.470 & 11 (3) & 12 (2) & & & 37 (6) & & 14 (2) & \\ Fe ii & 3748.483 & & & & & & 24 (6) & & \\ Fe ii & 4233.170 & 12 (2) & 11 (3) & & & 42 (5) & & 10 (3) & \\ Fe ii & 4351.769 & & & & & & 23 (6) & & \\ Fe ii & 4522.634 & & & & & & 19 (5) & & \\ Fe ii & 4549.474 & & & & & & 54 (11) & & \\ Fe ii & 4583.837 & 13 (2) & & & & & 50 (7) & & \\ Fe ii & 4923.927 & 23 (10) & 47 (16) & 29 (13) & 32 (11) & 40 (11) & 9 (3) & 13 (3) & \\ Fe ii & 5001.959 & & & & & & 32 (10) & & \\ Fe ii & 5018.440 & 27 (5) & 31 (11) & 34 (7) & 37 (8) & 68 (10) & 11 (4) & 31 (6) & \\ Fe ii & 5035.708 & & & & & & 16 (5) & & \\ Fe ii & 5100.727 & & & & & 40 (6) & & & \\ Fe ii & 5169.033 & 45 (3) & 56 (6) & 53 (7) & 47 (7) & 143 (12) & 13 (4) & 47 (5) & 8 (2) \\ Fe ii & 5197.577 & & & & & 19 (4) & & & \\ \hline \end{tabular}
**Table A2** _continued_
\begin{tabular}{l c c c c c c c c} \multicolumn{11}{c}{**Table A2 (continued)**} \\ \hline \hline WD & 0218\(+\)3625 & 22211\(-\)2525 & 1244\(+\)498 & 1248\(+\)1005 & 1922\(+\)4709 & 1734\(+\)6052 & 1415\(+\)234 & 2248\(+\)2632 \\ T\({}_{\rm eff}\) & 14700 K & 14740 K & 15150 K & 15180 K & 15500 K & 16340 K & 17300 K & 17370 K \\ \hline Ion & \(\lambda\) (Å) & & & & Equivalent Width (mÅ) & & & \\ \hline Fe ii & 5216.863 & & & & 21 (7) & & \\ Fe ii & 5227.481 & & & & 78 (10) & & \\ Fe ii & 5234.625 & & & & 24 (3) & & \\ Fe ii & 5247.952 & & & & 19 (6) & & \\ Fe ii & 5251.233 & & & & 21 (7) & & \\ Fe ii & 5260.259 & & & & 86 (10) & & \\ Fe ii & 5276.002 & & & & 23 (4) & & \\ Fe ii & 5291.666 & & & & 21 (5) & & \\ Fe ii & 5316.615 & 15 (3) & 18 (7) & & 51 (6) & & \\ Fe ii & 5339.585 & & & & 47 (16) & & \\ Fe ii & 5362.869 & & & & 17 (3) & & \\ Fe ii & 5506.195 & & & & 45 (11) & & \\ \hline \end{tabular} Note. –Wavelengths are in air. EW measurements and uncertainty estimates were made using IRAF’s task _splot_ as described in Klein et al. (2021).
\({}^{\dagger}\) blended multiplet \(-\) the EW is the total for the blended feature.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline Name & \(\tau_{\rm Al}\) & \(\tau_{\rm Ca}\) & \(\tau_{\rm Mg}\) & \(\tau_{\rm Si}\) & \(\tau_{\rm Fe}\) & \(\tau_{\rm O}\) & \(\tau_{\rm Sia}\) & \(\tau_{\rm Ti}\) & \(\tau_{\rm Cr}\) & \(\tau_{\rm Mn}\) \\ & Myr & & & & & & & & & \\ \hline GaiaJ0218+3625 & 1.71 & 1.19 & 1.77 & 1.73 & 1.16 & 1.77 & 1.70 & 1.09 & 1.12 & 1.12 \\ WD1244+498 & 0.86 & 0.60 & 0.90 & 0.87 & 0.59 & 0.90 & & 0.55 & 0.57 & \\ SDSSJ1248+1005 & 0.47 & 0.32 & 0.49 & 0.47 & 0.31 & 0.48 & & 0.29 & 0.30 & 0.30 \\ WD1415+234 & 0.11 & 0.09 & 0.13 & 0.11 & 0.08 & 0.13 & & & 0.08 & \\ SDSSJ1734+6052 & 0.35 & 0.25 & 0.37 & 0.33 & 0.24 & 0.38 & & & \\ GaiaJ1922+4709 & 0.77 & 0.53 & 0.81 & 0.76 & 0.53 & 0.81 & & 0.50 & 0.51 & 0.51 \\ EC22211\(-\)2525 & 1.42 & 0.98 & 1.47 & 1.43 & 0.96 & 1.47 & & & \\ SDSSJ2248+2632 & 0.19 & 0.15 & 0.21 & 0.18 & 0.14 & 0.23 & & & \\ \hline Name & \(M_{\rm CV}\) & \(\dot{M}_{\rm Al}\) & \(\dot{M}_{\rm Ca}\) & \(\dot{M}_{\rm Mg}\) & \(\dot{M}_{\rm Si}\) & \(\dot{M}_{\rm Fe}\) & \(\dot{M}_{\rm O}\) & \(\dot{M}_{\rm Na}\) & \(\dot{M}_{\rm Ti}\) & \(\dot{M}_{\rm Cr}\) & \(\dot{M}_{\rm Mn}\) \\ & g & s\({}^{-1}\) & & & & & & & & \\ \hline GaiaJ0218+3625 & 4.4\(\times 10^{27}\) & 2.7\(\times 10^{7}\) & 1.83\(\times 10^{7}\) & 1.10\(\times 10^{8}\) & 1.79\(\times 10^{8}\) & 2.40\(\times 10^{8}\) & 9.34\(\times 10^{8}\) & 3.72\(\times 10^{7}\) & 5.77\(\times 10^{5}\) & 3.39\(\times 10^{6}\) & 2.50\(\times 10^{6}\) \\ WD1244+498 & 2.3\(\times 10^{27}\) & 5.86\(\times 10^{7}\) & 8.15\(\times 10^{7}\) & 3.42\(\times 10^{8}\) & 3.42\(\times 10^{8}\) & 1.98\(\times 10^{9}\) & 2.22\(\times 10^{9}\) & & 2.89\(\times 10^{6}\) & 1.12\(\times 10^{7}\) & \\ SDSSJ1248+1005 & 1.4\(\times 10^{27}\) & 2.21\(\times 10^{8}\) & 3.14\(\times 10^{8}\) & 8.46\(\times 10^{8}\) & 6.60\(\times 10^{8}\) & 1.79\(\times 10^{9}\) & 4.89\(\times 10^{9}\) & & 1.03\(\times 10^{7}\) & 2.71\(\times 10^{7}\) & 2.56\(\times 10^{6}\) \\ WD1415+234 & 2.6\(\times 10^{26}\) & 1.68\(\times 10^{7}\) & 2.35\(\times 10^{7}\) & 3.83\(\times 10^{8}\) & 1.93\(\times 10^{8}\) & 1.15\(\times 10^{9}\) & 3.97\(\times 10^{8}\) & & & 1.24\(\times 10^{7}\) & \\ SDSSJ1734+6052 & 3.7\(\times 10^{26}\) & 2.23\(\times 10^{6}\) & 2.89\(\times 10^{6}\) & 1.90\(\times 10^{7}\) & 1.12\(\times 10^{7}\) & 3.89\(\times 10^{7}\) & 6.07\(\times 10^{7}\) & & & \\ GaiaJ1922+4709 & 1.9\(\times 10^{27}\) & 7.01\(\times 10^{7}\) & 3.31\(\times 10^{7}\) & 3.29\(\times 10^{8}\) & 5.29\(\times 10^{8}\) & 2.13\(\times 10^{9}\) & 9.14\(\times 10^{8}\) & 1.29\(\times 10^{6}\) & 7.65\(\times 10^{6}\) & 3.98\(\times 10^{6}\) \\ EC22211\(-\)2525 & 3.8\(\times 10^{27}\) & 1.25\(\times 10^{7}\) & 1.73\(\times 10^{7}\) & 1.51\(\times 10^{8}\) & 1.27\(\times 10^{8}\) & 2.53\(\times 10^{8}\) & 5.74\(\times 10^{8}\) & 4.07\(\times 10^{5}\) & 2.73\(\times 10^{6}\) & 1.21\(\times 10^{6}\) \\ SDSSJ2248+2632 & 3.8\(\times 10^{26}\) & 9.05\(\times 10^{6}\) & 1.34\(\times 10^{7}\) & 4.50\(\times 10^{7}\) & 2.66\(\times 10^{7}\) & 4.22\(\times 10^{7}\) & 1.14\(\times 10^{8}\) & & \\ \hline \end{tabular}
\end{table}
Table A3: Diffusion timescales and accretion rates for WDs in this study
Figure A1: SEDs for the eight DB WDs in this study. |
2309.05140 | Privacy-Preserving Line Outage Detection in Distribution Grids: An
Efficient Approach with Uncompromised Performance | Recent advancements in research have shown the efficacy of employing sensor
measurements, such as voltage and power data, in identifying line outages
within distribution grids. However, these measurements inadvertently pose
privacy risks to electricity customers by potentially revealing their sensitive
information, such as household occupancy and economic status, to adversaries.
To safeguard raw data from direct exposure to third-party adversaries, this
paper proposes a novel decentralized data encryption scheme. The effectiveness
of this encryption strategy is validated via demonstration of its differential
privacy attributes by studying the Gaussian differential privacy. Recognizing
that the encryption of raw data could affect the efficacy of outage detection,
this paper analyzes the performance degradation by examining the
Kullback-Leibler divergence between data distributions before and after the
line outage. This analysis allows us to further alleviate the performance
degradation by designing an innovative detection statistic that accurately
approximates the optimal one. Manipulating the variance of this statistic, we
demonstrate its ability to approach the optimal detection performance. The
proposed privacy-aware detection procedure is evaluated using representative
distribution grids and real load profiles, covering 17 distinct outage
configurations. Our empirical results confirm the privacy-preserving nature of
our approach and show that it achieves comparable detection performance to the
optimal baseline. | Chenhan Xiao, Yizheng Liao, Yang Weng | 2023-09-10T21:20:37Z | http://arxiv.org/abs/2309.05140v2 | # Distribution Grid Line Outage Detection with Privacy Data
###### Abstract
Change point detection is important for many real-world applications. While sensor readings enable line outage identification, they bring privacy concerns by allowing an adversary to divulge sensitive information such as household occupancy and economic status. In this paper, to preserve privacy, we develop a decentralized randomizing scheme to ensure no direct exposure of each user's raw data. The trade-off introduced by the randomizing scheme, between privacy gain and degradation of change point detection performance, is quantified by studying the differential privacy framework and the Kullback-Leibler divergence. Furthermore, we propose a novel statistic to mitigate the impact of randomness, making our detection procedure both privacy-preserving and able to approach the optimal detection performance. The results of comprehensive experiments show that our proposed framework can effectively find the outage with privacy guarantees.
## I Introduction
Distribution grid line outage detection is important for efficient system monitoring and control in smart grids, for restoring network stability [1], and for reducing financial loss. Recently, smart meters with advanced metering infrastructure (AMI) and fault location, isolation, and service restoration (FLISR) systems were installed to report the outage when there is a loss of power [2]. However, these methods are limited when customers still receive power after the line outage from distributed energy resources, whose penetration is steadily increasing. To detect these kinds of line outages, real-time grid readings are utilized, including voltage magnitudes, phasor angles, and load estimates [3, 4, 5, 6, 7, 8].
However, the utilization of grid readings brings privacy concerns, i.e., the leakage of sensitive information. For instance, given a user's time-series grid readings, an untrusted third party can discern the usage of appliances [9] and divulge the household occupancy and economic status [10, 11, 12] through non-intrusive load monitoring techniques. This calls for protecting such readings from direct exposure to a third party while maintaining the performance of outage detection.
We consider the outage detection procedure from our previous work [7, 8], which has theoretical guarantees for the detection performance but requires the user's meter readings. Specifically, the voltage magnitudes are collected to find the line outage. The increment of voltage magnitudes in the distribution grid was proven to follow multivariate Gaussian distributions before and after the line outage. Then, the outage can be identified by detecting the change in the data distribution under the change point detection framework. This framework aims to find the change in data distribution as quickly as possible under the constraint of false alarm tolerance [13]. It has been widely used in line outage and fault detection in transmission and distribution grids [14, 15, 16].
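For intuition, the sketch below shows a textbook CUSUM-style detector for a change between two known multivariate Gaussian distributions with a shared covariance; it illustrates the quickest-detection idea referenced above and is not the statistic developed in this paper. The means, covariance, and threshold are placeholders to be supplied by the user.

```python
import numpy as np

def gaussian_cusum_alarm(samples, mean0, mean1, cov, threshold):
    """CUSUM quickest change detection between two known Gaussian hypotheses.
    Accumulates the log-likelihood ratio (post-change vs. pre-change), clipped
    at zero, and returns the first index where it exceeds `threshold` (or None)."""
    cov_inv = np.linalg.inv(cov)
    stat = 0.0
    for t, x in enumerate(samples):
        d0, d1 = x - mean0, x - mean1
        llr = 0.5 * (d0 @ cov_inv @ d0 - d1 @ cov_inv @ d1)
        stat = max(0.0, stat + llr)
        if stat > threshold:
            return t
    return None
```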
Nevertheless, in this setting the smart meter readings may be exposed to an untrusted third party and be used to invade people's privacy [17]. Specifically, the popular linear coupled power flow model indicates that voltage magnitude is coupled with power consumption. Thus, the voltage magnitude can, to a certain extent, reflect energy consumption or production billing information, causing privacy leakage issues [18].
To protect data privacy in change point detection, randomizing schemes are developed to "encrypt" the data, hiding sensitive information from potential attackers. For example, [19] applied the report noisy max algorithm by adding noise to partial log-likelihood ratios to estimate the change time. [20] introduced noise to the test statistic and privately estimated the change points using the Mann-Whitney test. However, these works "encrypt" the test statistic only after the raw user data have been collected, whereas protecting the raw data itself is crucial to user privacy in distribution grids. Another challenge of existing work is the compromised detection performance despite the privacy guarantee. To the best of our knowledge, protecting the privacy of raw user data without compromising detection performance still lies beyond the reach of existing theory; this is the focus of our paper.
To guarantee no exposure of raw grid readings, we design a decentralized scheme to directly "encrypt" each user's raw readings. The proposed scheme is then shown to satisfy differential privacy [21], a commonly used framework to evaluate the privacy gain. Despite the privacy guarantee, detection performance is also degraded due to the randomized data. Specifically, we show that a prolonged detection delay is induced by studying the Kullback-Leibler divergence between data distributions before and after the line outage. These analytical studies allow us to answer the question: given a desired level of privacy, how much detection performance will be degraded? Finally, to mitigate the degradation of detection performance, we propose a novel statistic by considering an unbiased estimation of the noise-free optimal statistic. The proposed statistic is shown to have a detection delay close to the noise-free optimal case under the false alarm rate constraint. In doing so, our detection procedure is both privacy-preserving and achieves detection performance comparable to the optimal case.
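As a generic illustration of what a decentralized randomizing scheme can look like (this is not necessarily the scheme proposed in this paper), each meter can perturb its own reading with zero-mean Gaussian noise before reporting it, so the collector never sees the raw value; the noise scale `sigma` is a placeholder that would be set by the desired privacy level.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def randomize_reading(voltage_magnitude, sigma):
    """Local (per-meter) Gaussian randomization: the raw voltage magnitude never
    leaves the household; only the noisy value is reported to the data collector."""
    return voltage_magnitude + rng.normal(loc=0.0, scale=sigma)

reported = [randomize_reading(v, sigma=0.02) for v in (1.01, 0.99, 1.00)]  # per-unit voltages
print(reported)
```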
The rest of the paper is organized as follows. Section II models the line outage detection problem via voltage data. Section III proposes a randomizing scheme to protect raw user data. Section IV evaluates the proposed method using representative grid systems and real residential load profiles.
## II System Model: Line Outage Detection
To present our decentralized approach for protecting each user's privacy, we model the distribution grid as a graph \(\mathcal{G}:=\{1,2,\cdots,M\}\) containing \(M>0\) buses (users). As mentioned earlier, we consider the outage detection procedure from our previous work [7, 8], which utilizes the voltage |
2305.19864 | Designing Closed-Loop Models for Task Allocation | Automatically assigning tasks to people is challenging because human
performance can vary across tasks for many reasons. This challenge is further
compounded in real-life settings in which no oracle exists to assess the
quality of human decisions and task assignments made. Instead, we find
ourselves in a "closed" decision-making loop in which the same fallible human
decisions we rely on in practice must also be used to guide task allocation.
How can imperfect and potentially biased human decisions train an accurate
allocation model? Our key insight is to exploit weak prior information on
human-task similarity to bootstrap model training. We show that the use of such
a weak prior can improve task allocation accuracy, even when human
decision-makers are fallible and biased. We present both theoretical analysis
and empirical evaluation over synthetic data and a social media toxicity
detection task. Results demonstrate the efficacy of our approach. | Vijay Keswani, L. Elisa Celis, Krishnaram Kenthapadi, Matthew Lease | 2023-05-31T13:57:56Z | http://arxiv.org/abs/2305.19864v1 | # Designing Closed-Loop Models for Task Allocation
###### Abstract
Automatically assigning tasks to people is challenging because human performance can vary across tasks for many reasons. This challenge is further compounded in real-life settings in which no oracle exists to assess the quality of human decisions and task assignments made. Instead, we find ourselves in a "closed" decision-making loop in which the same fallible human decisions we rely on in practice must also be used to guide task allocation. How can imperfect and potentially biased human decisions train an accurate allocation model? Our key insight is to exploit weak prior information on human-task similarity to bootstrap model training. We show that the use of such a weak prior can improve task allocation accuracy, even when human decision-makers are fallible and biased. We present both theoretical analysis and empirical evaluation over synthetic data and a social media toxicity detection task. Results demonstrate the efficacy of our approach.
## 1 Introduction
Human decision-making is ubiquitous: in the daily life of organizations or "pure" _human computation_ settings without automation, in making labeling decisions to train and test AI systems, and in _human-in-the-loop_ architectures that dovetail automated AI with human abilities. People are also naturally fallible: some people perform better than others across different tasks due to a wide range of factors (e.g., background or experience), as observed in recruitment [4] and healthcare [40]. Human error can be due to noise (e.g., fatigue/oversight) and systematic patterns of error (e.g., varying skill). Group decisions can also be fallible and systematically biased depending on the composition and decision process. Whereas "wisdom of crowds" [48] can boost collective intelligence via group diversity, lack of such diversity can amplify biases rather than mitigate them [18].
_Task allocation_ (cf. [22]) seeks to optimize the overall quality of outcomes by effectively matching people to tasks. Accurate task allocation has applications in crowdsourcing [13], human-in-the-loop frameworks [28], and collaborative web platforms [1]. A key assumption underlying most prior work on task allocation is that an oracle exists to provide feedback on the quality of human decisions and task assignments made. In real life, however, the same fallible human decisions we rely on must often also provide the basis for evaluating allocation decisions. When a hiring or admissions committee makes a decision whether to hire/admit a given candidate, all we have is the committee's decision; no outside oracle exists to provide a definitive evaluation of the committee's decision. Similarly, social media content moderation relies on decisions from human moderators. Moreover, decision criteria are often organization-specific [38]. These applications motivate our investigation of the _closed_ training loop setting in which the aggregated annotations from an input-specific selection of human decision-makers are fed back into the system to train the task allocation model (**Figure 1**). However, considering that human decision-makers can make imperfect decisions, the question arises whether their aggregated decisions can be used to train an accurate task allocation framework.
A central challenge with such a framework is how to address human inaccuracy and bias, especially in the initial training iterations. Unsupervised aggregation of human decisions [11] can provide noisy feedback on task allocation efficacy [15, 13, 32, 55, 50]; however such noise, especially in initial training iterations, can result
in a slow or non-converging training process. Furthermore, any bias in human decisions may be fed back into task allocation training, further amplifying system error [35] (see SS2.1). Particularly problematic are human biases stemming from a lack of background or training, or from prejudice, which can consistently impair performance. Another factor that can influence human decisions is underlying demographic identity. Goyal et al. [19] observe that the demographic identity of crowd annotators impacts their toxicity ratings. Consequently, they call for "_ML engineers and researchers to... consider all the different kinds of specialized rater pools we should be imagining, considering, designing, and testing with._" Multiple other studies [29, 5, 17, 47, 42] have reported significant differences in ratings across annotator demographics (see SS6 for additional related work). Motivated by these studies, we tackle the problem of developing allocation methods that are input-specific, contextually-aware, and cognizant of the background of the human annotators.
**Contributions.** In this work, we formulate the challenge of closed-loop learning from noisy human labels and propose two online learning algorithms for it. To mitigate inaccuracy from fallible human decision-makers, we propose exploiting a specific form of weak prior on human-task similarity to initialize the task allocation model (SS2.2). This enables us to obtain relatively accurate class labels for initial inputs and thereby effectively bootstrap the training process. The first algorithm we present, _Strict-Matching_, directly uses the prior information to initialize the allocation model. The second, _Smooth-Matching_, provides a smoother transition from the prior distribution to learning from noisy feedback during training. We demonstrate the efficacy of our methods via both theoretical analysis (SS2.3) and empirical improvement on synthetic and real-world datasets (SS3 and SS4). The latter extends beyond the classic assumption of universal, objective truth to consider recent advocacy for recognizing subjective, community-based gold standards [45, 29, 17, 19].
## 2 Model and Algorithm
We consider the binary classification task of predicting label \(y\) from input \(x\in\mathbb{R}^{n}\). We assume each input \(x\) belongs to one or more categories \(z\in\mathcal{Z}\), which could correspond to any demographic or task-specific interpretable feature. Given any input \(x\) (i.e., a task), the goal of task allocation is to choose appropriate human annotators (interchangeably referred to as individuals or decision-makers) from a given pool, whose aggregated prediction is the final predicted label for \(x\). Assume there is a pool of \(m\) available annotators \(e_{1},\ldots,e_{m}:\mathbb{R}^{n}\rightarrow\{0,1\}\), with \(e_{i}(x)\) denoting the \(i\)-th annotator's prediction for \(x\). For input \(x\), the task allocation model \(D_{u}\) infers a probability distribution over annotators: \(D_{u}:\mathbb{R}^{n}\rightarrow\Delta^{m}\), where \(u\) denotes model parameters and \(\Delta^{m}\) denotes the \(m\)-dimensional simplex1. When possible, we omit subscript \(u\) and refer to \(D_{u}\) by \(D\). \(D(x)_{i}\) denotes the model probability assigned to individual \(i\), reflecting the model's estimate of that individual's ability to correctly label input \(x\), relative to the other annotators. While not evaluated in our study, our framework also supports each person having additional input-specific costs associated with their predictions (see discussion of this point in SS5).
Footnote 1: distribution over \(m\) annotators: \(\forall d\)\(\in\)\(\Delta^{m}\), \(0\)\(\leq\)\(d_{i}\)\(\leq\)\(1\) for all \(i\)\(\in\)\(\{1,\ldots,m\}\) and \(d^{\top}\)**1**=1.
Figure 1: A closed-loop task allocation model in which predictions are fed back to train the model. To bootstrap training, we use prior information on human-task similarity, evaluated here by matching task & annotator colors.
**Committee Voting.** Given the task allocation model \(D(x)\)'s inferred probability distribution over the pool of \(m\) annotators, the top-\(k\) can be selected to form a committee. When \(k>1\), the committee's decision is determined by majority vote (assuming \(k\) is odd, no tie-breaking is required). A technical detail is that we sample annotators with replacement according to \(D(x)\) so that the majority vote of the committee implicitly equals (on average) the weighted majority vote of all \(m\) annotators, with \(D(x)\) probabilities as weights. Alternatively, one could sample the \(k\) annotators without replacement and explicitly weight member votes by \(D(x)_{i}\).
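For illustration, the committee-selection step can be sketched in a few lines of Python. This is a minimal sketch of the sampling-with-replacement variant described above; the function and variable names are ours and not part of any released implementation.

```python
import numpy as np

def committee_vote(D_x, annotator_preds, k, rng=np.random.default_rng(0)):
    """Sample a committee of size k with replacement from D(x) and return its
    majority-vote label (k is assumed odd, so no tie-breaking is needed)."""
    members = rng.choice(len(D_x), size=k, replace=True, p=D_x)
    votes = annotator_preds[members]          # 0/1 predictions of the sampled members
    return int(votes.sum() > k / 2)

# Toy example: 3 annotators, allocation favoring annotator 0.
D_x = np.array([0.7, 0.2, 0.1])
annotator_preds = np.array([1, 0, 0])         # e_i(x) for the current input x
print(committee_vote(D_x, annotator_preds, k=3))
```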
**Online learning.** Assuming a streaming setting, after each input \(x\) is labeled by a selected committee, the (potentially noisy) label is fed back into the closed-loop learning process to update the model \(D(x)\). This online learning setting supports potential use in various real-world applications [14, 3]. However, such a noisy feedback loop also risks problematic predictions when trained without care; our algorithms are thus designed to address this.
### Training the Allocation Framework
An ideal training process for an allocation framework learns a partition of the feature space and assigns annotators to those partitions where they are expected to be most accurate. Prior training approaches optimize over labeled datasets to learn an allocation model that simulates such a partition [28, 50, 15]. In this section, we first summarize training procedures from prior work (that assume access to oracle training labels or rewards/penalties). We then discuss extensions of these procedures for closed-loop training.
**Prior work training allocation models with gold.** Assume input \(x\) has group attribute \(z\) and true binary label \(y\), let \(D(x)\) be the task allocation model's probability distribution over the pool of \(m\) experts, and let \(e_{i}(x)\) be the binary prediction of expert \(i\). A general training algorithm, with access to ground truth labels, will update the allocation model to reward the correct experts (for whom \(e_{i}(x)=y\)) and penalize those who are incorrect:
\[D(x)_{i}=\begin{cases}D(x)_{i}+\delta^{(i)}_{reward}(x,y),&\text{ if }e_{i}(x)=y \\ D(x)_{i}-\delta^{(i)}_{penalty}(x,y),&\text{ if }e_{i}(x)\neq y\end{cases} \tag{1}\]
where \(\delta^{(i)}_{reward}(\cdot),\delta^{(i)}_{penalty}(\cdot):\mathcal{X}\times \{0,1\}\rightarrow\mathbb{R}_{\geq 0}\) are input- and annotator-specific updates, chosen so that the updated weights sum to 1 (Footnote 2). This appropriately rewards/penalizes the annotators, yielding allocation model updates that simulate these rewards/penalties.
Footnote 2: i.e., \(\sum_{i=1}^{m}\delta^{(i)}_{reward}(x,y)\cdot\mathbf{1}(e_{i}=y)-\sum_{i=1} ^{m}\delta^{(i)}_{penalty}(x,y)\cdot\mathbf{1}(e_{i}\neq y)=0\).
The reward/penalty functions are constructed by framing the problem as an optimization program. In the case of Keswani et al. [28], rewards/penalties are constructed as follows: given \((x,y)\), allocation parameters \(u\), and committee size \(k\), first select a committee \(C\) of \(k\) annotators using \(D_{u}(x)\) and compute the (probabilistic) prediction \(\hat{y}_{u}(x)\) by taking the mean of the selected annotators, i.e., \(\hat{y}_{u}(x):=\sum_{i\in C}e_{i}(x)/|C|\). Then minimize the following regularized log-loss function: \(\mathcal{L}_{D}(u):=\mathbb{E}_{x,y}\left[-y\log(\sigma(\hat{y}_{u}(x)))-(1- y)\log(1-\sigma(\hat{y}_{u}(x)))\right],\) where \(\sigma\) is the standard sigmoid function. Expected loss can be computed by the mean over a batch of training samples, with optimization performed via gradient descent. The gradient updates for this loss function can be seen to reward the correct annotators and penalize the incorrect annotators [28]. Hence, functionally, each step of this algorithm has a similar structure as **Equation 1**. Other prior training algorithms can also be shown to have similar underlying reward/penalty structure; see Appendix B for examples.
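As a concrete (simplified) sketch of one such training step, the snippet below uses a linear-softmax allocation model and replaces the sampled committee mean \(\hat{y}_{u}(x)\) by its expectation \(D_{u}(x)^{\top}e(x)\) to keep the step differentiable; the parameterization and these simplifications are our own assumptions, not the implementation of [28].

```python
import torch

m, n_feat = 5, 10                              # annotators, input dimension
W = torch.zeros(m, n_feat, requires_grad=True)
b = torch.zeros(m, requires_grad=True)
opt = torch.optim.SGD([W, b], lr=0.1)

def allocation(x):
    """D_u(x): softmax over annotators, a point in the m-simplex."""
    return torch.softmax(W @ x + b, dim=0)

def loss_step(x, e_x, y):
    """One gradient step on the log-loss; e_x holds the 0/1 predictions of all
    m annotators on x, and y is the label used for training."""
    D_x = allocation(x)
    y_hat = torch.sigmoid((D_x * e_x).sum())   # soft committee prediction
    loss = -(y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x = torch.randn(n_feat)
e_x = torch.tensor([1., 0., 1., 1., 0.])
print(loss_step(x, e_x, y=1.0))
```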
**Training using noisy aggregated human labels.** In this work, we focus on the more challenging case of having access to fallible human decisions only, with no oracle feedback regarding their accuracy (i.e., no access to \(y\)). Lacking gold labels, one way to directly use the above training process is to learn from noisy, aggregate human labels. Given input \(x\) and committee \(C\) selected using \(D(x)\), the predicted label \(\hat{y}(x):=\mathbf{1}\left[\sum_{i\in C}e_{i}(x)/|C|>0.5\right]\). Then, the training updates can substitute \(y\) with \(\hat{y}\) in Equation 1:
\[D(x)_{i}=\begin{cases}D(x)_{i}+\delta^{(i)}_{reward}(x,\hat{y}(x)),&\text{ if }e_{i}(x)=\hat{y}(x)\\ D(x)_{i}-\delta^{(i)}_{penalty}(x,\hat{y}(x)),&\text{ if }e_{i}(x)\neq\hat{y}(x) \end{cases} \tag{2}\]
By substituting true class labels with noisy aggregated labels, existing training allocation algorithms [28, 36] can be used without major changes (e.g., substitute \(y\) with \(\hat{y}\) in above loss \(\mathcal{L}_{D}(u)\)). While simple, this
approach also has a potential downside: when the majority of the annotators are consistently biased against any group \(z\in\mathcal{Z}\), this unsupervised training process is unable to detect such bias.
**Bias propagation when training using noisy labels.** Assuming a binary group attribute, we show below that: if (i) the starting allocation model chooses annotators randomly, and (ii) the majority of the annotators are biased against or highly inaccurate with respect to a group attribute type (e.g., a disadvantaged group), then the above training process leads to disparate performance with respect to the disadvantaged group. For \(\alpha>0.5\), assume that \(\alpha\) fraction of annotators are biased against group \(z=0\) and \((1{-}\alpha)\) fraction are biased against group \(z=1\). If a person is biased against \(z=j\), they will always predict correctly for inputs with \(z{=}1{-}j\) but predict correctly for inputs with \(z{=}j\) with probability 0.5.
Lacking an informative prior, training will start with \(D(x)\) assigning uniform probability \(1/m\) to all \(m\) annotators. When \(k=1\), a single person decides the label for input \(x\). In this case, the starting accuracy for group \(z=1\) elements will be \(\alpha+0.5(1-\alpha)\), and for group \(z=0\) elements, \((1-\alpha)+0.5\alpha\). Therefore, the difference in expected accuracy for group \(z=1\) vs. \(z=0\) elements will be \((\alpha-0.5)\). The larger the value of \(\alpha\), the greater the disparity will be. Hence, with biased starting allocation model and predicted labels used for retraining, the bias will propagate to the learned model.
**Claim 2.1**.: _In the above setting, the disparity between accuracy for group \(z{=}0\) and accuracy for group \(z{=}1\) does not decrease even after training using multiple Eqn. 2 steps._
The proof is provided in Appendix A. In SS3, we simulate a setting wherein most annotators are biased against certain input categories. Results show that prior training algorithms perform poorly, yielding low allocation accuracy.
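The stylized setting above is easy to simulate numerically; the following sketch (our own, with \(\alpha\) and the biased-annotator behavior exactly as described) estimates the per-group accuracy of the uniform starting allocation with \(k=1\).

```python
import numpy as np

rng = np.random.default_rng(0)
m, alpha, trials = 10, 0.8, 100_000
# Annotator j is biased against group 0 if j < alpha*m, and against group 1 otherwise.
biased_against = np.array([0] * int(alpha * m) + [1] * (m - int(alpha * m)))

def accuracy(group):
    correct = 0
    for _ in range(trials):
        j = rng.integers(m)                 # uniform allocation, k = 1
        if biased_against[j] != group:
            correct += 1                    # always correct outside the bias group
        else:
            correct += rng.random() < 0.5   # coin flip on the biased group
    return correct / trials

print(accuracy(1), accuracy(0))   # roughly 0.90 vs 0.60: a gap of alpha - 0.5 = 0.3
```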
### Injecting Prior Information
In real-life, no oracle exists to provide us feedback on our fallible or biased human decisions. There is no oracle gold training data to guide initial allocation decisions, nor is there gold feedback on human decisions made during closed-loop training. How then can imperfect human decisions train an accurate task allocation model? Our key insight is to exploit weak prior information on human-task similarity to bootstrap model training.
**Motivating Examples**.: _Example 1._ When a company recruits a new employee, the human decision-makers are typically current employees, and more specifically, a "hiring loop" of employees possessing appropriate expertise to assess the candidate's credentials. Assuming a company knows the varying expertise of its own workforce, prior information exists to match decision-makers to new candidates. In addition, organizations today appreciate the importance of forming hiring committees that combine diversity and expertise [43].
_Example 2._ In content moderation, moderator decisions vary due to many compatibility factors. For example, a lack of familiarity with the dialect of the content's author can lead to biased decisions [41, 10]. Whether the moderator has themself been a target of hate speech [29], or whether their own demographic identity aligns with that being targeted in content they are reviewing [19], can also impact their decisions. Thus, once differences in judging behavior among moderators are acknowledged and accepted, it creates a space for matching different groups of moderators to different content types, based on moderator background (which can be collected via an on-boarding questionnaire).
**Encoding Prior Information**.: Any allocation model induces a probability distribution over the decision-makers for each input, such that the probability assigned to each decision-maker represents the confidence in their correctness. An initial approximation of this distribution over the human decision-makers can be derived using the contextual information of the application where the allocation model is being employed. For the motivating examples above, such weak prior information already exists to 1) appoint employees to a hiring loop who are capable of evaluating a candidate (by matching areas of expertise); and 2) select moderators to review content appropriate to their background (by matching target and annotator demographics/dialect).
In absence of labeled training data, we can use this prior information to bootstrap the closed-loop training process. The prior information is encoded in our framework using a similarity function \(dSim:\{e_{1},\ldots,e_{m}\}\times\mathcal{Z}\rightarrow[0,1]\), i.e., specifying a continuous similarity score matching each individual person to each content
category. As shown above, starting with a random allocation model is challenging when we also lack oracle feedback on the accuracy of human decisions in the closed-loop training process. Especially problematic are settings in which the majority of annotators are biased against certain groups, as observed from the stylized example in Claim 2.1. By starting with some prior information about which people (or groups of people) might be best suited to each type of task (or category of content) using \(dSim\), we seek to address this flaw of the closed training framework and bootstrap an accurate training process. Indeed, Claim 2.2 shows that using an appropriate \(dSim\) can address the issues observed in the setting of Claim 2.1.
**Claim 2.2**.: _Revisiting Claim 2.1, suppose \(dSim(e_{j},z)\)=\(1\) if annotator \(e_{j}\) is unbiased for category \(z\) and \(\gamma\) otherwise, where \(\gamma\) is any constant \(\in[0,1]\). Consider the allocation induced by this \(dSim\) function (i.e., for input \((x,z)\), allocation output \(D(x)_{i}\propto dSim(e_{i},z)\)). Then the difference between the accuracy for group z=\(0\) and z=\(1\) lies in \(\left[\frac{\gamma}{2},\frac{\alpha}{1-\alpha}\cdot\frac{\gamma}{2}\right]\)._
The proof is provided in Appendix A. Claim 2.2 shows that smaller \(\gamma\) values imply \(dSim\) is better able to differentiate between biased and unbiased annotators. Hence, the better \(dSim\) is at differentiating biased and unbiased annotators, the smaller is the disparity in performance across groups of the starting allocation model. We thus utilize \(dSim\) to mitigate biases in training using noisy labels (i.e., **Eq. (2)**). Our proposed algorithms in the next section operate on this general formulation of \(dSim(e,z)\).
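A minimal way to encode such a prior is a lookup table of (annotator, category) similarity scores, normalized per input to give the starting allocation \(D_{0}(x)_{i}\propto dSim(e_{i},z)\). The sketch below is illustrative only; the scores are made up.

```python
import numpy as np

# dSim as an (m annotators) x (|Z| categories) array of scores in [0, 1].
dSim = np.array([[1.0, 0.2],    # annotator 0: strong match for category 0
                 [0.2, 1.0],    # annotator 1: strong match for category 1
                 [0.5, 0.5]])   # annotator 2: no clear match

def initial_allocation(z):
    """D_0(x)_i proportional to dSim(e_i, z) for an input x of category z."""
    scores = dSim[:, z]
    return scores / scores.sum()

print(initial_allocation(0))    # -> approximately [0.588, 0.118, 0.294]
```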
```
1:Set initial allocation \(D_{0}\) such that for any input \(x\) in category \(z\), we have \(D_{u_{0}}(x)_{i}\propto dSim(e_{i},z)\)
2:for\(t\in\{1,2,\ldots,T\}\)do
3:\(D_{t-1}(x_{t})\leftarrow\) Allocation distribution for \(x_{t}\)
4:\(C\leftarrow\) Choose committee of size \(k\) using distribution \(D_{t-1}(x_{t})\)
5:\(\hat{y}_{t}\leftarrow\) Aggregated predictions of annotators in committee \(C\)
6:\(D_{t}\leftarrow\) Update allocation by training on \((x_{t},\hat{y}_{t})\)
7:return \(D_{T}\)
```
**Algorithm 1** Training with prior information.
### Training a Closed-loop Framework using \(dSim\)
Algorithm 1 presents our general training process. The first step ensures that initial allocation follows the prior information provided by \(dSim\). Subsequent training steps learn from noisy, aggregated decisions to further improve the task allocation model accuracy. Concrete methods to implement this algorithm are discussed next.
Training Method 1: Strict-Matching. One way to implement Algorithm 1 in practice is to encode the \(dSim\) function within the initial task allocation model.
In particular, we set initial allocation model parameters such that, for the starting allocation model \(D_{u_{0}}\) and input \((x,z)\) and annotator \(e_{i}\), we have that \(D_{u_{0}}(x)_{i}\propto dSim(e_{i},z)\) (Step 1). This can be feasibly accomplished in most applications using unlabeled data. The rest of the training process is the same as SS2.1 and Equation (2): for every input, reward the annotators whose prediction matches with aggregated prediction and penalize those who do not (using gradient of loss \(\mathcal{L}_{D}\)). Aggregation of selected annotator predictions can be implemented in various ways; see _Committee Voting_ in SS2. To add further robustness, we use a batch update process; i.e., for a given integer \(B\), train the model after observing \(B\) samples. This approach exploits the \(dSim\) prior to set the initial \(D(x)\) distribution, followed by closed-loop training with noisy aggregate feedback to further improve the task allocation model. For a given input (category), if an annotator shows low \(dSim\) score but high observed accuracy (or vice-versa), the allocation model can learn this property as training progresses. Consequently, we can view this training regime as a simple form of exploration-exploitation.
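A tabular simplification of this procedure is sketched below: the allocation is kept as one probability vector per category, initialized from \(dSim\) and then updated with fixed rewards/penalties against the noisy aggregated label. The real method trains a parametric, input-specific model via the loss of SS2.1; the per-category table, the fixed step size, and the clipping are our own simplifications.

```python
import numpy as np

def strict_matching(stream, dSim, k=3, lr=0.05, rng=np.random.default_rng(0)):
    """stream: iterable of (z, e_x) pairs, where z is the input category and
    e_x the 0/1 predictions of all m annotators; returns per-category weights."""
    D = dSim / dSim.sum(axis=0, keepdims=True)        # Step 1: prior-based init
    for z, e_x in stream:
        p = D[:, z]
        committee = rng.choice(len(p), size=k, replace=True, p=p)
        y_hat = int(e_x[committee].sum() > k / 2)     # noisy aggregated label
        delta = np.where(e_x == y_hat, lr, -lr)       # reward / penalty, as in Eq. (2)
        p = np.clip(p + delta, 1e-6, None)
        D[:, z] = p / p.sum()
    return D
```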
Training Method 2: Smooth-Matching. To obtain a better transition from the \(dSim\) prior to the allocation model learnt during closed-loop training, we can gradually wean ourselves off the prior by decreasing its relative weight as more observed evidence accumulates.
In other words, the allocation employed at any iteration can be chosen as a convex combination of the allocation encoded by the \(dSim\) prior and the allocation trained using the observed samples (and aggregated class labels). This method of combining prior and observed data is conceptually similar to Bayesian or Laplacian smoothing techniques [44, 52]. Additive combination yields a task allocation distribution incorporating both the prior distribution and the empirical distribution. The smoothing parameter \(\mu\) is set to be an increasing function of the number of observations, ensuring that prior information \(dSim\) is used primarily in the initial training iterations. Full details are provided in Algorithm _Smooth-Matching_. Parameter \(T_{d}\) in Smooth-Matching controls the influence of \(dSim\) on the training process. The first \(T_{d}\) iterations focus on obtaining accurate labels for initial samples to bootstrap the training process. After \(T_{d}\) iterations, the weight given to the prior is relatively smaller than the weight given to the distribution learned during training.
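The smoothing step itself is a one-liner; below is our own reading of the convex combination, with a simple linear schedule for the smoothing weight (the exact schedule and the choice of \(T_{d}\) are assumptions for illustration).

```python
import numpy as np

def smoothed_allocation(D_prior, D_learned, t, T_d):
    """Convex combination of the dSim-induced prior allocation and the
    allocation learned from noisy feedback; the prior dominates while t < T_d."""
    mu = min(1.0, t / T_d)                 # weight on the learned model grows with t
    return (1.0 - mu) * D_prior + mu * D_learned

D_prior = np.array([0.6, 0.3, 0.1])
D_learned = np.array([0.2, 0.5, 0.3])
print(smoothed_allocation(D_prior, D_learned, t=10, T_d=100))
```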
### Theoretical Analysis
Analyzing the two algorithms that use \(dSim\) shows that the final trained allocation model simulates the underlying accuracy functions of the annotators. The first theorem (A.1) shows that if any annotator \(e_{j}\) has high accuracy for category \(z\), then how fast our algorithms converge to an allocation model that assigns high weight to \(e_{j}\) for category \(z\) depends on the initial weight assigned to \(e_{j}\) for \(z\).
**Theorem 2.3** (Exploitation using \(dSim\)).: _For any input group \(z\), assume annotator \(e_{j}\) is more accurate than all others. For \(\beta>0\), suppose we set \(dSim\) function \(\mathrm{st.}\)\(dSim(e_{j},z)-\max_{j^{\prime}\in\{1,\ldots,m\}\setminus\{j\}}dSim(e_{j^{ \prime}},z)\geq\beta\). Assume all annotators receive the same rewards/penalties for correct/incorrect predictions. Then the training algorithm that initializes \(D(x)\) parameters \(u\) with this \(dSim\) function increases the weight assigned to annotator \(e_{j}\) by at least \(2\beta\delta\) in expectation, where \(\delta\in[0,1]\) depends on the choice of \(\delta_{reward}\) and \(\delta_{penalty}\) values for the given input._
Hence, the larger the \(dSim\) weight for \(e_{j}\), the larger their weight in the final allocation model. Secondly, we show that when using an appropriate \(dSim\), if there are accurate annotators who are not assigned a high weight by \(dSim\), they will be "discovered" during training.
**Theorem 2.4** (Exploration of accurate annotators).: _For any input group \(z\), assume annotator \(e_{j}\) has perfect accuracy (1). Let \(k\) be the size of the committee sampled from \(D(x)\) to label input \(x\). Let the \(dSim\) function be set such that \(dSim(e_{j},z)=\epsilon\), for some \(\epsilon\in[0,1]\), but the total weight (normalized) assigned by \(dSim\) to accurate annotators for group \(z\) is greater than 0.5. Assume all annotators receive the same rewards for
correct prediction and same penalties for incorrect prediction. Then, there is an expected positive increase in the weight of this annotator if \(\epsilon>1-\left(1-\frac{k}{2m}\right)^{1/k}\)._
Hence, our algorithms can discover accurate annotators so long as other accurate annotators are available to infer the true labels for this input category and \(k,\epsilon\) are sufficiently large. The proofs for both theorems are provided in Appendix A.
## 3 Evaluating Task Allocation on a Synthetic Dataset
Consider a binary classification task with three annotators having distinct areas of expertise, denoted by the colors orange, blue, and green. Assume each annotator is a perfect oracle when asked to label an example in their respective area but only 20% accurate in the other two areas, exhibiting consistent bias outside their respective areas of expertise. In the best case, the task allocator will correctly assign each input to the correct expert, yielding perfect labeling accuracy. In the worst case, assigning every input to the wrong annotator will yield around 20% accuracy. Because experts are assumed to be perfect oracles, each correct task allocation ensures a correct label. Consequently, task allocation accuracy largely determines label accuracy, which is lower-bounded by allocation accuracy.
As data, we generate 10,000 2D points, each represented by a \((x,y)\) coordinate and drawn from one of three clusters, corresponding to the three areas (colors) of expertise. We begin by sampling \(\mu\sim\)Unif\([0,1]\) and constructing a 2D diagonal matrix \(\Sigma\), with diagonal entries sampled from Unif\([0,1]\). Points are then sampled roughly equally from the three clusters as follows: \(\mathcal{N}(\mu,\Sigma)\) (orange), \(\mathcal{N}(\mu+2.5,\Sigma)\) (blue), and \(\mathcal{N}(\mu+5,\Sigma)\) (green). Every point is randomly assigned either label 0 ('\(\bullet\)') or 1 ('+'). **Figure 2** shows the dataset. Because class labels are assigned randomly, a classifier knowing only a point's \((x,y)\) coordinates can only achieve 50% accuracy. Similarly, a task allocator knowing only the \((x,y)\) coordinates has a 1/3 chance of assigning the input to the correct expert.
**Specifying \(dSim\).** Let \(e_{c}\) denote the expert corresponding to color \(c\). For input \(x\), the optimal \(dSim(e_{c},x)\) would be 1 when \(x\) has color \(c\) and 0 otherwise, perfectly assigning each example to the appropriate expert. To investigate the effect of varying informativeness of prior information, we introduce noise parameter \(s\in[0,2/3]\) and define \(dSim(e_{c},x)\) as follows: \(dSim(e_{c},x)=1-s\) if \(x\) has color \(c\) and \(s/2\) otherwise. With no noise (\(s=0\)), we revert to the optimal \(dSim(e_{c},x)\) specified above. Maximal noise (\(s=2/3\)) yields \(\forall_{x}dSim(e_{c},x)=1/3\): a uniform distribution over all annotators. Additional methodological details are provided in Appendix C.
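The data-generation and noisy-prior recipe can be reproduced with a short script; this is a sketch under the stated distributions, where we assume the offsets 2.5 and 5 are added to both coordinates of the cluster mean.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
mu = rng.uniform(0, 1)
Sigma = np.diag(rng.uniform(0, 1, size=2))

colors = rng.integers(3, size=N)            # 0: orange, 1: blue, 2: green
points = np.stack([rng.multivariate_normal(np.full(2, mu + 2.5 * c), Sigma)
                   for c in colors])
labels = rng.integers(2, size=N)            # labels are assigned at random

def dSim_noisy(expert_color, point_color, s):
    """1 - s on the matching color, s / 2 otherwise, with noise s in [0, 2/3]."""
    return 1 - s if expert_color == point_color else s / 2
```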
**Baselines.** (1) Goel and Faltings [15] learn a task allocation policy using accuracy estimates for all annotators from a history of gold standard tasks. Because we assume that gold standard tasks are unavailable, we instead run their algorithm with accuracy estimates derived using noisy, aggregated annotator predictions. (2) Tran-Thanh et al. [50] learn an allocation using a multi-arm bandit approach, with initial exploration steps to estimate annotator accuracies followed by exploitation steps that assign inputs to annotators using estimated accuracies. Once again, in absence of gold standard, the accuracy estimates in the exploration step of their algorithm are obtained using aggregated predictions from the annotators. (3) Keswani et al. [28]'s
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & Label Acc. & Assignment Acc. \\ \hline Smooth-Matching &.90 (.08) &.87 (0.27) \\ Strict-Matching &.79 (.01) &.74 (0.36) \\ \hline Goel and Faltings [15] &.50 (.01) &.33 (.00) \\ Tran-Thanh et al. [50] &.50 (.01) &.33 (.01) \\ Keswani et al. [28] &.41 (.09) &.17 (.26) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Label and allocation accuracy for §3 with \(s\)=0.3. We report mean accuracy over 50 trials (standard error in brackets).
Figure 2: Synthetic clusters in §3.
method is equivalent to training an input-specific task allocation model using Algorithm Smooth-Matching without any prior information, i.e., without \(dSim\). In this case, the training algorithm will start with a random allocation. Lacking prior information, we expect all three of these baselines to struggle in the closed-training setting we consider. See Appendix C for further implementation details.
**Results.** We first assume moderate noise \(s=0.3\), roughly the middle of \(s\in[0,2/3]\). **Table 1** compares Smooth-Matching and Strict-Matching vs. baseline algorithms [15, 50, 28]. We observe large differences in mean label accuracy: 0.90, 0.79, 0.50, 0.50, and 0.41, respectively. Standard error over 50 trials shows that differences are statistically significant. As noted earlier, our assumption of oracle experts means that task allocation accuracy largely determines label accuracy, as the task allocation results here confirm: 0.87, 0.74, 0.33, 0.33, and 0.17, respectively. The vast difference in task allocation accuracy in training with \(dSim\) (Smooth-Matching, Strict-Matching) and without \(dSim\) (Keswani et al. [28] baseline) shows the critical importance of prior information in training. The weaker accuracy of Strict-Matching vs. Smooth-Matching can be explained by their differing use of \(dSim\). Whereas Strict-Matching exploits \(dSim\) for initialization only, Smooth-Matching continues to benefit from \(dSim\) by utilizing it throughout training.
**Figure 3** in Appendix C shows performance for varying \(s\) values. As expected, increasing values for noise \(s\) leads to a corresponding decrease in accuracy, and for large values of \(s\), \(dSim\) degrades toward the uniform distribution as an uninformative prior.
## 4 Evaluating Task Allocation on a Real-world Dataset for Toxicity Detection
Civil Comments [7] provides toxicity labels for 1.8M news comments. Of these, 450K comments are also labeled for the demographic group targeted (e.g., LGBTQ+, race, etc.). We consider the binary classification task of predicting whether a comment contains an "identity attack": a comment that is toxic and targets a specific demographic affiliation. Goyal et al. [19] augment Civil Comments with additional demographic identity labels of the annotators. This enables study of how annotator identity may influence their toxicity ratings. They sample 25.5k comments to augment with additional labels, uniformly sampling comments from three targeted groups: LGBTQ, African-American, and Control (identity agnostic). Of these, 12% of comments are labelled as containing an identity attack. Roughly 1K crowd annotators contributed labels to their study, with around 1/3 of annotators affiliated with each demographic group. Each comment is labeled by 5 annotators from each group (i.e., 15 in total). See Appendix D for additional details.
Control annotators often label toxicity differently than annotators whose own demographic group is targeted in a comment. **Table 2** shows illustrative examples. Such differences in toxicity ratings by annotator demographic indicate a form of consistent bias, motivating our consideration of demographics in task allocation.
\begin{table}
\begin{tabular}{l|l} \hline \hline Identity mentioned & Post \\ \hline \multirow{3}{*}{LGBTQ} & “I am NOT protecting or condoning the alleged \\ & behavior! I’m pointing out the ‘he did \\ & this because he is gay’ bigotry.” \\ \cline{1-1} & “I feel the same fear for the gay members of my family.” \\ \hline \multirow{3}{*}{African-American} & “I’m sure it was merely an oversight but...not \\ & mentioned in the story is that the killer was \\ & black and the victims were white. Jus’ saying’.” \\ \cline{1-1} & “You apparently can say whatever you want about Mexicans, Hispanics \& \\ \cline{1-1} & Black people, but the Republican Party draws the line on white women.” \\ \hline \hline \end{tabular}
\end{table}
Table 2: Example posts targeting LGBTQ or Black identities in toxicity data. For these posts, there is disagreement in toxicity labels of annotators with different demographics [19].
**Specifying \(dSim\).** We investigate potential allocation accuracy improvement by matching annotator demographics to the target groups. For comment \(x\) that targets demographic group \(g\) and annotator \(e\), we define \(dSim(e,g)=1\) if \(e\) identifies with \(g\) and 0 otherwise.
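In code, this demographic-matching prior is just an indicator (a sketch; annotator and target demographics are assumed to be available as strings):

```python
def dSim(annotator_group: str, target_group: str) -> float:
    """1 if the annotator identifies with the demographic targeted by the comment."""
    return 1.0 if annotator_group == target_group else 0.0
```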
**Baselines.** We evaluate against baseline training algorithms from Goel and Faltings [15], Tran-Thanh et al. [50] and Keswani et al. [28]. The descriptions of these baselines are provided in SS3. See Appendix D for model and implementation details of our algorithms and the baselines.
**Measurement.** We follow Goyal et al. [19] in reporting AUC score: the area under the receiver operating characteristic (ROC) curve. The dataset is skewed (only 12% of comments contain identity attacks), and AUC is appropriately sensitive to such class-imbalance. We randomly split the dataset into train-test partitions (70-30 split), evaluate methods across 25 trials of different splits, and report AUC mean and standard error.
**Alternative Gold Standards**. We measure label accuracy using two views of ground-truth: 1) the classic assumption of a single, objective gold standard vs. 2) that the gold standard is subjective and varies by community [29, 17, 19]. For the objective gold setting, we induce gold by majority vote over all annotators from Civil Comments [7]. Note that the annotators used to define the objective gold are completely disjoint from the set of annotators available for our task allocation experiments. In the subjective setting, we induce gold using the majority vote over the 5 annotators in [19] whose demographic identity matches the comment's target demographic. Here, the annotators available for task allocation include the experts whose majority vote determines gold. Again, gold labels are never used for training allocation models, but for evaluation purposes only.
**Results. Table 3** presents results for objective and subjective gold conditions. In both settings, training without prior information yields lower accuracy. We observe improvement in performance when using prior information despite the fact that differences in annotator accuracies across demographics are not statistically significant. This is because, after training, allocation weights for each input contain information from both prior and observed samples, and correspondingly every test input is assigned to the top-ranked annotator for that input. The accuracy of the top-ranked annotator is often better than the average annotator accuracy, leading to improved prediction scores.
**Results: objective gold.** We observe negligible standard error (\(\sim 0.01\)) in AUC scores across trials, indicating consistency of the mean AUC scores for comparing methods. Smooth-Matching achieves the best AUC score (0.62), 2% better than prior work baselines that lack prior information (i.e., training without \(dSim\)). Strict-Matching performs 1% worse than these baselines, likely due to insufficiency of using \(dSim\) only for initialization. In contrast, Smooth-Matching mitigates this issue by using \(dSim\) throughout the training.
**Results: subjective gold.** Smooth-Matching again achieves the top mean AUC score (0.71), with 5-7% improvement over baselines. In contrast with the objective gold setting, Strict-Matching also outperforms all baselines (1-3%). In general, we observe both larger margins and higher overall scores than in the objective gold setting. In part, this may reflect a simple dataset artifact (e.g., all methods perform better in the subjective vs. objective gold setting). We also noted a minor artifact earlier in experimental design that could inflate scores here: whereas the objective gold setting uses disjoint annotator pools to define gold vs. task allocation, here the annotator pool for task allocation also includes the 5 annotators who define the
\begin{table}
\begin{tabular}{l c c} \hline \hline & Objective gold & Subjective gold \\ Method & AUC Score & AUC Score \\ \hline Smooth-Matching &.62 (.01) &.71 (.01) \\ Strict-Matching &.59 (0) &.67 (.01) \\ \hline Goel and Faltings [15] &.60 (.01) &.64 (.02) \\ Tran-Thanh et al. [50] &.60 (0) &.66 (.01) \\ Keswani et al. [28] &.60 (.01) &.66 (.01) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Objective and subjective gold results. We report AUC over 25 trials (standard error in brackets).
community gold standard. Despite these confounds, strong intuition remains to expect greater benefit from task allocation in the community-gold setting: when community members are empowered to define gold for their community, we stand to benefit more from engaging them as annotators for their community.
## 5 Discussion, Limitations, and Future Work
**Availability of _gold standard_.** As mentioned in SS2.1, prior studies on task allocation often assume that annotator correctness can be accurately determined. However, in real-life we only have access to fallible human decisions (no outside oracle exists) and task allocation seeks to find the suitable annotators whose aggregated decision is considered the gold [25]. Given that humans define the gold, the assumption that we can accurately determine annotator correctness would not always be true. That is not to say correctness can never be determined; for tasks with objectively correct answers, e.g. question-answer tasks [13], annotator qualities can be easily measured. But when the ground truth is subjective (e.g., in toxicity analysis) or for settings that are yet unexplored through crowdsourcing (e.g., regional language moderation), assuming presence of gold labels can be unrealistic.
The subjectivity of ground truth is also an important factor. The predominant view in crowdsourcing, that the _gold standard_ is the majority decision of a random group of annotators, has been challenged by many studies [19, 29, 42, 45]. Community-defined gold standard definitions have thus been forwarded as a way to incorporate minority voices in machine learning [17, 45]. Our framework, hence, takes a contextual approach to task allocation. Providing annotator demographics and background as prior information ensures that the social context of the tasks is taken into account. Training our closed-loop framework does not require any ground truth, ensuring separation from predefined ideas of "correctness". Finally, evaluation can be performed for both community-based and majority-based gold standards (as in SS 4), depending on the application in question.
**Prior information and \(dSim\).** Through empirical evaluations, we show that prior information can improve closed-loop model training. However, certain settings may not require such prior information for accurate training, e.g. tasks where the ground truth is considered objective (e.g., factual question-answer datasets) or when annotator qualities are not input-specific. Secondly, providing incorrect or non-contextual prior information to the framework can have a negative impact on the training process. Our algorithms assume that \(dSim\) similarity is a weak proxy of annotator quality for a given task, and an incorrect estimate of \(dSim\) can lead to incorrect allocation in the initial iterations, thus derailing the entire training process (as observed for noisy \(dSim\) in SS 3). Finally, a key assumption we make is that annotator demographics and target demographics are known. While it is reasonable to expect annotator demographics to be provided (e.g., using in-take surveys during onboarding), demographics associated with the tasks (e.g., groups targeted in social media posts) may not always be available. In practice, target demographics would have to be manually labeled or automatically detected (with noise). Tackling noise in target demographics merits future exploration and can improve our framework's applicability.
**Annotator consultation costs.** Different annotators can have different consultation costs. E.g., platforms like Upwork [20] allow clients to employ human experts (or freelancers) for their posted jobs. Experts often have more experience/training (and higher prices) than generalist workers. Our framework supports each person having such additional input-specific costs. Let \(c_{e_{j}}:\mathcal{X}\rightarrow\mathbb{R}\) denote the input-specific cost function for annotator \(e_{j}\). Recall from SS2.1 that the loss function for the task allocation model is captured by \(\mathcal{L}_{D}\). To incorporate input-specific costs, we can alternately minimize a regularized loss function: \(\mathcal{L}:=\mathcal{L}_{D}+\lambda\cdot\mathbb{E}_{x}\left[D(x)^{\top}c(x)\right]\), where \(\lambda\geq 0\) is the cost hyperparameter. Minimizing this cost-regularized loss will ensure that the annotator costs are accounted for.
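The cost term enters the objective additively; the helper below is our own illustration of the formula above, using the soft committee prediction \(\sigma(D(x)^{\top}e(x))\) as a stand-in for \(\hat{y}_{u}(x)\).

```python
import torch

def cost_regularized_loss(D_x, e_x, y, costs, lam=0.1):
    """Log-loss of the soft committee prediction plus lam * D(x)^T c(x)."""
    y_hat = torch.sigmoid((D_x * e_x).sum())
    log_loss = -(y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat))
    return log_loss + lam * (D_x * costs).sum()
```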
**Updating annotators.** To add a new annotator, we can assign them weight proportional to their \(dSim\) value for any given input category and subsequent training will update this weight based on their predictions. Removing an annotator, however, can affect performance if this annotator had expertise in subspaces where all other annotators are inaccurate. If the removed annotator did not have any unique expertise, then choosing a large committee size can partially ameliorate this issue. However, alternate methods to deal with annotator removal would be beneficial in practice and merit further exploration.
## 6 Related Work
Multiple studies in crowdsourcing have evaluated the importance of repeated labelling and proposed methods to handle annotator heterogeneity [46, 25, 31, 59, 54, 27, 6, 15]. Traditionally, most studies consider heterogeneity amongst annotators but not amongst the input tasks. In contrast, our framework constructs input-specific allocation models. Certain recent papers propose methods to handle correlated heterogeneity of annotators and tasks in offline settings, where all annotators provide a decision for most tasks and the goal is to infer ground truth from given annotations [58, 49, 9]. However, offline settings can be expensive as it requires a large number of annotations. We tackle online settings where the relevant experts are chosen judiciously to obtain relatively larger cost-benefit.
Studies on online allocation forward a variety of methods to construct appropriate allocation models. Yin et al. [57] design budget-limited allocation policies that match the annotator preferences to task requirements. However, annotator preferences can be unavailable and would be susceptible to implicit biases. Liu et al. [32], Li et al. [30] propose frameworks that learn annotator accuracies by comparing to ground truth for completed tasks. SS2.1 shows that such frameworks perform poorly in closed-loop settings when accuracies are estimated using noisy human labels; they can potentially be employed if our proposed approach of using prior information is incorporated into their designs. Fan et al. [13] develop allocation models that assigns a task to those annotators who have past experience with similar tasks. However, estimating inter-task similarity in real-world settings can be expensive due to the large size and variability across tasks. Certain studies estimate annotator cognitive abilities or use their social network profiles to allocate tasks appropriately [22, 16, 12]. In absence of subsequent training, these methods will have low accuracy when the annotator profiles have insufficient information about their qualities. Our framework, instead, employs simpler demographic and task-related information to pre-assess worker qualities and complements it with subsequent training. Ho et al. [23] and Ho and Vaughan [24] study a different setup where human annotators arrive in an online fashion and provide allocation approaches when annotators' skill levels are known in advance. Bandit approaches [2, 33, 53, 50] can also be used for task allocation but primarily assume access to true rewards/penalties. In the case of closed-loop models, they suffer from exacerbated inaccuracies in heterogeneous settings, as observed in SS3.
Recent human-in-the-loop research studies deferral frameworks that train an automated classifier which can either make a prediction or defer the decision to a human [28, 36, 34]. Deciding whether a prediction should be made by the classifier or (one or more) humans is a task allocation problem. However, here again prior algorithms for deferral training assume access to true class labels for training [28, 36, 34, 21], which are unavailable in our closed-loop setting. In case of limited ground truth information, semi-supervised classification [8, 51, 37, 56, 26] selectively use either the model's prediction or labels from noisy crowd annotators to appropriately re-train the model. While the goal of these approaches is to train a classifier, our primary goal is to train an input-specific task allocation model that learns every human decision-maker's error region.
## 7 Conclusion
We initiate a study of a closed-loop online task allocation framework where decisions from the human annotators are used to continuously train the allocation model as well. We provide algorithms that utilize the available prior information about the annotators to bootstrap an accurate training process. By encoding prior information about the human annotators, e.g. demographics and background, we ensure that the learned allocation models are contextually-relevant. 3 |
2309.08464 | Differentially Private Average Consensus with Improved Accuracy-Privacy
Trade-off | This paper studies the average consensus problem with differential privacy of
initial states, for which it is widely recognized that there is a trade-off
between the mean-square computation accuracy and privacy level. Considering the
trade-off gap between the average consensus algorithm and the centralized
averaging approach with differential privacy, we propose a distributed
shuffling mechanism based on the Paillier cryptosystem to generate correlated
zero-sum randomness. By randomizing each local privacy-sensitive initial state
with an i.i.d. Gaussian noise and the output of the mechanism using Gaussian
noises, it is shown that the resulting average consensus algorithm can
eliminate the gap in the sense that the accuracy-privacy trade-off of the
centralized averaging approach with differential privacy can be almost
recovered by appropriately designing the variances of the added noises. We also
extend such a design framework with Gaussian noises to the one using Laplace
noises, and show that the improved privacy-accuracy trade-off is preserved. | Lei Wang, Weijia Liu, Fanghong Guo, Zixin Qiao, Zhengguang Wu | 2023-09-15T15:14:14Z | http://arxiv.org/abs/2309.08464v3 | # Differentially Private Average Consensus with Improved Accuracy-Privacy Trade-off
###### Abstract
This paper studies the average consensus problem with differential privacy of initial states, for which it is widely recognized that there is a trade-off between the mean-square computation accuracy and privacy level. Considering the trade-off gap between the average consensus algorithm and the centralized averaging approach with differential privacy, we propose a distributed shuffling mechanism based on the Paillier cryptosystem to generate correlated zero-sum randomness. By randomizing each local privacy-sensitive initial state with an i.i.d. Gaussian noise and the output of the mechanism using Gaussian noises, it is shown that the resulting average consensus algorithm can eliminate the gap in the sense that the accuracy-privacy trade-off of the centralized averaging approach with differential privacy can be almost recovered by appropriately designing the variances of the added noises. We also extend such a design framework with Gaussian noises to the one using Laplace noises, and show that the improved privacy-accuracy trade-off is preserved.
hardly used to infer the sensitive data [8, 9]. Particularly, in [9] the Paillier cryptosystem is employed to develop secure average consensus algorithms such that the average computation is completed in ciphertexts, i.e., there is no need to use the private key for decryption during computation. However, though providing privacy and accuracy guarantees, the computational and communication costs for encryption-based algorithms may be too heavy in practical applications [10].
Another common approach for privacy protection is to add offsets or masks to node states or their iteration processes. Along this line, [11] proposed to add offsets in such a way that for each node locally added offsets are zero in total, which ensures the exact average computation while achieving the privacy in the unobservability sense. Similar results have also been achieved in [12] by introducing time-varying output masks such that the masked time-varying system has the original system as its limit system. To quantify the achievable privacy, the variance matrix of the maximum likelihood estimation was employed in [13], where a privacy-preserving average consensus algorithm was established by adding and subtracting vanishing random noises. From a different perspective, [14] proposed to add constrained noises in consensus processes, where the inverse of the trace of the Fisher information matrix was used to measure the privacy guarantee. Note that in these efforts the privacy guarantees are established on the eavesdropped/accessed information and with uncertain robustness to the side information.
Differential privacy, a rigorous notion for defining and preserving data privacy, has been shown to be resilient to the side information and post-processing [15, 16]. In last decades, extensive developments have been emerged in such as signal processing [17, 18], control [19, 20, 21], and distributed computation [22, 23], etc, advancing the differential privacy as a gold standard in data privacy. Taking into account the average consensus with differential privacy of initial states, there are also many efforts in the literature [24, 25, 26, 27, 28]. Particularly, [24] developed an iterative consensus framework by adding a stream of noise drew from a time-varying Laplace distribution. In [25], a differentially private consensus algorithm was proposed by linearly perturbing the state-transition and message-generating functions with exponentially decaying noise. It is also shown in [25] that given adversaries having access to all the messages, achieving exact average in the mean-square sense is _impossible_ for average consensus algorithms under the requirement of differential privacy of the agents' initial states, and the corresponding optimal trade-off between the computation accuracy and the differential privacy can be achieved by the mechanism corresponding to the one-shot perturbation of initial states. With such an optimal trade-off, it is worth noting that for differential privacy the centralized average mechanism (i.e., publishing the perturbed average) shows a better accuracy-privacy trade-off, as shown in Section 2.2. More explicitly, given the same differential privacy requirement, the mean-square computation accuracy that can be achieved by the centralized approach is \(n\) times smaller than that of the average consensus algorithm with the one-shot perturbation of initial states for differential privacy [25], where \(n\) denotes the total agent number.
Motivated by the previously mentioned gap between the centralized approach and the average consensus algorithms in the literature for differential privacy, in this paper we revisit the average consensus problem with the requirement of differential privacy of agents' initial states against adversaries having access to all the messages, and aim to propose new differentially private average consensus algorithms with improved accuracy-privacy trade-offs. Inspired by [9], we propose a distributed shuffling mechanism based on the Paillier cryptosystem to generate correlated zero-sum randomness. With such a mechanism using Gaussian noises, we then inject the resulting correlated randomness and an extra i.i.d. Gaussian noise to the local data for average computation as the initialization step of the average consensus algorithm. It is shown that the resulting average consensus algorithm can preserve the desired differential privacy, while achieving exponential convergence to the average subject to an error relying on the added noises. We also extend such a design framework with Gaussian noises to the one using Laplace noises. Our contribution mainly lies in proposing two new design frameworks of differentially private average consensus algorithms, respectively, using Gaussian and Laplace noises, both of which can almost recover the accuracy-privacy trade-off of the corresponding centralized averaging approach. More explicitly, we show that, with the introduction of the proposed distributed shuffling mechanism and an extra i.i.d. Gaussian/Laplace noise to the initialization step, the resulting average consensus algorithms can eliminate the gap in the sense that the achieved trade-off can be adjusted arbitrarily close to that of the centralized averaging approach by appropriately designing the variance of the added noises.
The remainder of the paper is organized as follows. Section 2 presents the gap of the accuracy-privacy trade-off between the existing average consensus algorithm and the centralized algorithm for differential privacy, and formulates the problem of interest. In Section 3, the Paillier cryptosystem is employed to develop a distributed shuffling mechanism, which is then used in the initialization step of the average consensus algorithm in Sections 4 and 5 with Gaussian and Laplace noises, respectively for differentially private average consensus algorithms. Case studies are given in Section 6 to validate the effectiveness of the proposed algorithms. The conclusion is drawn in Section 7. This paper is a significant extension over the preliminary version [1] by reformulating the problem in Section 2, developing new technical results in Section 5 and simulations in Section 6.
**Notation**. Denote by \(\mathbb{R}\) the real numbers, \(\mathbb{R}^{n}\) the real space of \(n\) dimension for any positive integer \(n\) and \(\mathbb{N}\) the set of natural numbers. For a vector \(\mathbf{x}\in\mathbb{R}^{n}\), denote \(x_{i}\) as the \(i\)-th entry of \(\mathbf{x}\), and \(\|\mathbf{x}\|_{0}\), \(\|\mathbf{x}\|_{1}\) and \(\|\mathbf{x}\|\) as the 0, 1, and 2-norm of vector \(\mathbf{x}\), respectively, and for any set \(\mathrm{p}\subseteq\{1,2,\ldots,n\}\) of \(l\) elements, \(\mathbf{x}_{\mathrm{p}}\) a vector of dimension \(l\) with each entry being \(x_{j}\) with \(j\in\mathrm{p}\). Denote \(\mathbf{e}_{i}\) a basis vector whose entries are all zero except the \(i\)-th being one. Denote by \(\boldsymbol{\eta}\sim\mathcal{N}(\mu,\sigma^{2})^{r}\) if each entry in \(\boldsymbol{\eta}\in\mathbb{R}^{r}\) is i.i.d. drawn from a Gaussian distribution with mean \(\mu\) and variance \(\sigma^{2}\), and \(\boldsymbol{\eta}\sim\mathcal{L}(\mu,b)^{r}\), if each entry in \(\boldsymbol{\eta}\in\mathbb{R}^{r}\) is i.i.d. drawn from a Laplace distribution with mean \(\mu\) and variance \(2b^{2}\). Define \(\Phi(s):=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{s}e^{-\tau^{2}/2}d\tau\) and
\[\kappa_{\epsilon}(s):=\Phi(\frac{s}{2}-\frac{\epsilon}{s})-e^{\epsilon}\Phi(- \frac{s}{2}-\frac{\epsilon}{s})\,.\]
Denote \(\bar{\kappa}_{\epsilon}(\delta)\) as the inverse function (Footnote 3) of \(\kappa_{\epsilon}(s)\) for any \(\epsilon\geq 0\), i.e., \(\bar{\kappa}_{\epsilon}(\kappa_{\epsilon}(s))=\kappa_{\epsilon}(\bar{\kappa}_{\epsilon}(s))=s\) for \(s>0\).
Footnote 3: It can be verified that \(\frac{\partial\kappa_{\epsilon}(s)}{\partial s}=e^{-\frac{1}{2}(\frac{ \epsilon}{2}-\frac{\epsilon}{s})^{2}}/\sqrt{2\pi}>0\). This indicates that the function \(\kappa_{\epsilon}(\cdot)\) and thus its inverse \(\bar{\kappa}_{\epsilon}(\cdot)\) are strictly increasing functions.
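Since \(\bar{\kappa}_{\epsilon}(\delta)\) appears repeatedly in the bounds below, it is worth noting that it can be evaluated numerically; a small sketch using SciPy (ours, relying on the monotonicity noted in Footnote 3) is given here.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def kappa(eps, s):
    return norm.cdf(s / 2 - eps / s) - np.exp(eps) * norm.cdf(-s / 2 - eps / s)

def kappa_inv(eps, delta, lo=1e-6, hi=100.0):
    """Solve kappa(eps, s) = delta for s > 0 (kappa is increasing in s)."""
    return brentq(lambda s: kappa(eps, s) - delta, lo, hi)

print(kappa_inv(1.0, 1e-3))   # the value of bar-kappa_eps(delta) for eps = 1, delta = 1e-3
```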
## 2 Problem Statement
### Preliminaries
In this paper, we study the problem of average consensus over a communication network \(\mathrm{G}=(\mathrm{V},\mathrm{E})\) where \(\mathrm{V}=\{1,\ldots,n\}\) is the agent set, \(\mathrm{E}\subseteq\mathrm{V}\times\mathrm{V}\) is the edge set, and each agent \(i\in\mathrm{V}\) holds a _privacy-sensitive_ local data \(d_{i}\). Throughout the paper, the following assumption on the communication network \(\mathrm{G}\) is made.
**Assumption 1**: _The communication graph \(\mathrm{G}\) is undirected and connected. Moreover, denote by \(w_{ij}\) the weight of the edge \((i,j)\), satisfying \(w_{ij}=w_{ji}>0\) if \((i,j)\in\mathrm{E}\), \(w_{ji}=0\) if \((i,j)\notin\mathrm{E}\) and \(\sum_{j\in\mathrm{V}}w_{ij}<1\) for all \(i\in\mathrm{V}\)._
Denote the Laplacian matrix of \(\mathrm{G}\) as \(\mathbf{L}\), satisfying \([\mathbf{L}]_{ij}=-w_{ij}\), \(j\neq i\) and \([\mathbf{L}]_{ii}=\sum_{k=1}^{n}w_{ik}\) for all \(i\in\mathrm{V}\). Let us arrange the eigenvalues of \(\mathbf{L}\) in increasing order as \(\lambda_{1}^{\mathbf{L}}\leq\lambda_{2}^{\mathbf{L}}\leq\ldots\leq\lambda_{n}^{\mathbf{L}}\). By [29], with Assumption 1, we have \(0=\lambda_{1}^{\mathbf{L}}<\lambda_{2}^{\mathbf{L}}\leq\lambda_{n}^{\mathbf{L}}<2\). Denote \(\beta=\max\{|1-\lambda_{2}^{\mathbf{L}}|,|1-\lambda_{n}^{\mathbf{L}}|\}\), satisfying \(0\leq\beta<1\).
A standard algorithm to solve the average consensus problem follows
\[x_{i}(t+1)=x_{i}(t)+\sum_{j\in\mathrm{V}}w_{ij}(x_{j}(t)-x_{i}(t))\,. \tag{1}\]
By initializing each agent state \(x_{i}(0)=d_{i}\), it is well-known that under Assumption 1, each agent state \(x_{i}(t)\) converges to the average \(d^{*}:=\sum_{i=1}^{n}d_{i}/n\) exponentially [29]. However, it is worth noting that during the average computation, there may be adversaries who have access to the communication messages over the communication graph \(\mathrm{G}\) and may infer the privacy-sensitive local data of the network. This thus leads to the study of modifying the algorithm (1) for privacy-preserving purpose. In view of this, this paper takes into account the differential privacy of these local data, and aim to develop new distributed average consensus algorithms to compute the average with privacy guarantees.
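As a quick numerical check of this convergence, the sketch below builds a small ring graph satisfying Assumption 1, verifies \(\beta<1\), and runs iteration (1); the graph and weights are illustrative choices of ours.

```python
import numpy as np

n = 5
W = np.zeros((n, n))
for i in range(n):                           # ring graph with weight 0.3 per edge
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 0.3
L = np.diag(W.sum(axis=1)) - W               # graph Laplacian

eigs = np.sort(np.linalg.eigvalsh(L))
beta = max(abs(1 - eigs[1]), abs(1 - eigs[-1]))
print("beta =", beta)                        # < 1 under Assumption 1

d = np.random.default_rng(0).uniform(size=n) # private local data d_i
x = d.copy()
for _ in range(200):
    x = x - L @ x                            # iteration (1): x(t+1) = (I - L) x(t)
print(np.allclose(x, d.mean()))              # True: every state reaches the average
```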
Denote \(\mathscr{M}:\mathcal{D}\rightarrow\mathcal{M}\) as the mapping from the local data to the eavesdropped messages. Let \(\mathcal{D}\subseteq\mathbb{R}^{n}\) be the input space of the private local data, and any pair of data \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\) drawn from \(\mathcal{D}\) are said to be \(\mu\)-adjacent with \(\mu>0\), denoted by \((\mathbf{x},\mathbf{x}^{\prime})\in\mathrm{Adj}(\mu)\), if \(\|\mathbf{x}-\mathbf{x}^{\prime}\|_{0}=1\) and \(\|\mathbf{x}-\mathbf{x}^{\prime}\|_{1}\leq\mu\). We present the following definition [15].
_Definition_. Denote the vector of the sensitive local data as \(\mathbf{d}=[d_{1};d_{2};\ldots;d_{n}]\). A distributed average consensus algorithm over the communication graph \(\mathrm{G}\) preserves \((\epsilon,\delta)\)-differential privacy of \(\mathbf{d}\) under \(\mu\)-adjacency for \(\epsilon\geq 0,\delta\in[0,1)\), if for all \(\mathcal{M}\subseteq\mathrm{range}(\mathscr{M})\), there holds
\[\mathbb{P}(\mathscr{M}(\mathbf{d})\in\mathcal{M})\leq e^{\epsilon}\mathbb{P}( \mathscr{M}(\mathbf{d}^{\prime})\in\mathcal{M})+\delta \tag{2}\]
for any \((\mathbf{d},\mathbf{d}^{\prime})\in\mathrm{Adj}(\mu)\).
**Remark 1**: _In the above definition of \((\epsilon,\delta)\)-differential privacy, when the privacy budget \(\delta=0\), the corresponding \((\epsilon,0)\)-differential privacy is also called \(\epsilon\)-differential privacy [15]._
### Problem Definition
Differentially private average consensus algorithms have been investigated in the literature (e.g. [25, 26]). Particularly, it is shown in [25] that the optimal trade-off between the privacy and the accuracy is achieved by injecting an i.i.d. noise to each local data before assigning it as the initial state, i.e., the so-called one-shot perturbation following
\[x_{i}(0)=d_{i}+\xi_{i} \tag{3}\]
with the injected i.i.d. noise \(\xi_{i}\), and then running the standard average consensus algorithm. For convenience, we refer to the above Differentially Private Average Consensus algorithm with the One-Shot Perturbation (1)-(3) as the DPAC-OSP algorithm for short. In the following, we present the accuracy-privacy trade-offs of the DPAC-OSP algorithm with Laplace and Gaussian noises, both widely used in the literature to achieve differential privacy with budgets \((\epsilon,0)\) and \((\epsilon,\delta)\) with \(\delta>0\), respectively.
**Proposition 1** (Trade-offs of DPAC-OSP algorithm): _Consider the DPAC-OSP algorithm (1)-(3)._
* **Laplace Mechanism**_. For any_ \(\epsilon>0\)_,_ \(\mu>0\)_, and_ \(\xi_{i}\sim\mathcal{L}(0,\sigma_{\xi})\)_, if the_ \((\epsilon,0)\)_-differential privacy of_ **d** _under_ \(\mu\)_-adjacency is preserved, then there must hold_ \[\sigma_{\xi}\geq\mu/\epsilon\,,\] (4) \[\lim_{t\to\infty}\mathbb{E}|x_{i}(t)-d^{*}|^{2}\geq 2\mu^{2}/(n \epsilon^{2})\,.\] (5)
* **Gaussian Mechanism**_. For any_ \(\epsilon\geq 0\)_,_ \(\delta\in(0,1)\)_,_ \(\mu>0\)_, and_ \(\xi_{i}\sim\mathcal{N}(0,\sigma_{\xi}^{2})\)_, if the_ \((\epsilon,\delta)\)_-differential privacy of_ **d** _under_ \(\mu\)_-adjacency is preserved, then there must hold_ \[\sigma_{\xi}\geq\mu/\bar{\kappa}_{\epsilon}(\delta)\,,\] (6) \[\lim_{t\to\infty}\mathbb{E}|x_{i}(t)-d^{*}|^{2}\geq\mu^{2}/(n \bar{\kappa}_{\epsilon}(\delta)^{2})\.\] (7)
By denoting \(\boldsymbol{\xi}=\mathrm{col}(\xi_{1},\ldots,\xi_{n})\), the mechanism for privacy analysis is given by \(\mathscr{M}(\mathbf{d})=\mathbf{d}+\boldsymbol{\xi}\), from which (4) and (6) can be easily verified by recalling [18] and [30, 27], respectively. As for (5) and (7), they are clear by noting that each agent state converges to the averaged state \(d^{*}+\mathbf{1}_{n}^{\top}\boldsymbol{\xi}/n\)[29] and then using (4) and (6).
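Since the iterates of (1) converge to the average of the initial states, the trade-off in Proposition 1 can be checked empirically without running the recursion. The sketch below is illustrative only: it uses the Laplace mechanism with the minimal scale from (4) and compares the Monte-Carlo mean-square error with the bound (5); the values of \(n\), \(\mu\), and \(\epsilon\) are hypothetical.

```python
# Illustrative Monte-Carlo check of Proposition 1 (Laplace case) for DPAC-OSP.
import numpy as np

rng = np.random.default_rng(1)
n, mu, eps = 50, 1.0, 0.5
d = rng.uniform(10, 15, size=n)
sigma_xi = mu / eps                                    # noise scale bound (4)

errs = []
for _ in range(20000):
    xi = rng.laplace(0.0, sigma_xi, size=n)            # one-shot perturbation (3)
    x_inf = d.mean() + xi.mean()                       # limit of algorithm (1)
    errs.append((x_inf - d.mean()) ** 2)

print(np.mean(errs), 2 * mu**2 / (n * eps**2))         # empirical MSE vs bound (5)
```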
If there is a center storing all private data \(\{d_{i}\}\), then the average can be computed in a centralized way as \(\sum_{i=1}^{n}d_{i}/n\). For differential privacy concern, the center generates a random noise \(\xi\) and publishes the perturbed average as
\[x=\frac{1}{n}\mathbf{1}_{n}^{\top}\mathbf{d}+\xi. \tag{8}\]
For convenience, we refer to the above Differentially Private Centralized Averaging algorithm (8) as the DPCA algorithm for short. The resulting accuracy-privacy trade-offs under Laplace and Gaussian mechanisms are given below.
**Proposition 2** (Trade-offs of DPCA algorithm): _Consider the DPCA algorithm (8)._
* **Laplace Mechanism**_. For any_ \(\epsilon\geq 0\)_,_ \(\mu>0\)_, and_ \(\xi\sim\mathcal{L}(0,\sigma_{\xi})\)_, if the_ \((\epsilon,0)\)_-differential privacy of_ **d** _under_ \(\mu\)_-adjacency is preserved, then there holds_ \[\mathbb{E}|x-d^{*}|^{2}=2\sigma_{\xi}^{2}\geq 2\mu^{2}/(n^{2}\epsilon^{2}).\]
* **Gaussian Mechanism**_. For any_ \(\epsilon\geq 0\)_,_ \(\delta\in(0,1)\)_,_ \(\mu>0\)_, and_ \(\xi\sim\mathcal{N}(0,\sigma_{\xi}^{2})\)_, if the_ \((\epsilon,\delta)\)_-differential privacy of_ **d** _under_ \(\mu\)_-adjacency is preserved, then there holds_ \[\mathbb{E}|x-d^{*}|^{2}=\sigma_{\xi}^{2}\geq\mu^{2}/(n^{2}\bar{\kappa}_{ \epsilon}(\delta)^{2}).\]
The proof of the above proposition is clear by using [18, Theorem 2] and [30, Theorem 8] and is thus omitted.
**Problem Statement.** It can be seen from Propositions 1 and 2 that under both Laplace and Gaussian mechanisms achieving the given differential privacy requirements, the achievable mean-square computation error of the DPCA algorithm is \(n\) times smaller than that of the DPAC-OSP algorithm, which makes a significant difference when the network size is very large. Motivated by this gap, this paper aims to propose new differentially private average consensus algorithms which improve the accuracy-privacy trade-off in the sense of reducing and even eliminating this gap.
## 3 Distributed Shuffling Mechanism
In this section, inspired by [9], we propose a distributed shuffling mechanism by employing the technique of Paillier cryptosystem [31] to generate correlated randomness in a distributed and secure manner, which will play a significant role in the initialization step of the average consensus algorithm.
Denote the encryption and decryption operations based on the Paillier cryptosystem as \(\mathrm{E}_{i}(\cdot)\) and \(\mathrm{D}_{i}(\cdot)\), respectively for agent \(i\in\mathrm{V}\). The proposed distributed shuffling mechanism is given in Algorithm 1.
```
Input: Data \(d_{i}\), public and private key pairs \((k_{pi},k_{si})\), and a large positive integer \(\bar{a}>>1\).
1. Each agent \(i\in\mathrm{V}\) generates an i.i.d. noise \(\eta_{i}\) with some probability distribution function \(f_{i}\) and adds to the local data \(d_{i}\): \(d_{i}\to\bar{d}_{i}:=d_{i}+\eta_{i}\);
2. Each agent \(i\in\mathrm{V}\) encrypts \(-\bar{d}_{i}\) with the local public key \(k_{pi}:-\bar{d}_{i}\to\mathrm{E}_{i}(-\bar{d}_{i})\), and sends the local ciphertext \(\mathrm{E}_{i}(-\bar{d}_{i})\) and public key \(k_{pi}\) to neighboring agents \(j\in\mathrm{N}_{i}\);
3. Each agent \(i\in\mathrm{V}\) encrypts the noisy data \(\bar{d}_{i}\) with the received public keys \(k_{pj}:\bar{d}_{i}\to\mathrm{E}_{j}(\bar{d}_{i})\) for \(j\in\mathrm{N}_{i}\), and computes \(c_{ij}=\mathrm{E}_{j}(\bar{d}_{i})\mathrm{E}_{j}(-\bar{d}_{j})\) for \(j\in\mathrm{N}_{i}\);
4. Each agent \(i\in\mathrm{V}\) independently and randomly generates a set of positive integers \(a_{i\to j}\in[\bar{a}/2,\bar{a}]\), \(j\in\mathrm{N}_{i}\), and computes \((c_{ij})^{a_{i\to j}}\), for \(j\in\mathrm{N}_{i}\);
5. Each agent \(i\in\mathrm{V}\) sends the computed \((c_{ij})^{a_{i\to j}}\) to agent \(j\in\mathrm{N}_{i}\), and decrypts the received \((c_{ji})^{a_{j\to i}}\) with the local private key \(k_{si}:(c_{ji})^{a_{j\to i}}\to\mathrm{D}_{i}((c_{ji})^{a_{j\to i}})\), \(j\in\mathrm{N}_{i}\);
6. Each agent \(i\in\mathrm{V}\) multiplies each \(\mathrm{D}_{i}((c_{ji})^{a_{j\to i}})\) by \(a_{i\to j}\), \(j\in\mathrm{N}_{i}\), and computes the sum \[\Delta_{i}=\sum_{j\in\mathrm{N}_{i}}a_{i\to j}\mathrm{D}_{i}((c_{ji})^{a_{j\to i}})\,.\] Output:\(\Delta_{i}\), \(i\in\mathrm{V}\).
```
**Algorithm 1** Distributed Shuffling (DiShuf) Mechanism
**Remark 2**.: _The distributed shuffling process in Algorithm 1 follows the communication framework proposed in [9] to guarantee that the communicated messages are ciphertexts, ensuring security of the actual messages and thus the related sensitive data against the eavesdropper. More explicitly, in [9] a similar computation process to Algorithm 1 is implemented iteratively for node state updates with no noise added to node states, leading to a secure average consensus algorithm with convergence to the exact average. This is different from our cases (see the subsequent DiShuf-based average consensus algorithms), where Algorithm 1 is implemented for only one time at the initialization step. As a result, in contrast with the average consensus algorithm in [9], our DiShuf-based average consensus algorithms need less computational and communication costs, but at the price of sacrificing some privacy and accuracy (can see Theorems 1-6 subsequently)._
It is noted that the Paillier cryptosystem has the following two significant properties:
* _Homomorphic addition_ \[\mathrm{E}_{i}(m_{1}+m_{2})=\mathrm{E}_{i}(m_{1})\mathrm{E}_{i}(m_{2}).\] (9)
* _Homomorphic multiplication_ \[\mathrm{E}_{i}(km)=\mathrm{E}_{i}(m)^{k}\,,\quad\forall k\in\mathbb{Z}^{+}.\] (10)
Bearing in mind the above properties, we observe that
\[\begin{array}{rl}\Delta_{i}&=\,\sum_{j\in\mathrm{N}_{i}}a_{i\to j}\mathrm{D}_{i}((c_{ji})^{a_{j\to i}})\\ &=\,\sum_{j\in\mathrm{N}_{i}}a_{i\to j}\mathrm{D}_{i}((\mathrm{E}_{i}(\bar{d}_{j})\mathrm{E}_{i}(-\bar{d}_{i}))^{a_{j\to i}})\\ &=\,\sum_{j\in\mathrm{N}_{i}}a_{i\to j}\mathrm{D}_{i}((\mathrm{E}_{i}(\bar{d}_{j}-\bar{d}_{i}))^{a_{j\to i}})\\ &=\,\sum_{j\in\mathrm{N}_{i}}a_{i\to j}\mathrm{D}_{i}(\mathrm{E}_{i}(a_{j\to i}(\bar{d}_{j}-\bar{d}_{i})))\\ &=\,\sum_{j\in\mathrm{N}_{i}}a_{i\to j}a_{j\to i}(\bar{d}_{j}-\bar{d}_{i})\\ &=\,\sum_{j\in\mathrm{N}_{i}}a_{i\to j}a_{j\to i}(d_{j}-d_{i}+\eta_{j}-\eta_{i})\,.\end{array}\]
Note that \(\Delta_{i}\), \(i\in\mathrm{V}\) are correlated random variables, satisfying \(\sum_{i=1}^{n}\Delta_{i}=0\).
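The following sketch walks through Algorithm 1 on a small path graph. It assumes the third-party python-phe package for the Paillier cryptosystem, whose encrypted numbers expose the homomorphic addition (9) as `+` and the exponentiation-based scaling (10) as `*` by an integer; the graph, key length, noise level, and \(\bar{a}\) are illustrative.

```python
# Sketch of Algorithm 1 on a 3-agent path graph, assuming the python-phe package
# (pip install phe).  All choices below are illustrative, not from the paper.
import numpy as np
from phe import paillier

rng = np.random.default_rng(2)
n = 3
neighbors = {0: [1], 1: [0, 2], 2: [1]}                 # path graph 0 - 1 - 2
keys = [paillier.generate_paillier_keypair(n_length=1024) for _ in range(n)]
a_bar = 100

d = rng.uniform(10, 15, size=n)
d_bar = d + rng.normal(0.0, 1.0, size=n)                # step 1: d_i + eta_i
a = {(i, j): int(rng.integers(a_bar // 2, a_bar + 1))   # step 4: a_{i->j}
     for i in neighbors for j in neighbors[i]}

Delta = np.zeros(n)
for i in neighbors:
    pub_i, priv_i = keys[i]
    for j in neighbors[i]:
        # steps 2-3 (agent j's side): c_ji = E_i(d_bar_j) "times" E_i(-d_bar_i)
        c_ji = pub_i.encrypt(float(d_bar[j])) + pub_i.encrypt(float(-d_bar[i]))
        # step 5: agent j blinds with a_{j->i}; agent i decrypts
        decrypted = priv_i.decrypt(c_ji * a[(j, i)])
        # step 6: agent i re-scales by a_{i->j} and accumulates
        Delta[i] += a[(i, j)] * decrypted

print(Delta.sum())   # zero up to floating-point error, i.e. sum_i Delta_i = 0
```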
In the proposed DiShuf mechanism, it is worth noting that by observing the communication messages, e.g., the local ciphertexts \(\mathrm{E}_{i}(-\bar{d}_{i})\) and \(c_{ij}^{a_{i\to j}}\), the eavesdroppers have no access to the actual information, e.g., \(-\bar{d}_{i}\) or \(a_{j\to i}(\bar{d}_{j}-\bar{d}_{i})\) due to the lack of the private keys. Thus, throughout the paper we assume that the "actual" communication messages (e.g., \(-\bar{d}_{i}\) and \(a_{j\to i}(\bar{d}_{j}-\bar{d}_{i})\)) are secure, and will not be incorporated into the eavesdropper information in the subsequent differential privacy analysis. Moreover, if there is a malicious agent \(i\), the received \(\mathrm{E}_{j}(-\bar{d}_{j})\) and \((c_{ji})^{a_{j\to i}}\), \(j\in\mathrm{N}_{i}\) cannot be used to infer \(\bar{d}_{j}\) as the private key \(k_{sj}\) and \(a_{j\to i}\) are unknown to agent \(i\).
**Remark 3**.: _It is noted that the Paillier cryptosystem in Algorithm 1 works on integers, while real world agent states \(d_{i}\) and added noises \(\eta_{i}\) are typically represented by floating point numbers in modern computing architectures. To handle such an issue, one may multiply \(\bar{d}_{i}\) by a large integer \(C\) and take the nearest integer to encrypt, then divide the decrypted result by \(C\) after applying Dishuf Mechanism. By choosing a sufficiently large \(C\), quantization errors can be made negligible, as addressed in [9]._
## 4 DiShuf-based Average Consensus: Gaussian Mechanism
In this section, the DiShuf mechanism in Algorithm 1 is employed to develop a new differentially private average consensus algorithm, where the added noises follow the Gaussian distribution, leading to the Gaussian mechanism for differential privacy analysis.
### Algorithm
We propose the DiShuf-based average consensus algorithm with Gaussian noises in Algorithm 2.
```
Input: Data \(d_{i}\), public and private key pairs \((k_{pi},k_{si})\), \(\zeta,\sigma_{\eta},\sigma_{\gamma}>0\) and a large positive integer \(\bar{a}>>1\).
1. Each agent \(i\in\mathrm{V}\) implements the DiShuf mechanism with \(\eta_{i}\sim\mathcal{N}(0,\sigma_{\eta}^{2})\), and outputs \(\Delta_{i}\);
2. Each agent \(i\in\mathrm{V}\) initializes \[x_{i}(0)=d_{i}+\zeta\Delta_{i}+\gamma_{i}\] (11) with \(\gamma_{i}\sim\mathcal{N}(0,\sigma_{\gamma}^{2})\) being an i.i.d. Gaussian noise;
3. For \(t=0,1,...\), run 3.1 Each agent \(i\in\mathrm{V}\) sends the local state \(x_{i}(t)\) to the neighboring agents; 3.2 Each agent \(i\in\mathrm{V}\) updates its state following (1).
```
**Algorithm 2** DiShuf-based Average Consensus
As \(\sum_{i=1}^{n}\Delta_{i}=0\), it can be easily verified that
\[\sum_{i=1}^{n}x_{i}(0)=\sum_{i=1}^{n}d_{i}+\sum_{i=1}^{n}\gamma_{i}\,,\]
which is independent of \(\eta_{i}\). In other words, the noises \(\eta_{i}\) do not affect the average of \(x_{i}(0)\), \(i\in\mathrm{V}\). We also note that if \(x_{i}(0)\), \(i\in\mathrm{V}\) are published, the corresponding differential privacy is determined by both noises \(\eta_{i},\gamma_{i}\). In contrast with the idea of adding the i.i.d. noise \(\bar{\xi}_{i}\) to the local data \(d_{i}\) directly as in Proposition 1, Algorithm 2 provides an extra design freedom (i.e., \(\eta_{i}\), or \(\sigma_{\eta}\)), which affects the achievable differential privacy by publishing \(x_{i}(0)\), \(i\in\mathrm{V}\), but with no influence to their average. As a result, a better trade-off between the privacy and the
accuracy can be achieved by appropriately designing \(\sigma_{\eta}\) and \(\sigma_{\gamma}\).
### Privacy and Accuracy Analysis
The differential privacy and computation accuracy of Algorithm 2 are summarized in the following theorems, with the proof given in Appendices A and B.
**Theorem 1** (Differential Privacy): _Let \(\zeta=\frac{1}{n\bar{a}^{2}+1}\). For any \(\epsilon\geq 0\), \(\delta\in(0,1)\) and \(\mu>0\), Algorithm 2 preserves the \((\epsilon,\delta)\)-differential privacy of \(\mathbf{d}\) under \(\mu\)-adjacency if_
\[\frac{1}{n\sigma_{\gamma}^{2}}+\frac{(n-1)\alpha^{2}}{\sigma_{\gamma}^{2}+(1- \alpha)^{2}\sigma_{\eta}^{2}}\leq\frac{(\bar{\kappa}_{\epsilon}(\delta))^{2}}{ \mu^{2}} \tag{12}\]
_where_
\[\alpha=\left(1-\frac{1}{(2(n+\bar{a}^{-2}))^{n-1}}\right)^{1/(n-1)}. \tag{13}\]
**Theorem 2** (Convergence): _Let \(\zeta=\frac{1}{n\bar{a}^{2}+1}\)._
1. _The agent states_ \(x_{i}(t)\) _exponentially converge to_ \[x(\infty):=d^{*}+\mathbf{1}_{n}^{\top}\boldsymbol{\gamma}/n\] _with convergence rate_ \(\ln(1/\beta)\)_._
2. \(\lim_{t\rightarrow\infty}\mathbb{E}|x_{i}(t)-d^{*}|=0\)_._
3. \(\lim_{t\rightarrow\infty}\mathbb{E}|x_{i}(t)-d^{*}|^{2}=\sigma_{\gamma}^{2}/n\)_._
From Theorems 1 and 2, it is clear that there is a trade-off between the differential privacy level and mean-square computation accuracy. More explicitly, let
\[\sigma_{\gamma}=\frac{(1+g)\mu}{\sqrt{n}\bar{\kappa}_{\epsilon}(\delta)} \tag{14}\]
with \(g>0\) a design freedom. Then (12) is satisfied if and only if there holds
\[\sigma_{\eta}^{2}\geq\frac{(n-1)\alpha^{2}}{(1-\alpha)^{2}(\bar{\kappa}_{ \epsilon}(\delta))^{2}}\left[\frac{(1+g)^{2}\mu^{2}}{(1+g)^{2}-1}-\frac{(1+g) ^{2}\mu^{2}}{n(n-1)\alpha^{2}}\right]\,. \tag{15}\]
Therefore, by Theorems 1 and 2 we can easily conclude the following result on the accuracy-privacy trade-off.
**Theorem 3** (Trade-off): _Let \(\zeta=\frac{1}{n\bar{a}^{2}+1}\), and \(\epsilon\geq 0,\delta\in(0,\,1),\mu>0\) be any expected differential privacy levels. By choosing the noise levels \(\sigma_{\gamma}\) and \(\sigma_{\eta}\) satisfying (14) and (15), respectively for any \(g>0\), Algorithm 2 preserves the \((\epsilon,\delta)\)-differential privacy of \(\mathbf{d}\) under \(\mu\)-adjacency, while rendering the mean-square computation error to satisfy_
\[\lim_{t\rightarrow\infty}\mathbb{E}|x_{i}(t)-d^{*}|^{2}\geq\frac{(1+g)^{2} \mu^{2}}{n^{2}(\bar{\kappa}_{\epsilon}(\delta))^{2}}\,, \tag{16}\]
_where "\(=\)" holds if "\(=\)" in (15) holds._
It is clear from Theorem 3 that the trade-off between the differential privacy and mean-square computation accuracy cannot be removed, which is consistent with other differentially private average consensus algorithms [25, 26]. However, we note that the achievable mean-square computation accuracy by Algorithm 2 is _inversely proportional_ to the _square_ of the agent number \(n\), as the centralized Gaussian mechanism in Proposition 2. Moreover, by Proposition 1 when achieving the same levels of differential privacy, the best mean-square computation error of the DPAC-OSP algorithm is \(\mu^{2}/[n(\bar{\kappa}_{\epsilon}(\delta))^{2}]\), which is \(n/(1+g)^{2}\) larger than that of our proposed Algorithm 2 with \(g>0\) an arbitrarily chosen constant. By Theorem 3 and Proposition 2, our achieved mean-square computation error is \((1+g)^{2}\) larger than that of the DPCA algorithm in Proposition 2. This means that the trade-off gap between the proposed Algorithm 2 and the DPCA algorithm can be almost eliminated by selecting \(g>0\) small enough.
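To make the calibration in Theorem 3 concrete, the sketch below evaluates \(\alpha\), \(\sigma_{\gamma}\), \(\sigma_{\eta}\), and the resulting error floor; \(\bar{\kappa}_{\epsilon}(\delta)\) is obtained by numerically inverting the strictly increasing function \(\kappa_{\epsilon}\) of footnote 3 (see also (A.6)) with SciPy. The budgets, \(\bar{a}\), and \(g\) are illustrative values.

```python
# Illustrative noise calibration for Algorithm 2 following (13)-(16).
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def kappa(s, eps):
    return norm.cdf(s / 2 - eps / s) - np.exp(eps) * norm.cdf(-s / 2 - eps / s)

def kappa_bar(eps, delta):
    return brentq(lambda s: kappa(s, eps) - delta, 1e-8, 1e3)

n, a_bar = 10, 1e4
eps, delta, mu, g = 10.0, 1e-1, 5.0, 0.01

alpha = (1 - 1 / (2 * (n + a_bar**-2)) ** (n - 1)) ** (1 / (n - 1))     # (13)
kb = kappa_bar(eps, delta)
sigma_gamma = (1 + g) * mu / (np.sqrt(n) * kb)                          # (14)
sigma_eta2 = ((n - 1) * alpha**2 / ((1 - alpha) ** 2 * kb**2)) * (
    (1 + g) ** 2 * mu**2 / ((1 + g) ** 2 - 1)
    - (1 + g) ** 2 * mu**2 / (n * (n - 1) * alpha**2))                  # (15), equality
mse_floor = (1 + g) ** 2 * mu**2 / (n**2 * kb**2)                       # (16)
print(sigma_gamma, np.sqrt(sigma_eta2), mse_floor)
```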
## 5 DiShuf-based Differentially Private Average Consensus: Laplace Mechanism
In this section, the DiShuf mechanism in Algorithm 1 is employed to develop a new differentially private average consensus algorithm, where the noises follow the Laplace distribution, leading to the Laplace mechanism for differential privacy analysis.
### Algorithm
As there is no guarantee in general that the sum of multiple Laplace noises follows the Laplace distribution, Algorithm 2 with Gaussian mechanism cannot be directly adapted to the case with Laplace mechanism for \((\epsilon,0)\)-differential privacy, by replacing the Gaussian distribution by the Laplace distribution for the added noises. In view of this, we assume that there is a secure and pre-defined agent \(k^{*}\in\mathrm{V}\), and propose the DiShuf-based average consensus algorithm with Laplace noises in Algorithm 3.
### Privacy and Accuracy Analysis
The differential privacy and computation accuracy of Algorithm 3 are summarized in the following theorems.
**Theorem 4** (Differential Privacy): _Let \(\zeta=\frac{1}{n\bar{a}^{2}+1}\). For any \(\epsilon>0\) and \(\mu>0\), Algorithm 3 preserves \((\epsilon,0)\)-differential privacy of \(\mathbf{d}\) under \(\mu\)-adjacency if there hold_
\[\begin{split}\sigma_{\gamma}&\geq h\frac{\mu}{ \epsilon}\\ \sigma_{\eta}&\geq\frac{2\mu hn\sqrt{n-1}}{(1-\alpha)( h-1)\epsilon},\end{split} \tag{18}\]
_for some \(h>1\), where_
\[\alpha=\left(1-\frac{1}{(2(n+\bar{a}^{-2}))^{n-1}}\right)^{1/(n-1)}. \tag{19}\]
**Theorem 5** (Convergence): _Let \(\zeta=\frac{1}{n\bar{a}^{2}+1}\)._
1. _The agent states_ \(x_{i}(t)\) _exponentially converge to_ \[x(\infty):=d^{*}+\gamma_{k^{*}}/n\] _with convergence rate_ \(\ln(1/\beta)\)_._
2. \(\lim_{t\to\infty}\mathbb{E}|x_{i}(t)-d^{*}|=0\)_._
3. \(\lim_{t\to\infty}\mathbb{E}|x_{i}(t)-d^{*}|^{2}=2\sigma_{\gamma}^{2}/n^{2}\)_._
The proof of Theorem 4 is given in Appendix C, while for the proof of Theorem 5 it follows the same arguments of Theorem 2 and is thus omitted for simplicity. From Theorems 4 and 5, it is clear that there is a trade-off between the differential privacy level and mean-square computation accuracy, as in [25, 26].
**Remark 4**: _As shown in the proof of Theorem 4, the corresponding mechanism \(\mathscr{M}(\mathbf{d})\) for privacy analysis is of \(n\) dimensions and involves \(n+1\) correlated Laplace noises. This indeed makes the corresponding privacy analysis nontrivial, since there is no guarantee that a combination of multiple Laplace noises still follows the Laplace distribution. To handle this issue, we first analyze a modified mechanism without considering the effect of the noise \(\eta_{k^{*}}\) and then employ the resilience property of the differential privacy to post-processing [16] to conclude the differential privacy of \(\mathscr{M}(\mathbf{d})\). This, from a different perspective, means that the conditions in (18) are conservative and not necessary._
**Theorem 6** (Trade-off): _Let \(\zeta=\frac{1}{n\bar{a}^{2}+1}\), and \(\epsilon>0,\mu>0\) be any expected differential privacy levels. By choosing the noise levels \(\sigma_{\gamma}\) and \(\sigma_{\eta}\) as in (18) for any \(h>1\), the proposed Algorithm 3 preserves the \((\epsilon,0)\)-differential privacy of \(\mathbf{d}\) under \(\mu\)-adjacency, while rendering the mean-square computation error to satisfy_
\[\lim_{t\to\infty}\mathbb{E}|x_{i}(t)-d^{*}|^{2}\geq\frac{2h^{2}\mu^{2}}{n^{2} \epsilon^{2}}\,, \tag{20}\]
_where "\(=\)" holds if "\(=\)" in the upper of (18) holds._
As concluded after Theorem 3, Theorem 6 implies that under the same level of differential privacy the resulting mean-square computation error is \(n/h^{2}\) times smaller than that of the DPAC-OSP algorithm in Proposition 1 under the Laplace mechanism, and \(h^{2}\) times larger than that of the DPCA algorithm in Proposition 2. This means that the trade-off gap between the proposed Algorithm 3 and the DPCA algorithm can be almost eliminated under the Laplace mechanism by choosing \(h>1\) close enough to 1.
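A companion sketch for the Laplace design evaluates the noise scales (18), the constant (19), and the limiting mean-square error (20); the budgets, \(\bar{a}\), and \(h\) are illustrative.

```python
# Illustrative noise calibration for Algorithm 3 following (18)-(20).
import numpy as np

n, a_bar = 10, 1e4
eps, mu, h = 10.0, 5.0, 1.1

alpha = (1 - 1 / (2 * (n + a_bar**-2)) ** (n - 1)) ** (1 / (n - 1))     # (19)
sigma_gamma = h * mu / eps                                              # (18)
sigma_eta = 2 * mu * h * n * np.sqrt(n - 1) / ((1 - alpha) * (h - 1) * eps)
mse_floor = 2 * h**2 * mu**2 / (n**2 * eps**2)                          # (20)
print(sigma_gamma, sigma_eta, mse_floor)
```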
## 6 Case Studies
In this section, numerical examples are presented to illustrate the effectiveness of the proposed DiShuf-based distributed average consensus algorithms.
We consider the differentially private average consensus problem over a cycle communication graph, where the agent size \(n=10\) and each edge weight is assigned as \(0.3\), and randomly choose the privacy-sensitive local data \(d_{i}\) as shown in Table 1 with the average \(d^{*}=13.1336\). Given the privacy budgets \((\epsilon,\delta,\mu)=(10,10^{-1},5)\), we let \(\bar{a}=10^{4}\) and implement Algorithm 2 with Gaussian noise level \(\sigma_{\gamma}=0.4500\) and \(\sigma_{\eta}\) satisfying (15) with an appropriate \(g>0\). By decreasing \(g\) from 3 to 0.01, it is shown in Table 2 that the resulting mean-square computation accuracy \(\sum_{i=1}^{n}\mathbb{E}|x_{i}(\infty)-d^{*}|^{2}/n\) decreases to that of the
DPCA algorithm (8), i.e., the accuracy-privacy trade-off gap decreases as \(g\) decreases. Moreover, as shown in Figure 1, under \(g=0.01\) the resulting mean-square computation accuracy is smaller than that of the DPAC-OSP algorithm (1)-(3) [25], and almost recovers that of the DPCA algorithm (8).
Given the privacy budgets \((\epsilon,\delta,\mu)=(10,0,5)\), we implement Algorithm 3 with Laplace noise levels \(\sigma_{\gamma},\sigma_{\eta}\) satisfying (18) with an appropriate \(h>1\). Similarly, it is shown in Table 3 that the resulting mean-square computation accuracy decreases to that of the DPCA algorithm (8) as \(h\) decreases to one, i.e., the accuracy-privacy trade-off gap decreases as \(h\) decreases. Moreover, under \(h=1.1\) the resulting mean-square computation accuracy is smaller than that of the DPAC-OSP algorithm (1)-(3) [25], and almost recovers that of the DPCA algorithm (8).
## 7 Conclusion
In this paper, we studied the problem of average consensus with differential privacy of initial states, for the purpose of improving the accuracy-privacy trade-off performance such that the trade-off of the centralized averaging approach with differential privacy can be (almost) recovered. To achieve such an objective, we proposed a distributed shuffling mechanism based on the Paillier cryptosystem to generate correlated zero-sum randomness. By randomizing each local privacy-sensitive initial state with an i.i.d. Gaussian noise and the output of the mechanism using Gaussian noises, the resulting average consensus algorithm was shown to be able to eliminate the gap in the sense that the accuracy-privacy trade-off of the centralized averaging approach can be (almost) recovered by adjusting a design parameter to be small enough. We also showed that such a design framework could be extended to the one using Laplace noises with the improved privacy-accuracy trade-off preserved. Future research works of interest include the extension to distributed optimization and cooperative control for better trade-offs.
## Appendix A Proof of Theorem 1
Let \(a_{ij}=\zeta a_{i\to j}a_{j\to i}\) and denote \(\mathbf{A}\in\mathbb{R}^{n\times n}\) by a matrix satisfying \([\mathbf{A}]_{ij}=-a_{ij}\) for \(j\in\mathbb{N}_{i}\) and \([\mathbf{A}]_{ii}=\sum_{j\in\mathbb{N}_{i}}a_{ij}\). It is clear that \(\mathbf{A}\) is symmetric and positive semi-definite, and for any \(i\in\mathbb{V}\), there hold
\[\begin{array}{l}[\mathbf{A}]_{ii}\leq\frac{(n-1)\bar{a}^{2}}{n\bar{a}^{2}+1}\,,\quad|[\mathbf{A}]_{ij}|\leq\frac{\bar{a}^{2}}{n\bar{a}^{2}+1}\,,\quad\forall j\neq i,\\ \sum_{j=1}^{n}\left[\mathbf{A}\right]_{ij}=0\,.\end{array}\]
In view of the above analysis, the matrix \(\mathbf{A}\) indeed can be regarded as a Laplacian matrix of graph \(\mathrm{G}\). Let us arrange the eigenvalues of \(\mathbf{A}\) in the increasing order as \(\lambda_{1}^{\mathbf{A}}\leq\lambda_{2}^{\mathbf{A}}\leq\ldots\leq\lambda_{n}^{\mathbf{A}}\).
Table 1: Sensitive local data \(d_{i}\)

| \(i\) | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| \(d_{i}\) | 14.3018 | 10.2806 | 11.1176 | 10.2264 | 13.7550 |
| \(i\) | 6 | 7 | 8 | 9 | 10 |
| \(d_{i}\) | 13.4903 | 14.5939 | 14.5446 | 14.0276 | 14.9984 |
Table 2: Mean-square computation accuracy of Algorithm 2 with different \(g\)’s in contrast with the DPCA algorithm (8)

| \(g\) | 3 | 2 | 1 | 0.01 | DPCA |
| --- | --- | --- | --- | --- | --- |
| Accuracy | 0.3234 | 0.1694 | 0.0827 | 0.0259 | 0.0201 |
Figure 1: Trajectories of mean-square computation errors of Algorithm 2, the DPAC-OSP algorithm (1)-(3) [25], and the DPCA algorithm (8) with 200 samples
Table 3: Mean-square computation accuracy of Algorithm 3 with different \(h\)’s in contrast with the DPCA algorithm (8)

| \(h\) | 4 | 3 | 2 | 1.1 | DPCA |
| --- | --- | --- | --- | --- | --- |
| Accuracy | 0.0832 | 0.0473 | 0.0220 | 0.0058 | 0.0050 |
Figure 2: Trajectories of mean-square computation errors \(\sum_{i=1}^{n}\mathbb{E}|x_{i}(t)-d^{*}|^{2}/n\) of Algorithm 3, the DPAC-OSP algorithm (1)-(3) [25], and the DPCA algorithm (8) with 200 samples
These eigenvalues are associated with the normalized and mutually orthogonal eigenvectors \({\bf u}_{1},{\bf u}_{2},\ldots,{\bf u}_{n}\), respectively. It is clear that \({\bf u}_{1}={\bf 1}_{n}/\sqrt{n}\), and \(0=\lambda_{1}^{\bf A}<\lambda_{2}^{\bf A}\leq\ldots\leq\lambda_{n}^{\bf A}<2\) by [29]. For convenience, we denote \(\Lambda_{\bf A}={\rm diag}\{\lambda_{1}^{\bf A},\lambda_{2}^{\bf A},\ldots,\lambda_{n}^{\bf A}\}\), and \({\bf U}=[{\bf U}_{1};{\bf U}_{2}]\) with \({\bf U}_{1}={\bf u}_{1}\), \({\bf U}_{2}=[{\bf u}_{2};\cdots;{\bf u}_{n}]\).
Next we proceed to provide a tighter lower-bound to the second eigenvalue \(\lambda_{2}^{\bf A}\). It is clear that the matrix \({\bf P}:={\bf I}_{n}-{\bf A}\) is symmetric and doubly stochastic, with each element \([{\bf P}]_{ij}\geq\frac{\bar{a}^{2}}{2(n\bar{a}^{2}+1)}=\frac{1}{2(n+\bar{a}^{-2})}\) for \(j\in{\rm N}_{i}\cup\{i\}\) and \([{\bf P}]_{ij}=0\) for \(j\notin{\rm N}_{i}\cup\{i\}\). According to [32, Proposition 1], one can obtain
\[|[{\bf P}^{k}]_{ij}-\frac{1}{n}|\leq 2\frac{1+(2(n+\bar{a}^{-2}))^{n-1}}{1-(2(n+\bar{a}^{-2}))^{-(n-1)}}\Big{(}1-\frac{1}{(2(n+\bar{a}^{-2}))^{n-1}}\Big{)}^{\frac{k}{n-1}}\tag{A.1}\]
for all \(i,j\in{\rm V}\) and \(k\geq 1\), which yields
\[\|{\bf P}^{k}-\frac{1}{n}{\bf 1}_{n}{\bf 1}_{n}^{\top}\|\leq 2n\frac{1+(2(n+\bar{a}^{-2}))^{n-1}}{1-(2(n+\bar{a}^{-2}))^{-(n-1)}}\Big{(}1-\frac{1}{(2(n+\bar{a}^{-2}))^{n-1}}\Big{)}^{\frac{k}{n-1}}={\cal O}(\alpha^{k})\tag{A.2}\]
with \(\alpha<1\) defined in (13). With this in mind, we note that
\[{\bf P}^{k}-\frac{1}{n}{\bf 1}_{n}{\bf 1}_{n}^{\top}\] \[= {\bf U}({\bf I}_{n}-\Lambda_{\bf A})^{k}{\bf U}^{\top}-{\bf U}_{ 1}{\bf U}_{1}^{\top}\] \[= {\bf U}{\rm diag}\{(1-\lambda_{1}^{\bf A})^{k}-1,(1-\lambda_{2}^ {\bf A})^{k},\ldots,(1-\lambda_{n}^{\bf A})^{k}\}{\bf U}^{\top}\] \[= {\bf U}{\rm diag}\{0,(1-\lambda_{2}^{\bf A})^{k},\ldots,(1- \lambda_{n}^{\bf A})^{k}\}{\bf U}^{\top}\,,\] \[= \sum_{j=2}^{n}(1-\lambda_{j}^{\bf A})^{k}{\bf u}_{j}{\bf u}_{j}^ {\top}\,.\]
Thus, by (A.2) and the inequality \(0<\lambda_{2}^{\bf A}\leq...\leq\lambda_{n}^{\bf A}<2\), we have
\[\max\{|1-\lambda_{2}^{\bf A}|,|1-\lambda_{n}^{\bf A}|\}\leq\alpha\,.\]
Now we consider the following two cases:
1. If \(\lambda_{2}^{\bf A}<1\), we have \(1-\lambda_{2}^{\bf A}\leq\alpha\), implying \(1>\lambda_{2}^{\bf A}\geq 1-\alpha\);
2. If \(\lambda_{2}^{\bf A}\geq 1\), we have \(\lambda_{2}^{\bf A}-1\leq\lambda_{n}^{\bf A}-1\leq\alpha\), implying \(1+\alpha\geq\lambda_{2}^{\bf A}\geq 1-\alpha\);
In view of the above two cases, we have \(\lambda_{2}^{\bf A}\geq 1-\alpha\).
Bearing in mind the previous analysis, we now analyze the differential privacy of the proposed averaged consensus algorithm, and focus on the mechanism
\[{\mathscr{M}}({\bf d})={\bf P}{\bf d}-{\bf A}\mathbf{\eta}+\mathbf{\gamma}.\] (A.3)
For Theorem 1, \(\mathbf{\eta}:=[\eta_{1};\ldots;\eta_{n}]\sim{\cal N}(0,\sigma_{\eta}^ {2})^{n}\) and \(\mathbf{\gamma}:=[\gamma_{1};\ldots;\gamma_{n}]\sim{\cal N}(0,\sigma_{ \gamma}^{2})^{n}\). For any \({\cal M}\subseteq{\mathbb{R}}^{n}\), we note that
\[{\mathbb{P}}({\mathscr{M}}({\bf d})\in{\cal M})\] \[= {\mathbb{P}}({\bf U}^{\top}({\bf P}{\bf d}-{\bf A}\mathbf{ \eta}+\mathbf{\gamma})\in{\bf U}^{\top}{\cal M})\] \[= {\mathbb{P}}((({\bf I}_{n}-\Lambda_{\bf A}){\bf U}^{\top}{\bf d}- \Lambda_{\bf A}{\bf U}^{\top}\mathbf{\eta}+{\bf U}^{\top}\mathbf{\gamma})\in{\bf U}^{\top}{\cal M})\]
where by simple calculations we have
\[-\Lambda_{\bf A}{\bf U}^{\top}\mathbf{\eta}+{\bf U}^{\top}\mathbf{\gamma}\sim{\cal N}(0,\Sigma^{2})\]
with \(\Sigma={\rm diag}(\sigma_{\gamma},\sqrt{\sigma_{\gamma}^{2}+(\lambda_{2}^{\bf A }\sigma_{\eta})^{2}},\ldots,\sqrt{\sigma_{\gamma}^{2}+(\lambda_{n}^{\bf A} \sigma_{\eta})^{2}})\). Thus, we define the mechanism
\[{\mathscr{M}}_{\uparrow}({\bf d}):=\Sigma^{-1}({\bf I}_{n}-\Lambda_{\bf A}){ \bf U}^{\top}{\bf d}+\mathbf{\omega}\,,\quad\mathbf{\omega} \sim{\cal N}(0,1)^{n}\,,\]
and can see that for any \(\epsilon\geq 0\), \(\delta\in(0,1)\) and \(\mu>0\), the inequality
\[{\mathbb{P}}({\mathscr{M}}({\bf d})\in{\cal M})\leq e^{\epsilon}{\mathbb{P}}({ \mathscr{M}}({\bf d}^{\prime})\in{\cal M})+\delta\,,\quad\forall({\bf d},{\bf d }^{\prime})\in{\rm Adj}(\mu)\] (A.4)
holds for all \({\cal M}\subseteq{\mathbb{R}}^{n}\), if and only if for all \({\cal M}_{\uparrow}\subseteq{\mathbb{R}}^{n}\),
\[{\mathbb{P}}({\mathscr{M}}_{\uparrow}({\bf x})\in{\cal M}_{\uparrow})\leq e^{ \epsilon}{\mathbb{P}}({\mathscr{M}}_{\uparrow}({\bf x}^{\prime})\in{\cal M}_{ \uparrow})+\delta\] (A.5)
holds for all \(({\bf d},{\bf d}^{\prime})\in{\rm Adj}(\mu)\).
Then by recalling [30, Theorem 8], we know that the mechanism \({\mathscr{M}}_{\uparrow}\) is \((\epsilon,\delta)\)-differentially private if and only if
\[\kappa_{\epsilon}(S_{0}):=\Phi(\frac{S_{0}}{2}-\frac{\epsilon}{S_{0}})-e^{ \epsilon}\Phi(-\frac{S_{0}}{2}-\frac{\epsilon}{S_{0}})\leq\delta\] (A.6)
with the sensitivity \(S_{0}=\max_{i\in{\rm V}}\|\mu\Sigma^{-1}({\bf I}_{n}-\Lambda_{\bf A}){\bf U}^{ \top}{\bf e}_{i}\|\). It is further noted that
\[\begin{array}{rl}S_{0}&=\max_{i\in{\rm V}}\sqrt{\sum_{j=1}^{n}\left|\frac{\mu(1-\lambda_{j}^{\bf A})}{\sqrt{\sigma_{\gamma}^{2}+(\lambda_{j}^{\bf A}\sigma_{\eta})^{2}}}\,{\bf u}_{j}^{\top}{\bf e}_{i}\right|^{2}}\\ &\leq\sqrt{\frac{\mu^{2}}{\sigma_{\gamma}^{2}}\max_{i\in{\rm V}}|{\bf u}_{1}^{\top}{\bf e}_{i}|^{2}+\frac{\mu^{2}\alpha^{2}}{\sigma_{\gamma}^{2}+(\lambda_{2}^{\bf A}\sigma_{\eta})^{2}}\max_{i\in{\rm V}}\sum_{j=2}^{n}|{\bf u}_{j}^{\top}{\bf e}_{i}|^{2}}\\ &\leq\sqrt{\frac{\mu^{2}}{n\sigma_{\gamma}^{2}}+\frac{(n-1)\mu^{2}\alpha^{2}}{\sigma_{\gamma}^{2}+(1-\alpha)^{2}\sigma_{\eta}^{2}}}\end{array}\]
where to obtain the first inequality we have used the facts that \(\lambda_{1}^{\bf A}=0\), and the inequalities
\[\max\{|1-\lambda_{2}^{\bf A}|,\ldots,|1-\lambda_{n}^{\bf A}|\}\leq\alpha\,,\qquad\sigma_{\gamma}^{2}+(\lambda_{j}^{\bf A}\sigma_{\eta})^{2}\geq\sigma_{\gamma}^{2}+(\lambda_{2}^{\bf A}\sigma_{\eta})^{2}\,,\quad j\geq 2\,,\] while the second inequality follows from \(|{\bf u}_{1}^{\top}{\bf e}_{i}|^{2}=1/n\), \(\sum_{j=2}^{n}|{\bf u}_{j}^{\top}{\bf e}_{i}|^{2}\leq n-1\), and \(\lambda_{2}^{\bf A}\geq 1-\alpha\) established above.
Thus by recalling (A.5), the mechanism \(\mathscr{M}\) preserves \((\epsilon,\delta)\)-differential privacy of local data \(\mathbf{d}\) if (12) holds.
To complete the proof, it is worth specifying the information that may be eavesdropped. At the stage of distributed shuffling, the communication messages are encrypted and thus cannot be utilized for privacy inference by eavesdroppers due to the absence of the private keys. At the stage of average consensus, the eavesdroppers may have access to the communication messages, i.e., \(\mathbf{x}(t)\), \(t\geq 0\). It is noted that \(\mathbf{x}(t)\), \(t\geq 1\) can be expressed as deterministic functions of the initial states \(\mathbf{x}(0):=\mathscr{M}(\mathbf{d})\). According to the robustness property of the differential privacy to post-processing [16], and recalling that the mechanism \(\mathscr{M}\) is \((\epsilon,\delta)\)-differentially private with (12), we can conclude that the proposed average consensus algorithm preserves \((\epsilon,\delta)\)-differential privacy of \(\mathbf{d}\) with (12), completing the proof.
## Appendix B Proof of Theorem 2
First of all, by [29] and Assumption 1, we have
\[\|\mathbf{I}_{n}-\mathbf{L}\|\leq\beta<1\,.\] (B.1)
Let us then take a look at the difference between the averages of \(\mathbf{x}(0)\) and \(\mathbf{d}\) as
\[\mathbf{1}_{n}\mathbf{1}_{n}^{\top}(\mathbf{x}(0)-\mathbf{d})/n = \mathbf{1}_{n}\mathbf{1}_{n}^{\top}(-\mathbf{Ad}-\mathbf{A} \boldsymbol{\eta}+\boldsymbol{\gamma})/n\] \[= \mathbf{1}_{n}\mathbf{1}_{n}^{\top}\boldsymbol{\gamma}/n\]
This indicates
\[\begin{array}{l}\mathbf{x}(t)-\frac{1}{n}\mathbf{1}_{n}\mathbf{1}_{n}^{\top} \mathbf{d}\\ =\mathbf{x}(t)-\frac{1}{n}\mathbf{1}_{n}\mathbf{1}_{n}^{\top}\mathbf{x}(0)+ \frac{1}{n}\mathbf{1}_{n}\mathbf{1}_{n}^{\top}(\mathbf{x}(0)-\mathbf{d})\\ =(\mathbf{I}_{n}-\mathbf{L})^{t}\mathbf{x}(0)-\frac{1}{n}\mathbf{1}_{n}\mathbf{ 1}_{n}^{\top}\mathbf{x}(0)+\frac{1}{n}\mathbf{1}_{n}\mathbf{1}_{n}^{\top} \boldsymbol{\gamma}\\ =(\mathbf{I}_{n}-\mathbf{L})^{t}(\mathbf{I}_{n}-\frac{1}{n}\mathbf{1}_{n} \mathbf{1}_{n}^{\top})\mathbf{x}(0)+\frac{1}{n}\mathbf{1}_{n}\mathbf{1}_{n}^{ \top}\boldsymbol{\gamma}\\ =(\mathbf{I}_{n}-\mathbf{L})^{t}(\mathbf{I}_{n}-\frac{1}{n}\mathbf{1}_{n} \mathbf{1}_{n}^{\top})(\mathbf{Pd}-\mathbf{A}\boldsymbol{\eta}+\boldsymbol{ \gamma})+\frac{1}{n}\mathbf{1}_{n}\mathbf{1}_{n}^{\top}\boldsymbol{\gamma} \end{array}\] (B.2)
Thus, given any \(\boldsymbol{\eta}\) and \(\boldsymbol{\gamma}\) it is clear by (B.1) that
\[\begin{array}{l}\lim_{t\to\infty}\|\mathbf{x}(t)-\frac{1}{n} \mathbf{1}_{n}\mathbf{1}_{n}^{\top}(\mathbf{d}+\boldsymbol{\gamma})\|\\ = \lim_{t\to\infty}\|(\mathbf{I}_{n}-\mathbf{L})^{t}(\mathbf{I}_{n}-\frac{1}{n} \mathbf{1}_{n}\mathbf{1}_{n}^{\top})(\mathbf{Pd}-\mathbf{A}\boldsymbol{\eta}+ \boldsymbol{\gamma})\|\\ \leq \|(\mathbf{I}_{n}-\frac{1}{n}\mathbf{1}_{n}\mathbf{1}_{n}^{\top})( \mathbf{Pd}-\mathbf{A}\boldsymbol{\eta}+\boldsymbol{\gamma})\|\lim_{t\to \infty}\beta^{t}\\ = 0\,.\end{array}\] (B.3)
This implies that the node states \(x_{i}(t)\) exponentially converge to
\[x_{i}(\infty)=\mathbf{1}_{n}^{\top}\mathbf{d}/n+\mathbf{1}_{n}^{\top} \boldsymbol{\gamma}/n:=x(\infty)\] (B.4)
with convergence rate \(\ln(1/\beta)\). This proves the statement (i).
Regarding the statements (ii) and (iii), they can be directly derived from (B.4) by taking the expectation operations.
## Appendix C Proof of Theorem 4
As in Appendix A, to analyze the differential privacy of Algorithm 3 against the eavesdropper accessing the communication messages, we consider the following mechanism
\[\mathscr{M}(\mathbf{d})=\mathbf{Pd}-\mathbf{A}\boldsymbol{\eta}+\mathbf{e}_{k ^{*}}\gamma_{k^{*}}\] (C.1)
where \(\boldsymbol{\eta}:=[\eta_{1};\ldots;\eta_{n}]\sim\mathcal{L}(0,\sigma_{\eta})^ {n}\) and \(\gamma_{k^{*}}\sim\mathcal{L}(0,\sigma_{\gamma})\).
To show \((\epsilon,0)\)-differential privacy of \(\mathscr{M}\), we first study \(\mathscr{M}_{1}:=\mathbf{1}_{n}^{\top}\mathscr{M}(\mathbf{d})\), and observe
\[\begin{array}{l}\mathbf{1}_{n}^{\top}\mathscr{M}(\mathbf{d}) =\mathbf{1}_{n}^{\top}\mathbf{Pd}-\mathbf{1}_{n}^{\top}\mathbf{A} \boldsymbol{\eta}+\mathbf{1}_{n}^{\top}\mathbf{e}_{k^{*}}\gamma_{k^{*}}\\ =\mathbf{1}_{n}^{\top}\mathbf{Pd}+\gamma_{k^{*}}.\end{array}\]
Since \(\sup_{(\mathbf{d},\mathbf{d}^{\prime})\in Adj(\mu)}\left\|\mathbf{1}_{n}^{\top} \mathbf{Pd}-\mathbf{1}_{n}^{\top}\mathbf{Pd}^{\prime}\right\|_{1}=\mu\), by [18, Theorem 2], it is clear that \(\mathscr{M}_{1}\) is \((\epsilon/h,0)\)-differentially private with \(\sigma_{\gamma}\) satisfying (18).
Then we observe that \(rank(\mathbf{U}_{2})=n-1\) and \(\mathbf{1}_{n}^{\top}\mathbf{U}_{2}=0\), which implies that the \(k^{*}\)-th row of \(\mathbf{U}_{2}\) can be expressed by a linear combination of the remaining rows of \(\mathbf{U}_{2}\). As a consequence, denote by \(\bar{\mathbf{U}}_{2}\in\mathbb{R}^{(n-1)\times(n-1)}\) the matrix by removing the \(k^{*}\)-th row of \(\mathbf{U}_{2}\in\mathbb{R}^{n\times(n-1)}\), and have \(rank(\bar{\mathbf{U}}_{2})=n-1\), i.e., \(\bar{\mathbf{U}}_{2}\) is invertible. Thus we consider
\[\mathscr{M}_{2}(\mathbf{d})=\left(\bar{\mathbf{U}}_{2}^{\top}\right)^{-1} \mathbf{T}^{-1}\mathbf{U}_{2}^{\top}(\mathbf{I}_{n}-\mathbf{e}_{k^{*}}\mathbf{1 }_{n}^{\top})\mathbf{Pd}-\bar{\eta},\]
where \(\mathbf{T}=\mathrm{diag}\{\lambda_{2}^{\mathbf{A}},\lambda_{3}^{\mathbf{A}},\ldots,\lambda_{n}^{\mathbf{A}}\}\in\mathbb{R}^{(n-1)\times(n-1)}\), satisfying \(\lambda_{i}^{\mathbf{A}}\geq 1-\alpha\) for \(i=2,\ldots,n\), and \(\bar{\eta}=[\eta_{1};\ldots;\eta_{k^{*}-1};\eta_{k^{*}+1};\ldots;\eta_{n}]\in\mathbb{R}^{n-1}\). Note that
\[\begin{array}{l}\mathbf{U}_{2}^{\top}(\mathbf{I}_{n}-\mathbf{e}_{k^{*}}\mathbf{1}_{n}^{\top})\mathbf{P}\\ =\ \bar{\mathbf{U}}_{2}^{\top}\bar{\mathbf{P}}-\mathbf{U}_{2}^{\top}\mathbf{e}_{k^{*}}\mathbf{1}_{n-1}^{\top}\bar{\mathbf{P}}\\ =\ \bar{\mathbf{U}}_{2}^{\top}\bar{\mathbf{P}}+(\mathbf{U}_{2}^{\top}\mathbf{1}_{n}-\mathbf{U}_{2}^{\top}\mathbf{e}_{k^{*}})\mathbf{1}_{n-1}^{\top}\bar{\mathbf{P}}\\ =\ \bar{\mathbf{U}}_{2}^{\top}(\mathbf{I}_{n-1}+\mathbf{1}_{n-1}\mathbf{1}_{n-1}^{\top})\bar{\mathbf{P}}\,,\end{array}\]
where the first equation is obtained by defining \(\bar{\mathbf{P}}\in\mathbb{R}^{(n-1)\times n}\) as the matrix by removing the \(k^{*}\)-th row of \(\mathbf{P}\), and the second is obtained by using the fact that
\(\mathbf{U}_{2}^{\top}\mathbf{1}_{n}=0.\) This yields that for \((\mathbf{d},\mathbf{d}^{\prime})\in Adj(\mu)\),
\[\|(\bar{\mathbf{U}}_{2}^{\top})^{-1}\mathbf{T}^{-1}\mathbf{U}_{2}^{ \top}(\mathbf{I}_{n}-\mathbf{e}_{k^{\prime}}\mathbf{1}_{n}^{\top})\mathbf{P}( \mathbf{d}-\mathbf{d}^{\prime})\|_{1}\] \[= \|(\bar{\mathbf{U}}_{2}^{\top})^{-1}\mathbf{T}^{-1}\bar{\mathbf{U }}_{2}^{\top}(\mathbf{I}_{n-1}+\mathbf{1}_{n-1}\mathbf{1}_{n-1}^{\top})\bar{ \mathbf{P}}(\mathbf{d}-\mathbf{d}^{\prime})\|_{1}\] \[\leq \mu\|(\bar{\mathbf{U}}_{2}^{\top})^{-1}\mathbf{T}^{-1}\bar{\mathbf{ U}}_{2}^{\top}\|_{1}\|(\mathbf{I}_{n-1}+\mathbf{1}_{n-1}\mathbf{1}_{n-1}^{\top})\|_{1} \|\bar{\mathbf{P}}\|_{1}\] \[\leq \mu\frac{\sqrt{n-1}}{1-\alpha}\|(\mathbf{I}_{n-1}+\mathbf{1}_{n- 1}\mathbf{1}_{n-1}^{\top})\|_{1}\|\bar{\mathbf{P}}\|_{1}\] \[\leq 2\mu\frac{n\sqrt{n-1}}{1-\alpha},\]
where the second inequality is obtained by using \(\|(\bar{\mathbf{U}}_{2}^{\top})^{-1}\mathbf{T}^{-1}\bar{\mathbf{U}}_{2}^{\top}\|_{1}\leq\sqrt{n-1}\|(\bar{\mathbf{U}}_{2}^{\top})^{-1}\mathbf{T}^{-1}\bar{\mathbf{U}}_{2}^{\top}\|\leq\frac{\sqrt{n-1}}{1-\alpha}\), and the last inequality is obtained by using \(\|\mathbf{I}_{n-1}+\mathbf{1}_{n-1}\mathbf{1}_{n-1}^{\top}\|_{1}=n\) and \(\|\bar{\mathbf{P}}\|_{1}\leq\|\mathbf{P}\|_{1}\leq 1-\frac{\bar{a}^{2}(n-1)}{4(n\bar{a}^{2}+1)}+(n-1)\frac{\bar{a}^{2}}{n\bar{a}^{2}+1}\leq 2\).
Thus, by [18, Theorem 2], \(\mathscr{M}_{2}(\mathbf{d})\) preserves \(((1-1/h)\epsilon,0)\)-differential privacy with \(\sigma_{\eta}\) satisfying (18). Further, \(\mathscr{M}_{2}(\mathbf{d})+\mathbf{1}_{n-1}\eta_{k^{*}}\) preserves \(((1-1/h)\epsilon,0)\)-differential privacy, since adding the independent noise \(\eta_{k^{*}}\) is a post-processing step and the differential privacy is resilient to post-processing [16].
Towards this end, we note that \(\mathscr{M}_{2}(\mathbf{d})+\mathbf{1}_{n-1}\eta_{k^{*}}=(\bar{\mathbf{U}}_{2}^{\top})^{-1}\mathbf{T}^{-1}\mathbf{U}_{2}^{\top}(\mathbf{I}_{n}-\mathbf{e}_{k^{*}}\mathbf{1}_{n}^{\top})\mathscr{M}(\mathbf{d}).\) By defining
\[\mathbf{Q}=\begin{bmatrix}\mathbf{1}_{n}^{\top}\\ (\bar{\mathbf{U}}_{2}^{\top})^{-1}\mathbf{T}^{-1}\mathbf{U}_{2}^{\top}( \mathbf{I}_{n}-\mathbf{e}_{k^{\prime}}\mathbf{1}_{n}^{\top})\end{bmatrix}\,,\]
we then can conclude that the mechanism
\[\mathscr{M}_{\dagger}(\mathbf{d}):=\mathbf{Q}\mathscr{M}(\mathbf{d})\]
is \((\epsilon,0)\)-differentially private by noting that \(\mathscr{M}_{\dagger}(\mathbf{d})\) is a composition of the mechanisms \(\mathscr{M}_{1}\) and \(\mathscr{M}_{2}(\mathbf{d})+\mathbf{1}_{n-1}\eta_{k^{*}}\) [16, Theorem 3.14]. This thus completes the proof by verifying that \(\mathbf{Q}\) is invertible and recalling that the differential privacy is resilient to post-processing [16].
|
2309.17222 | Fine-Resolution Silicon Photonic Wavelength-Selective Switch Using
Hybrid Multimode Racetrack Resonators | In this work, we describe a procedure for synthesizing racetrack resonators
with large quality factors and apply it to realize a multi-channel
wavelength-selective switch (WSS) on a silicon photonic chip. We first
determine the contribution of each component primitive to propagation loss in a
racetrack resonator and use this data to develop a model for the frequency
response of arbitrary order, coupled-racetrack channel dropping filters. We
design second-order racetrack filters based on this model and cascade multiple
such filters to form a 1x7 WSS. We find good agreement between our model and
device performance with second-order racetrack that have ~1 dB of drop-port
loss, ~2 GHz FWHM linewidth, and low optical crosstalk due to the quick filter
roll-off of ~ 5.3 dB/GHz. Using a control algorithm, we show three-channel
operation of our WSS with a channel spacing of only 10 GHz. Owing to the high
quality factor and quick roll-off of our filter design, adjacent channel
crosstalk is measured to be <-25 dB for channels spaced on a 10 GHz grid. As a
further demonstration, we use five of seven WSS channels to perform a
demultiplexing operation on both an 8 GHz and a 10 GHz grid. These results
suggest that a low-loss WSS with fine channel resolution can be realized in a
scalable manner using the silicon photonics platform. | Lucas M. Cohen, Saleha Fatema, Vivek V. Wankhade, Navin B. Lingaraju, Bohan Zhang, Deniz Onural, Milos Popovic, Andrew M. Weiner | 2023-09-29T13:20:09Z | http://arxiv.org/abs/2309.17222v1 | Fine-Resolution Silicon Photonic Wavelength-Selective Switch Using Hybrid Multimode Racetrack Resonators
###### Abstract
In this work, we describe a procedure for synthesizing racetrack resonators with large quality factors and apply it to realize a multi-channel wavelength-selective switch (WSS) on a silicon photonic chip. We first determine the contribution of each component primitive to propagation loss in a racetrack resonator and use this data to develop a model for the frequency response of arbitrary order, coupled-racetrack channel dropping filters. We design second-order racetrack filters based on this model and cascade multiple such filters to form a \(1\times 7\) WSS. We find good agreement between our model and device performance with second-order racetrack that have \(\approx 1\) dB of drop-port loss, \(\approx 2\) GHz FWHM linewidth, and low optical crosstalk due to the quick filter roll-off of \(\approx 5.3\) dB/GHz. Using a control algorithm, we show three-channel operation of our WSS with a channel spacing of only 10 GHz. Owing to the high quality factor and quick roll-off of our filter design, adjacent channel crosstalk is measured to be \(<-25\) dB for channels spaced on a 10 GHz grid. As a further demonstration, we use five of seven WSS channels to perform a demultiplexing operation on both an 8 GHz and a 10 GHz grid. These results suggest that a low-loss WSS with fine channel resolution can be realized in a scalable manner using the silicon photonics platform.
Wavelength-selective switch, Wavelength-division multiplexing, telecommunications, microresonators, silicon photonics.
## I Introduction
As internet traffic volume and the demand for data increases, existing optical transport architectures will require hardware upgrades for the improvement of transmission capacity and capabilities of optical networks. Reconfigurable optical add-drop multiplexers (ROADMs) are a critical building block of optical networks, enabling flexibility in wavelength routing and assignment between users on an optical network. ROADMs are comprised principally of a wavelength-selective switch (WSS, shown schematically in Fig. 1(a)), and WSSs that are actively deployed today are most commonly based on diffraction grating based spectral dispersers and liquid crystal on silicon (LCoS) technology [1]. Although well suited to optical network requirements, such WSSs require assembly of bulk or micro-optic components, are typically limited to spectral resolutions at ca. 10 GHz and above, and induce excessive optical loss especially when modified to perform filtering operations at unusually fine spectral resolutions in the few GHz range [2].
Due to the inherent scalability, low-cost, and robust component catalog of silicon photonics technology (SiP), there has been an increasing research effort towards realizing a SiP WSS [3]. A promising approach for a SiP WSS is to use microresonators as a filtering element due to their resonant selectivity, compact footprint, and simple operation [4, 5]. An \(M\) input spatial channel, \(L\) output wavelength channel \(M\times L\) WSS [6], a flexible-grid WSS [7], and even an \(M\) input spatial channel, \(N\) output spatial channel, \(L\) wavelength channel/port \(M\times N\times L\) WSS [8] have been demonstrated using microresonators with SiP technology. However, current demonstrations have used microresonators with relatively low quality factors, rendering them incapable of meeting the fine-resolution requirements that could be asked of ROADMs in the future [9]. For example, over the past decade there has been a push towards utilizing optical superchannels to improve spectral efficiency and total throughput in an optical network [10, 11]. There are scenarios in which subchannel add/drop capabilities within a superchannel are desirable. Performing these functions optically has a number of potential benefits to the network, but it requires high-selectivity filtering (resolution at the single GHz level) to e.g. add guard bands between adjacent subchannels as well as to assist in picking out the subchannel. Although there have been impressive demonstrations of such hyperfine resolution filtering [12, 13, 14, 15, 16], such demonstrations rely on substantially more complex optical setups or exotic components and are subject to increased insertion loss, as mentioned above.
Selective wavelength filtering (sub-GHz linewidths) is also important in applications like quantum information science and microwave photonics. In particular, many view modular architectures as essential to the scale-up of quantum computing systems, where communications and entanglement between individual computational modules is mediated by photons [17]. These photons, which must interface with matter-based qubits, will have linewidths on the order of 10s to 100s of MHz. Consequently, low-loss and narrowband filters can help selectively manipulate or route these modes over local area or larger networks [18]. Microwave photonics, the science of
processing radiofrequency signals in the optical domain [19], similarly requires the ability to control and isolate narrowband optical signals with low loss in order to realize systems with low noise figures. Such systems can be harnessed for applications like arbitrary waveform generation and shaping of low-repetition rate combs, among others [20].
In this paper, we propose and experimentally demonstrate a multi-channel WSS using racetrack resonators with a hybrid geometry that includes wide waveguide segments to make possible optical filtering at a fine-resolution. To this end, we first develop a methodology for the robust modeling of arbitrary order coupled-racetrack devices using experimental data. The developed model shows good agreement with experimental results and enables first-time-right designs of narrowband filters. Next, we use the model to design a 1\(\times\)7 WSS with a second-order filter response with \(\approx\) 1 dB of drop-port loss, 2 GHz FWHM channel linewidth, and 5.3 \(\mathrm{dB/GHz}\) of roll-off, and we demonstrate WSS operation using 3 filter channels. A subset of results from this manuscript was presented at the IEEE Photonics Conference [21] and Conference on Lasers and Electro-Optics [22]. Here, we significantly extend the design methodology and results from previous proceedings. Our work takes a step towards showing the SiP platform is capable of meeting the fine-resolution filtering requirements of future-generation optical networks.
## II Filter Model
Our racetrack filters comprise three elements: a single-mode coupling region, adiabatic tapers from 0.5 \(\mu m\) wide to 2 \(\mu m\) wide waveguides, and 2 \(\mu m\) wide multimode waveguides. Adiabatic tapers facilitate the propagation of the fundamental mode from a single-mode waveguide to multimode without excitation or loss to higher-order modes. The multimode waveguide, when operated in the fundamental mode, significantly reduces the dominant loss mechanism for SiP waveguides of field-sidewall overlap, thus providing low-loss [23]. For sufficiently long multimode waveguide sections, the average round-trip loss through the racetrack is dominated by this element, thereby enabling one to flexibly tune the resonator's quality factor with the multimode waveguide length [5].
To characterize the loss contributions of each element of our racetrack resonators, we design structures on a full-stack active multi-project wafer (MPW) run [21]. These structures follow the weakly-coupled cavity method [24] in which a resonator is sufficiently weakly coupled to bus waveguides such that the linewidth of its frequency response is dominated by the intrinsic loss of the resonator as opposed to the external losses from the coupling to the bus waveguides. A careful balance must be achieved so as to be sufficiently weakly coupled with the resonator yet coupled enough to have a strong signal at the drop port. In this way, the loss contributions from an arbitrary component can be measured without taking up an excessive footprint on the chip.
The three classes of test structures are shown in Fig. 1(b-d). Each consists of a resonator comprised of a pair of what we term _standardized couplers_ that sandwich either two pairs of tapers or two pairs of tapers and a pair of multimode waveguides that we want to characterize. The standardized coupler is a single-mode region formed by two 90\({}^{\circ}\) graduated-radius bends with a 5 \(\mu m\) long straight waveguide in between. This straight coupling section provides a long interaction region to realize sufficiently high bus-racetrack coupling even for large bus-racetrack gaps, which are better tolerated by the fabrication process and can also lead to reduced coupling losses. The 90\({}^{\circ}\) bends are hybrid structures comprising two Euler curves and a circular arc of constant radius that matches the minimum radius of the Euler curves [25]. In this way, the Euler curve reduces mode mismatch loss from the interface with a straight waveguide, while the circular section minimizes the footprint of the composite curve. These standardized couplers simplify the formation of coupled-resonator designs since the physical coupling structure is identical for both bus-resonator and resonator-resonator couplings.
These test structures were fabricated through an active MPW run through AIM Photonics on 220 nm-thick silicon-on-insulator (SOI) wafers [26].
Fig. 1: (a) Conceptual diagram of a 3-channel WSS. (b) Weakly coupled resonator to characterize standardized couplers. (c) Weakly coupled resonator to characterize adiabatic tapers. (d) Full hybrid multimode racetrack filter comprised of single-mode standardized couplers, adiabatic single to multimode waveguide tapers, and multimode waveguides. (e) Example spectral response from a weakly coupled structure with the geometry of Fig. 1(b) with a doublet resonance mode and fit.
There are devices for three variations of the standardized coupler, with minimum radii (\(R_{\min}\)) of 3 \(\mu m\), 4 \(\mu m\), and 5 \(\mu m\). Our test devices covered a range of linear taper lengths from 30 \(\mu m\) to 200 \(\mu m\). Besides the linear tapers, we also included a set of full racetracks (Fig 1(d)) with adiabatic tapers where the tapered shape synthesized is based on a Fourier modal method so that the taper length is minimized for a given loss [27, 28]. Also, we have two devices to measure 2 \(\mu m\) wide waveguide loss with different 2 \(\mu m\) waveguide lengths of 700 \(\mu m\) and 1200 \(\mu m\). These devices contain a 100 \(\mu\)m long linear taper to expand the waveguide to a 2 \(\mu\)m width. Finally, we have one test device with a weakly coupled circular microring resonator of radius \(R=20\)\(\mu m\) with 0.5 \(\mu m\) wide waveguides to gather a baseline single-mode waveguide loss for reference. All the test devices are designed with identical input and drop bus coupling gaps, and we vary this gap over a broad range such that we can successfully characterize a weakly coupled device.
To couple light into the chips, we use a standard SMF-28 optical fiber array with an angled facet adjusted by an external polarization controller. A representative spectrum with fitting from a single weakly-coupled device with the structure of Fig. 1(b) is shown in Fig 1(e). The average propagation loss of the racetrack resonator can be computed from the fitted quality factor and device parameters. The results are shown in Table 1, where we tabulate the intrinsic quality factor of racetrack resonators comprising different combinations of sub-components, as well as break out the contribution to loss from the sub-component or structure of interest [21]. In the table, the resonator intrinsic Q and average loss are computed directly from the frequency response of the device, while the contribution to loss from each sub-component is isolated after the contributions from previously characterized sub-components are removed.
We see that standardized couplers with \(R_{\min}=\) 5 \(\mu m\) offered not only the lowest average loss, but also the lowest total structure loss, making them a good choice for multimode racetrack filters with high quality factors. For the linear tapers, a clear decrease in the taper's average loss is measured as the length of the taper increases, while the structure loss shows a more complicated trend. From the data in Table 1, we see that the Fourier tapers are measured to have the largest average loss but have the lowest insertion loss owing to their short length (11.36 \(\mu m\)). Finally, we computed the losses of 2 \(\mu m\)-wide multimode waveguides of different lengths. Because of their significant path length, they account for the dominant loss contribution in the racetrack yet their average loss of \(\approx\) 0.25 dB/cm is significantly lower than what we measure for a 0.5 \(\mu m\) wide single mode waveguide of 1.8 dB/cm.
In Fig. 2(a), we plot the power coupling ratio between standardized couplers of \(R_{\min}=\) 5 \(\mu m\) as a function of the gap between them measured for the TE polarization. We are able to extract such information because of our variation in standardized coupler gaps from our test devices. Full 3-D finite-difference time domain (FDTD) simulation results show good agreement with what we extract from measurement, with a slight underestimate in simulation at smaller gaps (\(<\) 0.25 \(\mu m\)).
To quantitatively describe the frequency response of a racetrack comprised of a variation of these subcomponents, we developed a model using our experimentally extracted data.
TABLE I: **Extracted Loss for Racetrack Resonators and Individual Subcomponents**

| Device | Structure of Interest (Units in \(\mu\)m) | Resonator Intrinsic Q | Resonator Average Loss (dB/cm) | Structure Loss (dB) | Structure Average Loss (dB/cm) |
| --- | --- | --- | --- | --- | --- |
| Weakly Coupled Microring | Ring Resonator, \(R=20\) | \(4.37\times 10^{5}\) | 1.8 | 0.0110 | 1.8 |
| Figure 1(b) | Standardized Coupler \(R_{\min}=3\) | \(1.43\times 10^{5}\) | 5.43 | 0.0099 | 5.43 |
| Figure 1(b) | Standardized Coupler \(R_{\min}=4\) | \(2.33\times 10^{5}\) | 3.32 | 0.0075 | 3.32 |
| Figure 1(b) | Standardized Coupler \(R_{\min}=5\) | \(3.29\times 10^{5}\) | 2.33 | 0.0064 | 2.33 |
| Figure 1(c) | Linear Taper, Length = 30 | \(3.26\times 10^{5}\) | 2.16 | 0.0062 | 2.09 |
| Figure 1(c) | Linear Taper, Length = 50 | \(5.57\times 10^{5}\) | 1.24 | 0.0047 | 0.95 |
| Figure 1(c) | Linear Taper, Length = 100 | \(8.26\times 10^{5}\) | 0.82 | 0.0062 | 0.62 |
| Figure 1(c) | Linear Taper, Length = 200 | \(9.68\times 10^{5}\) | 0.70 | 0.0117 | 0.59 |
| Figure 1(d) | Fourier Taper\({}^{*}\), Length = 11.36 | \(1.64\times 10^{6}\) | 0.40 | 0.0034 | 3.01 |
| Figure 1(d) | Multimode Waveguide, Length = 700 | \(1.67\times 10^{6}\) | 0.40 | 0.0170 | 0.24 |
| Figure 1(d) | Multimode Waveguide, Length = 1200 | \(1.89\times 10^{6}\) | 0.35 | 0.0300 | 0.25 |

\* Measured from device of Fig. 1(d) with 700 \(\mu m\) long multimode waveguides.
Fig. 2: (a) Simulation and experimentally extracted cross coupling ratio versus gap for standardized couplers with \(R_{\min}=\) 5 \(\mu m\). Simulated (b) intrinsic quality factor, (c) percent increase in 3 dB linewidth relative to a racetrack with Fourier tapers, and (d) drop-port insertion loss, each versus round-trip racetrack resonator length for all measured taper variants, assuming a constant power coupling of 5%.
and is scalable to higher-order coupled racetrack structures thanks to the use of our standardized coupler geometry. An example of data from our model is shown in Fig. 2 for a first-order racetrack designed with input and output bus waveguides having a coupling ratio of 5% (a gap near 0.25 \(\mu m\) using our data from Fig. 2(a)). We use the standardized coupler with \(R_{\min}=\) 5 \(\mu m\) and plot a few useful quantities for a single resonance mode at \(\lambda=\) 1550nm for all our measured taper variants. The round-trip length of the racetrack is held fixed in the simulations by modifying the length of the multimode waveguide and therefore, for all taper variants to within a small error, the free spectral range (FSR) of the racetrack at a particular round-trip length is a constant. A group index of 3.5 is chosen in simulation and is near the measured value presented in Section III.
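For readers who wish to reproduce such curves, a minimal numerical sketch of the standard add-drop relations is given below. The parameter values are illustrative only and are not the extracted device values, and the couplers are assumed lossless when converting power coupling to the field self-coupling coefficients.

```python
import numpy as np

def add_drop_response(nu, L_rt, n_g, loss_db_cm, kappa1_sq, kappa2_sq, nu0):
    """Power transmission of a first-order add-drop racetrack versus frequency.

    nu [Hz]    : frequency array          L_rt [m]   : round-trip length
    n_g        : group index              loss_db_cm : length-averaged loss [dB/cm]
    kappa*_sq  : power coupling of the two couplers (assumed lossless)
    nu0 [Hz]   : a reference resonance frequency
    """
    c = 2.998e8
    a = 10 ** (-loss_db_cm * 100.0 * L_rt / 20.0)      # single-pass field transmission
    t1, t2 = np.sqrt(1.0 - kappa1_sq), np.sqrt(1.0 - kappa2_sq)
    phi = 2.0 * np.pi * n_g * L_rt * (nu - nu0) / c    # round-trip phase detuning
    denom = 1.0 - 2.0 * t1 * t2 * a * np.cos(phi) + (t1 * t2 * a) ** 2
    T_thru = (t2**2 * a**2 - 2.0 * t1 * t2 * a * np.cos(phi) + t1**2) / denom
    T_drop = (1.0 - t1**2) * (1.0 - t2**2) * a / denom
    return T_thru, T_drop

# illustrative numbers: 1500 um round trip, n_g = 3.5, 0.4 dB/cm, 5% couplers
nu0 = 2.998e8 / 1550e-9
nu = nu0 + np.linspace(-10e9, 10e9, 4001)
_, T_drop = add_drop_response(nu, 1500e-6, 3.5, 0.4, 0.05, 0.05, nu0)
in_band = nu[T_drop >= 0.5 * T_drop.max()]
print(f"peak drop transmission {10*np.log10(T_drop.max()):.2f} dB, "
      f"FWHM {(in_band[-1] - in_band[0]) / 1e9:.2f} GHz")
```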
From Fig. 2, we see that the Fourier tapers should be used to maximize the quality factor of the racetrack. Interestingly, the intrinsic quality factor increases for linear tapers as we increase the length from 30 \(\mu m\) to 50 \(\mu m\), then it decreases slightly at 100 \(\mu m\) and more rapidly for 200 \(\mu m\). From this trend, we reason that the optimal linear taper length lies somewhere between 50 and 100 \(\mu m\). Beyond that length, the taper expands too slowly to retain a loss advantage. For all tapers, as the round-trip length increases, the resonance mode linewidth asymptotes towards a value around 500 MHz, where light propagation in the racetrack becomes the dominant source of cavity loss. This value is set by the fixed coupling coefficient used in the simulation. However, the drop-port loss also increases quickly because the racetrack is now larger and has a greater total round-trip loss.
When drawing conclusions from our model towards designing usable racetrack devices, it is necessary to recognize the tradeoffs involving the resonator's FSR and its quality factor [4]. In particular, the intrinsic quality factor of a resonator is \(Q_{i}=2\pi n_{g}/\alpha_{avg}\lambda\) where \(n_{g}\), \(\alpha_{avg}\), and \(\lambda\) are the group index, length-averaged round-trip loss, and resonance wavelength. The length-averaged round-trip loss can be expressed for our resonators as
\[\alpha_{avg}=\frac{\sum_{i=0}^{n}L_{i}\alpha_{i}}{\sum_{i=0}^{n}L_{i}}. \tag{1}\]
where \(\alpha_{i}\) and \(L_{i}\) are the length-averaged loss and the length for the \(i^{th}\) subcomponent of the resonator. In this form, it is clear that as the length of a particular component is increased, its contribution to the average loss of the resonator is as well. However, the FSR of the resonator, \(\Delta\nu=c/\sum_{i=0}^{n}L_{i}n_{g,i}\), is inversely proportional to its round-trip length. Hence, increasing the length of the lowest-loss component does indeed increase the quality factor of the resonator but at the cost of a reduced FSR, limiting the usable bandwidth of the device. Obviously, each parameter of the racetrack must be carefully chosen with the design application in mind.
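To make the tradeoff concrete, the short sketch below evaluates Eq. (1) together with \(Q_{i}\) and the FSR for an illustrative subcomponent composition assembled from the Table I loss figures. The effective coupler length is inferred as structure loss divided by average loss, a single group index is assumed for all subcomponents, and the compositions are examples rather than the exact fabricated layouts.

```python
import numpy as np

C = 2.998e8          # speed of light [m/s]
LAM = 1550e-9        # resonance wavelength [m]

def racetrack_metrics(components, n_g=3.5):
    """components: list of (length [m], average loss [dB/cm]) per subcomponent.
    Returns (intrinsic Q, FSR [Hz], round-trip length [m]) using Eq. (1) and
    Q_i = 2*pi*n_g / (alpha_avg * lambda), with one group index for all parts."""
    L = np.array([c[0] for c in components])
    loss = np.array([c[1] for c in components])
    alpha_avg_db_cm = np.sum(L * loss) / np.sum(L)            # Eq. (1), in dB/cm
    alpha_avg = alpha_avg_db_cm * 100.0 * np.log(10) / 10.0   # power loss [1/m]
    L_rt = np.sum(L)
    Q_i = 2.0 * np.pi * n_g / (alpha_avg * LAM)
    FSR = C / (n_g * L_rt)
    return Q_i, FSR, L_rt

# Illustrative composition from the Table I figures: two standardized couplers
# (R_min = 5 um, effective length ~ structure loss / average loss ~= 27.5 um),
# four Fourier tapers, and two 700-um multimode straights.
racetrack = 2 * [(27.5e-6, 2.33)] + 4 * [(11.36e-6, 3.01)] + 2 * [(700e-6, 0.24)]
Q_i, FSR, L_rt = racetrack_metrics(racetrack)
print(f"L_rt = {L_rt*1e6:.0f} um, Q_i = {Q_i:.2e}, FSR = {FSR/1e9:.1f} GHz")

# Lengthening the low-loss multimode section lowers alpha_avg (raising Q_i)
# but shrinks the FSR -- the tradeoff discussed above.
longer = 2 * [(27.5e-6, 2.33)] + 4 * [(11.36e-6, 3.01)] + 2 * [(1400e-6, 0.24)]
print("longer racetrack (Q_i, FSR):", racetrack_metrics(longer)[:2])
```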
## III WSS Design and Operation
A 1\(\times\)N wavelength selective switch (schematically shown in Fig. 1(a)) is a device which takes as input a broadband optical field and, in a re-programmable fashion, separates and routes distinct slices of that input spectrum to N distinct outputs. Our designed on-chip WSS structure is shown in Fig. 3(b). We use seven identical second-order racetrack filters coupled to a common input bus waveguide with unique drop-port I/O to form a 1\(\times\)7 WSS. Our racetrack filters are designed following the methodology described in Section II, and we choose second-order structures for their steeper roll-off away from resonance. The bus-racetrack and racetrack-racetrack standardized coupler gaps are set to \(\approx\) 190 nm and 370 nm for power coupling coefficients of \(\approx\) 18 and 0.8 \(\%\), respectively. In this way, we can pack filter channels more closely together than with a single-order structure for the same level of inter-channel optical crosstalk. Each individual racetrack of the filter has the same round-trip length of \(\approx\) 1500 \(\mu m\), uses Fourier tapers, and standardized couplers with \(R_{\min}=\) 5 \(\mu m\). Standardized coupler gaps are set to achieve a flat passband frequency response with the smallest possible linewidth under the constraint that the drop-port loss does not exceed 1 dB.
The WSS chips were fabricated through an AIM Photonics MPW program at the SUNY Polytechnic Institute in a state-of-the-art 300mm facility. Photonics-grade 220 nm thick SOI wafers with a thick buried oxide are used with 193 nm immersion lithography to define silicon optical waveguiding structures. Embedded microheaters in the form of doped silicon slabs with various vias and metal interconnect layers for electrical contact are placed 1.25 \(\mu\)m from the optical waveguide in each racetrack resonator. In this way, an injected current in the slab can generate heat via the Joule heating effect to locally modify the refractive index and tune the racetrack resonance through silicon's thermo-optic effect.
The fabricated chips were sent out for electrical wirebonding and packaging at the testing and packaging (TAP) facilities at Rochester Institute of Technology using a custom-developed electronic interposer and printed circuit board (PCB). To control each of the 7 WSS second-order racetrack filters, 21 wirebond connections were made. The full footprint of the 1\(\times\)7 WSS is about 4 mm by 1.5 mm but can be significantly reduced by placing electrical and optical I/O more compactly. Excluding I/O, the 1x7 WSS occupies a footprint of 1.5 mm by 1.5 mm. The wirebonded package was placed on a custom temperature controlled chuck for compatibility with our optical probe station. A top-down view of the wirebonded chip package is shown in Fig. 3(a). The WSS was tested using a 16 channel SMF-28 optical fiber array with a flat facet fed by an Agilent 81632A tunable laser source with an external polarization controller. For thermal tuning, we use a 64-channel source measurement unit (SMU) with electronic cabling to interface with our package.
We first characterize the individual filter response of our WSS by sending laser light into the common through port and monitoring the output power at the common through port and drop port of channel 1. If we sweep the laser wavelength as we apply increasing electrical power to the thermo-optic heaters of a single racetrack of the filter, we can find the thermal state where the two resonators comprising the second-order filter are optically aligned. The final tuned state of the filter at a wavelength near 1550 nm is shown in Fig. 4 along with a simulation using our model. We see good
agreement between the two, further indicating the utility of the model. The passband shows a flat response with a \(\approx\) 2.1 GHz linewidth and 1 dB insertion loss. The FSR of the filter is about 54.5 GHz from which we extract an average group index of 3.54.
The performance of the second-order filter was also characterized over a range of input optical powers to elucidate the impact of thermal effects [29] and nonlinear absorption [30] on linewidth. Fig. 5 shows the frequency response of this filter for on-chip input powers of -6.0 dBm, -0.2 dBm, and 4.1 dBm. The inset of Fig. 5 shows the change in filter linewidth for nine different on-chip input power levels, and we observe broadening that increases linearly with optical power in dBm over the range of input powers from -6.0 dBm to 4.1 dBm. For powers above 4.1 dBm we began to observe bistability in the filter response.
To use our system as a WSS, we need to tune each filter to its optimal response and then align them on a hypothetical frequency grid. To this end, we developed a Python algorithm with open-source libraries. In the algorithm, a continuous-wave laser is input into the chip and repeatedly tuned between each frequency in the grid. An optical power meter measures the transmitted power at the common through port while a multi-objective minimization routine using the NSGA-II sampler from the open-source library Optuna [31] drives the microheaters appropriately - even in the presence of mutual thermal crosstalk - to minimize the transmitted optical power at each frequency position. In this way, individual racetracks are optimally tuned to a common resonance mode and shifted to align with the frequency grid at drive states that give the minimum objective function value. For each state of the WSS, filters are first manually tuned to be relatively close to their location on the frequency grid in order to minimize the runtime of the algorithm. A single WSS state is then gathered in roughly 30 minutes using our serially updated electronics. Using an SMU with a faster or parallel update rate would decrease this runtime.
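A minimal sketch of such a tuning loop is shown below. The instrument-control functions are hypothetical stand-ins for the laser, power meter, and SMU interfaces, and the grid values, heater range, and trial count are illustrative; only the Optuna multi-objective study with the NSGA-II sampler reflects the actual optimization machinery used.

```python
import random
import optuna

GRID_FREQS_GHZ = [0.0, 10.0, 20.0]   # hypothetical grid offsets for three channels
CHANNELS = [1, 2, 3]

# --- stand-ins for the lab instrument-control calls (names are hypothetical) ---
def set_heater_power(ch, ring, p_mw): pass           # drive one microheater
def set_laser_offset(f_ghz):          pass           # park the CW laser on a grid point
def read_through_power_dbm():         return random.uniform(-30.0, 0.0)  # dummy reading

def objective(trial):
    # One heater drive level per racetrack of each second-order filter.
    for ch in CHANNELS:
        for ring in (0, 1):
            p_mw = trial.suggest_float(f"ch{ch}_ring{ring}_mW", 0.0, 80.0)
            set_heater_power(ch, ring, p_mw)
    # Minimize the through-port power at every grid frequency simultaneously:
    # a channel aligned to its slot drops that spectral slice out of the through port.
    residuals = []
    for f in GRID_FREQS_GHZ:
        set_laser_offset(f)
        residuals.append(read_through_power_dbm())
    return tuple(residuals)

study = optuna.create_study(directions=["minimize"] * len(GRID_FREQS_GHZ),
                            sampler=optuna.samplers.NSGAIISampler())
study.optimize(objective, n_trials=200)
print(study.best_trials[0].params)   # one Pareto-optimal heater configuration
```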
We programmed a three channel subset of our WSS for
Fig. 4: The measured optical frequency response for a single mode of the filter at \(\lambda=1550\) nm plotted with the designed filter response using our model developed in Section II. The inset shows the flat-top passband.
Fig. 5: Frequency response of a second-order racetrack filter for different on-chip input powers. The spectra are normalized with respect to the peak of each passband. The inset shows the change in the 3 dB linewidth of the filter as a function of optical power coupled onto the chip.
Fig. 3: (a) Top-down microscope view of the silicon photonic WSS package. (b) Layout of the on-chip WSS. (c) Zoom-in view of an individual racetrack of the coupled-resonator filter.
operation on a 10 GHz spaced grid near 1550 nm. Each channel was aligned to a unique position on this three-point grid [22]. There are six permutations for three channels on this grid, and the spectra for each state taken from a tunable laser sweep are shown in Fig. 6. From the figure, we can clearly see better than 25 dB of isolation at the center of any channel from the adjacent channel(s). The average electrical power consumption for each racetrack is \(\approx\) 50 mW for thermal tuning. However, racetracks are initially detuned from unused channels to prevent light leakage, contributing overhead to this power budget.
Using the spectra of Fig. 6 measured from a scanned laser, we computed an average detuning from the predefined frequency grid of \(\approx\) 0.8\(\pm\)0.5 GHz for each channel in the WSS. To avoid possible wavelength registration errors, which can arise during successive wavelength sweeps from a tunable laser, we perform a single-shot measurement over all frequency channels in a permutation with an optical spectrum analyzer (OSA). For this purpose, we send a broadband amplified spontaneous emission (ASE) spectrum from an erbium-doped fiber amplifier (EDFA) to the input port of our WSS and measure the output of each channel for all permutations with our OSA. An overlay of the measured spectra is shown in Fig. 7 [22]. We see a slight increase in optical crosstalk from the OSA measurement that can be attributed to the 0.01 nm (\(\approx\) 1.247 GHz) resolution of our OSA. We compute an average frequency detuning of 0.7\(\pm\)0.1 GHz for each filter of the WSS using our OSA. We also measured a loss variation of \(<\) 0.5 dB from the nominal 1 dB drop-port loss for each channel of the WSS using both laser and OSA spectra.
As a final extension of our device, we program the WSS for a 1\(\times\)5 demultiplexing operation on both an 8 and 10 GHz frequency grid. Operating the WSS with 5 channels would require 5! optimized permutations, which is feasible given the need and sufficient time. However, here we show only a single permutation as an example. The spectra for both 8 and 10 GHz grids are shown in Fig. 8 as measured from a swept tunable laser source. Channels 3, 4, and 5 are the channels used in Fig. 6 and Fig. 7 for WSS operation. We see from the blue and red traces that adding channels 1 and 2 for the demultiplexer results in a slight increase in optical crosstalk at both the high- and low-frequency regions of the spectrum. Nonetheless, the crosstalk for both 8 and 10 GHz spacing remains below \(-\)20 dB in all cases. Besides the crosstalk, we see a transmission variation of \(<\) 0.75 dB between each channel for both grids.
## IV Discussion
The primary challenges with scaling the high quality devices shown here are: (1) extending the FSR, (2) reducing the electrical power needed for thermal tuning, and (3) developing a robust control solution. Since the quality factor of the racetrack increases with the multimode waveguide length, it is common for the FSR of the high quality devices to be on the order of 10s of GHz, if not lower. A common way of increasing the FSR is by using coupled-resonator structures with different round-trip lengths in a Vernier configuration [32]. This has been shown to extend the operating FSR of the device. Yet it requires significant engineering to get large suppression of the interstitial resonance peaks, among other challenges [33]. Likewise adding frequency dependence to the coupling coefficient in the form of an interferometer could function to extend the FSR of the device [34]. However, an interferometer would add an extra phase tuning element because of fabrication imperfections and thus complicate the operation.
Reducing electrical power consumption from thermal tuning can be accomplished in a number of ways. For a demultiplexer, frequency positions of each racetrack can be prebiased in fabrication by adjusting the round-trip path lengths. While thermal
Fig. 6: Spectra from the six permutations of the three channel WSS as measured from a swept tunable laser source. For each permutation, traces are normalized to the channel with maximum transmission.
Fig. 7: Overlayed spectra from the six permutations of the three-channel WSS as measured from an OSA. For each permutation, traces are normalized to the channel with maximum transmission. The input of the WSS is a broadband, flat ASE spectrum from an EDFA.
tuning would still be required to compensate for variation in fabrication, it would take less power than if racetracks were identical like our design here. Still, this method is not effective for a WSS since filters need to be tuned to each position on the frequency grid. Second, by thermal isolation trenching of the top and bottom oxide materials as well as removing the silicon substrate, the electrically generated heat can be kept from spreading away from the optical waveguide [35]. Such processes would also help alleviate thermal crosstalk effects, further reducing the required electrical power consumption.
A number of control methods could help realize a large-scale on-chip WSS. A common approach to lock microrings to an optical carrier is to monitor the intra-cavity optical power using a photoconductive element embedded within the optical cavity, often in the form of a doped (n, p, or p-n) region of the waveguide [36]. However, this fundamentally leads to excess loss, thereby reducing the resonator quality factor. Another scheme uses a contactless probe placed adjacent to a waveguide, capable of sensing light in the waveguide by measuring a conductance change induced by free carriers generated at the silicon-insulator interface. A low-frequency modulation of an optical carrier can then be detected by the probe and used to lock microrings to a laser even in the presence of multiple laser sources [37, 38]. In a similar vein, low-frequency thermal modulation of a microring has been used to encode a shallow amplitude modulation onto an optical carrier which can, upon photodetection, induce an asymmetric error signal appropriate for frequency locking [39]. As monolithic electronic-photonic systems [40] become increasingly available, on-chip control of silicon photonic concepts such as ours may become feasible.
## V Conclusion
In conclusion, the design methodology and results were presented for a WSS implementing high quality factor microresonator filtering elements. The procedure outlined here can be used on any integrated platform to realize first-time-right devices. By tuning parameters of the subcomponents of the high quality racetrack, arbitrary filter responses can be achieved. For our designed WSS, we demonstrate second-order filters with \(\approx 1\) dB of drop-port loss, \(\approx 2\) GHz FWHM linewidth, and quick off-resonance roll-off of \(\approx 5.3\) dB\(/\)GHz. We use our system to experimentally show 3 channel WSS operation on a 10 GHz grid and 5 channel multi/demultiplexer operation on both an 8 and 10 GHz grid, both with \(<-\) 20 dB of inter-channel optical crosstalk. A flexible grid spacing and channel count can be accommodated by applying the appropriate amount of electrical power to each racetrack. The performance of our devices shows promise towards realizing fine-resolution filtering in future silicon photonic systems.
## Acknowledgments
This work was funded under NSF grant 2034019-ECCS and by AFRL grant FA8750-20-P-1705 under an STTR through Freedom Photonics. The authors would like to thank Matthew van Niekerk and Stefan Preble at Rochester Institute of Technology for assistance with wirebonding and package assembly, and Cale Gentry from SRI International for discussions.
|
2302.14459 | Five-year in-orbit background of Insight-HXMT | Purpose: We present the five-year in-orbit background evolution of
Insight-HXMT since the launch, as well as the effects of the background model
in data analysis. Methods: The backgrounds of the three main payloads, i.e.,
Low-Energy Telescope, Medium-Energy Telescope and High-Energy Telescope, are
described, respectively. The evolution of the background over time is obtained
by simply comparing the background in every year during the in-orbit operation
of Insight-HXMT. Results: The major observational characteristics of the
Insight-HXMT in-orbit background are presented, including the light curve,
spectrum, geographical distribution, and long-term evolution. The systematic
error in background estimation is investigated for every year. Conclusion: The
observational characteristics of the five-year in-orbit background are
consistent with our knowledge of the satellite design and the space
environment, and the background model is still valid for the latest
observations of Insight-HXMT. | Jin-Yuan Liao, Shu Zhang, Juan Zhang, Gang Li, Zhi Chang, Yu-Peng Chen, Ming-Yu Ge, Jing Jin, Xue-Feng Lu, Yuan You, Xue-Lei Cao, Yong Chen, Yue Huang, Wei-Chun Jiang, Xiao-Bo Li, Xu-Fang Li, Zheng-Wei Li, Cong-Zhan Liu, Ying Tan, Yan-Ji Yang, Yi-Fei Zhang, Hai-Sheng Zhao, Fang-Jun Lu, Yu-Peng Xu, Jin-Lu Qu, Li-Ming Song, Shuang-Nan Zhang | 2023-02-28T10:07:23Z | http://arxiv.org/abs/2302.14459v1 | # Five-year in-orbit background of _Insight_-Hxmt
###### Abstract
**Purpose:** We present the five-year in-orbit background evolution of _Insight_-HXMT since the launch, as well as the effects of the background model in data analysis. **Methods:** The backgrounds of the three main payloads, i.e., Low-Energy Telescope, Medium-Energy Telescope and High-Energy Telescope, are described, respectively. The evolution of the background over time is obtained by simply comparing the background in every year during the in-orbit operation of _Insight_-HXMT. **Results:** The major observational characteristics of the _Insight_-HXMT in-orbit background are presented, including the
light curve, spectrum, geographical distribution, and long-term evolution. The systematic error in background estimation is investigated for every year. **Conclusion:** The observational characteristics of the five-year in-orbit background are consistent with our knowledge of the satellite design and the space environment, and the background model is still valid for the latest observations of _Insight_-HXMT.
**Keywords:** Instrumentation: detectors, Space vehicles: instruments,
Methods: data analysis, X-rays: general
## 1 Introduction
Up to June 2022, the Hard X-ray Modulation Telescope (dubbed _Insight_-HXMT), the first space X-ray telescope of China, has been operated successfully in orbit for five years since its launch on June 15, 2017 (Zhang et al., 2020). There are three main payloads onboard _Insight_-HXMT, namely, the Low Energy Telescope (LE; Chen et al. 2020), the Medium Energy Telescope (ME; Cao et al. 2020) and the High Energy Telescope (HE; Liu et al. 2020). All three payloads are collimated telescopes and together cover a broad energy range from 1 keV to 250 keV. The layout of the three telescopes on the main structure is shown in Fig. 1, and the main parameters are shown in Table 1.
There are three main observation modes assigned to _Insight_-HXMT: pointing observation, Galactic plane scanning survey and gamma-ray burst observation. Although the data analysis methods and processes are different, the background estimation is one of the crucial steps for fulfilling each task. For gamma-ray burst observation, the background is usually taken from pre- and post-burst (Luo et al., 2020). For scanning observation, the background is derived from the scanning light curves with sources subtracted (Sai et al., 2020). However, the background estimation of the pointing observation is much more complicated, since the traditional on-off mode of _BeppoSAX_/PDS (Frontera et al., 1997, 1997) and _RXTE_/HEXTE (Rothschild et al., 1998) for observation of the background was not adopted in the design of _Insight_-HXMT. Therefore, in order to estimate the background accurately, a dedicated background model has been constructed during the operation of _Insight_-HXMT (Liao et al., 2020, 2020; Guo et al., 2020).
Before the launch of _Insight_-HXMT, thorough background simulation works have been done (e.g., Li et al.2009; Xie et al.2015). With the Geant4 tools and the mass model of _Insight_-HXMT, these works result in the expected in-orbit background of _Insight_-HXMT, such as the background spectra induced by various particles. The first two-year background observation of _Insight_-HXMT proved the consistency between the in-orbit observation and the on-ground simulation, which also suggests that our understanding of the in-orbit background of _Insight_-HXMT is reliable (Zhang et al., 2020).
_Insight_-HXMT is a low Earth orbit satellite. The orbit is approximately a circle with an altitude of 550 km and an inclination of 43\({}^{\circ}\). As shown in previous works (e.g., Li et al. 2009; Xie et al. 2015; Liao et al. 2020; Guo et al. 2020; Zhang et al. 2020), the orbital space environment of _Insight_-HXMT is complex and various particles can interact with the satellite platform and instruments to generate many background components (e.g., Alcaraz et al. 2000). The cosmic-ray protons contribute most to the background of _Insight_-HXMT, while the electrons, neutrons, cosmic X-ray background (CXB), as well as the earth albedo of gamma-rays can also contribute to the background. _Insight_-HXMT can explore the environment of charged particles with its Particle Monitor (PM; Lu et al. 2020), which is mounted on the top panel of the satellite platform and is sensitive to protons (\(>20\) MeV) and electrons (\(>1.5\) MeV). As shown in Fig. 2, the geographical distribution of the PM count rate has varied little over the past five years for the whole orbit of _Insight_-HXMT, including the low-latitude and high-latitude regions, as well as the South Atlantic Anomaly (SAA). For the region with low count rate near the magnetic equator, the average count rate is stable at 1.7-1.8 cts s\({}^{-1}\). Therefore, the space environment has changed very little during the last five years. However, we have observed some long-term variations in the backgrounds, which are largely related to degradation effects of the LE and ME detectors and the activation of HE by bombardment of charged particles. Therefore, the background of _Insight_-HXMT after five years of operation needs to be reviewed, including the evolution of the observed characteristics and the validity of the background model. In this paper, the blank sky observations in the high Galactic latitude region (Table 2) during the five years since the launch of _Insight_-HXMT are used to investigate the in-orbit background of the three main payloads of _Insight_-HXMT, including the observational characteristics and the systematic error analysis of the background model.
It is worth noting that all three telescopes of _Insight_-HXMT have fields of view (FoVs) with different sizes and orientations (Fig. 3 and Table 1). For the pointing observations of LE and ME, the small FoV detectors are encouraged to be used for scientific analysis. Accordingly, the observed characteristics and background models of LE and ME in this paper mainly focus on these detectors. This paper is organized as follows. The backgrounds of LE, ME and HE are described in Sections 2-4, respectively. In Section 5, a summary and conclusion are given.
## 2 Background of the Low-Energy Telescope
LE is made of Swept Charge Device (SCD) detectors with a geometrical area of 384 cm\({}^{2}\) and a band pass of 1-13 keV. LE has three boxes with the FoV orientation differing by 60\({}^{\circ}\). Each box has 20 small FoV detectors (some have failed over time), 6 large FoV detectors, and 2 detectors with their collimators blocked by aluminum covers. The blocked small FoV detector is designed to measure the particle background, and the blocked large FoV detector is
equipped with a \({}^{55}\)Fe radioisotope to monitor the energy response. During the five years of operation of _Insight_-HXMT, some of the LE detectors failed and were shut down. The details of the LE bad detectors can be obtained from the 'Bad Detector FITS file' that is included in the _Insight_-HXMT Data Analysis software.
Fig. 4 presents the LE light curve of a blank sky, which shows the typical outline of the LE background together with a series of special features. The whole time range can be divided into the abnormal and normal stages of the instrument. In the abnormal stage, LE is usually troubled by a large number of low-energy charged particles and by visible light due to the relatively large FoV, which enter through the collimator and are difficult to estimate accurately. In severe cases, the LE detectors saturate and the in-orbit storage overflows. The instrument's normal stage can be divided into three types. First is the
Figure 1: Main structure of _Insight_-HXMT.
Figure 2: Geographical distributions of the PM count rate in the first **(top)** and fifth **(bottom)** years.
time interval of earth occlusion, where the light curves of the detectors with different FoVs coincide with each other and no CXB photons are recorded. Second is the flare time interval and the flares can be detected in both the small and large FoV detectors. Moreover, the flare flux is basically proportional to the FoV size. Finally, the time interval with neither earth occlusion nor flare is considered as the good time interval (GTI). The usual scientific analysis only uses the GTI data. In order to estimate the background accurately, the background analysis procedure deals with both the regular GTI judgement, and its count rate comparison between the small and large FoV detectors (Liao et al., 2020). The observational characteristics of the LE background and the effectiveness of the background model in the past five years are shown in what follows.
### Observational characteristic and long-term evolution of the LE background
Fig. 5 shows the comparison of the geographical distributions of the LE background before and after 2021-06-30. It can be seen that the distribution with longitude (_lon_) and latitude (_lat_) has not changed, but the intensity has increased significantly. The spectra of the same geographical region (\(55^{\circ}<lon<210^{\circ}\), \(-15^{\circ}<lat<15^{\circ}\)) are specifically investigated for every year. Fig. 6 shows the 5-year background spectra of the small FoV detectors. It can be seen that the spectra have little change at the low energy end, while showing a gradually increasing trend at the high energy end. The continuous
\begin{table}
\begin{tabular}{l l l l} \hline \hline & LE & ME & HE \\ \hline Detector Type & SCD & Si-PIN & Phoswich \\ Energy range (keV)\({}^{1}\) & 0.7–13 & 5–40 & 20–250 \\ Geometrical area (cm\({}^{2}\)) & 384 & 952 & 5096 \\ Small FoV (FWHM) & \(1.6^{\circ}\times 6^{\circ}\) & \(1^{\circ}\times 4^{\circ}\) & \(1.1^{\circ}\times 5.7^{\circ}\) \\ Large FoV (FWHM) & \(4^{\circ}\times 6^{\circ}\) & \(4^{\circ}\times 4^{\circ}\) & \(5.7^{\circ}\times 5.7^{\circ}\) \\ \hline \hline \end{tabular} \({}^{1}\)Design value that will be adjusted over time.
\end{table}
Table 1: Main instrumental parameters of LE, ME and HE.
Figure 3: FoVs of LE, ME and HE.
broadening of the emission lines is the result of the decline of the LE energy resolution. As demonstrated by the on-ground simulations (Zhang et al., 2020) and previous in-orbit observations, the LE background can be simplified into a diffuse X-ray background dominant in low energy band and a particle background dominant in high energy band. Thus, the difference between the two geographical distributions shown in Fig. 5 is mainly due to the change in the high energy band. Fig. 7 shows the 5-year background spectra of the blocked FoV detectors, and the results are consistent with these of the small FoV detectors. As described in Zhang et al. (2020), the background of _Insight_-HXMT can be produced by various incidence. The background that can be recorded immediately after the incidence is called the prompt background, and the background recorded for a long time (hour to month) after the incidence is called delayed background. It is worth noting that both the backgrounds caused by the CXB and cosmic-ray protons are prompt background.
The behaviour of the LE background light curve has changed very little over the five years. The most obvious features are that the count rate is stable in the low energy band and is modulated significantly by the geomagnetic field in the high energy band, which is also shown in Fig. 4.
### Validity of the LE background model
Liao et al. (2020) has found that the background spectral shape of LE blocked FoV detectors does not change with geographical location, and can be used to characterize the particle background spectral shape of small FoV detectors. The LE background model just takes advantage of this feature to give a simple and reliable background estimation.
There is an evolution of the LE background over the five years, although it is not very significant (Fig. 5). It can be seen from Fig. 6-7 that the small and
\begin{table}
\begin{tabular}{l l l} \hline \hline ObsID & Duration & Target1 \\ \hline P0101293 (001-191) & 2017-11-02 to 2019-06-26 & Blank Sky \\ P0202041 (001-161) & 2019-07-10 to 2020-07-22 & Blank Sky \\ P0301293 (001-115) & 2020-08-06 to 2021-08-30 & Blank Sky \\ P0401293 (001-115) & 2021-09-14 to 2022-08-29 & Blank Sky \\ \hline P0101297 (201-217) & 2017-09-13 to 2018-09-14 & PSR B0540-69 \\ P0101322 (001-001) & 2017-07-19 to 2017-07-23 & PSR B0540-69 \\ P0114550 (001-003) & 2017-09-20 to 2017-09-27 & GW 170817 \\ P0101326 (001-018) & 2017-07-08 to 2019-02-19 & Cas A \\ P0202041 (200-208) & 2019-07-13 to 2020-07-29 & Cas A \\ P0302291 (001-020) & 2020-08-23 to 2021-08-21 & Cas A \\ P0101326 (001-015) & 2021-09-17 to 2022-08-19 & Cas A \\ \hline \hline \end{tabular}
\end{table}
Table 2: Observation of _Insight_-HXMT background in five years
blocked FoV detectors have a similar evolution trend, i.e., the lower limit of the spectral energy range becomes higher and the count rate becomes larger as the in-orbit time increases. For the LE detector, a large signal can be recorded as several split-events in several pixels at the same time. However, only the events above a certain threshold will be recorded and involved in the subsequent split-event reconstruction. For example, a large signal with energy E can be recorded as two signals with E0 and E1 (\(\mathrm{E0+E1=E}\)). If E1 is less than the threshold, this large signal will be treated as a single-event with E0. With increasing irradiation damage of the LE detector, the distribution of noise signal becomes wider. In order to eliminate the influence of noise signal on the working energy band, the threshold is also adjusted higher. This will raise the lower energy limit of the LE detector, as shown in the low energy band
Figure 4: Light curve of the LE small (green), big (blue) and blocked (red) FoV detectors (ObsID: P030129303601).
Figure 5: Geographical distributions of the background of LE small FoV detectors before (top) and after (bottom) 2020-06-30.
of the spectra in Fig. 6-7. Moreover, the small signals that can be recorded and participate in the split-event reconstruction before threshold adjustment will not exceed the threshold after that, i.e., the double-events that can be reconstructed before will not be reconstructed after threshold adjustment. As the threshold becomes higher, a larger proportion of the double-events will not be reconstructed but will be treated as single-events with a lower energy. As shown in Fig. 6-7 for the evolution of the background spectra, the background spectrum shifts to the left year by year. Therefore, the increasing trend of LE background is the result of LE detector irradiation damage and split-event reconstruction in data processing. Moreover, the spectra of LE blocked FoV detectors in high energy band can be mixed with the super-threshold signal peaks, so the energy band of the blocked FoV detectors in background model has been adjusted accordingly.
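The effect of a raised threshold on split-event reconstruction can be illustrated with a toy Monte-Carlo sketch (below). The 5.9 keV line energy, the split-fraction distribution, and the threshold values are illustrative assumptions, and the real LE reconstruction additionally depends on pixel geometry and multi-pixel splits.

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruct(E, split_frac, threshold_keV):
    """Toy model of split-event handling: each signal E deposits (1-f)E and fE in
    two pixels; only deposits above the readout threshold are recorded.  If both
    are recorded the event is reconstructed back to E, otherwise it is kept as a
    single-event with only the recorded energy (the under-threshold part is lost)."""
    e1 = split_frac * E
    e0 = E - e1
    # e0 is always well above threshold here, so the event is never lost entirely
    return np.where(e1 >= threshold_keV, e0 + e1, e0)

E = np.full(100_000, 5.9)             # e.g. the Mn K-alpha line energy from 55Fe
f = rng.uniform(0.0, 0.5, E.size)     # assumed split-fraction distribution
for thr in (0.3, 0.8):                # illustrative readout thresholds [keV]
    rec = reconstruct(E, f, thr)
    print(f"threshold {thr} keV: {np.mean(rec < 5.8):.1%} of line events "
          "are recorded at a lower energy")
```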
Figure 6: Spectra of the LE small FoV detectors for every year since _Insight_-HXMT operate in orbit.
Figure 7: Same as Fig. 6 but for the LE blocked FoV detectors.
The validity of the background model is investigated, as it is critical for scientific analysis. With the same method as in Liao et al. (2020), we perform the background estimation for every blank sky observation. Fig. 8 shows the background spectrum estimation (ObsID: P030129310101) as an example. For each year, the parameters of the background model are updated to maintain the accuracy of the background estimate, and then the systematic error of the background model is investigated. Fig. 9 shows the deviations of the LE background estimation in eight energy bands in the fourth year as an example. The systematic errors of different energy bands between 2 and 10 keV in each year are shown in Fig. 10, and the results show that the systematic error does not change significantly compared to the first two years since the launch of _Insight_-HXMT, i.e., the background model is stable and can give accurate background estimates. However, as the data around 1.5 keV are often affected by the electronic noise, the detection threshold is adjusted upward; thus only the systematic errors above 2 keV are given in this paper.
## 3 Background of the Medium-Energy Telescope
As shown in Table 1, ME is a collimating telescope sensitive in 5-40 keV with a total geometrical area of 952 cm\({}^{2}\). It consists of 54 sensors in three boxes, and each sensor handles 32 Si-PIN pixels. For each box, there are 15 sensors with small FoV collimators and one sensor with the blocked FoV collimator that is used for background estimates. The ME background characteristics have some similarities to those of LE in the high energy band, especially the light curve feature and the geographical distribution. However, the proportion of each background component is very different, and the particle background is dominant in the whole energy band (Guo et al., 2020; Zhang et al., 2020). Fig. 11 shows a
Figure 8: An example of the LE background estimation (ObsID: P030129310101). Top: spectrum of a blank sky observation (black) and the estimated background spectrum (red). Bottom: residuals in terms of errors (\(\sigma\)).
comparison of the ME background geographical distribution in the first and fifth years, respectively. It can be seen that the ME background in the fifth year is slightly higher than that in the first year. In the region near the SAA (\(330^{\circ}<lon<360^{\circ}\), \(0^{\circ}<lat<30^{\circ}\)), the background is significantly higher than in most regions with similar latitudes. This indicates that as the satellite passes through a high particle flux region (e.g., the SAA), the ME background first rises and then decays with time, i.e., the ME background has a delayed background
Figure 10: Systematic errors of the LE background model for every year after the launch of _Insight_-HXMT.
Figure 9: Deviations of the LE background estimation for eight energy bands in 4-th year.
component. Although the ME delayed background is relatively insignificant, it results in long time scale evolution of the ME background. In order to improve the accuracy of the background estimation, the parameters of the background model should be given for every year.
### Observational characteristic and long-term evolution of the ME background
Fig. 12 shows the spectra for different geographical latitude ranges. It can be seen that the intensity varies greatly, but the spectral shape remains almost the same. Thus the evolution of the ME background, especially the intensity of the silver line, must be carefully addressed in order to ensure the accuracy of the background model.
The ME background light curve exhibits significant orbital modulation (Fig. 13). A clear anti-correlation between ME background and geomagnetic cut-off rigidity has been shown in Guo et al. (2020). There is an obvious peak in the light curve that is caused by particle events and is usually present in the high latitude region; accordingly, the corresponding time is excluded from GTI.
Fig. 14 shows the evolution of the background spectra of the small FoV detectors in the geographical region (\(115^{\circ}<lon<125^{\circ}\), \(-5^{\circ}<lat<5^{\circ}\)). It can be seen that the ME background level increases with in-orbit operation time as a cumulative effect of the weak delayed component. Since the delayed background roughly increases with decreasing energy (Zhang et al., 2020), the evolution of the ME background in the low energy band is more significant than that in the high energy band. It is worth noting that the low-energy noise distributions have become wider as a result of the irradiation damage of some pixels; thus the spectrum below 11 keV for the fifth year has increased significantly. The centers of the silver lines have also shifted over time, indicating a moderate change in the Energy-Channel relationship. The spectral evolution of the blocked FoV detector is shown in Fig. 15. The positions of the silver lines do not shift significantly, which indicates that the blocked FoV detectors suffer less from radiation damage than the small FoV detectors.
### Validity of the ME background model
Guo et al. (2020) has constructed the background model and the corresponding database. Since the ME background spectral shape has a non-negligible change with the geographical location compared with LE, the average background of each detector at each geographical location must be considered in the background model. In each background estimate, we first use the database to obtain the primary prediction spectra of the small FoV and blocked FoV detector in each geographical location the satellite has passed by, and then use the observation of the blocked FoV detector to make a further correction. The ME database produces the background spectra with time-averaged normalization for each individual geographical site, and the contemporary particle
intensity can be determined by the blocked FoV detector and then used to correct the background model.
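The two-step logic described above can be summarized in the following schematic sketch. The array shapes, grid cells, and scaling-by-total-counts correction are simplifications intended only to illustrate how the blocked FoV measurement rescales the database prediction; the actual ME pipeline differs in detail.

```python
import numpy as np

def estimate_me_background(db_small, db_blind, obs_blind, exposure_per_loc):
    """Sketch of the two-step ME background estimation.

    db_small[loc]    : database (time-averaged) small-FoV spectrum at location loc
    db_blind[loc]    : database blocked-FoV spectrum at the same location
    obs_blind        : blocked-FoV spectrum actually observed during this pointing
    exposure_per_loc : seconds spent at each geographical location

    Step 1: exposure-weighted sum of the database spectra along the orbital track.
    Step 2: rescale by the ratio of observed to predicted blocked-FoV counts,
            which tracks the contemporary particle intensity."""
    pred_small = sum(exposure_per_loc[loc] * db_small[loc] for loc in exposure_per_loc)
    pred_blind = sum(exposure_per_loc[loc] * db_blind[loc] for loc in exposure_per_loc)
    scale = obs_blind.sum() / pred_blind.sum()
    return scale * pred_small

# tiny illustrative numbers: two grid cells, three energy channels
db_small = {"cell_A": np.array([1.0, 0.8, 0.5]), "cell_B": np.array([2.0, 1.5, 0.9])}
db_blind = {"cell_A": np.array([0.9, 0.7, 0.4]), "cell_B": np.array([1.8, 1.4, 0.8])}
exposure = {"cell_A": 600.0, "cell_B": 400.0}
# pretend the particle flux during this pointing is 10% above the database average
obs_blind = 1.10 * (exposure["cell_A"] * db_blind["cell_A"]
                    + exposure["cell_B"] * db_blind["cell_B"])
print(estimate_me_background(db_small, db_blind, obs_blind, exposure))
```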
In each year, the backgrounds of all blank sky observations are estimated by the background model with the model parameters of the corresponding year. Fig. 16 is an example of the ME background estimation for a blank sky observation (ObsID: P030129306901). A statistical analysis with the method in Guo et al. (2020) is performed to obtain the systematic error of the background estimation in each energy band. Fig. 17 shows the deviations of the ME background estimation for six energy bands in the fourth year. Fig. 18 shows the systematic errors for six energy bands over the five years. The results show that the systematic error has no significant increasing trend over the five years. It can be seen that the systematic errors are relatively large in 10-15 keV with a mean value of \(\sim 2\%\), and the systematic errors in 10-40 keV are \(\sim 1.6\%\), indicating that the ME background model is still reliable. It is worth mentioning that the detection below 10 keV is affected significantly by the electronic noise, so the current reliable energy range of ME begins from 10 keV.
## 4 Background of the High-Energy Telescope
HE has 18 NaI(Tl)/CsI(Na) phoswich detectors that are surrounded by 18 anti-coincidence detectors (ACDs) for active background shielding. Among the 18 detectors, 15 of which have a small FoV, two have a large FoV and one has a blocked FoV for background estimate. The on-ground simulations have shown that the NaI and CsI crystals can be activated by the charged particles around
Figure 11: Geographical distributions of the background of ME small FoV detectors in the first and fifth years since _Insight_-HXMT operation in orbit.
the orbit of _Insight_-HXMT, and the radioactive decay of the activated crystals is responsible for most of the HE background. As the satellite operates in orbit continuously, the crystals in the HE detectors continue to be activated. After a significant rise in the first year, the rising trend of the HE background has gradually slowed down (Li et al., 2009; Xie et al., 2015).
### Observational characteristic and long-term evolution of the HE background
The geographical distributions of the HE background for the first and fifth years are shown in Fig. 19. It can be seen that the distribution differs little, but the background count rate in the fifth year is significantly higher than
Figure 12: Spectra of the ME small FoV detectors in different geographical latitude ranges.
Figure 13: Light curves of the background observation by the ME small FoV detectors in six energy bands (ObsID: P040129309401, T0=2022-06-18T07:15:53.5).
that in the first year. Unlike the LE and ME backgrounds that are dominated by the prompt components, HE background is dominated by the time-delayed components due to the activation of the crystals by charged particles. Consequently, the background is very different in the ascending and descending orbit phases even for the same geographical location.
Fig. 20 shows the spectra at geographical locations in the ascending and descending orbit phases for every year. The spectra at \((lon,lat)=(345^{\circ},15^{\circ})\) are shown in Panels (e) and (f). In the ascending orbital phase, when the satellite passes through the SAA, the heavily activated detector crystals do not have enough time to decay, and thus the background has a relatively high level. However, the background in the descending orbit phase is lower, since a long time has passed since the satellite last crossed an intense charged-particle region and the background is therefore dominated by the long-decay components.
Figure 14: Background spectra of the ME small FoV detectors in every year since _Insight_-HXMT operation in orbit.
Figure 15: Same as Fig. 14 but for the blocked FoV detectors.
The HE background spectra at different geographical locations show long-term evolution. Compared with the results shown in other panels, the evolution shown in Panel (e) is less significant. This is because the satellite has just passed through the SAA, thus a large proportion of the background is contributed by the short-time scale component. Moreover, the intensity of SAA has not changed significantly over the five years (Fig. 2).
As described in Li et al. (2009) and Xie et al. (2015), the spectra of HE background consist of various emission lines, which are induced by the interactions of the detectors with high-energy particles. It can be seen from Fig. 20 that the spectral shape is stable during the five years.
Figure 16: An example of the ME background estimation (ObsID: P030129306901). Top: spectrum of a blank sky observation (black points) and the estimated background spectrum (red line). Bottom: residuals in terms of errors (\(\sigma\)).
Figure 17: Deviations of the ME background estimation for six energy bands in the fourth year.
Fig. 21 shows the light curves of HE observation of a blank sky region in six energy bands. For every energy band, the background rises to a high level when the satellite has just crossed SAA, then decays gradually and shows significant geomagnetic modulation. There are also differences between the light curves in different energy bands, since the background is composed of numerous components characterized with different portions, spectral shapes and typical variable time scales.
### Validity of the HE background model
Based on the HE background characteristics, Liao et al. (2020) constructed the HE background model. The principle is similar to that of ME but more complex. In order to obtain the background at any geographical location and any time, an empirical function with time as the independent variable is constructed to describe the long-term evolution of the HE background. Thus a preliminary estimate can be obtained from the orbital parameters and observation time, and a further correction is performed with the data of the blocked FoV detector.
Therefore, HE background estimation is heavily dependent on the mathematical description of the background long-term evolution, i.e., the accuracy of the empirical function is critical to the background estimation. Fig. 22 shows the long-term evolution of background count rates at six different geographical locations in 45-70 keV. For each energy channel, the long-term evolution is described by a broken line with two slopes. The fitting curve is merged from the broken lines in this energy band with different break times; thus it shows a smooth transition without an obvious break. It is worth noting that the broken line with two slopes is a function we chose to depict the long-term evolution of the background count rate over time in an energy channel. Although other functions might also be acceptable, the selected broken-line shape describes the observation data very well. As predicted by the on-ground simulation (Li et al., 2009; Xie et al., 2015), the activated isotopes lead
Figure 18: Systematic errors of the ME background model in every year.
to a rapid decay of the background rate after each SAA passage and a long-term accumulation as the in-orbit operation days increase. This accumulation rises rapidly in the initial epoch after launch and becomes slower after hundreds of days as the long lifetime isotopes are not dominant. This predicted long-term evolution is consistent with the observations shown in Fig. 22.
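A minimal sketch of such a two-slope description is given below, fitted here to synthetic data standing in for the per-channel count-rate history shown in Fig. 22. The functional form follows the broken line described above, while the parameter values and noise level are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line(t, r0, s1, s2, t_break):
    """Two-slope broken line for the long-term rise of the background count rate
    in one energy channel: slope s1 before t_break, slope s2 after it."""
    return np.where(t < t_break,
                    r0 + s1 * t,
                    r0 + s1 * t_break + s2 * (t - t_break))

# synthetic stand-in for a per-channel count-rate history: a fast rise during the
# first ~300 days followed by a much slower accumulation
rng = np.random.default_rng(1)
t_days = np.arange(0.0, 1800.0, 15.0)
truth = broken_line(t_days, 30.0, 0.030, 0.004, 300.0)
rate = truth + rng.normal(0.0, 0.5, t_days.size)

popt, _ = curve_fit(broken_line, t_days, rate, p0=[rate[0], 0.02, 0.005, 400.0])
print("fitted r0, slope1, slope2, break [days]:", np.round(popt, 4))
```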
With the background model, the background estimate for all blank sky observations is performed. Fig. 23 shows the background estimation for a blank sky observation (ObsID: P040129307701) as an example. For each year, the deviations in eight energy bands between 25 keV and 250 keV (Fig. 24 for the fifth year as an example) are obtained to calculate the systematic errors. Following the method in Liao et al. (2020), the systematic errors can be obtained for each year (Fig. 25). The results show that they are all less than 3%, which is not much different from the results of the previous two years. So the HE background model is still effective.
background spectrum is the result of the decline of the LE energy resolution. With increasing operation in orbit, the ME background level increases as a cumulative effect of the weak delayed component, and the variation of the ME background in the low energy band is more significant than that in the high energy band. In addition, the ME background in the low energy band can also be affected by the low-energy noise of some pixels. The crystals of the HE detectors continue to be activated, which is why the background intensity increases obviously with
Figure 20: Spectra of the HE background (DetID = 0 & \(>\) 30 keV) in every year with different orbital phases and geographical locations. Left: Panel (a), (c) and (e) are these for \((lon,lat)=(140^{\circ},0^{\circ})\), \((180^{\circ},0^{\circ})\) and \((345^{\circ},15^{\circ})\) in ascending orbital phase. Right: same as the left panels but for the descending orbital phase.
time. During the first five years, the increasing trend gradually slows down and shows a behavior similar to saturation. The background evolution at different energies is not consistent, which means that the shape of the background spectrum also evolves with time for a certain geographical location.
Although the time evolution of the LE and ME backgrounds is not significant, the background model parameters are updated each year in order to maintain the accuracy of the background estimation. For the HE background model, the evolution has been taken into account from the beginning of the model construction. The statistical analysis shows that the systematic errors of the three telescopes change little during the first five years
Figure 21: Light curves of the HE (DetID = 0) background observation in six energy bands (ObsID : P040129309001, T0 = 2022-05-22T18:15:38.5). The gap is due to the protective shutdown of HE instrument when the satellite pass through SAA.
Figure 22: Long-term background evolution of the HE detector (DetID = 0) in 45–70 keV at six geographical locations.
of _Insight_-HXMT operation, thus the background models are still effective and reliable.
As described in Liao et al. (2020), the LE background model constructed with the blank sky observations can effectively estimate both the particle background and the diffuse background caused by the CXB. Therefore, it can be used for pointing observations of targets at high Galactic latitude (\(|b|>10^{\circ}\)). However, the background model is not able to estimate the diffuse background accurately at low
Figure 23: An example of the HE background spectrum estimation (ObsID: P040129307701). Top: spectrum of a blank sky observation (black) and the estimated background spectrum (red). Bottom: residuals in terms of errors (\(\sigma\)).
Figure 24: Deviations of the HE background estimation for eight energy bands in 5-th year.
Galactic latitude (\(|b|\leq 10^{\circ}\)); there, the diffuse X-ray background in the Galactic Plane (Jin et al., 2022) should be used in the LE background estimation.
It is worth noting that the current background models of the three telescopes rely heavily on the blocked FoV detectors. The blocked FoV detectors are therefore critical, especially for HE as it has only one blocked FoV detector. This is a potential hazard for background estimation because of insufficient robustness. Therefore, an alternative background estimation approach that does not rely on the blocked FoV detectors must be planned in advance, such as using the ACD and PM as prompt particle monitors for the background estimation of LE and ME. For HE, a parametric background model that does not rely on the blocked FoV detector has been built: by considering the physical factors that generate the HE background, a mathematical model describing these physical processes has been successfully constructed (You et al., 2021).
Acknowledgments. This work made use of the data from the _Insight_-HXMT mission, a project funded by the China National Space Administration (CNSA) and the Chinese Academy of Sciences (CAS). The authors acknowledge support from the National Key R&D Program of China (2021YFA0718500) and the National Natural Science Foundation of China under Grants Nos. U1838202 and U1838201. This work was partially supported by the International Partnership Program of the Chinese Academy of Sciences (Grant No. 113111KYSB20190020).
## Declarations
On behalf of all authors, the corresponding author states that there is no conflict of interest.
|
2309.15435 | Semantics-Driven Cloud-Edge Collaborative Inference | With the proliferation of video data in smart city applications like
intelligent transportation, efficient video analytics has become crucial but
also challenging. This paper proposes a semantics-driven cloud-edge
collaborative approach for accelerating video inference, using license plate
recognition as a case study. The method separates semantics extraction and
recognition, allowing edge servers to only extract visual semantics (license
plate patches) from video frames and offload computation-intensive recognition
to the cloud or neighboring edges based on load. This segmented processing
coupled with a load-aware work distribution strategy aims to reduce end-to-end
latency and improve throughput. Experiments demonstrate significant
improvements in end-to-end inference speed (up to 5x faster), throughput (up to
9 FPS), and reduced traffic volumes (50% less) compared to cloud-only or
edge-only processing, validating the efficiency of the proposed approach. The
cloud-edge collaborative framework with semantics-driven work partitioning
provides a promising solution for scaling video analytics in smart cities. | Yuche Gao, Beibei Zhang | 2023-09-27T06:53:09Z | http://arxiv.org/abs/2309.15435v1 | # Semantics-Driven Cloud-Edge Collaborative Inference
###### Abstract
With the proliferation of video data in smart city applications like intelligent transportation, efficient video analytics has become crucial but also challenging. This paper proposes a semantics-driven cloud-edge collaborative approach for accelerating video inference, using license plate recognition as a case study. The method separates semantics extraction and recognition, allowing edge servers to only extract visual semantics (license plate patches) from video frames and offload computation-intensive recognition to the cloud or neighboring edges based on load. This segmented processing coupled with a load-aware work distribution strategy aims to reduce end-to-end latency and improve throughput. Experiments demonstrate significant improvements in end-to-end inference speed (up to 5x faster), throughput (up to 9 FPS), and reduced traffic volumes (50% less) compared to cloud-only or edge-only processing, validating the efficiency of the proposed approach. The cloud-edge collaborative framework with semantics-driven work partitioning provides a promising solution for scaling video analytics in smart cities.
Keywords: cloud computing, edge computing, video analytics, license plate recognition, smart cities
## I Introduction
Intelligent transportation is a crucial domain in the urban brain [1]. It utilizes modern information technologies to conduct intelligent management and planning of urban traffic, improving transportation efficiency, reducing congestion, enhancing safety, and raising service levels. Video systems are a key component of intelligent transportation, used for monitoring road traffic, vehicle movements, traffic violations, etc., providing data support for smart transportation. For example, in an urban brain smart transportation monitoring system, cameras can monitor urban intersections and highways in real time, analyzing video streams for optimal monitoring.
However, due to high road density and vehicle volume in cities, fully monitoring urban transportation requires numerous cameras, generating massive video data streams around the clock. Efficiently processing such vast amounts of video data poses huge challenges for back-end information systems. Directly sending video streams to the cloud for processing incurs massive communication and computational overheads. Therefore, cloud-edge collaboration (Fig. 1) has become a common approach for handling massive video data [2]. Edge servers collect video streams from multiple nearby cameras for real-time processing, then send processed results (e.g. vehicle volume, tracking info, pedestrian volume) to the cloud, significantly reducing communication costs and cloud server workloads.
In an urban brain context, edge servers are numerous, so keeping their hardware configurations modest is key to reducing overall system costs [3]. Consequently, edge servers may become overloaded and unable to process data in real time when bursts of high traffic volumes occur, causing business interruptions. To prevent this, video processing workloads can be shifted to neighboring edge servers or to the cloud. However, modern video streams are typically 1080P or 2160P (4K) high definition; a 1080P stream alone has a bitrate of about 3500 kbps, so an edge server connected to 8 cameras receives roughly 28 Mbps. Whether redirected to the cloud or to neighboring edges, such workloads impose heavy burdens on communication networks and on the target compute nodes.
The key contributions of this paper include: 1) Taking license plate recognition, the most common scenario in the smart transportation component of city brains, as an example, we compare common license plate detection and recognition methods and propose a semantics-driven edge-cloud collaborative license plate detection method. 2) We propose an edge-cloud collaborative queue processing mechanism with congestion resistance based on the producer-consumer model. 3) We demonstrate the performance improvements of the proposed method in terms of end-to-end latency, throughput, edge-cloud traffic, device utilization and other evaluation metrics.
## II Related Work
The existing methods are based on fast object detection in general video streams, which improve efficiency while ensuring accuracy. They mainly include detection and tracking combined video object detection, video stream object detection based on motion information feature migration or fusion, etc.
Detection-and-tracking combined video object detection is a common video object detection method. Its basic idea is to first detect targets in each frame of the video, as in static image object detection, and then use multi-target tracking algorithms to track the detected boxes, using the tracking results to correct the per-frame detections and improve stability and continuity. The advantage of this method is that it can reuse existing single-frame object detectors and multi-target trackers without designing complex network structures or training procedures. Its disadvantage is that it relies on the performance of both the single-frame detector and the multi-target tracker; if either fails, the quality of the entire video object detection degrades. A representative work is T-CNN [4], which proposes a video object detection framework based on tracking and regression: it first uses Faster R-CNN [5] to detect targets in each frame of the video, then applies MDNet for multi-target tracking of the detection results, and finally uses a regression network to optimize and reorder the tracking results.
Video stream object detection based on motion information feature migration or fusion is a method that uses optical flow and other motion information to estimate feature changes between adjacent frames, and then transfers or fuses features from key frames to other frames to reduce duplicate calculations and improve consistency [6]. The advantage of this method is that it can use motion information to enhance the spatial and temporal information in the video to improve the accuracy and robustness of object detection. The disadvantage is that it requires additional computation of optical flow and other motion information, which increases the amount of computation and time overhead. A representative work is Deep Feature Flow [7], which proposes an optical flow based feature propagation method that maps features from key frames to other frames through optical flow, then fuses the propagated features and current frame features with a fusion module, and finally performs object detection with a detection module.
Although existing high-performance encoding technologies such as H.264 [8] achieve compression ratios above 100:1, they do not extract semantic information tailored to the business scenario. In cloud-edge collaborative settings, the complete video stream must therefore be transmitted to adjacent nodes and processed repeatedly with video object detection methods. For the edge scenarios of city brains, high-definition cameras generate large traffic volumes, while edge servers are cost-sensitive and have limited processing capability. Under sudden traffic surges or unexpected events, processing delays may occur, leading to interruptions of the business flow.
## III System Model and Algorithm
As Fig. 1 shows, an edge license plate recognition system typically comprises: 1) Cameras, 2) Edge servers, and 3) Cloud servers. Cameras capture and push video streams to edge servers.
On receiving the streams, edge servers extract video frames for processing, then send the results to the cloud for archiving and analysis.
License plate detection is a key smart transportation technology, used to automatically identify and extract plate information from vehicle images, accurately locating and recognizing plates in complex scenarios to provide important data support for transportation management, vehicle tracking, security monitoring, etc.
Typically, a license plate detection algorithm involves the following main steps (a minimal code sketch of the extraction stage follows the list):
1. Image preprocessing: Perform preprocessing operations such as enhancement, de-noising, and edge detection. When preprocessing a video frame, filtering-based methods such as median filtering and adaptive Wiener filtering can be applied to the input vehicle images to improve the accuracy and robustness of the subsequent plate localization.
2. Plate localization: Leveraging various image processing techniques and feature analysis methods, accurately locate the plate region in a preprocessed vehicle image. This may involve color, shape, edge and other types of analysis.
3. Plate segmentation: Further process the localized plate region to segment out each character/digit, typically involving character spacing analysis, projection, template matching and other techniques to obtain clear character images.
4. Character recognition: Recognize and categorize the segmented character images into their corresponding characters/digits. This can be done using conventional machine learning like pattern recognition and feature extraction, or deep learning techniques like Convolutional Neural Networks (CNNs).
5. Post-processing: Correct and validate recognition results using rules and algorithms to eliminate errors and improve accuracy and reliability.
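As a concrete illustration of the extraction stage (steps 1-3), the following minimal Python/OpenCV sketch denoises a frame and localizes candidate plate patches with simple edge and shape heuristics. It is only a toy stand-in: the function name, thresholds, and the aspect-ratio filter are our own illustrative choices, and the systems evaluated in Table 1 use learned detectors (MobileNet-SSD, YOLOv5, MTCNN) for this step instead.

```python
import cv2

def locate_plate_candidates(frame_bgr):
    """Steps 1-2 (sketch): preprocess a frame and localize candidate plate regions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    denoised = cv2.medianBlur(gray, 5)            # step 1: de-noising
    edges = cv2.Canny(denoised, 100, 200)         # step 1: edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    patches = []
    for c in contours:                            # step 2: plate localization
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h) if h > 0 else 0.0
        # keep rectangles whose shape loosely matches a plate (illustrative filter)
        if 2.0 < aspect < 6.0 and w > 60 and h > 15:
            patches.append(frame_bgr[y:y + h, x:x + w])
    return patches                                # step 3 (segmentation) would follow
```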
Figure 1: Edge license plate recognition system

We refer to the first three steps as semantics extraction, and the last two as semantics recognition. For a video stream's continuous frames, the edge server places each frame into a task queue (FIFO queue), forming a frame queue \(N=\{n_{1},n_{2},...,n_{k}\}\), and performs the five steps on each queued frame. However, the edge server has limited processing capacity. When bursts occur, the number of queued frames (\(|N|=k\)) may exceed the queue's maximum capacity (\(N_{max}\)), i.e. \(|N|>N_{max}\). To address this, we propose a segmented processing approach that separates semantics extraction and recognition, using the edge server to process local streams for semantics extraction. The extracted license plate image patches are packaged into objects \(p_{i}\) to form a patch queue \(Q=\{p_{1},p_{2},...p_{k}\}\) (Algorithm 1). Then, for character recognition, when the length \(|Q|=k\) is smaller than or equal to the maximum capacity, i.e. \(|Q|\leq N_{max}\), the local edge server works as the consumer, processing from the head of the queue. When the length \(|Q|=k\) exceeds \(N_{max}\), a second consumer begins working for collaborative processing: it distributes patches at the head to neighboring edges, if available, or to the cloud, if neighboring edges are also overloaded. Finally, the extracted plate fields are reported to the cloud (Algorithm 2).
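The following Python sketch illustrates the producer-consumer mechanism described above. Since Algorithms 1 and 2 are not reproduced here, the helper callables (`extract_semantics`, `recognize_local`, `neighbor_is_free`, and the senders) are hypothetical placeholders, and the threshold constant stands in for \(N_{max}\); the sketch only conveys the segmented, load-aware work distribution.

```python
import queue
import time

PATCH_QUEUE_MAX = 64          # stands in for N_max
patch_queue = queue.Queue()   # Q = {p_1, p_2, ...}, FIFO of plate patches

def producer(frames, extract_semantics):
    """Sketch of Algorithm 1: the edge server extracts plate patches locally."""
    for frame in frames:
        for patch in extract_semantics(frame):
            patch_queue.put(patch)

def local_consumer(recognize_local, report_to_cloud):
    """First consumer: recognize patches on the local edge server."""
    while True:
        patch = patch_queue.get()
        report_to_cloud(recognize_local(patch))   # steps 4-5 run locally
        patch_queue.task_done()

def offload_consumer(neighbor_is_free, send_to_neighbor, send_to_cloud):
    """Sketch of Algorithm 2: second consumer, active only when |Q| > N_max."""
    while True:
        if patch_queue.qsize() > PATCH_QUEUE_MAX:
            patch = patch_queue.get()
            if neighbor_is_free():
                send_to_neighbor(patch)           # prefer a neighboring edge
            else:
                send_to_cloud(patch)              # fall back to the cloud
            patch_queue.task_done()
        else:
            time.sleep(0.01)                      # local edge is keeping up
```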
## IV Experimental Results
Our edge nodes were low-power Intel i7-10510U 1.8GHz CPUs with 16GB RAM. The cloud server had an NVIDIA RTX 2080Ti GPU. Experiments used the standardized CCPD (Chinese City Parking Dataset) and video streams from a campus parking lot. Three combinations of semantics extraction and recognition algorithms were tested (Table 1). _A._ End-to-end latency, _B._ throughput, _C._ cloud-edge traffic, and _D._ device utilization were measured and compared between direct edge processing and our approach. The following results indicate improvements in all metrics:
_End-to-end latency decreased in all cases with our approach, dropping to as little as 1/5 of the original (Fig. 2)._
_Average throughput increased 5x with our approach, reaching \(\sim\)9 frames per second (FPS) with HyperLPR, \(\sim\)8 FPS with YOLO, and \(\sim\)9 FPS with MTCNN (Fig. 3)._
_With traffic bursts requiring cloud-edge collaboration, semantics-extracted patches were sent to other edges or to the cloud for processing. Compared to directly transferring unprocessed frames, the total data transfer (measured in KB) on the data links decreased in all cases: with HyperLPR it averaged 845.22KB (51.81% of the original), with YOLO it was 850.6KB (52.14% of the original), and with MTCNN it was 841.15KB (51.57% of the original) (Fig. 4)._
| Algorithm Group | Semantics Extraction | Semantics Recognition |
| --- | --- | --- |
| HyperLPR [9] | MobileNet-SSD | CTC |
| YOLO [10] | YOLOv5 | CRNN |
| MTCNN [11] | MTCNN | LPRNet |

Table 1: License plate recognition algorithms
Figure 4: Data Transfer
Figure 3: Throughput
Figure 2: End-to-end inference latency
_For device utilization, we evaluated edge CPU and cloud GPU usage after applying our algorithm, measuring cloud GPU usage when patches are sent there for semantics recognition (Fig. 5 and Fig. 6)._
License Plate Recognition from Campus Video Streams: The above method was applied to campus video streams, accelerating license plate recognition as shown in Fig. 7.
## V Conclusion
In conclusion, we propose a novel semantics-driven cloud-edge collaborative approach to accelerate video analytics for smart city applications. Using license plate recognition as a case study, we develop a segmented processing methodology that separates semantics extraction on the edge and computation-intensive recognition on the cloud/edge. This allows us to efficiently extract visual semantics at the edge before intelligently distributing recognition workloads based on server loads. Comprehensive experiments demonstrate that our approach can significantly reduce end-to-end latency, improve throughput, decrease traffic volumes, and better utilize devices. The collaborative framework provides a scalable and efficient solution to meet the growing demands of real-time video analytics for smart transportation and other domains. While focusing on license plate recognition here, the semantics-driven methodology can be extended to other smart city scenarios. Further research can be done to explore more advanced semantics extraction and optimal work distribution strategies.
|
2309.04538 | Quantum Signal Processing with the one-dimensional quantum Ising model | Quantum Signal Processing (QSP) has emerged as a promising framework to
manipulate and determine properties of quantum systems. QSP not only unifies
most existing quantum algorithms but also provides tools to discover new ones.
Quantum signal processing is applicable to single- or multi-qubit systems that
can be qubitized so one can exploit the SU$(2)$ structure of system evolution
within special invariant two-dimensional subspaces. In the context of quantum
algorithms, this SU$(2)$ structure is artificially imposed on the system
through highly nonlocal evolution operators that are difficult to implement on
near-term quantum devices. In this work, we propose QSP protocols for the
infinite-dimensional Onsager Lie Algebra, which is relevant to the physical
dynamics of quantum devices that can simulate the transverse field Ising model.
To this end, we consider QSP sequences in the Heisenberg picture, allowing us
to exploit the emergent SU$(2)$ structure in momentum space and synthesize QSP
sequences for the Onsager algebra. Our results demonstrate a concrete
connection between QSP techniques and Noisy Intermediate Scale quantum
protocols. We provide examples and applications of our approach in diverse
fields ranging from space-time dual quantum circuits and quantum simulation, to
quantum control. | V. M. Bastidas, S. Zeytinoğlu, Z. M. Rossi, I. L. Chuang, W. J. Munro | 2023-09-08T18:01:37Z | http://arxiv.org/abs/2309.04538v1 | # Quantum Signal Processing with the one-dimensional quantum Ising model
###### Abstract
Quantum Signal Processing (QSP) has emerged as a promising framework to manipulate and determine properties of quantum systems. QSP not only unifies most existing quantum algorithms but also provides tools to discover new ones. Quantum signal processing is applicable to single- or multi-qubit systems that can be "qubitized" so one can exploit the SU(2) structure of system evolution within special invariant two-dimensional subspaces. In the context of quantum algorithms, this SU(2) structure is artificially imposed on the system through highly nonlocal evolution operators that are difficult to implement on near-term quantum devices. In this work, we propose QSP protocols for the infinite-dimensional Onsager Lie Algebra, which is relevant to the physical dynamics of quantum devices that can simulate the transverse field Ising model. To this end, we consider QSP sequences in the Heisenberg picture, allowing us to exploit the emergent SU(2) structure in momentum space and "synthesize" QSP sequences for the Onsager algebra. Our results demonstrate a concrete connection between QSP techniques and Noisy Intermediate Scale quantum protocols. We provide examples and applications of our approach in diverse fields ranging from space-time dual quantum circuits and quantum simulation, to quantum control.
## I Introduction
Originally inspired by composite pulse sequences in nuclear magnetic resonance (NMR), quantum signal processing (QSP) has emerged as a framework to unify existing quantum algorithms and discover new ones using well-developed tools from functional analysis [1; 2; 3; 4]. QSP is a successful framework for precisely controlling the evolution of quantum systems when one is given repeatable access to basic quantum processes (unitary evolutions). The iterative structure of QSP appears in many contexts, and suggests the applicability of similar ideas to improve understanding of control protocols in many-body quantum systems. Indeed, most explorations into the non-equilibrium behavior of condensed matter systems [5; 6], including those studying quantum annealing [7; 8; 9; 10; 11; 12; 13], discrete time crystals [14; 15; 16; 17], and space-time dual quantum circuits [18; 19; 20; 21; 22], rely on the fact that the dynamics depend on iterated processes. If this structural similarity is sufficient to import QSP techniques and precisely control many-body quantum systems currently realized in experiments [23], we can expand both our understanding of non-equilibrium dynamics and our capacity to control and manipulate the quantum systems.
The application of QSP protocols in current experimental platforms is difficult as conventional circuit instantiations of QSP protocols rely on highly nonlocal unitaries that are difficult to implement in Noisy-intermediate scale quantum (NISQ) devices. QSP and its multi-qubit extension, quantum singular value transformation (QSVT) [24], rely on strong conditions known as qubitization [25; 2], which ensure that the dynamics of the system can be described as a direct sum of two-dimensional subspaces whose dynamics are summarizable in terms of SU(2) operations. The two conventional methods to impose such a structure rely on the use of highly non-local interactions [26]. In the first method, qubitization [2; 25] can be imposed on the dynamics by implementing a highly non-local partial reflection operation acting on the whole system. In the second method, one uses non-local interactions between the system and a single ancilla to condition the dynamics of the system on the ancilla. Then, the tensor product structure can be used to endow the overall dynamics with the desired behavior. More recently, Refs. [23; 27] propose more natural implementations of QSP protocols. However, these restricted protocols still rely on highly non-local interactions between the system and a single ancillary qubit. Hence, whether the qubitization conditions can be satisfied for the dynamics of an extended system evolving under local dynamics will determine the applicability of QSP to the study of near-term many-body quantum systems. Moreover, there are mathematical challenges in trying to use QSP for multi-qubit systems that are not qubitized and for other Lie groups beyond SU(2). A recent effort in this direction is the development of QSP algorithms for continuous variables described by the SU(1,1) Lie group [28]. Most importantly, QSP is a framework built in the context of finite dimensional vector spaces. Consequently the validity of applying similar techniques to the analysis of infinite dimensional systems is not obvious. We show below that this condition requires us to either simplify how we represent these systems (by identifying underlying symmetries) or substantially alter the basic structure of QSP.
In this paper we apply QSP-inspired techniques to the one-dimensional quantum transverse-field Ising model (TFIM), a condensed matter system which is of general theoretical interest from quantum annealing [7] to space-time dual circuits [18; 21], and one that is routinely realized experimentally
in diverse NISQ platforms [29; 30]. We define QSP sequences for the Onsager algebra [31; 32; 33; 34]. This is an infinite-dimensional Lie algebra underlying the solution of the Ising model that shares some basic traits with the su(2) algebra undergirding conventional QSP. We further determine the conditions under which repeated access to unitary evolutions induced by Hamiltonian terms of the TFIM allows one to implement generic QSP protocols. Lastly, we highlight the power of the proposed QSP sequences by applying them to a wide range of scenarios of current interest, ranging from space-time dual quantum circuits and Hamiltonian engineering to composite pulse sequences in spin systems.
To achieve these results, we rely on two core ingredients. First, we use a Jordan-Wigner mapping [35] between the TFIM and a non-interacting fermionic model [36]. Because TFIM is integrable, the associated fermionic Hamiltonian is a quadratic form in terms of fermionic ladder operators in each momentum sector. Second, unlike the conventional QSP approach that is interested in state evolution, we consider the action of the Hamiltonian evolution operator on the fermionic ladder operators in the Heisenberg picture. The terms in the TFIM Hamiltonian generate SU(2)-like transformations of fermionic operators. These transformations are then cascaded into QSP-like iterative protocols, defined by a set of parameters each assigned for one iteration. We then identify the special points in the parameter space for which the evolution is as expressible as standard QSP acting on the space of fermionic operators.
QSP and its related algorithms are far more flexible than initially considered. Under specially tuned conditions, the evolution of many complex condensed matter systems is succinctly described and controlled by methods that are quite similar to QSP, even when they are evolving under local dynamics. The QSP methodology brings new insights into our understanding of the dynamics of quantum systems and allows us to design novel control sequences to improve the performance of near-term quantum devices. We discuss the application of QSP methods to dual quantum circuits [18; 19; 20; 21; 22], which could be used to define QSP sequences in hybrid quantum circuits composed of unitary operations and measurements[21]. Moreover, we show that the proposed QSP sequences can be used to control the dynamics of single-particle fermionic excitations by engineering their dispersion relation. Similarly, we can use this ability to engineer the single-particle dispersion relation to simulate various spin Hamiltonians which correspond to non-interacting fermionic Hamiltonians.
Our results point towards further challenges for QSP to subsume, as well as avenues toward the utility of QSP protocols in describing locally interacting multi-qubit systems. Unlike in the standard case, QSP in the Heisenberg picture can be easily extended to the non-unitary evolution of the fermionic operators by using the space-time duality [18; 19; 20; 21; 22]. Additionally, QSP-like sequences of SU(2) transformations will allow us to design control sequences for a wider range of experimental scenarios and to strengthen our understanding of iteratively-evolved quantum mechanical systems.
The structure of our paper is as follows. In section II we provide a brief summary of conventional QSP using SU(2) operations. In section III we introduce the Onsager Lie algebra and the Kramers-Wannier duality, and define the QSP sequences in terms of the "seed operators" of this Lie algebra. In section IV we discuss the intimate relation between the Onsager algebra and the Ising model and discuss the physical implementation of QSP in terms of single- and two-qubit operations. In section V we demonstrate that after a Jordan-Wigner transformation, we can obtain simple QSP sequences for fermionic operators in the Heisenberg picture when we work in momentum space. We also discuss the expressivity of QSP in the Heisenberg picture. In addition, in section VI we provide specific examples of QSP sequences using the Onsager algebra in the context of space-time dual quantum systems, Hamiltonian engineering and composite pulse sequences in spin chains. Lastly, we provide concluding remarks and an outlook in section VII.
## II Quantum signal processing (QSP) revisited
In nuclear magnetic resonance (NMR) there exist many composite pulse techniques designed to achieve specific goals, such as the precise control of the dynamics of quantum systems [37; 38; 39; 40; 41] and the reduction of noise. One can think of a sequence of parameterized unitary operations, in analogy to how they are used in NMR, as a means to calculate a response function. Recently, Quantum Signal Processing (QSP) has emerged as a general theory of composite pulse sequences, and has proven itself as a versatile approach to design quantum circuits, ultimately permitting the unification and simplification of most of the known quantum algorithms [1; 2]. In the language of QSP, a sequence of unitaries allows one to process an unknown signal encoded in said unitaries, such that measurement results can depend on said signal in highly-non-linear, near arbitrary ways [2].
We briefly summarize the major takeaways of QSP in terms of the su(2) algebra by first defining the signal operator [2]
\[\hat{W}(x)=e^{\mathrm{i}\frac{\delta}{2}X}=\begin{bmatrix}x&\mathrm{i}\sqrt{1-x^{2}}\\ \mathrm{i}\sqrt{1-x^{2}}&x\end{bmatrix}\,, \tag{1}\]
where \(\delta=-2\cos^{-1}x\) with \(x\in[-1,1]\) while \(X,Y,Z\) are Pauli matrices generating the su(2) Lie algebra. The signal \(\delta\) is processed through a sequence of rotations that do not commute with the signal operator, defined by
\[\hat{S}(\phi_{i})=e^{\mathrm{i}\phi_{i}Z}\,. \tag{2}\]
If the sequence contains \(d+1\) rotations used to process the signal, it is convenient to organize the angles into a vector \(\vec{\phi}=(\phi_{0},\phi_{1},\dots\phi_{d})\). A theorem of QSP establishes that a given QSP sequence parameterization \(\hat{U}_{\vec{\phi}}\) induces a polynomial transformation of \(x\) as follows
\[\hat{U}_{\vec{\phi}}=e^{\mathrm{i}\phi_{0}Z}\prod_{r=1}^{d}\hat{W}(x)e^{\mathrm{i}\phi_{r}Z}=\begin{bmatrix}P(x)&\mathrm{i}Q(x)\sqrt{1-x^{2}}\\ \mathrm{i}Q^{*}(x)\sqrt{1-x^{2}}&P^{*}(x)\end{bmatrix}\,. \tag{3}\]
Further, there is a sequence of rotations \(\vec{\phi}\) for any polynomials \(P(x)\) and \(Q(x)\) satisfying mild requirements [2] on parity and norm. The cornerstone of this result is that, given a function one wishes to apply to \(x\), there exists an efficiently computable sequence of angles \(\vec{\phi}\) encoding a polynomial approximation of it [41].
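As a small numerical illustration of Eqs. (1)-(3), the following Python snippet composes the signal and processing rotations for an arbitrary phase vector and checks two structural facts: for trivial phases the \((0,0)\) entry reduces to the Chebyshev polynomial \(T_{d}(x)\), and in general \(|P(x)|^{2}+(1-x^{2})|Q(x)|^{2}=1\). The snippet builds \(\hat{W}(x)\) directly from the matrix in Eq. (1), so it is agnostic to the sign convention chosen for \(\delta\).

```python
import numpy as np

def W(x):
    """Signal operator, built directly from the matrix in Eq. (1)."""
    s = np.sqrt(1.0 - x**2)
    return np.array([[x, 1j * s], [1j * s, x]])

def S(phi):
    """Processing operator exp(i phi Z) of Eq. (2)."""
    return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])

def qsp_unitary(x, phis):
    """Eq. (3): S(phi_0) W(x) S(phi_1) ... W(x) S(phi_d)."""
    U = S(phis[0])
    for phi in phis[1:]:
        U = U @ W(x) @ S(phi)
    return U

x, d = 0.37, 5
# Trivial phases: the (0,0) entry is the Chebyshev polynomial T_d(x).
U0 = qsp_unitary(x, np.zeros(d + 1))
assert np.isclose(U0[0, 0], np.cos(d * np.arccos(x)))
# Random phases: the row normalization |P|^2 + (1 - x^2)|Q|^2 = 1 always holds.
U = qsp_unitary(x, np.random.default_rng(0).uniform(-np.pi, np.pi, d + 1))
assert np.isclose(abs(U[0, 0])**2 + abs(U[0, 1])**2, 1.0)
```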
In its original form [2], this theorem was defined considering the structure of the su(2) algebra \([X,Y]=2\mathrm{i}Z\), \([Y,Z]=2\mathrm{i}X\), and \([Z,X]=2\mathrm{i}Y\) which is finite dimensional and generates both the signal and signal processing operators belonging to the compact group SU(2). The simple form of the QSP operation sequence \(\hat{U}_{\vec{\phi}}\) is possible due to well-known properties of the Pauli matrices, e.g., \(X^{2}=Y^{2}=Z^{2}=\hat{1}\). Previous works mostly use qubitization to obtain QSP sequences in a many-qubit system by exploiting the SU(2) dynamics within two-dimensional invariant subspaces [2; 25]. However, this procedure either requires controlled versions of \(n\)-qubit unitaries (requiring extra ancillae) or evolutions generated by \(n\)-qubit reflection operators (which have to be highly nonlocal). Hence, it is desirable to find QSP-like schemes that are easy to implement with local interactions.
The question we want to answer in this work is whether QSP sequences can be defined in infinite dimensional Lie algebras [42] such as the Kac-Moody [43; 44] or the Virasoro algebra in conformal field theory [45; 46]. These algebras play an important role in diverse fields ranging from low-energy regimes (low temperatures and long wavelength excitations) in condensed matter physics [47; 48] to high-energy physics [49] and string theory [50].
For concreteness, in this work, we focus on the Onsager algebra appearing in the Ising model, which is an infinite-dimensional algebra of importance in statistical physics and the study of critical phenomena [31; 32; 33; 34]. For instance, this algebra has representations as transfer matrices of the classical 2D Ising model [31]. In the next section, we briefly summarize the basic aspects of the Onsager algebra and provide its representation in terms of the quantum Ising model [36].
## III QSP with the Onsager Lie algebra
In the previous section, we discussed how QSP depends on the su(2) algebra. In this section, we explore an infinite-dimensional algebra known as the Onsager algebra [31; 33; 34], widely used in statistical physics and theory of integrability [33; 34]. The Onsager algebra is defined in terms of operators \(\hat{A}_{n}\) and \(\hat{G}_{n}\), which are recursively generated from "seed" operators \(\hat{A}_{0}\) and \(\hat{A}_{1}\) via the following relations
\[[\hat{A}_{n},\hat{A}_{0}] =4\hat{G}_{n}\] \[[\hat{G}_{1},\hat{A}_{n}] =2(\hat{A}_{n+1}-\hat{A}_{n-1})\;. \tag{4}\]
From these relations it is possible to build the complete structure of the algebra, as follows
\[[\hat{A}_{n},\hat{A}_{m}] =4\hat{G}_{n-m}\] \[[\hat{G}_{n},\hat{A}_{m}] =2(\hat{A}_{m+n}-\hat{A}_{m-n})\] \[[\hat{G}_{n},\hat{G}_{m}] =0\;, \tag{5}\]
where \(\hat{G}_{-n}=-\hat{G}_{n}\). An important aspect of this algebra is that the "seed" operators should satisfy the so called Dolan-Grady conditions [34]
\[[\hat{A}_{0},[\hat{A}_{0},[\hat{A}_{0},\hat{A}_{1}]]] =16[\hat{A}_{0},\hat{A}_{1}]\] \[[\hat{A}_{1},[\hat{A}_{1},[\hat{A}_{1},\hat{A}_{0}]]] =16[\hat{A}_{1},\hat{A}_{0}]\;. \tag{6}\]
These relations reveal a fundamental symmetry of statistical mechanics known as the Kramers-Wannier duality [51; 52; 53], which is related to the theory of the two-dimensional Ising model and the one-dimensional quantum Ising model in a transverse field. More specifically, the duality means that we can get an equivalent theory by exchanging the "seeds" of the algebra as follows: \(\hat{A}_{0}\rightarrow\hat{A}_{1}\) and \(\hat{A}_{1}\rightarrow\hat{A}_{0}\).
Before discussing any particular representation of the Onsager algebra, let us explore the feasibility of defining a QSP sequence using the generators \(\hat{A}_{n}\), as they are the fundamental units used to build the full algebra. As the algebra is constructed in a recursive fashion, it is reasonable to define a QSP using the exponential map \(\exp:\mathcal{G}\to G\), allowing one to map a Lie algebra \(\mathcal{G}\) to a corresponding Lie group [54]. From now on, we will assume the existence of an infinite-dimensional unitary representation of the group \(G\) associated to the Onsager algebra.
Inspired by the definition of QSP in the case of a single qubit, we define here the signal operator
\[\hat{W}^{\mathrm{O}}(\theta)=\exp\left(\mathrm{i}\theta\hat{A}_{1}\right)\;. \tag{7}\]
Correspondingly, let us also define the signal-processing unitary operator
\[\hat{S}^{\mathrm{O}}(\phi_{r})=\exp\left(\mathrm{i}\phi_{r}\hat{A}_{0}\right)\;. \tag{8}\]
Considering the combined action of these two operators, we can define a QSP variant in terms of Onsager generators:
\[\hat{U}^{\mathrm{O}}_{\vec{\phi}}(\theta)=\hat{S}^{\mathrm{O}}(\phi_{0})\prod _{r=1}^{d}\hat{W}^{\mathrm{O}}(\theta)\hat{S}^{\mathrm{O}}(\phi_{r})\;. \tag{9}\]
Furthermore, in contrast to previous works in QSP, the Dolan-Grady conditions [32; 33; 34] allow us to build a "Dual Onsager QSP" sequence
\[\hat{U}^{\mathrm{DO}}_{\vec{\phi}}(\theta)=\hat{S}^{\mathrm{DO}}(\phi_{0}) \prod_{r=1}^{d}\hat{W}^{\mathrm{DO}}(\theta)\hat{S}^{\mathrm{DO}}(\phi_{r})\;. \tag{10}\]
by exchanging \(\hat{A}_{0}\rightarrow\hat{A}_{1}\) and \(\hat{A}_{1}\rightarrow\hat{A}_{0}\) in Eq. 9. Here, \(\hat{W}^{\mathrm{DO}}(\theta)\) and \(\hat{S}^{\mathrm{DO}}(\phi_{r})\) are the dual signal and signal processing operators.
Although the nature of the Onsager algebra is fundamentally different from that of su(2) in standard QSP, this modified QSP sequence still exploits the non-commuting character of the "seed" operators to build up a nontrivial set of physical operations. We can consider a spin representation of the Onsager algebra [34] with "seed" operators \(\hat{A}_{0}=\sum_{j=1}^{N}X_{j}\) and \(\hat{A}_{1}=\sum_{j=1}^{N}Z_{j}Z_{j+1}\) where \(X_{j},Y_{j},Z_{j}\) are Pauli matrices at a given site \(j\) with periodic boundary conditions \(X_{N+1}=X_{1}\), \(Y_{N+1}=\)
\(Y_{1}\), and \(Z_{N+1}=Z_{1}\). In this case, we can explicitly see the nontrivial character of the duality that maps product states (eigenstates of \(\sum_{j=1}^{N}X_{j}\)) to maximally-entangled states (eigenstates of \(\hat{A}_{1}=\sum_{j=1}^{N}Z_{j}Z_{j+1}\)).
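For a small chain, the spin representation of the seeds and the Onsager QSP sequence of Eq. (9) can be checked by brute force. The sketch below builds \(\hat{A}_{0}\) and \(\hat{A}_{1}\) for \(N=4\) sites with periodic boundaries, verifies the Dolan-Grady conditions of Eq. (6) numerically, and assembles the sequence of Eq. (9) by matrix exponentiation; it is purely illustrative and does not scale beyond a few sites.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, j, N):
    """Single-site operator `op` acting on site j (0-indexed) of an N-site chain."""
    return reduce(np.kron, [op if i == j else I2 for i in range(N)])

N = 4                                             # small chain, brute force only
A0 = sum(site_op(X, j, N) for j in range(N))                                # sum_j X_j
A1 = sum(site_op(Z, j, N) @ site_op(Z, (j + 1) % N, N) for j in range(N))   # sum_j Z_j Z_{j+1}

def comm(a, b):
    return a @ b - b @ a

# Dolan-Grady conditions, Eq. (6)
assert np.allclose(comm(A0, comm(A0, comm(A0, A1))), 16 * comm(A0, A1))
assert np.allclose(comm(A1, comm(A1, comm(A1, A0))), 16 * comm(A1, A0))

def onsager_qsp(theta, phis):
    """Eq. (9): S(phi_0), then alternating W(theta) = exp(i theta A1) and S(phi_r)."""
    U = expm(1j * phis[0] * A0)
    for phi in phis[1:]:
        U = U @ expm(1j * theta * A1) @ expm(1j * phi * A0)
    return U

U = onsager_qsp(np.pi / 4, [np.pi / 8, np.pi / 4, np.pi / 4, np.pi / 8])
assert np.allclose(U @ U.conj().T, np.eye(2 ** N))     # the sequence is unitary
```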
## IV The Onsager Lie algebra and the one dimensional quantum Ising model in a transverse field
To implement a many-body version of quantum signal processing, one needs to build a discrete sequence of physical operations that can be interpreted as a program to calculate a desired function. To build such a discrete sequence of operations in a many-body system, we focus here on a time-dependent one-dimensional quantum Ising model [36]
\[\hat{H}(t)=-\hbar g(t)\sum_{j=1}^{N}Z_{j}-\hbar J(t)\sum_{j=1}^{N}X_{j}X_{j+1}\;, \tag{11}\]
where \(g(t)\) is a global time dependent transverse field while \(J(t)\) is a time-dependent interaction strength. At this stage it is important to emphasize that our approach requires some knowledge of \(g(t)\) and \(J(t)\) and, in terms of a particular implementation, it requires controllability of these parameters. Recent experiments [29; 30] demonstrate the high degree of control of the parameters \(g(t)\) and \(J(t)\) using arrays of superconducting qubits. In this way, we can build discrete single- and two-qubit operations by modulating the parameters \(g(t)\) and \(J(t)\), respectively.
The crucial point of the theory of the quantum Ising model is that the Hamiltonian Eq. (11) is an integrable model built in terms of generators of the Onsager algebra [34].
With these elements at hand, we can define a manybody QSP sequence as follows
\[\hat{U}_{\vec{\phi}}(\theta)=e^{\mathrm{i}\phi_{0}\sum_{j=1}^{N}Z_{j}}\prod_{r=1}^{d}e^{\mathrm{i}\theta\sum_{j=1}^{N}X_{j}X_{j+1}}e^{\mathrm{i}\phi_{r}\sum_{j=1}^{N}Z_{j}}\;. \tag{12}\]
Now that we have established the relation between the Ising model and the Onsager algebra, we can explore the physical meaning of the duality and understand its nontrivial character. To do this, let us consider the time independent case \(g(t)=g_{0}\) and \(J(t)=J_{0}\). When the transverse field strength is much stronger than the spin interaction, the system is in the paramagnetic phase. In the opposite regime, the system is in the ferromagnetic phase, which is characterized by long-range correlations between the spins. What the Kramers-Wannier duality does is to exchange the role of the two terms, giving us the dual Hamiltonian [53; 55]
\[\hat{H}^{\mathrm{D}}(t)=-\hbar g(t)\sum_{j=1}^{N}\tilde{X}_{j}\tilde{X}_{j+1} -\hbar J(t)\sum_{j=1}^{N}\tilde{Z}_{j}\;, \tag{13}\]
where \(\tilde{X}_{j},\tilde{Y}_{j},\tilde{Z}_{j}\) are Pauli matrices in the dual lattice. Geometrically, this duality can be understood as replacing links by nodes and nodes by links in the chain [53]. At the critical point \(g_{0}=J_{0}\), the system is self-dual, as the Hamiltonian looks the same both in the original and dual representations. This is not only a mathematical curiosity. In fact, as stated before, the duality can be interpreted as a symmetry in statistical mechanics where the self-dual point is a quantum critical point of the model [36]. Further, the quantum Ising chain can be mapped to the two-dimensional classical Ising model. The quantum critical point naturally maps to the critical temperature at which the classical phase transition occurs in the classical 2D Ising model [31; 36; 51].
Next, by using the Kramers-Wannier duality, we can define the dual QSP sequence that exchanges the role of signal and signal processing operators
\[\hat{U}_{\vec{\phi}}^{\mathrm{D}}(\theta)=e^{\mathrm{i}\phi_{0}\sum_{j=1}^{N}\tilde{X}_{j}\tilde{X}_{j+1}}\prod_{r=1}^{d}e^{\mathrm{i}\theta\sum_{j=1}^{N}\tilde{Z}_{j}}e^{\mathrm{i}\phi_{r}\sum_{j=1}^{N}\tilde{X}_{j}\tilde{X}_{j+1}}\;. \tag{14}\]
Although this expression looks fairly simple, it is highly nontrivial, due to the non-commuting character of the signal and signal-processing operators. Moreover, there is an operational relation between the original and dual quantum circuits depicted in Fig. 1 c) and d), which is given by
\[\hat{U}_{\vec{\phi}}^{\mathrm{D}}(\theta)=\hat{U}_{-\vec{\phi}}^{\dagger}(- \theta)\;. \tag{15}\]
Figure 1: QSP with the Ising chain and Kramers-Wannier duality. Here a) and b) illustrate the Ising chain and its dual. Moreover, c) and d) depict the corresponding quantum circuits to implement the QSP sequence based on the Onsager algebra. Under the duality transformation, lattice sites map to links in the dual lattice and vice-versa. The dashed lines in a) and b) show the links and sites of the dual and original lattice, respectively. In terms of a practical implementation of our ideas in NISQ devices, the lattice sites and bonds in a) and b) represent the single- and two-qubit gates in c) and d), respectively.

Next, it is important to discuss the experimental feasibility of our proposal. A recent experiment [29] implemented a
spin-spin interaction term of the form \(\theta\sum_{j}Z_{j}Z_{j+1}\) and the transverse field \(\phi\sum_{j}X_{j}\). In their experiment, they chose values of the parameters such that \(\theta\in[0.5\pi,1.5\pi]\) and \(\phi\in[-\pi,\pi]\). Our model can be exactly mapped to the model realized experimentally by using spin rotations.
Another important point that we want to emphasize is that so far, as the Onsager algebra is infinite-dimensional, the algebraic structure of the problem is not related to the su(2) algebra used in the case of the single-qubit QSP. In the next section, we extend the notion of QSP sequence at the level of the operators and then map the system to a fermionic representation. This allows us to simplify the complexity of the problem.
## V Jordan-Wigner transformation and QSP in the Heisenberg picture
In this section, we briefly summarize how to use tools from the theory of the TFIM to effectively reduce the dynamics of the model to a pseudo-spin representation in the Heisenberg picture. This will enable us to work using the su(2) algebra.
One of the most interesting aspects of the one-dimensional quantum Ising model is that it can be mapped to a system of non-interacting fermions described by a quadratic Hamiltonian [35; 36]. The transformation that allows us to do this is a non-local mapping known as the Jordan-Wigner (JW) transformation [35]. By working in momentum space, one can see that the Hamiltonian creates pairs of excitations with opposite momenta, which is known as a P-wave superconductor [56]. This effectively allows us to decompose the dynamics in terms of independent two-level systems in the particle hole basis [36].
### Bogoliubov de Gennes Hamiltonian and pseudo-spin representation
After applying the JW transformation and the discrete Fourier transformation \(\hat{f}_{j}=\frac{e^{\mathrm{i}\hat{\tau}}}{\sqrt{N}}\sum_{k}\hat{F}_{k}e^{ \mathrm{i}kj}\) to the Ising model in Eq. (11), we obtain a fermionic Hamiltonian [36; 57],
\[\hat{H}(t)=\sum_{k\geq 0}\hat{\Psi}_{k}^{\dagger}\mathbf{H}_{k}\hat{\Psi}_{k}\, \tag{16}\]
where \(\hat{\Psi}_{k}^{\dagger}=(\hat{F}_{k}^{\dagger},\hat{F}_{-k})\). In appendix A we provide a detailed derivation of Eq. (16). The matrix representation
\[\mathbf{H}_{k}=2\hbar[g(t)-J(t)\cos k]\sigma_{z}+2\hbar J(t)\sin k\ \sigma_{x} \tag{17}\]
of the fermionic quadratic form is known as the Bogoliubov de Gennes Hamiltonian and describes a one-dimensional P-wave superconductor [56]. Here \(\sigma_{x},\sigma_{y}\), and \(\sigma_{z}\) are Pauli matrices in the particle-hole basis. Importantly, as the Hamiltonian is quadratic the Heisenberg equations of motion are linear and can be written in terms of the entries of the Bogoliubov de Gennes Hamiltonian as follows
\[\mathrm{i}\frac{d}{dt}\begin{bmatrix}\hat{F}_{k}\\ \hat{F}_{-k}^{\dagger}\end{bmatrix}=\begin{bmatrix}2(g(t)-J(t)\cos k)&2J(t) \sin k\\ 2J(t)\sin k&-2(g(t)-J(t)\cos k)\end{bmatrix}\begin{bmatrix}\hat{F}_{k}\\ \hat{F}_{-k}^{\dagger}\end{bmatrix}\, \tag{18}\]
which has a general solution \(\hat{\Psi}_{k}(t)=\mathbf{U}_{k}(t)\cdot\hat{\Psi}_{k}(0)\), where
\[\mathbf{U}_{k}(t)=\begin{bmatrix}\mathcal{U}_{k}(t)&\mathcal{V}_{k}^{*}(t)\\ \mathcal{V}_{k}(t)&\mathcal{U}_{k}^{*}(t)\end{bmatrix} \tag{19}\]
is a propagator for the operators in the Heisenberg picture [57]. In appendix B we provide a detailed explanation of the relation between the evolution of the fermionic operators in the Heisenberg picture and the explicit mapping to spin states in the Schrödinger picture.
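For time-independent parameters, the propagator of Eq. (19) can be obtained directly by exponentiating the Bogoliubov de Gennes Hamiltonian of Eq. (17). The following sketch (with \(\hbar=1\) and illustrative values of \(g\), \(J\), \(t\), \(k\)) solves the linear equations of motion (18) for a single momentum mode.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bdg_hamiltonian(k, g, J):
    """Eq. (17) for constant g and J (hbar = 1)."""
    return 2 * (g - J * np.cos(k)) * sz + 2 * J * np.sin(k) * sx

def heisenberg_propagator(k, g, J, t):
    """U_k(t) of Eq. (19), obtained by integrating the linear system (18)."""
    return expm(-1j * bdg_hamiltonian(k, g, J) * t)

# Example: Bogoliubov coefficients for one momentum mode (illustrative parameters).
Uk = heisenberg_propagator(k=0.8, g=1.0, J=0.7, t=2.0)
print("U_k(t) =", Uk[0, 0], "  V_k(t) =", Uk[1, 0])
assert np.isclose(abs(Uk[0, 0])**2 + abs(Uk[1, 0])**2, 1.0)   # unitarity of the mode
```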
### QSP for fermionic operators in the Heisenberg picture
At the formal level, now we can use the propagator of the fermionic operators in Eq. (19) to do QSP in the Heisenberg picture. The advantage that we have of working in this framework is that we effectively reduce the problem of the infinite-dimensional Onsager algebra to an effective su(2) algebra in the Heisenberg picture. In fact, from the general QSP protocol defined in Eq. (12), we can construct a QSP protocol in the Heisenberg picture by using the Bogoliubov de Gennes Hamiltonian in Eq. (17) as follows
\[\mathbf{U}_{k,\vec{\phi}}(\theta)=e^{-2\mathrm{i}\phi_{0}\sigma_{z}}\prod_{r=1}^{d}e^{2\mathrm{i}\theta(\sigma_{z}\cos k-\sigma_{x}\sin k)}e^{-2\mathrm{i}\phi_{r}\sigma_{z}}. \tag{20}\]
This iterative gate sequence resembles the conventional QSP protocol. However, in order to use the conventional QSP methods to design and analyze the action of the gate sequence in the fermionic mode space, we need to identify the signal and processing unitaries [1] associated with the proposed gate sequence.
It is worth mentioning that the Kramers-Wannier duality also has a representation in terms of the Bogoliubov de Gennes Hamiltonian in Eq. (17). We can show that the dual QSP in Eq. (14) is obtained by exchanging the order of the operations and the roles of the parameters \(\phi_{r}\) and \(\theta\) in Eq. (20), as follows
\[\mathbf{U}_{k,\vec{\phi}}^{\mathrm{D}}(\theta)=e^{2\mathrm{i}\phi_{0}(\sigma_{z}\cos k-\sigma_{x}\sin k)}\prod_{r=1}^{d}e^{-2\mathrm{i}\theta\sigma_{z}}e^{2\mathrm{i}\phi_{r}(\sigma_{z}\cos k-\sigma_{x}\sin k)}. \tag{21}\]
The Kramers-Wannier duality becomes extremely simple in the Heisenberg picture when we use the particle-hole basis. In fact, the QSP protocols in Eqs. (21) and (20) are related by the combined action of a rotation and complex conjugation, as follows
\[\mathbf{U}_{k,\vec{\phi}}^{\mathrm{D}}(\theta)=e^{-\mathrm{i}\frac{k}{2}\sigma_{y}}\mathbf{U}_{-k,\vec{\phi}}^{*}(\theta)e^{\mathrm{i}\frac{k}{2}\sigma_{y}}. \tag{22}\]
This relation resembles Eq. (15) for the quantum circuits shown in Fig. 1.
### Expressivity of QSP in the Heisenberg picture
The main difference between the gate sequence in Eq. (20) and the usual qubitization/QSP setup is that in the proposed scheme, the rotation axes of the two single-qubit rotations in each iteration are not orthogonal to one another. Moreover, the angle between the two rotation axes depends on the momentum \(k\) of the fermionic mode. Consequently, the identification of the signal and processing unitaries is not immediate. However, this problem can be resolved by noticing the following identity for the \(k\)-dependent generator of SU(2) rotations
\[e^{2\mathrm{i}\theta(\sigma_{z}\cos k-\sigma_{x}\sin k)}=e^{\mathrm{i}\frac{\pi}{4}\sigma_{z}}e^{-\mathrm{i}\frac{k}{2}\sigma_{x}}e^{2\mathrm{i}\theta\sigma_{z}}e^{\mathrm{i}\frac{k}{2}\sigma_{x}}e^{-\mathrm{i}\frac{\pi}{4}\sigma_{z}}\,. \tag{23}\]
From this identity we obtain the QSP sequence
\[\mathbf{U}_{k,\vec{\phi}}(\theta)=e^{\mathrm{i}(\pi/4-2\phi_{0})\sigma_{z}}\left(\prod_{r=1}^{d}e^{-\mathrm{i}\frac{k}{2}\sigma_{x}}e^{2\mathrm{i}\theta\sigma_{z}}e^{\mathrm{i}\frac{k}{2}\sigma_{x}}e^{-2\mathrm{i}\phi_{r}\sigma_{z}}\right)e^{-\mathrm{i}\frac{\pi}{4}\sigma_{z}}\,. \tag{24}\]
The sequence in parentheses is identical to the QSVT scheme in Ref. [24], except that the phase sequence is constrained by \(\theta\). The data processed with QSP are encoded in the projected unitary
\[|0\rangle_{k}\langle 0|_{k}e^{\mathrm{i}\frac{k}{2}\sigma_{x}}|0\rangle_{k}\langle 0|_{k}=\cos{(k/2)}|0\rangle_{k}\langle 0|_{k}. \tag{25}\]
The achievable set of polynomial functions of \(\cos{(k)}\) using the constrained phase sequence is smaller than that of standard QSVT. First, it is clear that only even parity functions of the signal can be implemented. Otherwise, the constraints seem to be not very strong.
We first show that when \(\theta=\pi/4\), the evolution of the fermionic creation and annihilation operators for each momentum sector can be simplified. To obtain the desired simplification, first consider taking \(\sigma_{z}\) as the generator of the processing unitary. Then the block-encoded signal is \(\cos{(2\theta)}+\mathrm{i}\cos{(k)}\sin{(2\theta)}\) because
\[e^{2\mathrm{i}\theta(\sigma_{z}\cos k-\sigma_{x}\sin k)}=\cos(2 \theta)\hat{1}+\mathrm{i}(\sigma_{z}\cos k-\sigma_{x}\sin k)\sin(2\theta)\,. \tag{26}\]
Crucially, the block encoded signal is \(\cos{(k)}\) when \(\theta=\pi/4\). Physically, this value allows to create maximally-entangled states in arrays of qubits via the Ising interaction [58]. In terms of experimental implementations, this value of \(\theta\) is within reach in currently available arrays of superconducting qubits [29].
Next, we discuss in more detail the special case mentioned above. By inspecting Eq. (24), we see that setting \(\theta=\pi/4\) yields the QSP sequence in the canonical form
\[\mathbf{V}_{k,\vec{\Phi}} =e^{i(\pi/4-2\phi_{0})\sigma_{z}}\left(\prod_{r=1}^{d}e^{-ik\sigma_{x}}e^{i(\pi/2-2\phi_{r})\sigma_{z}}\right)e^{-i\frac{\pi}{4}\sigma_{z}}\] \[=e^{i\Phi_{0}\sigma_{z}}\prod_{r=1}^{d}e^{-ik\sigma_{x}}e^{i\Phi_{r}\sigma_{z}}\,, \tag{27}\]
where the signal operator is a rotation along \(x\)-axis with an angle proportional to the quasimomentum \(k\). The signal processing can be accomplished through a sequence of rotations along the \(z\)-axis by new angles defined as
\[\vec{\Phi}=(\Phi_{0},\Phi_{1},\Phi_{2},\ldots,\Phi_{d-1},\Phi_{d})\,, \tag{28}\]
where this sequence is obtained by defining the endpoint phases \(\Phi_{0}=\pi/4-2\phi_{0}\) and \(\Phi_{d}=\pi/4-2\phi_{d}\) and \(\Phi_{r}=\pi/2-2\phi_{r}\) for \(r=1,\ldots,d-1\), where \(\phi_{r}\) are the phases of the original sequence \(\vec{\phi}=(\phi_{0},\phi_{1},\ldots\phi_{d})\).
For convenience, from now on in our paper we use the notation \(\mathbf{V}_{k,\vec{\Phi}}=\mathbf{U}_{k,\vec{\phi}}(\pi/4)\) to distinguish this special unitary. We will also use \(\hat{V}^{\mathrm{O}}_{\vec{\phi}}=\hat{U}_{\vec{\phi}}(\pi/4)\) to denote the corresponding QSP sequence in terms of the Onsager algebra. Later on, we will provide examples to highlight the importance of \(\mathbf{V}_{k,\vec{\Phi}}\) for applications.
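The angle translation of Eq. (28) and the equivalence \(\mathbf{V}_{k,\vec{\Phi}}=\mathbf{U}_{k,\vec{\phi}}(\pi/4)\) can be verified numerically. The sketch below implements Eqs. (20) and (27) as written above (so it assumes those sign conventions) and checks that the two sequences coincide for random phases.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def U_heisenberg(k, theta, phis):
    """Momentum-space QSP sequence of Eq. (20)."""
    axis = sz * np.cos(k) - sx * np.sin(k)
    U = expm(-2j * phis[0] * sz)
    for phi in phis[1:]:
        U = U @ expm(2j * theta * axis) @ expm(-2j * phi * sz)
    return U

def phi_to_Phi(phis):
    """Angle translation of Eq. (28): interior pi/2 - 2 phi, endpoints pi/4 - 2 phi."""
    Phi = np.pi / 2 - 2 * np.asarray(phis, dtype=float)
    Phi[0] -= np.pi / 4
    Phi[-1] -= np.pi / 4
    return Phi

def V_canonical(k, Phis):
    """Canonical form of Eq. (27): signal exp(-i k sigma_x), processing Z-rotations."""
    V = expm(1j * Phis[0] * sz)
    for Phi in Phis[1:]:
        V = V @ expm(-1j * k * sx) @ expm(1j * Phi * sz)
    return V

rng = np.random.default_rng(1)
for k in (0.3, 1.3, 2.5):
    phis = rng.uniform(-np.pi, np.pi, 4)          # d = 3 iterations
    assert np.allclose(U_heisenberg(k, np.pi / 4, phis),
                       V_canonical(k, phi_to_Phi(phis)))
```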
As the signal and signal processing operators are rotations along orthogonal axes, we can use standard techniques and exploit Eq. (3) to obtain the QSP sequence for the angles \(\vec{\Phi}\)
\[\mathbf{V}_{k,\vec{\Phi}}=\begin{bmatrix}P(x_{k})&\mathrm{i}Q(x_{k})\sqrt{1-x_{k}^{2}}\\ \mathrm{i}Q^{*}(x_{k})\sqrt{1-x_{k}^{2}}&P^{*}(x_{k})\end{bmatrix}\,. \tag{29}\]
From this it follows that any (bounded, definite parity) polynomial of \(x_{k}=\cos{(k)}\) can be implemented. In turn, for \(\theta=\pi/4\), the QSP protocol achieves an optimal expressivity for all the values of \(k\) because the axes for the signal and signal processing rotations are orthogonal. Moreover, as the QSP sequence Eq. (27) and its dual in Eq. (21) are related via Eq. (22), the dual QSP sequence also exhibits a high expressivity for \(\theta=\pi/4\). This follows from Eq. (22) because the Y-rotations can be further decomposed into Z-conjugated X rotations according to \(k\), and this means that the dual protocols have the same form as the original protocols, with the addition of one additional iterate (signal oracle). This asymmetry is due to the fact that the general QSP protocol has \(d\) signal operators and \(d+1\) controllable phases.
**Remark**.: _QSP is mainly a statement about the mathematical form of a product of parameterized \(\mathrm{SU}(2)\) operations. Usually we denote the signal by \(\theta\), and consider it an unknown [1; 2], but whenever an unknown appears and parameterizes such a product, it can be treated in place of \(\theta\). In the QSP sequence \(\mathbf{U}_{k,\vec{\phi}}(\theta)\) of Eq. (24), a new variable (the momentum \(k\)) appears given our problem statement. As we have multiple choices for the signal, in some situations it makes sense to tune the (known, and thus controllable) \(\theta\) dependence, effectively removing it by setting \(\theta=\pi/4\), and leaving the momentum to be processed within each subspace labelled by \(k\). In the general case, one can still use Eq. (24) when \(\theta\) is unknown, but one has to determine the expressivity of a two-variable QSP sequence with non-orthogonal axes. In appendix C we discuss a modified QSP sequence for arbitrary \(\theta\) and \(k\) in such a way that the signal and signal processing operations are rotations along orthogonal axes. In contrast to the usual QSP, the axis of the signal operator is defined by \(k\) and \(\theta\) in a nonlinear fashion. This is of course an interesting problem by itself, but it is beyond the scope of our current work._
## VI Applications and examples of QSP with the Onsager algebra
At this stage it is important to consider some particular examples to see how QSP works in the Heisenberg picture by using the QSP sequence \(\mathbf{V}_{k,\vec{\Phi}}\) of Eq. (27) in momentum space with angles \(\vec{\Phi}\). As we discussed above, in some cases, it is useful to fix \(\theta=\pi/4\) to treat the momentum \(k\) as the signal to be processed. This particular value of \(\theta\) is extremely important for applications as it allows the maximum expressivity for QSP sequences in momentum space. We will start with an example where we discuss the trivial QSP sequence. In the second example, we discuss QSP sequences for the Onsager algebra and the relation to space-time dual quantum circuits, which are relevant in quantum information processing and in the study of quantum signatures of manybody chaos [18; 19; 20; 21; 22]. The next two examples are related to the use of our scheme for quantum simulation of Hamiltonians. The last example reframes a well-known protocol in NMR to synthesize a BB1 sequence [37] for the Onsager algebra [31; 34].
### Trivial QSP sequence in momentum space
The simplest example of a QSP sequence can be obtained by considering \(\vec{\Phi}=(0,0,0)\) in Eq. (27). This gives us the trivial QSP sequence in momentum space
\[\mathbf{V}_{k,\vec{\Phi}}=e^{-i2k\sigma_{x}}\,. \tag{30}\]
From this, we obtain the associated polynomial transformation of the input \(P(x_{k})=2x_{k}^{2}-1\). Similarly, for \(\vec{\Phi}=(0,0,0,0)\) we obtain \(P(x_{k})=4x_{k}^{3}-3x_{k}\). For a trivial protocol with length \(d\), one can show that the resulting polynomial transformation is given by the Chebyshev polynomials of the first kind \(P(x_{k})=T_{d}(x_{k})\) as in Ref. [2]. The purpose of this example is to show the versatility of Eq. (27). As this has the canonical form of the QSP known in the literature, we can use it to analyze QSP sequences with rotations \(\vec{\Phi}\) in momentum space. Then, we can translate those back into angles \(\vec{\phi}\) defining the corresponding QSP sequence for the Onsager algebra. For example, in the case of \(\vec{\Phi}=(0,0,0,0)\), the original angles are given by
\[\vec{\phi}=(\pi/8,\pi/4,\pi/4,\pi/8)\, \tag{31}\]
and define the QSP sequence \(\hat{V}^{\rm O}_{\vec{\phi}}\) for the Onsager algebra [see Eq. (12)].
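A quick numerical check of this example: feeding the Onsager-algebra angles of Eq. (31) into the momentum-space sequence of Eq. (20) at \(\theta=\pi/4\) indeed reproduces \(P(x_{k})=T_{3}(x_{k})\) as the \((0,0)\) entry. The values of \(k\) below are arbitrary test points.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def U_heisenberg(k, theta, phis):
    """Momentum-space QSP sequence of Eq. (20)."""
    axis = sz * np.cos(k) - sx * np.sin(k)
    U = expm(-2j * phis[0] * sz)
    for phi in phis[1:]:
        U = U @ expm(2j * theta * axis) @ expm(-2j * phi * sz)
    return U

phis = [np.pi / 8, np.pi / 4, np.pi / 4, np.pi / 8]          # the angles of Eq. (31)
for k in np.linspace(0.1, 3.0, 7):
    P = U_heisenberg(k, np.pi / 4, phis)[0, 0]
    assert np.isclose(P, 4 * np.cos(k)**3 - 3 * np.cos(k))   # P(x_k) = T_3(x_k)
```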
### Space-time rotation and dual quantum circuits
Now let us consider a more involved example related to the theory of space-time dual quantum circuits. Motivated by a recent work [21], we consider a dual quantum circuit in the absence of disorder. Recently, space-time duality has attracted much attention, with connections to topics ranging from quantum signatures of many-body chaos [18; 22] to dynamical quantum phase transitions [59]. One of the most appealing aspects of this theory is that it allows one to obtain analytical results even when the dynamics are ergodic [20].
To make the connection between the theory of space-time dual quantum circuits and QSP for the Onsager algebra, we can consider the sequence of operations in Eq. (12) for fixed \(\theta=\pi/4\) and \(\vec{\phi}=[0,\frac{\pi}{2}(1-2\epsilon),\frac{\pi}{2}(1-2\epsilon),\dots,\frac{\pi}{2}(1-2\epsilon)]\), where \(\epsilon\) is an error in the rotation angle [see Eq. (28)]. The QSP sequence with \(d\) time steps for a lattice with \(N\) sites reads
\[\hat{V}^{\rm O}_{\vec{\phi}}=\prod_{r=1}^{d}e^{\mathrm{i}\frac{\pi}{4}\sum_{j=1}^{N}X_{j}X_{j+1}}e^{\mathrm{i}\phi_{r}\sum_{j=1}^{N}Z_{j}} \tag{32}\]
with \(\phi_{r}=\frac{\pi}{2}(1-2\epsilon)\).
To build a space-time dual QSP, we change the roles of space and time. In other words, the dual QSP sequence corresponds to \(N\) iterations in time of a Hamiltonian acting on \(d\) sites in space, as follows
\[\hat{V}^{\rm DST}_{\vec{\phi}}=\prod_{r=1}^{N}e^{\mathrm{i}\widetilde{\phi}_{r}\sum_{j=1}^{d}\tilde{Z}_{j}}e^{\mathrm{i}\widetilde{\theta}\sum_{j=1}^{d}\tilde{X}_{j}\tilde{X}_{j+1}}\,, \tag{33}\]
where \(\widetilde{\phi}_{r}=-\pi/4\) and \(\widetilde{\theta}=-\pi/4+\mathrm{i}/2\log\{\tan[\pi/2(1-2\epsilon)]\}\)[21]. We note that this has the same form as the dual Onsager QSP sequence in Eq. (14). The main difference is that the Kramers-Wannier duality exchanges the roles of the signal and signal processing sequences, while keeping the evolution unitary [53]. Under the space-time duality, however, the QSP sequence is not unitary. In terms of the parameter \(\epsilon\), there is a special value \(\epsilon=1/4\) for which the dual quantum circuit is unitary and \(\widetilde{\theta}=-\pi/4\).
Next, let us explore some properties of the QSP sequence in Eq. (32) by working in quasimomentum space
\[\mathbf{V}_{k,\vec{\Phi}}=\prod_{r=1}^{d}e^{\mathrm{i}\frac{\pi}{2}(\sigma_{z}\cos k-\sigma_{x}\sin k)}e^{-\mathrm{i}\pi(1-2\epsilon)\sigma_{z}}\,, \tag{34}\]
As the QSP protocols involve constant phases, at each time step the evolution is given as a product of two unitaries. Thus, by using Floquet theory, we can extract the most relevant information from the evolution operator in one period of the sequence, defining the Floquet operator
\[\mathcal{F}_{k}=e^{\mathrm{i}\frac{\pi}{2}(\sigma_{z}\cos k-\sigma_{x}\sin k)}e^{-\mathrm{i}\pi(1-2\epsilon)\sigma_{z}}. \tag{35}\]
The eigenvalues of the Floquet operator are \(\lambda_{k}=\exp(-\mathrm{i}\mu_{k})\) and \(\mu_{k}\) are the Floquet exponents. For example, when \(k=0,\pi\), the Floquet exponents are \(\mu_{0}=\pi-2\pi|\epsilon-1/4|\) and \(\mu_{\pi}=2\pi|\epsilon-1/4|\). When \(\epsilon_{c}=1/4\), there is a \(\pi\)-energy gap for \(k=0\) and a zero energy gap for the mode \(k=\pm\pi\), indicating a quantum critical point at \(\epsilon_{c}\) that is the self-dual point under space-time duality [21]. For quasimomentum \(k=\pi/2\) the Floquet exponent is independent of the error and is given by \(\mu_{\pi/2}=\pi/2\). In Appendix D we discuss the QSP sequence for \(k=\pi/2\). Figure 2 a) shows the Floquet exponents \(\mu_{k}\) as a function of the quasimomentum and the error. From this we can see the \(0\)- and \(\pi\)-gaps indicating the self-dual point.
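The Floquet exponents can be evaluated directly from the trace of the \(2\times 2\) Floquet operator, since an SU(2) matrix has eigenvalues \(e^{\mp\mathrm{i}\mu_{k}}\) and a real trace \(2\cos\mu_{k}\). The sketch below uses the form of Eq. (35) as written above; note that the precise sign and axis conventions determine whether the closed gap at \(\epsilon=1/4\) sits at \(k=0\) or at \(k=\pm\pi\), so the assertions only test convention-independent features (the error-independent value \(\mu_{\pi/2}=\pi/2\), and the simultaneous presence of a closed gap and a \(\pi\)-gap across the zone at \(\epsilon=1/4\)).

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def floquet_exponent(k, eps):
    """mu_k from one period of the sequence, cf. Eq. (35): cos(mu_k) = Tr(F_k)/2."""
    signal = expm(1j * (np.pi / 2) * (sz * np.cos(k) - sx * np.sin(k)))
    processing = expm(-1j * np.pi * (1 - 2 * eps) * sz)
    F = signal @ processing
    return np.arccos(np.clip(np.trace(F).real / 2, -1.0, 1.0))

# Error-independent exponent at k = pi/2.
assert all(np.isclose(floquet_exponent(np.pi / 2, e), np.pi / 2)
           for e in (0.0, 0.1, 0.3, 0.5))
# At eps = 1/4 the spectrum hosts both a closed gap and a pi-gap across the zone.
mus = [floquet_exponent(k, 0.25) for k in np.linspace(0, np.pi, 201)]
assert np.isclose(min(mus), 0.0, atol=1e-6) and np.isclose(max(mus), np.pi, atol=1e-6)
```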
To obtain more information about the space-time dual QSP sequence in Eq. (33), we consider the momentum representation
\[\mathbf{V}_{k,\vec{\Phi}}^{\rm DST}=\prod_{r=1}^{N}e^{\mathrm{i}\frac{\pi}{2}\sigma_{z}}e^{2\mathrm{i}\widetilde{\theta}(\sigma_{z}\cos k-\sigma_{x}\sin k)}\;. \tag{36}\]
Similarly to the QSP sequence in Eq. (34) discussed above, due to the periodicity, it is enough to study spectral properties of the non-unitary version of the Floquet operator
\[\mathcal{F}_{k}^{\rm DST}=e^{\mathrm{i}\frac{\pi}{2}\sigma_{z}}e^{2\mathrm{i}\widetilde{\theta}(\sigma_{z}\cos k-\sigma_{x}\sin k)}\;. \tag{37}\]
In contrast to its unitary version, the eigenvalues \(\lambda_{k}^{\rm DST}\) of the Floquet operator \(\mathcal{F}_{k}^{\rm DST}\) are not restricted to lie along the unit circle. In fact, depending on the momentum \(k\) and the error \(\epsilon\), they may satisfy \(|\lambda_{k}^{\rm DST}|<1\) or \(|\lambda_{k}^{\rm DST}|>1\). Figure 2 b) depicts a region plot in the \(k-\epsilon\) parameter space where the white region is determined by the condition of unitarity \(|\lambda_{k}^{\rm DST}|=1\). Interestingly, as discussed above, the momentum \(|k|=\pi/2\) lies in the white region for all values of the error, and there is a correspondence between the \(0\)- and \(\pi\)-gaps in Fig. 2 a) and the behavior of the line \(\epsilon=1/4\). In fact, in the shaded region, each eigenvalue satisfying \(|\lambda_{k}^{\rm DST}|<1\) has an exact partner with \(|\lambda_{k}^{\rm DST}|>1\). That being said, some modes are amplified [60] and others are suppressed for parameters within the shaded region in Fig. 2 b). These spectral properties have important consequences. For example, due to long-lived quasiparticle pairs with purely real energy, the dual quantum circuit reaches a steady state with volume-law entanglement [21].
### Design of pulse sequences to simulate the response under a target spin Hamiltonian
In the previous sections, we have been focusing on describing the general formalism for QSP in terms of the spin representation of the Onsager algebra and in the Heisenberg picture. In this subsection, we will provide an example of a possible application of QSP to simulate a Hamiltonian by designing a pulse sequence. With this aim, let us consider a simple target Hamiltonian of the form
\[\hat{H}_{\rm Target}(t) = -\hbar g_{0}\sum_{j=1}^{N}Z_{j}-\hbar J_{x}\sum_{j=1}^{N}X_{j}Z_ {j+1}Z_{j+2}Z_{j+3}X_{j+4} \tag{38}\] \[-\hbar J_{y}\sum_{j=1}^{N}Y_{j}Z_{j+1}Z_{j+2}Z_{j+3}Y_{j+4}\;.\]
It is convenient to introduce the notation \(J_{x}=J(1+\gamma)/2\) and \(J_{y}=J(1-\gamma)/2\), where \(\gamma\) is a dimensionless parameter characterizing the anisotropy of the interaction. Certainly, it is a nontrivial task to find a sequence of rotations \(\vec{\phi}\) in such a way that the resulting unitary \(\hat{U}_{\vec{\phi}}(\theta)\) from the QSP sequence in the spin representation of Eq. (12) is close to our desired target Hamiltonian for arbitrary \(\theta\). As the algebra is infinite dimensional in the limit \(N\to\infty\), the number of commutators required makes the procedure impractical.
Figure 2: Spectral properties of a QSP sequence and its space-time dual. a) Depicts the Floquet exponent \(\mu_{k}\) of the iterator as a function of the error and the quasimomentum \(k\). There are both \(0\)- and \(\pi\)-gaps, and the Floquet exponents are independent of the error for \(k=\pi/2\), as we predicted using QSP methods. The critical point at \(\epsilon=1/4\) is ensured by space-time duality (see main text). b) Depicts the phase diagram determining the features of the spectrum of the space-time dual QSP sequence. For parameters within the white region the eigenvalues satisfy the condition \(|\lambda_{k}^{\rm DST}|=1\) and the evolution is unitary. For the self-dual point \(\epsilon=1/4\) of the space-time dual quantum circuit there is a singularity at momenta \(k=0\) and \(|k|=\pi\), in correspondence with the gapless excitation spectrum shown in panel a).
show below, one can obtain an enormous simplification of the problem in the Heisenberg picture in the fermionic representation when we set \(\theta=\pi/4\) and work with the QSP sequences in Eqs. (27) and (29).
By applying the Jordan-Wigner transformation and the discrete Fourier transformation of the fermionic operators as we did in the case of the Ising chain, we can obtain the Bogoliubov de Gennes Hamiltonian
\[\mathbf{H}_{k}^{\rm Target}=2\hbar[g_{0}-J\cos 4k]\sigma_{z}+2\hbar J\gamma\sin 4 k\sigma_{x}. \tag{39}\]
corresponding to Eq. (38). We can rewrite this in the form \(\mathbf{H}_{k}^{\rm Target}=\hbar\Omega_{k}\mathbf{n}_{k}\cdot\mathbf{\sigma}\), where \(\mathbf{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})\) and
\[\Omega_{k} =2\sqrt{[g_{0}-J\cos 4k]^{2}+(J\gamma)^{2}\sin^{2}4k}\] \[\mathbf{n}_{k} =\frac{2[g_{0}-J\cos 4k]}{\Omega_{k}}\sigma_{z}+\frac{2J\gamma \sin 4k}{\Omega_{k}}\sigma_{x}. \tag{40}\]
This defines \(n_{x}(k)=2J\gamma\sin 4k/\Omega_{k}\) and \(n_{z}(k)=2[g_{0}-J\cos 4k]/\Omega_{k}\). After an evolution time \(T\), the quantum evolution under \(\mathbf{H}_{k}^{\rm Target}\) is given by the unitary operator
\[\mathbf{U}_{k}=\begin{bmatrix}\cos(\Omega_{k}T)-\mathrm{i}n_{z}(k)\sin(\Omega_{k} T)&-\mathrm{i}n_{x}(k)\sin(\Omega_{k}T)\\ -\mathrm{i}n_{x}(k)\sin(\Omega_{k}T)&\cos(\Omega_{k}T)+\mathrm{i}n_{z}(k)\sin( \Omega_{k}T)\end{bmatrix}. \tag{41}\]
From this we can see that the matrix elements are functions that could be approximated using QSP in the Heisenberg picture. That is, there is a sequence \(\tilde{\Phi}\) that acts as a polynomial transformation of the input
\[\langle 0|_{k}\mathbf{U}_{k}|0\rangle_{k} =\cos(\Omega_{k}T)-\mathrm{i}n_{z}(k)\sin(\Omega_{k}T)\] \[\approx\langle 0|_{k}\mathbf{V}_{k,\tilde{\Phi}}|0\rangle_{k}\, \tag{42}\]
where \(\mathbf{V}_{k,\tilde{\Phi}}\) was defined in Eqs. (27) and (29). In the previous discussion we faced a restriction when the signal \(x_{k}=\cos(k)=\pm 1\), or equivalently, when \(k=0,\pi\). In this case the signal is proportional to the identity and the QSP sequence turns out to be a single Z-rotation. Keeping this in mind, in terms of the numerical implementation we can accurately approximate the function
\[\langle+|_{k}\mathbf{U}_{k}|+\rangle_{k}=\cos(\Omega_{k}T)\approx\langle+|_{k}\bm {V}_{k,\tilde{\Phi}}|+\rangle_{k}\, \tag{43}\]
where \(|+\rangle_{k}=(|0\rangle_{k}+|1\rangle_{k})/\sqrt{2}\). Figure 3 shows the behavior of this response function for different values of \(k\). The expressivity of the QSP sequence in the standard form of Eq. (29) has been widely investigated. Therefore, there are efficient ways to obtain a sequence of phases \(\tilde{\Phi}\) that gives us a good polynomial approximation to a desired function. In turn, this sequence of operations can be used to design a QSP sequence \(\tilde{V}_{\tilde{\Phi}}^{\rm G}\) in terms of the original Pauli operators to simulate the action of the evolution operator Eq (41) that is generated by the Hamiltonian Eq. (39).
### Reverse engineering of spin Hamiltonians from response functions in momentum space
In this subsection, let us present another example based on the idea of reverse engineering spin Hamiltonians from a given polynomial transformation in momentum space. For simplicity, we consider a phase sequence that has a simple limiting behavior in momentum space and then show that there is a pre-image spin Hamiltonian in real space which would induce this evolution.
As a starting point to construct our example, we assume a simple form for the unitary evolution
\[\mathbf{U}_{k}=e^{-\mathrm{i}\Omega_{k}T\sigma_{x}}=\begin{bmatrix}\cos(\Omega_{k }T)&-\mathrm{i}\sin(\Omega_{k}T)\\ -\mathrm{i}\sin(\Omega_{k}T)&\cos(\Omega_{k}T)\end{bmatrix}. \tag{44}\]
Clearly, the response function associated with this evolution is given by \(\langle 0|_{k}\mathbf{U}_{k}|0\rangle_{k}=\cos(\Omega_{k}T)\). We can think of defining a "reverse engineered" Hamiltonian \(\mathbf{H}_{k}^{\rm RE}=\hbar\Omega_{k}\sigma_{x}\). For concreteness, we will focus here on an example provided in Appendix D of Ref. [2], where a phase sequence is given as a polynomial approximation of the phase estimation function,
\[\cos(\Omega_{k}T)=2\Pi(3x_{k}/2)-1. \tag{45}\]
where \(\Pi(z)\) denotes the box distribution (also known as the Heaviside Pi function). It follows that the angular frequency dispersion \(\Omega_{k}=\pi/T-\pi\Pi(3x_{k}/2)/T\). We now employ Fourier analysis to obtain the expression
\[\Pi\left(\frac{3x_{k}}{2}\right)=\frac{3}{4\pi}\int_{-\infty}^{\infty}\frac{\sin(3\omega/4)}{3\omega/4}e^{\mathrm{i}\omega x_{k}}d\omega=\frac{3}{4\pi}\sum_{n=-\infty}^{\infty}\mathrm{i}^{n}e^{\mathrm{i}nk}G_{n}\, \tag{46}\]
where \(G_{n}=\int_{-\infty}^{\infty}\sin(3\omega/4)/(3\omega/4)\mathcal{J}_{n}(\omega)d\omega\) with \(\mathcal{J}_{n}(\omega)\) being a Bessel function of the first kind [61]. From these relations, we can obtain a closed form for the Hamiltonian
\[\mathbf{H}_{k}^{\rm RE}=\frac{\hbar}{T}\left[\pi-\frac{3}{2}\left(\sum_{n=0}^{ \infty}(-1)^{n}\cos(2nk)G_{2n}\right)\right]\sigma_{x}\, \tag{47}\]
where we have exploited the symmetry \(G_{2n}=G_{-2n}\) and the fact that \(G_{2n+1}=0\). With all these elements at hand, we can obtain the fermionic Hamiltonian \(\hat{H}^{\rm RE}=\sum_{k\geq 0}\hat{\mathbf{\Psi}}_{k}^{\dagger}\mathbf{H}_{k}^{\rm RE}\hat{\mathbf{\Psi}}_{k}\), as follows
\[\hat{H}^{\rm RE} =-\frac{3\hbar}{2T}\sum_{k\geq 0}\sum_{n=0}^{\infty}(-1)^{n}G_{2n} \left(\cos(2nk)\hat{F}_{-k}\hat{F}_{k}+\mathrm{h.c}\right)\] \[=-\frac{3\hbar}{4T}\sum_{j}\sum_{n=0}^{\infty}(-1)^{n}G_{2n}(\hat{ f}_{j-2n}\hat{f}_{j}+\hat{f}_{j+2n}\hat{f}_{j})+\mathrm{h.c}. \tag{48}\]
We will not show the derivation here, but the fermionic terms can be re-written in terms of Pauli matrices, giving rise to non-local spin Hamiltonians of the form
\[\sum_{j}(\mathrm{i}\hat{f}_{j-2n}\hat{f}_{j}+\mathrm{h.c})=\sum_{j}(X_{j-2n}\hat{ M}_{j}^{\epsilon}X_{j}-Y_{j-2n}\hat{M}_{j}^{\epsilon}Y_{j}). \tag{49}\]
Here \(\hat{M}_{j}^{\epsilon}=Z_{j-2n+1}\cdots Z_{j-1}\) arises from the Jordan Wigner string connecting the sites \(j-2n\) and \(j\). We refer the interested reader to Ref. [2], which provides the explicit phase sequence required to approximate the phase estimation function.
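The coefficients \(G_{n}\) and the symmetries exploited above can be checked numerically. The sketch below truncates the integral to a finite window and uses a crude quadrature, so small truncation errors are expected; the window size and grid are arbitrary choices, not values from the text.

```python
import numpy as np
from scipy.special import jv

w = np.linspace(-300.0, 300.0, 600001)   # truncated, symmetric integration window
dw = w[1] - w[0]

def G(n):
    # G_n = integral of sinc(3w/4) * J_n(w) over w, with sinc(x) = sin(x)/x;
    # np.sinc(x) = sin(pi x)/(pi x), hence the rescaling of the argument.
    integrand = np.sinc(3 * w / (4 * np.pi)) * jv(n, w)
    return np.sum(integrand) * dw        # simple rectangle rule, illustration only

print(G(1), G(3))     # odd coefficients vanish: G_{2n+1} = 0 (up to numerics)
print(G(2) - G(-2))   # even coefficients are symmetric: G_{2n} = G_{-2n}
```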
The example presented above shows there is always some pre-image of a QSP transformation in momentum space in the form of a time-independent spin Hamiltonian in real space that matches the evolution we achieve. However, in general, the pre-image spin Hamiltonian is highly non-local, as we can see from our example. Nevertheless, the QSP sequence in terms of the Onsager algebra \(\hat{V}^{0}_{\hat{\phi}}\) is given as a sequence of single- and two-qubit gates.
### BB1 protocol for the quantum Ising chain
In this final subsection, our main focus will be to use a paradigmatic composite sequence from the NMR community in the context of our QSP sequence in momentum space. In turn, our result allows us to define a BB1 protocol for the Onsager algebra applicable to quantum Ising chains.
To start, let us consider the QSP sequence \(\mathbf{V}_{k,\vec{\Phi}}\) in Eq. (27) for a fixed angle \(\theta=\pi/4\). Notably, if we forget the physical meaning of the quasimomentum \(k\), we can interpret it as a signal and the QSP sequence has the same structure as the canonical form of QSP sequence for SU(2) in Eq. (3). Naively, we can use known QSP sequences for su(2) in the literature to "synthesize" new QSP sequences for the Onsager algebra.
For concreteness, let us consider a paradigmatic composite pulse sequence in NMR known as the "BB1" sequence [37; 2]. In the context of our QSP sequence in momentum space, we can do some signal processing of the quasimomentum \(k\), by considering a sequence of rotations
\[\vec{\Phi}_{\text{BB1}}=(\pi/2,-\chi,2\chi,0,-2\chi,\chi)\, \tag{50}\]
where \(\chi=1/2\cos^{-1}(-1/4)\). This has exactly the same form as the BB1 "composite-pulse" sequence used in NMR. From Eq. (28) and (50) we can retrieve the original phases
\[\vec{\phi}_{\text{BB1}}=\left(-\frac{\pi}{8},\frac{\chi}{2}+\frac{\pi}{4},- \chi+\frac{\pi}{4},\frac{\pi}{4},\chi+\frac{\pi}{4},-\frac{\chi}{2}+\frac{\pi }{8}\right) \tag{51}\]
which allows us to define the BB1 sequence for the Onsager algebra \(\hat{V}^{0}_{\hat{\phi}_{\text{BB1}}}=\hat{U}_{\hat{\phi}_{\text{BB1}}}(\pi/4)\) in Eq. (12). In momentum space, the signal to be processed is the momentum \(k\) and we can define a QSP sequence \(\mathbf{V}_{k,\vec{\Phi}_{\text{BB1}}}\) as in Eq. (27). To understand the effect of the BB1 sequence, it is illustrative to obtain the probability in the absence of any processing, i.e., for \(\vec{\Phi}=(0,0)\) and a given momentum \(k\)
\[R_{k}=|\langle 0|_{k}\mathbf{V}_{k,(0,0)}|0\rangle_{k}|^{2}=x_{k}^{2}. \tag{52}\]
Now, if we apply the BB1 sequence, we obtain the modified transition probability
\[R_{k}^{\text{BB1}} = |\langle 0|_{k}\mathbf{V}_{k,\vec{\Phi}_{\text{BB1}}}|0\rangle_{k}|^{2} \tag{53}\] \[= \frac{1}{8}x_{k}^{2}\left[3x_{k}^{8}-15x_{k}^{6}+35x_{k}^{4}-45x_ {k}^{2}+30\right]\,\]
where \(x_{k}=\cos(k)\). In NMR, the BB1 sequence is known for allowing the two-level system to remain unflipped for a wide range of signals; in our case, this occurs in a region around \(k=0\) and \(k=\pi\). The sequence shows a sharp transition for \(|k|\approx\pi/3\) and \(|k|\approx 2\pi/3\). As a consequence, when applying the BB1 sequence, we obtain a high sensitivity to specific values of the momentum \(k\). It is important to remark that this step function can be made arbitrarily sharp [37; 2]. The main benefit of BB1, besides its historical status, is that the protocol is relatively short, and its achieved polynomial transform is easy to write down.
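The effect of the sequence is easy to see by tabulating Eqs. (52) and (53) directly:

```python
import numpy as np

k = np.linspace(0, np.pi, 13)
x = np.cos(k)

R_plain = x ** 2                                           # Eq. (52), no processing
R_bb1 = (x ** 2 / 8) * (3 * x ** 8 - 15 * x ** 6
                        + 35 * x ** 4 - 45 * x ** 2 + 30)  # Eq. (53), BB1 processed

for kk, r0, r1 in zip(k, R_plain, R_bb1):
    print(f"k = {kk:4.2f}   R_k = {r0:5.3f}   R_k^BB1 = {r1:5.3f}")
# R_k^BB1 stays close to 1 near k = 0 and k = pi, drops sharply around
# |k| ~ pi/3 and 2*pi/3, and vanishes exactly at k = pi/2.
```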
But what are the consequences of this sensitivity? Well, the QSP sequence keeps both long-wavelength (\(k\approx 0\)) and short-wavelength excitations (\(k\approx\pi\)) frozen, while it flips excitations with momentum close to \(k\approx\pi/2\). That is, if we prepare an initial spin state \(|\Psi(0)\rangle=\prod_{k=-\pi}^{\pi}|0\rangle_{k}=|\uparrow,\uparrow,\dots, \uparrow,\uparrow\rangle\), we can calculate the probability
\[R=|\langle\Psi(0)|\hat{V}_{\hat{\phi}_{\text{BB1}}}|\Psi(0)\rangle|^{2}=\prod_ {k=-\pi}^{\pi}R_{k}^{\text{BB1}}=0. \tag{54}\]
This turns out to be exactly zero because \(R_{\pm\pi/2}^{\text{BB1}}=0\).
## VII Conclusions
In summary, we have investigated QSP protocols for the Onsager algebra, an infinite dimensional Lie algebra that naturally appears in the theory of the Ising model. We have shown that by mapping the Ising model to a system of non-interacting fermions, we can define QSP protocols for the fermionic operators in the Heisenberg picture respecting the su(2) algebra. This naturally allows one to exploit the tools of standard QSP with SU(2) operations. We then applied such sequences to illustrate various examples and applications, ranging from space-time dual quantum circuits to quantum engineering of spin Hamiltonians and composite pulse sequences in spin chains. These examples highlight the wide utility of our approach and how one can translate QSP sequences in momentum space, based on the su(2) algebra in the Heisenberg picture, to well-defined protocols dependent on the Onsager algebra in the Schrödinger picture.
Figure 4: BB1 QSP sequence in momentum space and its effect on the transition probability. The green curve depicts the transition probability \(R_{k}\) in Eq. (52) without signal processing. The blue curve shows the transition probability \(R_{k}^{\text{BB1}}\) in Eq. (53) after applying the BB1 sequence.
There are of course some remaining open questions that are worth exploring. For example, when we start with the Onsager algebra in the Schrödinger picture, after a set of transformations, the evolution of the operators in the Heisenberg picture can be entirely described by the standard theory of QSP. For tuned values of the system, we reach the optimal expressivity for QSP sequences in momentum space. However, it remains unclear how generalizable this approach is to other systems defined by other algebras and at other tuned points. It would be worthwhile to determine which classes of physical models permit QSP-like control. This could allow one to make statements about the robustness of QSP in the context of condensed matter systems and quantum simulation. For example, it would be interesting to explore QSP sequences in spin chains such as the XXZ model, which cannot be mapped to systems of non-interacting fermions [48; 62]. To deal with this problem, one can use bosonization to map problems of interacting fermions at half-filling to squeezed collective bosonic modes [63]. This will of course require one to use recently developed QSP sequences based on the su(1, 1) algebra for continuous variables [28]. It would also be interesting to explore the use of QSP methods to treat non-integrable models such as higher-dimensional versions of the TFIM. For example, a two-dimensional lattice can be represented as a family of coupled one-dimensional TFIMs. In certain regimes, our approach for the one-dimensional TFIM can provide a good approximation for a two-dimensional problem. Another possible extension of our work is to investigate QSP sequences in two-band topological insulators and topological superconductors which can be described using a pseudo-spin approach in momentum space [64].
_Acknowledgments.--_ The authors would like to thank NTT Research Inc. for their support in this collaboration. The authors are thankful for fruitful discussions with S. Sugiura. WJM and VMB acknowledge partial support through the MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) under Grant No. JPMXS0118069605. ZMR was supported in part by the NSF EPiQC program, and ILC was supported in part by the U.S. DoE, Office of Science, National Quantum Information Science Research Centers, and Co-design Center for Quantum Advantage (C2QA) under Contract No. DE-SC0012704.
## Appendix A Jordan Wigner transformation and P-wave superconductivity
The Jordan Wigner transformation allows one to represent the Pauli matrices in terms of fermionic operators. This mapping is highly non-local and is given by
\[X_{j} =(\hat{f}_{j}^{\dagger}+\hat{f}_{j})\prod_{m=1}^{j-1}(1-2\hat{f} _{m}^{\dagger}\hat{f}_{m})\] \[Y_{j} =-\mathrm{i}(\hat{f}_{j}^{\dagger}-\hat{f}_{j})\prod_{m=1}^{j-1}( 1-2\hat{f}_{m}^{\dagger}\hat{f}_{m})\] \[Z_{j} =1-2\hat{f}_{j}^{\dagger}\hat{f}_{j}. \tag{10}\]
Here the operators \(\hat{f}_{j}^{\dagger}\) and \(\hat{f}_{j}\) are the fermionic creation and annihilation operators in real space satisfying the anticommutation relations \(\{\hat{f}_{i},\hat{f}_{j}^{\dagger}\}=\delta_{i,j}\) and \(\{\hat{f}_{i},\hat{f}_{j}\}=\{\hat{f}_{i}^{\dagger},\hat{f}_{j}^{\dagger}\}=0\).
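For small chains, the transformation can be checked numerically by building the string operators explicitly. The sketch below verifies the canonical anticommutation relations for \(N=3\) sites; which combination \((X\pm\mathrm{i}Y)/2\) is taken as \(\hat{f}_{j}\) versus \(\hat{f}_{j}^{\dagger}\) is a convention choice and does not affect the check.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def jw_f(j, N):
    """Annihilation operator f_j = (prod_{m<j} Z_m) (X_j + i Y_j)/2 on N sites."""
    ops = [Z] * j + [(X + 1j * Y) / 2] + [I2] * (N - j - 1)
    return reduce(np.kron, ops)

N = 3
f = [jw_f(j, N) for j in range(N)]
acomm = lambda A, B: A @ B + B @ A

for i in range(N):
    for j in range(N):
        assert np.allclose(acomm(f[i], f[j].conj().T), (i == j) * np.eye(2 ** N))
        assert np.allclose(acomm(f[i], f[j]), np.zeros((2 ** N, 2 ** N)))
print("canonical anticommutation relations hold for N =", N)
```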
After applying the JW transformation to the Ising model in Eq. (11), we obtain the fermionic quadratic Hamiltonian
\[\hat{H}(t) =-\hbar g(t)\sum_{j=1}^{N}(1-2\hat{f}_{j}^{\dagger}\hat{f}_{j})-\hbar J(t)\sum_{j=1}^{N-1}(\hat{f}_{j}^{\dagger}-\hat{f}_{j})(\hat{f}_{j+1}^{\dagger}+\hat{f}_{j+1})\] \[=2\hbar\sum_{k\geq 0}(g(t)-J(t)\cos k)(\hat{F}_{k}^{\dagger}\hat{F}_{k}-\hat{F}_{-k}\hat{F}_{-k}^{\dagger})\] \[+2\hbar J(t)\sum_{k\geq 0}\sin k(\hat{F}_{k}^{\dagger}\hat{F}_{-k}^{\dagger}+\hat{F}_{-k}\hat{F}_{k})\] \[=\sum_{k\geq 0}\hat{\Psi}_{k}^{\dagger}\mathbf{H}_{k}\hat{\Psi}_{k}\, \tag{11}\]
where \(\hat{\Psi}_{k}^{\dagger}=(\hat{F}_{k}^{\dagger},\hat{F}_{-k})\). Here \(\hat{F}_{k}^{\dagger}\) and \(\hat{F}_{k}\) are fermionic creation and annihilation operators in momentum space. The matrix representation of the fermionic quadratic form is known as the Bogoliubov de Gennes Hamiltonian
\[\mathbf{H}_{k}=\begin{bmatrix}2\hbar[g(t)-J(t)\cos k]&2\hbar J(t)\sin k\\ 2\hbar J(t)\sin k&-2\hbar[g(t)-J(t)\cos k]\end{bmatrix} \tag{12}\]
and describes a P-wave superconductor. Here the superconducting term describes the creation of pairs of fermions with opposite momenta [56].
## Appendix B Mapping QSP in the Heisenberg picture to the Schrödinger picture: The BCS ansatz
In the main text, we showed that, after applying the Jordan-Wigner transformation and the discrete Fourier transform, the problem reduces to a QSP sequence in the Heisenberg picture based on the SU(2) group. This was possible due to the pseudo-spin structure in momentum space. The natural question is how to map the QSP sequence back to the spins in real space.
A solution to this problem is to exploit the structure of the fermionic Hamiltonian Eq. (16) in the reciprocal space. This Hamiltonian breaks the conservation of particle number and allows the creation of pairs of spinless fermions moving in opposite directions. The pairing, characterized by a time-dependent potential \(\Delta_{k}(t)=2\hbar J(t)\sin k\), is odd under the momentum reversal \(k\rightarrow-k\), which is a signature of a p-wave superconductor. As the excitations are created in pairs, one can show that any state of the system in the Schrödinger picture can be written using the well-known BCS Ansatz from the theory of superconductivity [57]
\[|\Psi(t)\rangle=\prod_{k>0}\Big{[}v_{k}(t)+u_{k}(t)\hat{F}_{k}^{\dagger}\hat{F }_{-k}^{\dagger}\Big{]}|0\rangle_{k}\, \tag{13}\]
where \(|0\rangle_{k}\) is the vacuum for the \(k\)-th fermionic mode. The key point of this approach is that the time-dependent coefficients appearing in the Ansatz satisfy a Schrödinger-like equation governed by the Bogoliubov de Gennes Hamiltonian \(\mathbf{H}_{k}\), which has the general solution
\[\begin{bmatrix}u_{k}(t)\\ v_{k}(t)\end{bmatrix}=\begin{bmatrix}\mathcal{U}_{k}(t)&\mathcal{V}^{*}_{k}(t)\\ \mathcal{V}_{k}(t)&\mathcal{U}^{*}_{k}(t)\end{bmatrix}\begin{bmatrix}u_{k}(0)\\ v_{k}(0)\end{bmatrix}\,. \tag{30}\]
The propagator in this equation is the same as the propagator \(\mathbf{U}_{k}(t)\) in Eq. (19) for the operators \(\hat{F}_{k}(t)\) and \(\hat{F}^{\dagger}_{k}(t)\) in the Heisenberg picture. One can think of this in terms of a pseudo-spin picture, where the state of the two-level system is described by a spinor \(\psi^{\mathsf{T}}_{k}(t)=[u_{k}(t),v_{k}(t)]\).
To have an intuitive understanding of this it is instructive to consider a simple example. Next we focus on the Ising Hamiltonian Eq. (11) in the case of a constant transverse field \(g(t)=g_{0}\) and in the absence of interactions \(J(t)=0\). In this case the Bogoliubov de Gennes Hamiltonian Eq. (29) is diagonal \(\mathbf{H}_{k}=2\hbar g_{0}\sigma_{z}\) and the propagator is \(\mathbf{U}_{k}(t)=\exp\left(-\mathrm{i}\mathbf{H}_{k}t/\hbar\right)\). Now we can exploit the pseudospin picture to understand the physics of the problem. For example, when the states \(\psi^{\mathsf{T}}_{k}=(0,1)\) with negative energy \(E^{(-)}_{k}=-2\hbar g_{0}\) are fully populated, we obtain the ground state of the system \(|\mathbf{0}\rangle=\prod_{k>0}|0\rangle_{k}=|\uparrow,\uparrow,\dots,\uparrow\rangle\) with \(|\uparrow\rangle\) and \(|\downarrow\rangle\) being the eigenstates of \(Z_{j}\). In the theory of the Ising model this is known as the paramagnetic ground state. In terms of fermions, this state describes a system with no pairs of counterpropagating excitations. The recipe to build up the excited states is to populate states with positive energies for a given wave vector \(k_{0}\). That is, to create a pair of excitations with the desired momentum
\[|1_{k_{0}},1_{-k_{0}}\rangle =\hat{F}^{\dagger}_{k_{0}}\hat{F}^{\dagger}_{-k_{0}}|\mathbf{0}\rangle=\mathrm{i}\sum_{i,j}e^{\mathrm{i}k_{0}(i-j)}\hat{f}^{\dagger}_{i}\hat{f}^{\dagger}_{j}|\mathbf{0}\rangle\] \[=\frac{\mathrm{i}}{2}\sum_{s,r}e^{\mathrm{i}k_{0}r}\hat{f}^{\dagger}_{s}\hat{f}^{\dagger}_{s+r}|\uparrow,\uparrow,\dots,\uparrow\rangle\] \[=\frac{\mathrm{i}}{2}\sum_{s,r}e^{\mathrm{i}k_{0}r}|\uparrow,\dots,\downarrow_{s},\uparrow,\dots,\uparrow,\downarrow_{s+r},\dots,\uparrow\rangle\,, \tag{31}\]
where \(\hat{f}^{\dagger}_{s}=(X_{s}+\mathrm{i}Y_{s})/2\prod_{m=1}^{s-1}Z_{m}\) and \(\hat{f}^{\dagger}_{s+r}=\left(\prod_{m=1}^{s+r-1}Z_{m}\right)(X_{s+r}+\mathrm{i}Y_{s+r})/2\). As \(Z_{m}^{2}=1\), we obtain the expression \(\hat{f}^{\dagger}_{s}\hat{f}^{\dagger}_{s+r}=1/4(X_{s}+\mathrm{i}Y_{s})\left(\prod_{m=s}^{s+r-1}Z_{m}\right)(X_{s+r}+\mathrm{i}Y_{s+r})\). The operator \(\prod_{m=s}^{s+r-1}Z_{m}\) is the Pauli string connecting the sites \(s\) and \(s+r\). To obtain this equation, we used the inverse Fourier transform \(\hat{F}_{k}=\frac{e^{\mathrm{i}\frac{\pi}{2}}}{\sqrt{N}}\sum_{j}\hat{f}_{j}e^{-\mathrm{i}kj}\) to write the fermionic operators \(\hat{F}^{\dagger}_{k_{0}}\) in terms of real space fermionic operators \(\hat{f}^{\dagger}_{j}\). We also inverted the Jordan-Wigner transformation Eq. (28) in order to write the fermionic operators \(\hat{f}^{\dagger}_{j}\) in terms of spin operators in real space. From the perspective of the pseudo spin, this is equivalent to applying a spin flip to the negative energy state with momentum \(k_{0}\) to obtain a positive energy state \(\psi^{\mathsf{T}}_{k_{0}}=(1,0)\). In terms of the original spins in real space, this corresponds to the creation of a quantum superposition of localized spin flips.
Alternatively, we can also study wave packets directly in the momentum representation. For example, for a two-particle initial state \(|\Psi(0)\rangle=\sum_{k}G(k)\hat{F}^{\dagger}_{k}(0)\hat{F}^{\dagger}_{-k}(0)| \mathbf{0}\rangle\) with momentum distribution \(G(k)\), the time evolution \(|\Psi(t)\rangle=\sum_{k}G(k)\hat{F}^{\dagger}_{k}(t)\hat{F}^{\dagger}_{-k}(t)| \mathbf{0}\rangle\) can be obtained by considering the evolution of the operators in the Heisenberg picture
\[\hat{F}^{\dagger}_{-k}(t)=\mathcal{V}_{k}(t)\hat{F}_{k}+\mathcal{U}^{*}_{k}(t )\hat{F}^{\dagger}_{-k}\,, \tag{32}\]
where \(\mathcal{V}_{k}(t)\) and \(\mathcal{U}^{*}_{k}(t)\) are matrix elements of the propagator \(\mathbf{U}_{k}(t)\) in Eq. (19) for the operators \(\hat{F}_{k}(t)\) and \(\hat{F}^{\dagger}_{k}(t)\) in the Heisenberg picture. Thus, the time evolution of the wave packet can be written as
\[|\Psi(t)\rangle =\sum_{k}G(k)\mathcal{V}_{-k}(t)\mathcal{U}^{*}_{k}(t)|\mathbf{0}\rangle\] \[+\sum_{k}G(k)\mathcal{U}^{*}_{k}(t)\mathcal{U}^{*}_{k}(t)|\mathbf{1}_ {k},1_{-k}\rangle\,. \tag{33}\]
Importantly, this wave packet can be interpreted as a quantum superposition of the paramagnetic ground state and a wavepacket of two spin flip excitations by considering Eq. (31).
These simple examples capture the essence of our approach. Designing a QSP sequence using the generators of the Onsager algebra in the Schrödinger picture is a cumbersome task. However, we can easily design a QSP sequence in the Heisenberg picture for the operators \(\hat{F}_{k}\) and \(\hat{F}^{\dagger}_{k}\) using the SU(2) pseudo spin representation. In turn, QSP sequences giving the propagator \(\mathbf{U}_{k}(t)\) in the pseudo-spin representation can be directly mapped to operations in real space using the BCS Ansatz in Eq. (30).
## Appendix C QSP Sequences for general \(\theta\)
In this appendix, we discuss QSP sequences for general values of \(k\) and an unknown \(\theta\). As we are processing two independent variables, the QSP sequence is more complicated than the one discussed in the main text. In our manuscript, one of the restrictions we found is that the signal and signal processing operations are rotations along non-orthogonal axes. To overcome this restriction, we can define a modified QSP sequence for the Onsager algebra
\[\hat{U}^{M}_{\vec{\phi}}(\theta)=\prod_{r=1}^{d}e^{\mathrm{i}\theta\sum_{j=1}^{N}X_{j}X_{j+1}}e^{\mathrm{i}\frac{\pi}{2}\sum_{j=1}^{N}Z_{j}}e^{-\mathrm{i}\theta\sum_{j=1}^{N}X_{j}X_{j+1}}e^{\mathrm{i}\phi_{r}\sum_{j=1}^{N}Z_{j}}\,. \tag{34}\]
In arrays of superconducting qubits, if the parameter \(\theta\) is known, its sign can be controlled using microwave control lines [29]. When the parameter \(\theta\) is unknown, its sign can be effectively changed from positive to negative by applying \(\pi/2\) rotations along the \(Z\) axis to the even or odd sites. Next, let us explore the form of our modified QSP sequence in momentum space, which reads
\[\mathbf{U}^{M}_{k,\vec{\phi}}(\theta)=\prod_{r=1}^{d}e^{\mathrm{i}2\theta(\sigma_{z}\cos k-\sigma_{x}\sin k)}e^{-\mathrm{i}2\theta(\sigma_{z}\cos k+\sigma_{x}\sin k)}e^{-\mathrm{i}2(\phi_{r}+\pi/4)\sigma_{z}}\,. \tag{35}\]
It is worth noting that the rotation \(e^{\mathrm{i}\frac{\pi}{2}\sum_{j=1}^{N}Z_{j}}\) maps to a pseudo spin rotation \(e^{-\mathrm{i}\frac{\pi}{2}\sigma_{z}}\) in momentum space. Also, the first two terms in the QSP sequence are rotations along the axis \(\hat{n}_{k}=[-\sin k,0,\cos k]\) and its reflection \(\hat{m}_{k}=[-\sin k,0,-\cos k]\) along the x axis. By using the fundamental properties of SU(2) rotations, we obtain the general QSP sequence
\[\mathbf{U}_{k,\vec{\phi}}^{M}(\theta)=\prod_{r=1}^{d}e^{\mathrm{i}\Omega_{k}(A_{k}\sigma_{x}+B_{k}\sigma_{y})}e^{-2\mathrm{i}(\phi_{r}+\pi/4)\sigma_{z}}\, \tag{10}\]
where \(\cos\Omega_{k}=\cos^{2}(2\theta)+\cos(2k)\sin^{2}(2\theta)\) and the new axis is defined by the parameters
\[A_{k} =-\frac{\sin k\sin(4\theta)}{\sin\Omega_{k}}\] \[B_{k} =\frac{\sin(2k)\sin^{2}(2\theta)}{\sin\Omega_{k}}. \tag{11}\]
Even if the parameter \(\theta\) is unknown, this QSP sequence is composed of rotations along orthogonal axes. However, in contrast to the QSP sequence discussed in the main text, here the signal parameters \(k\) and \(\theta\) define the rotation axis in the \(x-y\) plane in a nonlinear fashion, while the signal processing takes place along the \(z\) axis.
## Appendix D Space-time Dual QSP for \(k=\pi/2\)
In this appendix, we discuss the QSP sequence \(\mathbf{V}_{k,\vec{\theta}}\) for the space-time dual quantum circuit in the main text. As a first step, it is useful to consider the QSP sequence in momentum space
\[\mathbf{V}_{k,\vec{\theta}}=e^{\mathrm{i}\pi/4\sigma_{z}}\left(\prod_{r=1}^{d}e^{ -\mathrm{i}\pi\sigma_{z}}e^{-\mathrm{i}\frac{\pi}{2}(1-4\epsilon)\sigma_{z}} \right)e^{-\mathrm{i}\pi/4\sigma_{z}}, \tag{12}\]
where we took \(\phi_{r}=\pi/2(1-2\epsilon)\) in the definition of \(\vec{\Phi}\) according to Eq. (27).
We notice that the evolution \(e^{2\mathrm{i}\theta(\sigma_{z}\cos k-\sigma_{x}\sin k)}\) in Eq. (26) of the fermionic operators under the Ising interaction becomes \(e^{\mp\mathrm{i}\frac{\pi}{2}\sigma_{x}}=\mp\mathrm{i}\sigma_{x}\) when \(k=\pm\pi/2\) and \(\theta=\pi/4\). Then, we can write the composite pulse sequence (up to a constant phase) as
\[\mathbf{V}_{\pm\pi/2,\vec{\theta}} =(\mp i)^{d}\sigma_{x}e^{-\mathrm{i}\pi(1-2\epsilon)\sigma_{z}}\sigma_{x}e^{-\mathrm{i}\pi(1-2\epsilon)\sigma_{z}}\cdots\sigma_{x}e^{-\mathrm{i}\pi(1-2\epsilon)\sigma_{z}}\] \[\propto\left\{\begin{array}{ll}\mathbf{\mathrm{I}}&\mathrm{if}\;d\;\mathrm{even}\\ \sigma_{x}e^{-\mathrm{i}\pi(1-2\epsilon)\sigma_{z}}&\mathrm{if}\;d\;\mathrm{odd}\end{array}\right. \tag{13}\]
Hence, the resulting unitary approximates the dynamics up to an error \(\epsilon\) in the phase rotation.
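The even/odd structure in Eq. (13) follows from the fact that the repeated factor \(\sigma_{x}e^{-\mathrm{i}\pi(1-2\epsilon)\sigma_{z}}\) squares to the identity, which can be checked numerically; the value of \(\epsilon\) below is arbitrary and the global phase \((\mp\mathrm{i})^{d}\) is ignored.

```python
import numpy as np
from scipy.linalg import expm
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

eps = 0.17                                           # arbitrary error value
M = sx @ expm(-1j * np.pi * (1 - 2 * eps) * sz)      # repeated factor in Eq. (13)

assert np.allclose(M @ M, np.eye(2))                 # M squares to the identity
for d in range(1, 7):
    Md = reduce(np.matmul, [M] * d)
    assert np.allclose(Md, np.eye(2) if d % 2 == 0 else M)
```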
|
2309.13044 | What is the Title of this Paper? Solving logic puzzles using algorithms | This work delves into the realm of logic puzzles by focusing on the Knight
and Knave problems popularized by Raymond Smullyan in his book series "What is
the Name of This Book?". The puzzles revolve around characters known as Knights
(truth-tellers) and Knaves (liars), challenging solvers to determine the true
identity of each person based on their statements. This paper explores the
utilization of Python algorithms to automate the process of solving these
puzzles, offering a computational approach that enhances efficiency and
accessibility. In this work, we aim to develop a Python algorithm capable of
parsing and analyzing the statements provided in the Knight and Knave puzzles.
A logical reasoning framework is integrated within the algorithm to deduce the
identities of the characters based on their statements. The algorithm processes
the input statements, create a knowledge base, and make deductions following
the rules of Knight and Knave logic. The developed algorithm is thoroughly
tested on various instances of Knight and Knave puzzles, comparing its results
to known solutions and manual approaches. We further expand the scope of the
problem by introducing a Normal (who can sometimes lie and sometimes say the
truth). | Ujaan Rakshit, Nishchal Dwivedi | 2023-06-30T08:56:56Z | http://arxiv.org/abs/2309.13044v1 | # What is the Title of this Paper? Solving logic puzzles using algorithms.
###### Abstract
This work delves into the realm of logic puzzles by focusing on the Knight and Knave problems popularized by Raymond Smullyan in his book series "What is the Name of This Book?" The puzzles revolve around characters known as Knights (truth-tellers) and Knaves (liars), challenging solvers to determine the true identity of each person based on their statements. This work explores the utilization of Python algorithms to automate the process of solving these puzzles, offering a computational approach that enhances efficiency and accessibility. In this research, we aim to develop a Python algorithm capable of parsing and analyzing the statements provided in the Knight and Knave puzzles. A logical reasoning framework is integrated within the algorithm to deduce the identities of the characters based on their statements. The algorithm processes the input statements, creates a knowledge base, and makes deductions following the rules of Knight and Knave logic. The developed algorithm is thoroughly tested on various instances of Knight and Knave puzzles, comparing its results to known solutions and manual approaches. We further expand the scope of the problem by introducing a Normal (who can sometimes lie and sometimes tell the truth).
## 1 Introduction
Puzzles have long been a fascination of the human psyche [1][2], challenging our ability to reason and think critically [3]. Among these puzzles, the problem of the Knights and Knaves has fascinated enthusiasts for decades. Popularized by the famous logician Raymond Smullyan in his 1978 book "What Is the Name of This Book? The Riddle of Dracula and Other Logical Puzzles" [4], these puzzles feature characters known as Knights (who always tell the truth) and Knaves (who always lie), whose statements the solver must analyse to deduce each character's identity.
Python, renowned for its simplicity, readability, and extensive libraries, has emerged as a prominent language in various domains, including artificial intelligence, data analysis, and problem-solving. Its flexibility in handling logical operations, combined with its wide range of libraries, makes it a promising candidate for solving complex logical puzzles.
We are motivated to explore how to design algorithms to solve such puzzles. Such an algorithm requires expressing the puzzle in a logical form, and the algebra of logic can then be used to derive the answers.
The Knight and Knave puzzles are based on a fictional island populated by inhabitants who can be classified as either Knights, Knaves or Normal people. Knights always tell the truth, Knaves always lie, while Normal people can do both at random. The objective is to identify the true identity of each person on the island based on their statements. These puzzles not only serve as an intellectual exercise but also as a practical application of logical reasoning, deductive inference, and problem-solving skills.
Traditionally, solving the Knight and Knave problems has relied heavily on manual reasoning and the use of truth tables, which can become increasingly complex as the number of characters and statements in a puzzle grows. With the public release of text-to-text AI models such as GPT-4 [5], questions about the logical reasoning ability of such models have been raised. This research seeks to leverage the power of Python to develop an algorithm capable of solving various instances of the Knight and Knave puzzles.
## 2 Algorithm
In this work, we have extended the assignment by the Harvard CS50 course [6]. We studied their public code and generalised it [7]. The algorithm, coded in Python, consists of two separate files: logic.py and puzzle.py. The logic.py file is responsible for handling the conversion of logical statements presented in puzzle.py, which defines the specific puzzle being solved, into a format that can be processed by a computer. This algorithm is designed to solve problems that can be broken down using logical operations such as And, Or, Not, and Implication.
### And() Operator
The And statement produces a positive output only if all the statements within the And() statement evaluate to true. In other words, if both X and Y are true, the And(X,Y) statement will evaluate to true; otherwise, it will evaluate to false. This can be visualized using a binary truth table.

\begin{tabular}{|c c|c|} \(X\) & \(Y\) & \(X\wedge Y\) \\ \hline \(T\) & \(T\) & \(T\) \\ \(T\) & \(F\) & \(F\) \\ \(F\) & \(T\) & \(F\) \\ \(F\) & \(F\) & \(F\) \\ \end{tabular}
### Or() Operator
The Or statement produces a positive output if any or all of the statements within the Or() statement are true. For example, the Or(X,Y) statement will evaluate to true if either X or Y or both X and Y are true. It will only evaluate to false if both X and Y are false. This can be shown using a binary truth table.
\begin{tabular}{|c c|c|} \(X\) & \(Y\) & \(X\lor Y\) \\ \hline \(T\) & \(T\) & \(T\) \\ \(T\) & \(F\) & \(T\) \\ \(F\) & \(T\) & \(T\) \\ \(F\) & \(F\) & \(F\) \\ \end{tabular}
### Not() Operator
The Not statement negates the input it receives. For instance, the Not(X) statement evaluates to true when X is false, and to false when X is true. This can be represented using a binary truth table.

\begin{tabular}{|c|c|} \(X\) & \(\bar{X}\) \\ \hline \(T\) & \(F\) \\ \(F\) & \(T\) \\ \end{tabular}
### Implication() Operator
The Implication statement generates a positive output for any combination of truth values in the statements within the Implication() statement, except when the first statement is true and the second statement is false. For example, the Implication(X,Y) statement evaluates to true for all cases except when X is true and Y is false; in that particular scenario, it evaluates to false. This behavior can be demonstrated using a binary truth table.
\begin{tabular}{|c|c|c|} \(X\) & \(Y\) & \(X\implies Y\) \\ \hline \(F\) & \(F\) & \(T\) \\ \(F\) & \(T\) & \(T\) \\ \(T\) & \(F\) & \(F\) \\ \(T\) & \(T\) & \(T\) \\ \end{tabular}
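For concreteness, these connectives can be represented as small Python classes that share an evaluate method over a truth assignment. The sketch below is a minimal illustration in the spirit of the logic.py module described above, not the exact implementation used in this work or in the CS50 code.

```python
class Symbol:
    def __init__(self, name):
        self.name = name
    def evaluate(self, model):
        return model[self.name]          # model maps symbol names to True/False

class Not:
    def __init__(self, operand):
        self.operand = operand
    def evaluate(self, model):
        return not self.operand.evaluate(model)

class And:
    def __init__(self, *conjuncts):
        self.conjuncts = conjuncts
    def evaluate(self, model):
        return all(c.evaluate(model) for c in self.conjuncts)

class Or:
    def __init__(self, *disjuncts):
        self.disjuncts = disjuncts
    def evaluate(self, model):
        return any(d.evaluate(model) for d in self.disjuncts)

class Implication:
    def __init__(self, antecedent, consequent):
        self.antecedent, self.consequent = antecedent, consequent
    def evaluate(self, model):
        # X => Y is false only when X is true and Y is false
        return (not self.antecedent.evaluate(model)) or self.consequent.evaluate(model)
```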
## 3 Solving the Questions
Puzzle.py defines the specific puzzle being solved in a format that can be processed by a computer. This is done by first defining the problem as a set of logical operands and setting a set of basic rules.
### Symbols
In order for the algorithm to determine the nature of each person, first a set of symbols needs to be determined. These symbols are
XKnave, XKnight and XNormal
in which X can be replaced with any person as the problem determines. The algorithm uses these as flags set to either true or false, outputting only those for which the problem is solved and which therefore carry a true flag.
### Setting the Problem
In order to solve the problem, first a problem needs to be determined. For example, a problem could be
A says "We are both Knaves."
B says nothing.
For this problem, the solution is, assuming neither are Normal, that A is a Knave and that B is a Knight.
### Defining the Problem
Next, the problem has to be represented in logical statement form. The first rule to be defined is that A and B are each either a knight or a knave, and that neither can be both. This can be defined as,
And(Or(AKnave,AKnight), Not(And(AKnave,AKnight)))
And(Or(BKnave,BKnight), Not(And(BKnave,BKnight)))
Next, the statements made have to be represented using Implication() statements
Implication(AKnave,Not(And(AKnave,BKnave)))
Implication(AKnight,(And(AKnave,BKnave)))
These implications enumerate every possible case and specify what must hold for each assignment of truth values to the symbols. Finally, all of the statements need to be combined by wrapping them in an And() logical operator
And(
And(Or(AKnave,AKnight), Not(And(AKnave,AKnight))),
And(Or(BKnave,BKnight), Not(And(BKnave,BKnight))),
Implication(AKnave,Not(And(AKnave,BKnave))),
Implication(AKnight,(And(AKnave,BKnave))) )
Now, the algorithm can accept the input and run through the statements given to find the set of symbols for which all the statements hold true. This is only possible for AKnave and BKnight, giving a final output of
A is a Knave
B is a Knight
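The same answer can be reproduced with a self-contained brute-force check that enumerates every truth assignment and keeps the symbols that are true in all satisfying models; plain Booleans are used here instead of the connective classes sketched earlier, purely for brevity.

```python
from itertools import product

symbols = ["AKnight", "AKnave", "BKnight", "BKnave"]

def knowledge(m):
    # each person is exactly one of Knight/Knave
    structure = ((m["AKnight"] or m["AKnave"]) and not (m["AKnight"] and m["AKnave"])
                 and (m["BKnight"] or m["BKnave"]) and not (m["BKnight"] and m["BKnave"]))
    said = m["AKnave"] and m["BKnave"]            # A says "We are both Knaves."
    rules = ((not m["AKnight"] or said)           # Implication(AKnight, said)
             and (not m["AKnave"] or not said))   # Implication(AKnave, Not(said))
    return structure and rules

models = [dict(zip(symbols, values)) for values in product([True, False], repeat=4)]
satisfying = [m for m in models if knowledge(m)]

for s in symbols:
    if all(m[s] for m in satisfying):   # entailed: true in every satisfying model
        print(s)                        # prints AKnave and BKnight
```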
## 4 Results
After rewriting a total of 15 different problems in the manner described above, the following results were obtained.
### Regular Problems
Regular problems are problems with only Knights and Knaves, no Normals, and with only one possible solution. The algorithm was able to solve regular problems, such as the one above, without any special handling. For example, for the question
A says either "I am a knight." or "I am a knave.", but you don't know which.
B says "A said 'I am a knave'."
B says "C is a knave."
C says "A is a knight."
the algorithm correctly determines that
A is a Knight
B is a Knave
C is a Knight
### Indeterminate Problems
Indeterminate problems are problems with only Knights and Knaves, no Normals, with multiple solutions. The algorithm was able to output the determinate parts of indeterminate problems, that is to say, the parts that are true for every solution of the problem. For example, for the question
A says "B is a knave."
B says "A and C are of the same type."
the possible solutions are that A is a knave, B is a knight and C is a knave, or that A is a knight, B is a knave and C is a knave. As can be seen, the only constant, or determinate part of the solution is that in both cases C is a knave. As such, the algorithm outputs
C is a Knave
### Normal Problems
Normal problems are problems that have Knights, Knaves, and Normals. In Normal problems, it is assumed that, for a 3 person problem, one person is a knight, one person is a knave and the last one is Normal. This is because, without this restriction, the Normal person could easily be substituted for a knight or a knave in the case of a truth or a lie respectively. For example, for the question
A says "I am Normal"
B says "That is true."
C says "I am not Normal."
One is a knight, one is a knave, the other is Normal.
the same assumptions as before will not work. They have to be modified to account for the possibility of being Normal.
Or(
[MISSING_PAGE_POST]
Doing so allows the algorithm to correctly determine that
A is a Knave
B is Normal
C is a Knight
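The complete encoding of the additional constraints is not reproduced above. As an illustration of the same idea, the following self-contained sketch brute-forces the puzzle under the stated assumption that exactly one person is a Knight, one a Knave, and one Normal; the statement encoding is our own reading of the puzzle, not code from this work.

```python
from itertools import permutations

# A says "I am Normal", B says "That is true" (i.e. A is Normal),
# C says "I am not Normal".
def consistent(assignment):
    claims = {
        "A": assignment["A"] == "Normal",
        "B": assignment["A"] == "Normal",
        "C": assignment["C"] != "Normal",
    }
    for person, claim in claims.items():
        role = assignment[person]
        if role == "Knight" and not claim:
            return False                 # Knights never lie
        if role == "Knave" and claim:
            return False                 # Knaves never tell the truth
    return True                          # Normals are unconstrained

for roles in permutations(["Knight", "Knave", "Normal"]):
    assignment = dict(zip("ABC", roles))
    if consistent(assignment):
        print(assignment)   # {'A': 'Knave', 'B': 'Normal', 'C': 'Knight'}
```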
## 5 Discussion
The utilization of Python algorithms in solving the Knight and Knave problems presents several advantages and implications worth discussing. This section explores the implications of employing computational approaches, the benefits of automation, and the limitations that may arise from such an approach.
The introduction of a Python algorithm to solve the Knight and Knave puzzles brings forth a new dimension to puzzle-solving methodologies. By automating the reasoning process, the algorithm provides a systematic and efficient approach that can handle complex scenarios with numerous characters and statements. This computational approach aims to remove the need for manual reasoning with truth tables.
One significant advantage of the Python algorithm is its potential for broader accessibility. Traditional manual approaches to solving Knight and Knave puzzles often require a strong foundation in logical reasoning and a thorough understanding of truth tables. However, by implementing a computational algorithm, individuals with varying levels of expertise can engage with these puzzles. The algorithm acts as a guide, enabling users to analyze and solve the puzzles without being hindered by the complexities of manual reasoning.
Furthermore, the Python algorithm offers the advantage of speed and accuracy. As computers excel at processing large amounts of data and executing complex calculations, the algorithm can analyze multiple statements simultaneously and provide prompt and accurate solutions. This expedites the solving process, especially for puzzles with intricate scenarios, leading to increased efficiency and satisfaction for puzzle enthusiasts.
However, it is important to acknowledge certain limitations that may arise when using a computational approach. The algorithm relies heavily on the quality and accuracy of the input statements provided by the characters in the puzzles. Ambiguous or misleading statements can potentially lead to incorrect deductions or failed solutions. Therefore, the algorithm's performance is contingent upon the clarity and precision of the puzzle's design and the statements presented.
Moreover, the algorithm's success depends on the assumptions and constraints of the Knight and Knave logic. While the algorithm operates within the defined rules of the puzzles, it may face challenges when confronted with variations or extensions of the Knight and Knave problems that introduce additional elements or modified logic rules. Adapting the algorithm to handle such scenarios may require further modifications and adjustments.
## Conclusion
The work has shown how computational approaches can improve speed and accessibility in puzzle solving by introducing logical reasoning frameworks into the algorithm. There are important ramifications from using computation to solve the Knight and Knave challenges. It offers an approach that is organised and effective and can deal with complex circumstances involving several characters and assertions, doing away with the necessity for manual reasoning and truth tables.
The Python algorithm that was created has shown promise in parsing and analysing the statements given in the Knight and Knave riddles, correctly determining the identities of the characters based on their remarks. The algorithm's effectiveness has been tested on numerous problem examples, and the findings match up with known answers and manual methods. The algorithm serves as a guide, allowing users to analyse and solve the problems without being constrained by the difficulties of manual reasoning. As a result, users with varying degrees of expertise can engage with the puzzles.
However, it is also important to understand that a computational method has some potential drawbacks. The quality and accuracy of the input statements affect the algorithm's performance, and vague or deceptive statements may result in erroneous deductions. Furthermore, the method follows the established guidelines of the Knight and Knave logic, which may present difficulties when dealing with changes or modifications to the riddles.
In conclusion, the use of algorithms to solve the Knight and Knave riddles opens up new research opportunities for both puzzle fans and academics. The automated method increases efficiency, accuracy, and accessibility, allowing a larger audience to enjoy these fascinating logic puzzles. However, the quality of the input statements and the constraints imposed by the Knight and Knave logic must be carefully taken into account. Future studies might examine how to modify the algorithm to tackle puzzles with more intricate variations or look into using various programming languages or methods to speed up the solution process.
|
2309.05632 | MAPS$^2$: Multi-Robot Autonomous Motion Planning under Signal Temporal
Logic Specifications | This article presents MAPS$^2$ : a distributed algorithm that allows
multi-robot systems to deliver coupled tasks expressed as Signal Temporal Logic
(STL) constraints. Classical control theoretical tools addressing STL
constraints either adopt a limited fragment of the STL formula or require
approximations of min/max operators, whereas works maximising robustness
through optimisation-based methods often suffer from local minima, relaxing any
completeness arguments due to the NP-hard nature of the problem. Endowed with
probabilistic guarantees, MAPS$^2$ provides an anytime algorithm that
iteratively improves the robots' trajectories. The algorithm selectively
imposes spatial constraints by taking advantage of the temporal properties of
the STL. The algorithm is distributed, in the sense that each robot calculates
its trajectory by communicating only with its immediate neighbours as defined
via a communication graph. We illustrate the efficiency of MAPS$^2$ by
conducting extensive simulation and experimental studies, verifying the
generation of STL satisfying trajectories. | Mayank Sewlia, Christos K. Verginis, Dimos V. Dimarogonas | 2023-09-11T17:25:08Z | http://arxiv.org/abs/2309.05632v2 | # Maps\({}^{2}\): Multi-Robot Anytime Motion Planning under Signal Temporal Logic Specifications
###### Abstract
This article presents MAPS\({}^{2}\): a distributed algorithm that allows multi-robot systems to deliver coupled tasks expressed as Signal Temporal Logic (STL) constraints. Classical control theoretical tools addressing STL constraints either adopt a limited fragment of the STL formula or require approximations of min/max operators, whereas works maximising robustness through optimisation-based methods often suffer from local minima, relaxing any completeness arguments due to the NP-hard nature of the problem. Endowed with probabilistic guarantees, MAPS\({}^{2}\) provides an anytime algorithm that iteratively improves the robots' trajectories. The algorithm selectively imposes spatial constraints by taking advantage of the temporal properties of the STL. The algorithm is distributed, in the sense that each robot calculates its trajectory by communicating only with its immediate neighbours as defined via a communication graph. We illustrate the efficiency of MAPS\({}^{2}\) by conducting extensive simulation and experimental studies, verifying the generation of STL satisfying trajectories.
## I Introduction
Autonomous robots possess the capability to solve significant problems when provided with a set of guidelines. These guidelines can be derived from either the physical constraints of the robot, such as joint limits, or imposed as human-specified requirements, such as pick-and-place tasks. An efficient method of imposing such guidelines is the utilisation of logic-based tools, which enable reasoning about the desired behaviour of robots. These tools help us describe the behaviour of a robot at various levels of abstraction, ranging from interactions between its internal components to the overall high-level behaviour of the robot [1]. This strong expressivity helps us efficiently encode complex mission specifications into a logical formula. Recent research has focused on utilising these logic-based tools to express maximal requirements on the behaviour of robots. Once these requirements are established, algorithms are developed to generate trajectories that satisfy them.
Examples of logic-based tools include formal languages, such as Linear Temporal Logic (LTL), Metric Interval Temporal Logic (MITL), and Signal Temporal Logic (STL). The main distinguishing feature between these logics is their ability to encode time. While LTL operates in the discrete-time and discrete-space domain, MITL operates in the continuous-time domain but only enforces qualitative space constraints. On the other hand, STL allows for the expression of both qualitative and quantitative semantics of the system in both continuous-time and continuous-space domains [2]. STL thus provides a natural and compact way to reason about a robot's motion as it operates in a continuously evolving space-time environment. Although STL presents certain challenges, such as not being directly reducible to a discrete-time, discrete-space paradigm for which tractable planning algorithms already exist, we consider STL-based tasks due to the additional logical structures provided by STL over continuous-time signals and the availability of a robustness metric to determine the degree of satisfaction of a formula [3].
Another important property of autonomous robots is their ability to coordinate and work in teams. The use of multiple robots is often necessary in situations where a single robot is either insufficient, the task is high-energy demanding, or unable to physically perform certain tasks. However, multi-robot systems present their own set of challenges, such as communication overload, the need for a central authority for commands, and high computational demands. The challenge, therefore, is to derive solutions for multi-robot problems utilising logic-based tools, ensuring the achievement of specified high-level behaviour. The problem becomes more complex when interdependent constraints are imposed between the robots. This complexity is amplified when using a purely distributed approach, i.e., when each robot computes its own actions by communicating only with its neighbours, without
Figure 1: Experimental setup with three mobile bases and two 6-dof manipulators
access to the actions of other robots. Such an approach is necessary to prevent communication network congestion and minimise computational overheads.
In this article, we propose an algorithm that addresses the multi-robot motion planning problem subject to coupled STL tasks. The algorithm encodes these constraints into an optimisation function and selectively activates them based on the temporal requirements of the STL formula. While doing so, each robot only communicates with its neighbours and iteratively searches for STL satisfying trajectories. The algorithm proposed is called **MAPS\({}^{2}\)** - Multi-Robot Anytime Motion Planning under **S**ignal Temporal Logic Specifications. The article's contributions are summarised as follows:
* The algorithm ensures distributed trajectory generation to satisfy STL formulas that consist of coupled constraints for multiple robots.
* It reduces conservatism by eliminating the need for approximations, samples in continuous time to avoid abstractions, and achieves faster convergence by warm starting with initial trajectories.
* It incorporates a wide range of coupled constraints (both linear and nonlinear) into the distributed optimisation framework, enabling the handling of diverse tasks such as pick-and-place operations and time-varying activities like trajectory tracking.
* We present extensive simulation and hardware experiments that demonstrate the execution of complex tasks using MAPS\({}^{2}\).
Additionally, the algorithm presented is sound, meaning that any trajectory it returns satisfies the STL formula, and complete, meaning that it will find such a trajectory if one exists.
In our prior study [4], we addressed the STL motion planning problem for two coupled agents. There, we extended the conventional Rapidly-exploring Random Trees (RRT) algorithm to sample in both the time and space domains. Our approach incrementally built spatio-temporal trees through which we enforced space and time constraints as specified by the STL formula. The algorithm employed a sequential planning method, wherein each agent communicated and waited for the other agent to build its tree. In contrast, the present work addresses the STL motion planning problem for multiple robots. Here, our algorithm adopts a distributed optimisation-based approach, where spatial and temporal aspects are decoupled to satisfy the STL formula. Instead of constructing an incremental tree, as done in the previous work, we introduce a novel metric called the _validity domain_ and initialise the process with an initial trajectory. In the current research, we only incorporate the STL parse tree and the Satisfaction variable tree from our previous work (Section III-B here). Additionally, we present experimental validation results and introduce a novel STL verification architecture.
The rest of the paper is organised as follows. Section II presents the related work, Section III presents the notations and necessary preliminaries, Section IV formulates the problem of this work, Section V presents the STL inclusion along with the underlying assumptions and important definitions, Section VI presents the main algorithm MAPS\({}^{2}\) along with analyses of the algorithm, Section VII presents simulations, and Section VIII presents the experimental validation on a real multi-robot setup. Finally, Section IX concludes the paper.
## II Related Work
In the domain of single-agent motion planning, different algorithms have been proposed to generate safe paths for robots. Sampling-based algorithms, such as CBF-RRT [5], have achieved success in providing a solution to the motion planning problem in dynamic environments. However, they do not consider high-level complex mission specifications. Works that impose high-level specifications in the form of LTL, such as [6, 7, 8, 9], resort to a hybrid hierarchical control regime resulting in abstraction and explosion of state-space. While a mixed integer program can encode this problem for linear systems and linear predicates [10], the resulting algorithm has exponential complexity, making it impractical for high-dimensional systems, complex specifications, and long duration tasks. To address this issue, [11] proposes a more efficient encoding for STL to reduce the exponential complexity in binary variables. Additionally, [12] introduces a new metric, discrete average space robustness, and composes a Model Predictive Control (MPC) cost function for a subset of STL formulas.
In multi-agent temporal logic control, works such as [13, 14] employ workspace discretisation and abstraction techniques, which we avoid in this article due to it being computationally demanding. Some approaches to STL synthesis involve using mixed-integer linear programming (MILP) to encode constraints, as previously explored in [15, 16, 17]. However, MILPs are computationally intractable when dealing with complex specifications or long-term plans because of the large number of binary variables required in the encoding process. The work in [18] encodes a new specification called multi-agent STL (MA-STL) using mixed integer linear programs (MILP). However, the predicates here depend only on the states of a single agent, can only represent polytope regions, and finally, temporal operations can only be applied to a single agent at a time. In contrast, this work explores coupled constraints between robots and predicates are allowed to be of nonlinear nature.
As a result, researchers have turned to transient control-based approaches such as gradient-based, neural network-based, and control barrier-based methods to provide algorithms to tackle the multi-robot STL satisfaction problem [11]. Such approaches, at the cost of imposing dynamical constraints on the optimisation problem, often resort to using smooth approximations of temporal operators at the expense of completeness arguments or end up considering only a smaller fragment of the syntax [19, 20, 21, 22]. STL's robust semantics are used to construct cost functions to convert a synthesis problem to an optimisation problem that benefits from gradient-based solutions. However, such approaches result in non-smooth and non-convex problems and solutions are prone to local minima [23]. In this work, we avoid approximations and consider the full expression of the STL syntax. The proposed solution adopts a purely geometrical approach to the multi-robot STL planning problem.
## III Notations and Preliminaries
The set of natural numbers is denoted by \(\mathbb{N}\) and the set of real numbers by \(\mathbb{R}\). With \(n\in\mathbb{N}\), \(\mathbb{R}^{n}\) is the set of \(n\)-coordinate real-valued vectors and \(\mathbb{R}^{n}_{+}\) is the set of real \(n\)-vector with non-negative elements. The cardinality of a set \(A\) is denoted by \(|A|\). If \(a\in\mathbb{R}\) and \([b,c]\in\mathbb{R}^{2}\), the Kronecker sum is defined as \(a\oplus[b,c]=[a+b,a+c]\in\mathbb{R}^{2}\). We further define the Boolean set as \(\mathbb{B}=\{\top,\bot\}\) (True, False). The acronym _DOF_ stands for degrees of freedom.
### _Signal Temporal Logic (STL)_
Let \(\mathbf{x}:\mathbb{R}_{+}\rightarrow\mathbb{R}^{n}\) be a continuous-time signal. Signal temporal logic [2] is a predicate-based logic with the following syntax:
\[\varphi=\top\ |\ \mu^{h}\ |\ \neg\varphi\ |\ \varphi_{1}\mathcal{U}_{[a,b]} \varphi_{2}\ |\ \varphi_{1}\wedge\varphi_{2} \tag{1}\]
where \(\varphi_{1},\ \varphi_{2}\) are STL formulas and \(\mathcal{U}_{[a,b]}\) encodes the operator _until_, with \(0\leq a<b<\infty\); \(\mu^{h}\) is a predicate of the form \(\mu^{h}:\mathbb{R}^{N}\rightarrow\mathbb{B}\) defined by means of a real-valued predicate function \(h:\mathbb{R}^{N}\rightarrow\mathbb{R}\) as
\[\mu^{h}=\begin{cases}\top&h(\mathbf{x}(t))\leq 0\\ \bot&h(\mathbf{x}(t))>0\end{cases}. \tag{2}\]
The satisfaction relation \((\mathbf{x},t)\models\varphi\) indicates that signal \(\mathbf{x}\) satisfies \(\varphi\) at time \(t\) and is defined recursively as follows:
\[(\mathbf{x},t)\models\mu^{h} \Leftrightarrow h(\mathbf{x}(t))\leq 0\] \[(\mathbf{x},t)\models\neg\varphi \Leftrightarrow\neg((\mathbf{x},t)\models\varphi)\] \[(\mathbf{x},t)\models\varphi_{1}\wedge\varphi_{2} \Leftrightarrow(\mathbf{x},t)\models\varphi_{1}\wedge(\mathbf{x},t) \models\varphi_{2}\] \[(\mathbf{x},t)\models\varphi_{1}\mathcal{U}_{[a,b]}\varphi_{2} \Leftrightarrow\exists t_{1}\in[t+a,t+b]\text{ s.t. }(\mathbf{x},t_{1})\models\varphi_{2}\] \[\wedge\forall t_{2}\in[t,t_{1}],(\mathbf{x},t_{2})\models\varphi_{ 1}.\]
We also define the operators _disjunction_, _eventually_, and _always_ as \(\varphi_{1}\vee\varphi_{2}\equiv\neg(\neg\varphi_{1}\wedge\neg\varphi_{2})\), \(\mathcal{F}_{[a,b]}\varphi\equiv\top\mathcal{U}_{[a,b]}\varphi\), and \(\mathcal{G}_{[a,b]}\varphi\equiv\neg\mathcal{F}_{[a,b]}\neg\varphi\), respectively. Each STL formula is valid over a time horizon defined as follows.
**Definition 1** ([24] ).: _The time horizon \(\mathrm{th}(\varphi)\) of an STL formula \(\varphi\) is recursively defined as,_
\[\mathrm{th}(\varphi)=\begin{cases}0,&\text{if }\varphi=\mu\\ \mathrm{th}(\varphi_{1}),&\text{if }\varphi=\neg\varphi_{1}\\ \max\{\mathrm{th}(\varphi_{1}),\mathrm{th}(\varphi_{2})\},&\text{if }\varphi= \varphi_{1}\wedge\varphi_{2}\\ b+\max\{\mathrm{th}(\varphi_{1}),\mathrm{th}(\varphi_{2})\},&\text{if }\varphi= \varphi_{1}\mathcal{U}_{[a,b]}\varphi_{2}.\end{cases} \tag{3}\]
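For instance, for the nested formula \(\mathcal{F}_{[5,10]}\mathcal{G}_{[0,2]}\mu^{h}\), written in terms of \(\mathcal{U}\) through the derived operators above, Definition 1 gives

\[\mathrm{th}\big(\mathcal{F}_{[5,10]}\,\mathcal{G}_{[0,2]}\,\mu^{h}\big)=10+\mathrm{th}\big(\mathcal{G}_{[0,2]}\,\mu^{h}\big)=10+\big(2+\mathrm{th}(\mu^{h})\big)=12,\]

so the formula constrains the signal over the first \(12\) time units.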
In this work, we consider only time bounded temporal operators, i.e., when \(\mathrm{th}(\varphi)<\infty\). In the case of unbounded STL formulas, it is only possible to either falsify an _always_ operator or satisfy an _eventually_ operator in finite time, thus we consider only bounded time operators.
### _STL Parse tree_
An STL parse tree is a tree representation of an STL formula [4]. It can be constructed as follows:
* Each node is either a temporal operator node \(\{\mathcal{G}_{I},\mathcal{F}_{I}\}\), a logical operator node \(\{\vee,\wedge,\neg\}\), or a predicate node \(\{\mu^{h}\}\), where \(I\subset\mathbb{R}\) is a closed interval;
* temporal and logical operator nodes are called _set_ nodes;
* a root node has no parent node and a leaf node has no child node. The leaf nodes constitute the predicate nodes of the tree.
A path in a tree is a sequence of nodes that starts at a root node and ends at a leaf node. The set of all such paths constitutes the entire tree. A subpath is a path that starts at a set node and ends at a leaf node; a subpath could also be a path. The resulting formula from a subpath is called a subformula of the original formula. In the following, we denote any subformula of an STL formula \(\varphi\) by \(\bar{\varphi}\). Each set node is accompanied by a satisfaction variable \(\tau\in\{+1,-1\}\) and each leaf node is accompanied by a predicate variable \(\pi=\mu^{h}\) where \(h\) is the corresponding predicate function. A signal \(\mathbf{x}\) satisfies a subformula \(\bar{\varphi}\) if \(\tau=+1\) corresponding to the set node where the subpath of \(\bar{\varphi}\) begins. Subsequently, \(\tau(\textit{root})=+1\Leftrightarrow(\mathbf{x},t)\models\varphi\) where _root_ is the root node of \(\varphi\). An analogous tree of satisfaction and predicate variables can be drawn, called _satisfaction variable tree_. The satisfaction variable tree borrows the same tree structure as the STL parse tree. Each set node from the STL parse tree maps uniquely to a satisfaction variable \(\tau_{i}\) and each leaf node maps uniquely to a predicate variable \(\pi_{i}\), where \(i\) is an enumeration of the nodes in the satisfaction variable tree. An example of construction of such trees is shown below.
**Example 1**.: _The STL parse tree and the satisfaction variable tree for the STL formula_
\[\varphi=\mathcal{F}_{I_{1}}\Big{(}\mu^{h_{1}}\vee\mathcal{G}_{I_{2}}(\mu^{h_{2} })\Big{)}\wedge\mathcal{G}_{I_{3}}\mathcal{F}_{I_{4}}(\mu^{h_{3}})\wedge\mathcal{ G}_{I_{5}}(\mu^{h_{4}}). \tag{4}\]
_are shown in Figure 2. From the trees, one obtains the implications \(\tau_{2}=+1\implies(\mathbf{x},t)\models\mathcal{F}_{I_{1}}\Big{(}\mu^{h_{1}} \vee\mathcal{G}_{I_{2}}(\mu^{h_{2}})\Big{)}\), and \(\tau_{7}=+1\implies(\mathbf{x},t)\models\mathcal{G}_{I_{5}}(\mu^{h_{4}})\)._
Figure 2: STL parse tree and satisfaction variable tree for the formula in (4).
## IV Problem Formulation
We start by presenting the multi-robot system and defining the types of constraints, followed by presenting the topology and finally stating the problem being addressed.
### _Multi-robot system_
Consider a multi-robot system with states \(\mathbf{x}=[\mathbf{x}_{1}^{\top},\mathbf{x}_{2}^{\top},\ldots]^{\top}\) where each robot \(i\in\mathbf{V}=\{1,\ldots,\mathbf{N}\}\) consists of states \(\mathbf{x}_{i}\in\mathbb{R}^{n_{i}}\) subject to state constraints \(\mathbf{x}_{i}\in\mathcal{X}_{i}\subseteq\mathbb{R}^{n_{i}}\). Some robots further need to satisfy coupled state constraints \(\mathbf{x}_{i}\in\mathcal{X}_{ij}(\mathbf{x}_{j})\). Then, we call the robot \(j\) a neighbour of the robot \(i\) and write \(j\in\mathcal{N}_{i}\) where \(\mathcal{N}_{i}\) is the set of all such neighbours of robot \(i\). These state constraints are the task specifications derived from the STL formula i.e., they represent tasks to be executed by the robots. Additionally, these constraints can entail obstacle avoidance and are therefore outlined within the STL formula. The state constraints are defined by the inequalities
\[{}_{\alpha}h_{i}(\mathbf{x}_{i}) \leq 0,\quad\alpha\in\{1,2,\ldots,r_{i}\} \tag{5}\] \[{}_{\beta}h_{ij}(\mathbf{x}_{i},\mathbf{x}_{j}) \leq 0,\quad j\in\mathcal{N}_{i},\quad\beta\in\{1,2,\ldots,s_{i}\}\]
where \({}_{\alpha}h_{i}\) and \({}_{\beta}h_{ij}\) are continuous functions; \({}_{\alpha}h_{i}:\mathbb{R}^{n_{i}}\rightarrow\mathbb{R}\) specifies constraints on robot \(i\) and \(r_{i}\) is the number of such constraints, and, \({}_{\beta}h_{ij}:\mathbb{R}^{n_{i}}\times\mathbb{R}^{n_{j}}\rightarrow\mathbb{R}\) specifies coupled constraints between robots \(i\) and \(j\) with \(s_{i}\) being the number of such constraints. The state constraint sets are then defined as,
\[\mathcal{X}_{i} :=\{x_{i}\in\mathbb{R}^{n_{i}}|_{\alpha}h_{i}(\mathbf{x}_{i})\leq 0\}\] \[\mathcal{X}_{ij}(\mathbf{x}_{j}) :=\{x_{i}\in\mathbb{R}^{n_{i}}|_{\beta}h_{ij}(\mathbf{x}_{i}, \mathbf{x}_{j})\leq 0\}.\]
We further consider that \({}_{\beta}h_{ij}(\mathbf{x}_{i},\mathbf{x}_{j})=_{\beta}h_{ji}(\mathbf{x}_{j}, \mathbf{x}_{i})\). The inequalities \({}_{\alpha}h_{i}\) and \({}_{\beta}h_{ij}\) are predicate function inequalities of the form (2). A predicate function inequality \(h(\mathbf{x}_{i}(t))\leq 0\) corresponds to state constraint \({}_{\alpha}h_{i}(\mathbf{x}_{i})\leq 0\) and \(h(\mathbf{x}_{i}(t),\mathbf{x}_{j}(t))\leq 0\) corresponds to the state constraint \({}_{\beta}h_{ij}(\mathbf{x}_{i},\mathbf{x}_{j})\leq 0\). Next, we state a common assumption regarding the STL formula.
**Assumption 1**.: _The STL formula is in positive normal form i.e., it does not contain the negation operator._
The above assumption does not cause any loss of expression of the STL syntax (1). As shown in [17], any STL formula can be written in positive normal form by moving the negation operator to the predicate.
### _Graph topology_
The coupled state constraints \({}_{\beta}h_{ij}\) define an undirected graph over the multi-robot system. The graph is given by \(\mathbf{G}=\{\mathbf{V},\mathbf{E}\}\) where \(\mathbf{E}=\{(i,j)|j\in\mathcal{N}_{i}\}\) is the set of edges; \(\mathbf{E}\) defines the communication links between the robots in \(\mathbf{V}\).
### _Problem statement_
Let \(\mathcal{W}\subset\mathbb{R}^{\sum_{i}n_{i}}\) be defined as the workspace in which the robots operate, and let \(\mathcal{S}\subseteq\mathcal{W}\) be a compact set where a trajectory \(\mathbf{y}:[0,\mathrm{th}(\varphi)]\rightarrow\mathcal{S}\) satisfies the STL formula (as in (1)). The set \(\mathcal{S}\) is referred to as the satisfiable set. It is assumed that obstacles are defined in the STL formula, making \(\mathcal{S}\) the free space and ensuring that any continuous trajectory within \(\mathcal{S}\) satisfies the STL formula. Moreover, we have the following feasibility assumption:
**Assumption 2**.: _The set \(\mathcal{S}\) is nonempty, i.e., \(\mathcal{S}\neq\emptyset\)._
We consider the following problem formulation.
**Problem 1**.: _Given an STL formula \(\varphi\) that specifies tasks in a multi-robot system with \(\mathbf{N}\) robots, design a distributed algorithm that finds the joint trajectory \(\mathbf{y}=[\mathbf{y}_{1}^{\top},\mathbf{y}_{2}^{\top},\ldots,\mathbf{y}_{\mathbf{N}}^{\top}]^{\top}:[0,\mathrm{th}(\varphi)]\rightarrow\mathcal{S}\), where each robot \(i\in\{1,\ldots,\mathbf{N}\}\) computes its own trajectory \(\mathbf{y}_{i}\) by communicating only with its neighbours \(j\in\mathcal{N}_{i}\)._
## V STL Inclusion
This section presents the STL inclusion within our problem framework. First, we delve into including spatial constraints in Section V-A, followed by temporal inclusion in Section V-B.
### _Distributed Optimisation_
The planning problem is solved in a distributed way where each robot maximises a local optimality criterion. All robots solve their local optimisation problem by communicating with their neighbours. For robot \(i\), the constraints (5) are cast into the cost function \(F^{i}\) as
\[F^{i}=\sum_{\alpha=1}^{r_{i}}\frac{1}{2}\max\left(0,{}_{\alpha}h_{i}\right)^{2 }+\sum_{\beta=1}^{s_{i}}\frac{1}{2}\max\left(0,{}_{\beta}h_{ij}\right)^{2}, \tag{6}\]
and the resulting optimisation problem takes the form
\[\min F^{i} \tag{7}\]
whose solution \(\mathbf{x}_{i}^{\star}\) satisfies \(F^{i}(\mathbf{x}_{i}^{\star})=0\). The robots solve their respective optimisation problems cooperatively via inter-neighbour communication; the problem is distributed because every interaction between robots is part of the communication graph. The optimisation problem is used in the main algorithm presented in Section VI, Algorithm 1, where we detail the approach to solving (7). Given the nature of the optimisation problem, there is a trade-off between robustness and optimisation performance: \(\mathbf{x}^{\star}\) converges to the boundaries imposed by the STL formula constraints, which makes the resulting plan vulnerable to perturbations. Introducing a slack variable can enhance robustness, albeit at the cost of sacrificing completeness arguments.
**Example 2**.: _Consider a system with 3 agents and the corresponding states \(\{x_{1},x_{2},x_{3}\}\), and let the STL formula be: \(\varphi=(\|x_{1}-x_{2}\|>5)\ \wedge(\|x_{2}-x_{3}\|<2)\); then, the functions \(F^{i}\), for \(i\in\{1,2,3\}\), are,_
\[F^{1} =\frac{1}{2}\max(0,5-\|x_{1}-x_{2}\|)^{2}\] \[F^{2} =\frac{1}{2}\max(0,5-\|x_{1}-x_{2}\|)^{2}+\frac{1}{2}\max(0,\|x_ {2}-x_{3}\|-2)^{2}\] \[F^{3} =\frac{1}{2}\max(0,\|x_{2}-x_{3}\|-2)^{2}.\]
Note that the optimisation problem here only considers the spatial aspect and temporal inclusion is discussed below.
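For concreteness, the following is a minimal numerical sketch of agent 2's local cost from Example 2 together with a plain gradient-descent loop of the kind used to solve (7); the finite-difference gradient, the step size, and the stopping tolerance are illustrative choices rather than the implementation used in this work.

```python
import numpy as np

# Local cost of agent 2 from Example 2:
#   F2 = 1/2 max(0, 5 - ||x1 - x2||)^2 + 1/2 max(0, ||x2 - x3|| - 2)^2
# x1 and x3 are the neighbours' states, received via communication.
def F2(x2, x1, x3):
    c1 = max(0.0, 5.0 - np.linalg.norm(x1 - x2))   # enforces ||x1 - x2|| > 5
    c2 = max(0.0, np.linalg.norm(x2 - x3) - 2.0)   # enforces ||x2 - x3|| < 2
    return 0.5 * c1**2 + 0.5 * c2**2

def numerical_grad(f, x, eps=1e-6):
    g = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = eps
        g[k] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

# Plain gradient descent with step size delta and gradient-norm tolerance.
x1, x3 = np.array([0.0, 0.0]), np.array([4.0, 0.0])
x2 = np.array([2.0, 0.0])
delta = 0.5
for _ in range(200):
    g = numerical_grad(lambda x: F2(x, x1, x3), x2)
    if np.linalg.norm(g) < 1e-3:
        break
    x2 = x2 - delta * g
print(F2(x2, x1, x3))   # approaches 0 once both constraints are met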
### _Validity Domain_
We now introduce the concept of _validity domain_, a time interval associated with every predicate and defined for every path in the STL formula. This interval represents the time domain over which each predicate applies and is defined as follows:
**Definition 2**.: _The validity domain \(\operatorname{vd}(\bar{\varphi})\) of each path \(\bar{\varphi}\) of an STL formula \(\varphi\), is recursively defined as_
\[\operatorname{vd}(\bar{\varphi})=\begin{cases}0,&\text{if }\bar{\varphi}=\mu^{h} \\ \operatorname{vd}(\bar{\varphi}_{1}),&\text{if }\bar{\varphi}=\neg\bar{\varphi}_{1} \\ [a,b],&\text{if }\bar{\varphi}=\mathcal{G}_{[a,b]}\mu^{h}\\ a\oplus\operatorname{vd}(\bar{\varphi}_{1}),&\text{if }\bar{\varphi}=\mathcal{G}_{[a,b]} \bar{\varphi}_{1},\bar{\varphi}_{1}\neq\mu^{h}\\ t^{\star}+T^{\star}\oplus\operatorname{vd}(\bar{\varphi}_{1}),&\text{if }\bar{ \varphi}=\mathcal{F}_{[a,b]}\bar{\varphi}_{1}.\end{cases} \tag{8}\]
where \(T^{\star}\in[a,b]\) and \(t^{\star}=\{t\mid(\bar{\varphi},t)\models\mathcal{F}_{[a,b]}\bar{\varphi}_{1}\}\) is a variable with initial value 0 that changes over time and captures the last instance of satisfaction for the eventually operator. This is necessary due to the redundancy of the eventually operator. We must ascertain the specific instances where the eventually condition is met to ensure finding a feasible trajectory. Additionally, we need to maintain the history of \(t^{\star}\) for nested temporal operators which require recursive satisfaction. The validity domain is determined for each path of an STL formula in a hierarchical manner, beginning at the root of the tree, and each path has a distinct validity domain. The number of leaf nodes in an STL formula is equivalent to the total number of validity domains. In Definition 2, we do not include the operators \(\wedge\) and \(\vee\) because they do not impose temporal constraints on the predicates and thus inherit the validity domains of their parent node. If there is no parent node, operators \(\wedge\) and \(\vee\) inherit the validity domains of their child node.
The validity domain is specially defined in the following cases. If a path contains only predicates, the validity domain of \(\mu^{h}\) is equal to the time horizon of \(\varphi\) (i.e., \(\operatorname{vd}(\mu^{h})=\operatorname{th}(\varphi)\)). Furthermore, if a path contains nested formulas with the same operators, such as \(\bar{\varphi}=\mathcal{G}_{[1,10]}\mathcal{G}_{[0,2]}\mu^{h}\), then the validity domain of \(\bar{\varphi}\) is equal to the time horizon of the path (i.e., \(\operatorname{vd}(\bar{\varphi})=\operatorname{th}(\bar{\varphi})\)). For example, \(\operatorname{vd}(\mathcal{G}_{[1,10]}\mathcal{G}_{[0,2]}\mu^{h})= \operatorname{th}(\bar{\varphi})=[1,12]\).
**Example 3**.: _Consider the following examples of the validity domain:_
* \(\varphi_{1}=\mathcal{G}_{[5,10]}\mu^{h}\)_, then_ \(\operatorname{vd}(\varphi_{1})=[5,10]\)_, which is the interval over which_ \(\mu^{h}\) _must hold._
* \(\varphi_{2}=\mathcal{F}_{[5,10]}\mu^{h}\)_, then_ \(t^{\star}=0\)_,_ \(T^{\star}\in[5,10]\) _and_ \(\operatorname{vd}(\mu^{h})=0\)_. Therefore,_ \(\operatorname{vd}(\varphi_{2})=T^{\star}\in[5,10]\) _is the instance when_ \(\mu^{h}\) _is required to hold._
* \(\varphi_{3}=\mathcal{F}_{[5,10]}\mathcal{G}_{[0,2]}\mu^{h}\)_, then_ \(t^{\star}=0\)_,_ \(T^{\star}\in[5,10]\)_,_ \(\operatorname{vd}(\mathcal{G}_{[0,2]}\mu^{h})=[0,2]\)_. Therefore,_ \(\operatorname{vd}(\varphi_{3})=0+T^{\star}\oplus[0,2]=[T^{\star},T^{\star}+2]\) _is the interval over which_ \(\mu^{h}\) _needs to hold such that_ \(\varphi_{3}\) _is satisfied._
* \(\varphi_{4}=\mathcal{G}_{[2,10]}\mathcal{F}_{[0,5]}\mu^{h}\)_, then_ \(a=2\) _and_ \(\operatorname{vd}(\varphi_{4})=2\oplus\operatorname{vd}(\mathcal{F}_{[0,5]} \mu^{h})=2+0+T^{\star}\) _where_ \(T^{\star}\in[0,5]\)_. For example, if_ \(T^{\star}=1\)_, then_ \(\operatorname{vd}(\varphi_{4})=3\) _is the time instance when_ \(\mu^{h}\) _needs to hold. Once_ \(\mu^{h}=\top\)_, then_ \(t^{\star}=T^{\star}\) _and the new_ \(\operatorname{vd}(\varphi_{4})=2+1+T^{\star}\) _where_ \(T^{\star}\in[0,5]\)_._
* \(\varphi_{5}=\mathcal{F}_{[0,100]}\mathcal{G}_{[5,10]}\mathcal{F}_{[0,1]}\mu^{h}\)_, then_ \(t^{\star}=0\)_,_ \(T^{\star}\in[0,100]\) _and_ \(\operatorname{vd}(\varphi_{5})=T^{\star}+a\oplus\operatorname{vd}(\mathcal{F}_{[ 0,1]}\mu^{h})\)_. Suppose_ \(T^{\star}=50\)_, then_ \(\operatorname{vd}(\varphi_{5})=55\oplus\operatorname{vd}(\mathcal{F}_{[0,1]} \mu^{h})\) _and so on._
_Regarding the STL formula in equation (4), the validity domains are defined for the following paths: \(\mathcal{F}_{I_{1}}\mu^{h^{1}},\ \mathcal{F}_{I_{1}}\mathcal{G}_{I_{2}}\mu^{h^{2}},\ \mathcal{G}_{I_{3}}\mathcal{F}_{I_{4}}\mu^{h^{3}}\), and \(\mathcal{G}_{I_{5}}\mu^{h^{4}}\)._
We use the following notational convenience in this work: if a parent node of a leaf node of a path \(\varphi\) is an _eventually_ operator we denote the corresponding validity domain by \(\operatorname{vd}^{F}()\), and, if the parent node of a leaf node of a path \(\varphi\) is an _always_ operator we denote the corresponding validity domain by \(\operatorname{vd}^{G}()\). The notation \(\operatorname{vd}^{F}()\) indicates that the predicate needs to hold at some instance in the said interval, and \(\operatorname{vd}^{G}()\) indicates that the predicate needs to hold throughout the interval.
In the next Section, we present how to integrate the validity domain with the optimisation problem in (7), completing thus the spatial and temporal integration.
## VI Main Results
In this section, we present the algorithm for generating continuous trajectories that meet the requirements of a given Signal Temporal Logic (STL) formula \(\varphi\). The algorithm is executed by the robots offline in a distributed manner, in the sense that they only communicate with their neighbouring robots. The algorithm builds a tree \(\mathcal{T}_{i}=\{\mathcal{V}_{i},\mathcal{E}_{i}\}\) for robot \(i\) where \(\mathcal{V}_{i}\) is the vertex set and \(\mathcal{E}_{i}\) is the edge set. Each vertex \(z\in\mathbb{R}_{+}\times\mathbb{R}^{n_{i}}\) is sampled from a space-time plane.
In what follows, we give a high-level description of the algorithm. The general idea is to start with an initial trajectory that spans the time horizon of the formula \(\operatorname{th}(\varphi)\), then repeatedly sample random points along the trajectory and use gradient-based techniques to find solutions that satisfy the specification at these points. More specifically, the algorithm begins by connecting the initial and final points \(z_{0}^{i}=\{0,\mathbf{x}_{0}^{i}\}\) and \(z_{f}^{i}=\{t_{f}^{i},\mathbf{x}_{f}^{i}\}\) with a single edge \(\mathcal{E}_{i}^{0}=(z_{0}^{i},z_{f}^{i})\). The algorithm then randomly selects a time instant \(t^{0}\in[0,\operatorname{th}(\varphi)]\) and uses linear interpolation to determine the states of each robot at that time, denoted by \(\mathbf{x}^{0}\). The robots then solve the distributed optimisation problem (7) to find new positions \(\mathbf{x}^{\star}\) that meet the specification at time \(t^{0}\). The algorithm then repeats this process at a user-specified time density, updating the trajectories as necessary. The result is a trajectory that asymptotically improves the task satisfaction of the STL formula.
**Example 4**.: _Before we get into the technical details, let us consider an example of 4 agents, represented by the colours blue, green, yellow and magenta, to illustrate the procedure. Suppose, at a specific instance in time, say \(t^{0}\), the STL formula requires agent 1 (blue) and agent 2 (green) to be more than 6 units apart and agent 3 (yellow) and agent 4 (magenta) to
be closer than 6 units i.e., for \(\epsilon>0\),_
\[G_{[t^{0}-\epsilon,t^{0}+\epsilon]}\Big{(}(\text{blue and green are farther than 6 units apart})\wedge\] \[(\text{yellow and magenta are closer than 6 units})\Big{)}\]
_We begin the process by connecting the initial and final points \(z_{0}^{i}\) and \(z_{f}^{i}\) with an initial trajectory for all agents, as shown in Figure 3(a). Each agent's vertex set is \(\mathcal{V}_{i}\) and consists of the start and end points denoted by \(z_{0}^{i}\) and \(z_{f}^{i}\) respectively, while its edge set is \(\mathcal{E}_{i}\) which contains only one edge connecting the start and end points. The initial trajectory of agent \(i\) is the tree \(\mathcal{T}_{i}=\{\mathcal{V}_{i},\mathcal{E}_{i}\}\). From the initial line trajectory, the algorithm randomly selects a point at time instance \(t^{0}\) from the entire time domain and uses linear interpolation to determine the state of each agent at that time. The agents solve (7) using the initial position \(\mathbf{x}^{0}\) to find new positions \(\mathbf{x}^{\star}\), as seen in Figure 3(b). As shown in Figure 3(c), the distributed optimisation problem (7) is solved, resulting in a solution \(\mathbf{x}^{\star}\), in which agent 1 and agent 2 are positioned so that they are more than 6 units apart and agent 3 and agent 4 remain undisturbed. The latter is the result of using functions of the form \(1/2\max(0,h_{ij})^{2}\), and since agent 3 and agent 4 already satisfy the requirements, i.e., \(h_{ij}<0\), the function evaluates to 0. The newly determined positions of agents 1 and 2 are added to the tree, allowing the trajectory to be shaped to meet the requirements. The updated trajectory can be seen in Figure 3(d). This process of randomly selecting a point in time, determining the state of the agents and updating their positions is repeated for a user-defined number of times \(L\), to ensure that the trajectory satisfies the STL formula \(\varphi\) throughout the time horizon._
### _The overall algorithm_
Here, we provide the main algorithm used to solve the problem at hand. The algorithm is called MAPS2 (short for '**m**ulti-robot **a**nytime motion **p**lanning under **s**ignal temporal logic **s**pecifications') and consists of the following functions: a function called GradientDescent() that addresses equation (7), a function called SatisfactionVariable() which calculates the satisfaction variables discussed in Section III-B, and a function called ValidityDomain() which calculates the intervals during which a predicate function is active. The algorithm is executed independently by each robot.
The architecture of the algorithm is depicted in Figure 4 and proceeds as follows: first, the algorithm starts with an STL formula \(\varphi\), along with the initial and final conditions. The initial conditions \(z_{0}^{i}=\{t_{0}^{i},\mathbf{x}_{0}^{i}\}\) depend on the robot's initial position and time. The final condition is chosen to be \(z_{f}^{i}=\{\mathrm{th}(\varphi)+\epsilon,\mathbf{x}_{f}^{i}\}\) where \(\epsilon>0\) and \(\mathbf{x}_{f}^{i}\in\mathbb{R}^{n_{i}}\) is a random vector. This allows the algorithm to enforce STL tasks at a time instance \(\mathrm{th}(\varphi)\). Additionally, all robots initialise a random seed, and determine their neighbours based on the coupled constraints. The algorithm requires a maximum number of nodes, step size and stopping criterion for the optimisation problem.
#### VI-A1 MAPS\({}^{2}\)
The algorithm is presented in Algorithm 1; it starts with an initial trajectory connecting \(z_{0}^{i}\) and \(z_{f}^{i}\) (see lines 1-3) and takes a random seed as input. Such a seed allows all robots to pick the same random number over
Figure 4: Architecture of the provided algorithm
Figure 3: Illustration of the proposed algorithm
the time horizon of the formula. It continues by repeatedly sampling a time point, interpolating states, using gradient descent to find a satisfactory solution, and expanding the tree with new vertices until the total number of vertices \(L\) is reached, see lines 5-14.
```
Input: Initial condition \(z_{0}^{i}=\{t_{0}^{i},\mathbf{x}_{0}^{i}\}\), final condition \(z_{f}^{i}=\{t_{f}^{i},\mathbf{x}_{f}^{i}\}\),
       maximum number of nodes \(L\), random seed, step size \(\delta\), stopping criterion \(\eta\)
Output: \(\mathcal{T}_{i}\)
 1  \(\mathcal{V}_{i}\leftarrow\mathcal{V}_{i}\cup z_{0}^{i}\cup z_{f}^{i}\)
 2  \(\mathcal{E}_{i}\leftarrow\mathcal{E}_{i}\cup\{z_{0}^{i},z_{f}^{i}\}\)
 3  \(\mathcal{T}_{i}\leftarrow\{\mathcal{V}_{i},\mathcal{E}_{i}\}\)
 4  \(j\leftarrow 0\)
 5  while \(j\leq L\) and \(\tau(\textit{root})\neq+1\) do
 6      \(t^{0}\leftarrow\) generate random number in \([t_{0}^{i},t_{f}^{i}]\)
 7      index \(\leftarrow\) SearchSort(\(\mathcal{V}_{i}\), \(t^{0}\))
 8      \(z_{\text{inter}}^{i}\leftarrow\) Interpolate(\(\mathcal{V}_{i}\), index)
 9      \(z_{\text{opt}}^{i},\tau\leftarrow\) GradientDescent(\(z_{\text{inter}}^{i}\), \(\delta\), \(L^{\prime}\), \(\eta\))
10      \(\mathcal{V}_{i}\leftarrow\mathcal{V}_{i}\cup z_{\text{opt}}^{i}\)
11      \(\mathcal{E}_{i}\leftarrow\mathcal{E}_{i}\setminus\{z_{\text{index}}^{i},z_{\text{index}+1}^{i}\}\)
12      \(\mathcal{E}_{i}\leftarrow\mathcal{E}_{i}\cup\{z_{\text{index}}^{i},z_{\text{opt}}^{i}\}\)
13      \(\mathcal{E}_{i}\leftarrow\mathcal{E}_{i}\cup\{z_{\text{opt}}^{i},z_{\text{index}+1}^{i}\}\)
14      \(\mathcal{T}_{i}\leftarrow\{\mathcal{V}_{i},\mathcal{E}_{i}\}\)
15      \(j\leftarrow j+1\)
16      if \(j=L\) then \(j\leftarrow 0\) and \(\forall\mathcal{F},\ \tau(\mathcal{F})=-1\)
```
**Algorithm 1** MAPS\({}^{2}\)
The searchSort() function separates the vertices \(\mathcal{V}_{i}\) into two sets based on their time values: one set with time values lower than \(t^{0}\) (the vertex with the highest time in this set is indexed with 'index'), and another with values greater than \(t^{0}\) (the vertex with the lowest time in this set is indexed with 'index \(+\) 1'). The corresponding vertices are \(z_{\text{index}}^{i}=\{t_{\text{index}}^{i},\mathbf{x}_{\text{index}}^{i}\}\) and \(z_{\text{index+1}}^{i}=\{t_{\text{index+1}}^{i},\mathbf{x}_{\text{index+1}}^{i }\}\). Then, the algorithm linearly interpolates in line 8 via the function \(\text{Interpolate()}\) to obtain the vertex \(z_{\text{inter}}^{i}=\{t^{0},\mathbf{x}_{\text{inter}}^{i}\}\). This is obtained by solving for \(\mathbf{x}_{\text{inter}}^{i}\) element-wise as the solution of
\[\mathbf{x}_{\text{inter}}^{i}=\Big{(}\frac{\mathbf{x}_{\text{index+1}}^{i}- \mathbf{x}_{\text{index}}^{i}}{t_{\text{index+1}}^{i}-t_{\text{index}}^{i}} \Big{)}(t^{0}-t_{\text{index}}^{i})+\mathbf{x}_{\text{index}}^{i}.\]
The vertex \(z_{\text{inter}}^{i}\) is the initial condition to solve the optimisation problem (7); and once a solution \(z_{\text{opt}}^{i}\) is obtained, it is added to the vertex set \(\mathcal{V}_{i}\) in line 10. The edge set \(\mathcal{E}_{i}\) is reorganised to include \(z_{\text{opt}}^{i}\) in lines 11-13. Additionally, as a safeguard, if no solution is found after \(L\) iterations, line 16 resets the satisfaction variable of all eventually operators to \(-1\) and begins the search again.
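A possible realisation of the sampling and interpolation step (lines 6-8 of Algorithm 1) is sketched below; the bisection-based search, the extra \(t^{0}\) argument to Interpolate, and the variable names are our own simplifications rather than the exact routines of MAPS\({}^{2}\).

```python
import bisect

# Vertices are (t, x) pairs kept sorted by time, as in the tree T_i.
def search_sort(vertices, t0):
    times = [t for t, _ in vertices]
    # 'index' is the last vertex with time below t0; 'index + 1' follows it.
    return bisect.bisect_right(times, t0) - 1

def interpolate(vertices, index, t0):
    (ta, xa), (tb, xb) = vertices[index], vertices[index + 1]
    w = (t0 - ta) / (tb - ta)
    x_inter = [a + w * (b - a) for a, b in zip(xa, xb)]
    return (t0, x_inter)

# Example: initial trajectory connecting z_0 = (0, x_0) and z_f = (t_f, x_f).
vertices = [(0.0, [0.0, 0.0]), (100.0, [1.0, 2.0])]
t0 = 25.0
idx = search_sort(vertices, t0)
z_inter = interpolate(vertices, idx, t0)
print(z_inter)   # (25.0, [0.25, 0.5]) -- the initial guess handed to GradientDescent
```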
#### VI-A2 GradientDescent
The function is presented in Function 2, and as the name suggests, GradientDescent() computes the optimal value, \(z_{\text{opt}}\), by solving the problem presented in equation (7). This allows the robots to compute vertices that locally satisfy the STL formula. In lines 17-21, we implement the standard gradient descent algorithm with a step size of \(\delta\) and a stopping criterion of \(\eta\), as described in Algorithm 9.3 of [25]. To evaluate the gradient, which depends on the states of the neighbouring robots, each robot communicates with its neighbours, as demonstrated in lines 1 and 19. This is the only instance of communication in the algorithm. In line 20, the function \(\text{GradientComputation()}\) computes the gradient, either analytically or numerically. Once \(z_{\text{opt}}\) is determined, the satisfaction variables are updated in Function 3. An additional stopping criterion is implemented in line 23 in case the problem does not converge when there are conflicting predicates at a specific time instance. This occurs,
for example, if \(\varphi=\mathcal{F}_{[0,5]}\mathcal{G}_{[0,5]}\mu_{1}\wedge\mathcal{G}_{[5,10]}\mu_ {2}\), and there is a conflict between \(\mu_{1}\) and \(\mu_{2}\). In such cases, it becomes necessary for \(\mu_{1}\) to be true exclusively within the interval \([0,5]\)[s], and for \(\mu_{2}\) to hold exclusively within the interval \([5,10]\)[s].
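As a bare-bones illustration (our simplification, not the listing itself), the descent routine can be organised as follows, with the satisfaction bookkeeping collapsed into a single flag and the stopping criterion \(\eta\) applied to the cost value, which vanishes at satisfaction:

```python
import numpy as np

# Sketch of the descent step: step size delta, iteration cap L' (aborting when
# conflicting predicates prevent convergence), and eta as a cost tolerance,
# since F^i(x*) = 0 holds at satisfaction.
def gradient_descent(F, grad_F, x0, delta, L_prime, eta):
    x = np.asarray(x0, dtype=float)
    for _ in range(L_prime):
        if F(x) <= eta:
            return x, +1            # locally satisfied: tau = +1
        x = x - delta * grad_F(x)   # neighbours' states enter through grad_F
    return x, -1                    # no convergence, e.g. conflicting predicates

# Toy usage with a smooth surrogate cost F(x) = 1/2 ||x - (1, 1)||^2.
F = lambda x: 0.5 * np.sum((x - 1.0) ** 2)
grad_F = lambda x: x - 1.0
x_opt, tau = gradient_descent(F, grad_F, [0.0, 0.0], delta=0.1, L_prime=1000, eta=1e-6)
print(x_opt, tau)   # approximately [1, 1] and +1
```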
Based on the validity domain, the algorithm determines which predicate functions are active in (6) at every sampled time instance. The function ValidityDomain() in line 3 calculates the validity domains based on Definition 2. Among the set of predicate functions \(\{h_{ij}|\forall j\in\mathcal{N}_{i}\}\) associated with a robot, a binary variable \(\lambda_{ij}\in\{0,1\}\) is assigned to determine whether a predicate function is active or not. It is set to \(1\) if the predicate is active and \(0\) otherwise. We distinguish the following cases: if the sampled point belongs to the validity domain of a single _eventually_ operator and/or a single _always_ operator, \(\lambda_{ij}=1\). If the sampled point belongs to the validity domain of multiple _eventually_ operators, we activate only one of them, that is, \(\lambda_{ij}=1\) only for one of them. This avoids enforcing conflicting predicates, as it can happen that multiple _eventually_ operators cannot be satisfied at the same time instance; see lines 6-13.
In lines 25-33, the algorithm updates the satisfaction variable of all paths in the STL formula that impose restrictions on agent \(i\)'s states. The algorithm goes bottom-up, starting from the **leaf** node to the **root** node. First, it determines if \(z^{i}_{opt}\) is the desired minimum in line 27, and in lines 28-32, the algorithm updates the satisfaction variable of all nodes in the path \(\bar{\varphi}\) through the function SatisfactionVariable(). If \(z^{i}_{\text{opt}}\) is not the desired minimum, then all the satisfaction variables of the path \(\bar{\varphi}\) are reset to \(-1\) in line 34. This could result from conflicting predicates at the same time instance.
```
Input: \(\bar{\varphi}\), \(z^{i}_{\text{opt}}=\{t^{0},\mathbf{x}^{i}\}\)
Output: \(\tau\), \(t^{\star}\)
 1  case \(\mathcal{F}_{I}\) do
 2      \(\tau(\mathcal{F}_{I})=+1\)
 3      \(t^{\star}=t^{0}\)
 4      return \(\tau,t^{\star}\)
 5
 6  case \(\mathcal{G}_{I}\) do
 7      if \(\text{robust}(\mathcal{G}_{I})\geq 0\) then
 8          \(\tau(\mathcal{G}_{I})=+1\)
```
**Function 3** SatisfactionVariable
Consider a sequential covering class1 of trajectory \(\mathbf{y}\), composed of elements \(Y_{\delta_{t}}(\mathbf{x}_{i})\). The length of \(Y_{\delta_{t}}(\mathbf{x}_{i})\) is \(\delta_{t}\) in the time domain and is centered at \(\mathbf{x}_{i}\). See Figure 6 for reference. A trial is counted as successful if we sample a point \(t^{0}\) within the interval \(\delta_{t}/2\) on either side of \(\mathbf{x}_{i}\), that is, within \(Y_{\delta_{t}}(\mathbf{x}_{i})\). If there are \(L\) successful trials, the entire trajectory \(\mathbf{y}\) is covered, and the motion planning problem is solved. Consider \(k\) total samples, where \(k\gg L\), and treat this as \(k\) Bernoulli trials with success probability \(p\), since each sample is independent with only two outcomes. We are now ready to state the following proposition.
Footnote 1: Meaning \(\mathbf{y}\subset\bigcup_{i=1}^{L}Y_{\delta_{t}}(\mathbf{x}_{i})\)
**Proposition 1**.: _Let \(L\) be a constant and \(p\) a success probability such that \(p<\frac{1}{2}\). Further, let \(k\) represent the number of samples taken by the \(\mathtt{MAPS}^{2}\) algorithm. Then, the probability that \(\mathtt{MAPS}^{2}\) fails to find a path after \(k\) samples is at most \(\frac{(k-L)p}{(kp-L)^{2}}\)._
Proof.: The probability of not having \(L\) successful trials after \(k\) samples can be expressed as:
\[\mathbf{P}[X_{k}\leq L]=\sum_{i=0}^{L-1}\binom{k}{i}p^{i}(1-p)^{k-i}\]
and according to [27], if \(p<\frac{1}{2}\), we can upper bound this probability as:
\[\mathbf{P}[X_{k}\leq L]\leq\frac{(k-L)p}{(kp-L)^{2}}.\]
As \(p\) and \(L\) are fixed and independent of \(k\), the expression \(\frac{(k-L)p}{(kp-L)^{2}}\) approaches 0 with increasing \(k\). Therefore, with uniform sampling, the algorithm \(\mathtt{MAPS}^{2}\) is probabilistically complete.
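To get a sense of scale with purely illustrative numbers (not taken from the experiments), let \(p=0.1\) and \(L=100\); after \(k=10^{4}\) samples the failure probability is bounded by

\[\frac{(k-L)p}{(kp-L)^{2}}=\frac{9900\cdot 0.1}{(1000-100)^{2}}=\frac{990}{810\,000}\approx 1.2\times 10^{-3},\]

and the bound decays like \(1/(kp)\) as \(k\) grows further.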
The effectiveness of \(\mathtt{MAPS}^{2}\) has been demonstrated through its ability to locate a solution if it exists, which makes it probabilistically complete. Furthermore, every position \(\mathbf{x}\) added to the tree is guaranteed to be in \(\mathcal{S}\), affirming the soundness of the algorithm.
## VII Simulations
In this section, we present simulations of various scenarios encountered in a multi-robot system. Restrictions are imposed using an STL formula and \(\mathtt{MAPS}^{2}\) is utilised to create trajectories that comply with the STL formula. In the following we consider 4 agents, with \(\delta=0.1\), \(\eta=0.01\) and \(L=L^{\prime}=100\). The simulations were run on an 8 core Intel(r) Core(tm) i7 1.9GHz CPU with 16GB RAM.
#### VII-1 Collision avoidance
We begin with a fundamental requirement in multi-robot systems: avoiding collisions. In this scenario, it is assumed that all agents can communicate or sense each other's positions. The following STL formula is used to ensure collision avoidance in the interval \(20\)[s] to \(80\)[s]:
\[\varphi=\mathcal{G}_{[20,80]}(\|x_{i}-x_{j}\|\geq 1)\]
where \(\{i,j\}\in\{\{1,2\},\{1,3\},\{1,4\},\{2,3\},\{2,4\},\{3,4\}\}\). As depicted in Figure 7(a), all four agents maintain a distance of at least 1 unit from each other during the interval \([20,80]\)[s]. The maximum computation time by any agent is \(0.1143\)[s].
#### VII-2 Rendezvous
The next scenario is rendezvous. We use the eventually operator to express this requirement. The STL formula specifies that agents 1 and 3 must approach each other within 1 distance unit during the interval \([40,60]\)[s] and similarly, agents 2 and 4 must meet at a minimum distance of 1 unit during the same interval. The STL formula is:
\[\varphi=\mathcal{F}_{[40,60]}(\|x_{1}-x_{3}\|\leq 1\wedge\|x_{2}-x_{4}\|\leq 1).\]
As seen in Figure 7(b), agents 1 and 3 and agents 2 and 4 approach each other within a distance of 1 unit during the specified interval. It is worth noting that the algorithm randomly selects the specific time \(t^{\star}\) within the continuous interval \([40,60]\)[s] at which the satisfaction occurs. The maximum computation time by any agent is \(0.0637\)[s].
#### VII-3 Stability
The last basic task is that of stability, which is represented by the STL formula \(\mathcal{F}_{[a_{1},b_{1}]}\mathcal{G}_{[a_{2},b_{2}]}\mu\). This formula requires \(\mu\) to hold throughout the interval \([t^{\star}+a_{2},t^{\star}+b_{2}]\) for some \(t^{\star}\in[a_{1},b_{1}]\), despite any transients that may occur in the interval \([a_{1},t^{\star})\), which is why it captures stability. Figure 7(c) presents a simulation of the following STL formula:
\[\varphi=\mathcal{F}_{[0,100]}\ \mathcal{G}_{[0,20]}\ \Big{(}(1.9\leq x_{1}\leq 2.1)\wedge(3.9\leq x_{2}\leq 4.1)\wedge\ldots\Big{)}\]
#### VII-5 Overall case study
In this case study, we demonstrate the application of the aforementioned scenarios by setting up the following tasks:
* Agent 1 is required to always stay above 8 units.
* Agents 2 and 4 are required to satisfy the predicate \(x_{2}^{2}+x_{4}^{2}\leq 2\) within the time interval \([10,30]\)[s].
* Agent 3 is required to track an exponential path within the time interval \([20,60]\)[s].
* Agent 2 is required to repeatedly visit Agent 1 and Agent 3 every \(10\)s within the interval \([30,50]\)[s].
* Agent 1 is required to maintain at least 1 unit distance from the other three agents within the interval \([80,100]\)[s].
The STL formula of the above tasks is as follows:
\[\varphi =(x_{1}\geq 8)\ \land\ \mathcal{G}_{[10,30]}(x_{2}^{2}+x_{4}^{2} \leq 2)\land\] \[\mathcal{G}_{[20,60]}(\|x_{3}-50\exp(-0.1t)\|\leq 0.05)\land\] \[\mathcal{G}_{[30,50]}\mathcal{F}_{[0,10]}\Big{(}(\|x_{2}-x_{1}\| \leq 0.5)\land(\|x_{2}-x_{3}\|\leq 0.5)\Big{)}\land\] \[\mathcal{F}_{[79.9,80.1]}\mathcal{G}_{[0,20]}\Big{(}(\|x_{1}-x_{2} \|\geq 1)\land(\|x_{1}-x_{3}\|\geq 1)\] \[\land(\|x_{1}-x_{4}\|\geq 1)\Big{)}\]
The parameter \(L\) was increased to 1000 and \(\eta\) was decreased to 0.001. In Figure 8, we show the resulting trajectories of each agent generated by MAPS\({}^{2}\) satisfying the above STL formula. The maximum computation time by any agent is \(4.611\)[s].
**Remark 1**.: _To guarantee completeness, our focus in this work is directed towards the planning problem, specifically the generation of trajectories that fulfil a given specification, rather than the mechanics of how the robot moves or the precise control techniques used to execute the trajectory. This approach leads to the production of non-smooth trajectories, as seen in the simulations. To address this, we can apply a smoothing procedure to the trajectories using B-Splines, taking into account the velocity and acceleration constraints of the robots, see [28]. Furthermore, to the best of our knowledge, there has been no prior study that tackles the distributed multi-robot STL planning problem under nonlinear, nonconvex coupled constraints; thus a comparison study is not included._
## VIII Experiments
We now present an experimental demonstration of the proposed algorithm. The multi-robot setup involves three robots, as shown in Figure 1, and consists of 3 mobile bases and two 6-DOF manipulator arms. The locations of the three bases are denoted as \(\mathbf{x}_{1}\in\mathbb{R}^{2}\), \(\mathbf{x}_{2}\in\mathbb{R}^{2}\), and \(\mathbf{x}_{3}\in\mathbb{R}^{2}\), respectively. Base 2 and base 3 are equipped with manipulator arms, whose end-effector positions are represented as \(\mathbf{e}_{1}\in\mathbb{R}^{3}\) and \(\mathbf{e}_{2}\in\mathbb{R}^{3}\), respectively.
The STL formula defining the tasks is the following,
\[\varphi=\|\mathbf{x}_{1}-\mathbf{x}_{2}\|\geq 0.6\land\| \mathbf{x}_{2}-\mathbf{x}_{3}\|\geq 0.6\land\|\mathbf{x}_{3}-\mathbf{x}_{1}\| \geq 0.6\land\] \[\mathcal{G}_{[10,125]}\|\mathbf{x}_{1}-1.8[-\cos 0.0698t,\sin(0.0698t)]^{ \top}\|\leq 0.05\land\] \[\mathcal{G}_{[30,70]}\|\mathbf{e}_{1}-[\mathbf{x}_{1}^{\top},0.3 5]^{\top}\|\leq 0.01\land\] \[\mathcal{G}_{[30,70]}\|\mathbf{x}_{2}-1.1[-\cos 0.0698t,\sin(0.0698t)]^{ \top}\|\leq 0.05\land\] \[\mathcal{G}_{[80,120]}\|\mathbf{e}_{2}-[\mathbf{x}_{1}^{\top},0. 35]^{\top}\|\leq 0.01\land\] \[\mathcal{G}_{[80,120]}\|\mathbf{x}_{3}-1.1[-\cos 0.0698t,\sin(0.0698t)]^{ \top}\|\leq 0.05\land\] \[\mathcal{F}_{[180,200]}\|\mathbf{x}_{1}-[0,0]^{\top}\|\leq 0.05\land\] \[\mathcal{F}_{[180,200]}\Big{(}\|\mathbf{x}_{2}-[1,-1]\|\leq 0.05 \land\|\mathbf{e}_{1}-[\mathbf{x}_{2},0.6]\|\leq 0.05\Big{)}\land\] \[\mathcal{F}_{[180,200]}\Big{(}\|\mathbf{x}_{3}-[-1,1]\|\leq 0.05 \land\|\mathbf{e}_{2}-[\mathbf{x}_{3},0.6]\|\leq 0.05\Big{)}.\]
The above task involves collision avoidance constraints that are always active given by the subformula \(\bar{\varphi}_{1}=(\|\mathbf{x}_{1}-\mathbf{x}_{2}\|\geq 0.6)\land(\| \mathbf{x}_{2}-\mathbf{x}_{3}\|\geq 0.6)\land(\|\mathbf{x}_{3}-\mathbf{x}_{1}\| \geq 0.6)\). Next, in the duration \([10,125]\)[s], base 1 surveils the arena and follows a circular time varying trajectory given by the subformula \(\bar{\varphi}_{2}=(\mathcal{G}_{[10,125]}\|\mathbf{x}_{1}-c_{1}(t)\|\leq 0.05)\) where \(c_{1}(t)\) is the circular trajectory. In the duration \([30,70]\)[s],
Figure 8: Overall case study
Figure 7: Simulation results of MAPS\({}^{2}\) with four agents.
end-effector 1 tracks a virtual point \(0.35\)[m] over base 1 to simulate a pick-and-place task, given by the subformula \(\bar{\varphi}_{3}=\mathcal{G}_{[30,70]}\|\mathbf{e}_{1}-[\mathbf{x}_{1}^{\top},0. 35]^{\top}\|\leq 0.01\wedge\mathcal{G}_{[30,70]}\|\mathbf{x}_{2}-c_{2}(t)\|\leq 0.05\) where \(c_{2}(t)\) is the circular trajectory. Similarly, in the duration \([80,120]\)[s], end-effector 2 takes over the task to track a virtual point \(0.35\)[m] over base 1, given by the subformula \(\bar{\varphi}_{4}=\mathcal{G}_{[80,120]}\|\mathbf{e}_{2}-[\mathbf{x}_{1}^{ \top},0.35]^{\top}\|\leq 0.01\wedge\mathcal{G}_{[80,120]}\|\mathbf{x}_{3}-c_{2}(t)\| \leq 0.05\). Finally, eventually in the duration \([180,200]\)[s], the robots assume a final position given by the subformula \(\bar{\varphi}_{5}=\mathcal{F}_{[180,200]}\|\mathbf{x}_{1}-[0,0]^{\top}\|\leq 0.05\wedge\mathcal{F}_{[180,200]}\Big{(}\|\mathbf{x}_{2}-[1,-1]\|\leq 0.05 \wedge\|\mathbf{e}_{1}-[\mathbf{x}_{2},0.6]\|\leq 0.05\Big{)}\wedge\mathcal{F}_{[180,200 ]}\Big{(}\|\mathbf{x}_{3}-[-1,1]\|\leq 0.05\wedge\|\mathbf{e}_{2}-[\mathbf{x}_{3},0.6] \|\leq 0.05\Big{)}.\]
The results are shown in Figure 9, where the x-axis represents time in seconds, and the y-axis represents the predicate functions defined by (5). The dashed line in the plots represents the predicate functions of the trajectories obtained by solving the optimisation problem (7), while the solid line represents the predicate functions of the actual trajectories executed by the robots. In the context of (5), negative values indicate task satisfaction. However, due to the lack of an accurate model of the robots and the fact that the optimisation solution converges to the boundary of the constraints, the tracking is imperfect, and we observe slight violations of the formula by the robots in certain cases. Nonetheless, the trajectories generated by the algorithm do not violate the STL formula. The coloured lines represent the functions that lie within the validity domain of the formula. Figure 9(a) shows that the collision constraint imposed on all 3 bases is not violated, and they maintain a separation of at least 60 cm. In Figure 9(b), base 1 tracks a circular trajectory in the interval \([10,125]\) seconds. In Figures 9(c) and 9(d), the end effectors mounted on top of bases 2 and 3 track a virtual point over the moving base 1 sequentially. In the last 20 seconds, the bases and end effectors move to their desired final positions, as seen in Figures 9(e) and 9(f). The maximum computation time by any robot is \(3.611\)[s]. Figure 10 shows front-view and side-view images at different time instances during the experimental run2.
Footnote 2: The video of the experiments can be found here: [https://youtu.be/YxiuPoerMg](https://youtu.be/YxiuPoerMg)
## IX Conclusion
This work proposed MAPS\({}^{2}\), a distributed planner that solves the multi-robot motion-planning problem subject to tasks encoded as STL constraints. By using the notion of validity domain and formulating the optimisation problem as shown in (7), MAPS\({}^{2}\) transforms the spatio-temporal problem into a spatial planning task, for which efficient optimisation algorithms already exist. Task satisfaction is probabilistically guaranteed in a distributed manner through an optimisation problem that requires communication only between robots that share coupled constraints. Extensive simulations with benchmark formulas and experiments with varied tasks highlight the algorithm's functionality. Future work involves incorporating dynamical constraints such as velocity and acceleration limits into the optimisation problem.
|
2309.15421 | Deep Learning in Deterministic Computational Mechanics | The rapid growth of deep learning research, including within the field of
computational mechanics, has resulted in an extensive and diverse body of
literature. To help researchers identify key concepts and promising
methodologies within this field, we provide an overview of deep learning in
deterministic computational mechanics. Five main categories are identified and
explored: simulation substitution, simulation enhancement, discretizations as
neural networks, generative approaches, and deep reinforcement learning. This
review focuses on deep learning methods rather than applications for
computational mechanics, thereby enabling researchers to explore this field
more effectively. As such, the review is not necessarily aimed at researchers
with extensive knowledge of deep learning -- instead, the primary audience is
researchers at the verge of entering this field or those who attempt to gain an
overview of deep learning in computational mechanics. The discussed concepts
are, therefore, explained as simple as possible. | Leon Herrmann, Stefan Kollmannsberger | 2023-09-27T05:57:19Z | http://arxiv.org/abs/2309.15421v1 | # Deep Learning in Deterministic Computational Mechanics
###### Abstract
The rapid growth of deep learning research, including within the field of computational mechanics, has resulted in an extensive and diverse body of literature. To help researchers identify key concepts and promising methodologies within this field, we provide an overview of deep learning in deterministic computational mechanics. Five main categories are identified and explored: simulation substitution, simulation enhancement, discretizations as neural networks, generative approaches, and deep reinforcement learning. This review focuses on deep learning methods rather than applications for computational mechanics, thereby enabling researchers to explore this field more effectively. As such, the review is not necessarily aimed at researchers with extensive knowledge of deep learning -- instead, the primary audience is researchers at the verge of entering this field or those who attempt to gain an overview of deep learning in computational mechanics. The discussed concepts are, therefore, explained as simple as possible.
keywords: Deep learning; Computational mechanics; Neural networks; Surrogate model; Physics-informed; Generative
###### Contents
* 1 Introduction
  * 1.1 Motivation
  * 1.2 Taxonomy of Deep Learning Techniques in Computational Mechanics
  * 1.3 Deep Learning
* 2 Simulation Substitution
  * 2.1 Data-Driven Modeling
    * 2.1.1 Space-Time Approaches
      * 2.1.1.1 Fully-Connected Neural Networks
      * 2.1.1.2 Image-To-Image Mapping
      * 2.1.1.3 Model Order Reduction Encoding
      * 2.1.1.4 Neural Operators
      * 2.1.1.5 Neural Network Approximation Power
      * 2.1.1.6 Active Learning & Transfer Learning
    * 2.1.2 Time-Stepping Procedures
      * 2.1.2.1 Recurrent Neural Networks
      * 2.1.2.2 Dynamic Mode Decomposition
  * 2.2 Physics-Informed Learning
    * 2.2.1 Space-Time Approaches
      * 2.2.1.1 Differential Equation Solving With Neural Networks
## 1 Introduction
### Motivation
In recent years, access to enormous quantities of data combined with rapid advances in machine learning has yielded outstanding results in computer vision, recommendation systems, medical diagnosis, and financial forecasting [1]. Nonetheless, the impact of learning algorithms reaches far beyond and has already found its way into many scientific disciplines [2].
The rapid interest in machine learning in general and within computational mechanics is well documented in the scientific literature. By considering the number of publications treating "Artificial Intelligence", "Machine Learning", "Deep Learning", and "Neural Networks", the interest can be quantified. Figure 1(a) shows the trend in all journals of Elsevier and Springer since 1999, while Figure 1(b) depicts the trend within the computational mechanics community by considering representative journals1 at Elsevier and Springer. The trends before 2017 differ slightly, with a steady growth in general but only limited interest within computational mechanics. However, around 2017, both curves show a shift in trend, namely a vast increase in publications highlighting the interest and potential prospects of artificial intelligence and its subtopics for a variety of applications.
Footnote 1: The considered journals are _Computer Methods in Applied Mechanics and Engineering, Computers & Mathematics with Applications, Computers & Structures, Computational Mechanics, Engineering with Computers, Journal of Computational Physics._
### Taxonomy of Deep Learning Techniques in Computational Mechanics
Due to the rapid growth [4] in deep learning research, as also seen in Figure 1(a), we provide an overview of the various deep learning methodologies in deterministic computational mechanics. Numerous review articles on deep learning for specific applications have already emerged (see [5, 6] for topology optimization, [7] for full waveform inversion, [8, 9, 10, 11, 12] for fluid mechanics, [13] for continuum mechanics, [14] for material mechanics, [15] for constitutive modeling, [16] for generative design, [17] for material design, and [18] for aeronautics)2. The aim of this work is, however, to focus on the general methods rather than applications, where similar methods are often applied to different problems. This has the potential to bridge gaps between scientific communities by highlighting similarities between methods and thereby establishing clarity on the state-of-the-art. We propose the following taxonomy in order to discuss the deep learning methods in a structured manner:
Figure 1: Number of publications concerning artificial intelligence and some of its subtopics since 1999. Illustration inspired by [3].
* **simulation substitution** (Section 2)
  * **data-driven modeling** (Section 2.1)
  * **physics-informed learning** (Section 2.2)
* **simulation enhancement** (Section 3)
* **discretizations as neural networks** (Section 4)
* **generative approaches** (Section 5)
* **deep reinforcement learning** (Section 6)
**Simulation substitution** replaces the entire simulation with a surrogate model, which in this work are neural networks (NNs). The model can be trained with supervised learning, which purely relies on labeled data and therefore is referred to as **data-driven modeling**. The generalization errors of these models can be reduced by **physics-informed learning**. Here, physics constraints are imposed on the learnable space such that only physically admissible solutions are learned.
**Simulation enhancement** instead only replaces components of the simulation chain, while the remaining parts are still handled by classical methods. Approaches within this category are strongly linked to their respective applications and will, therefore, be presented in the context of their specific use cases. Both data-driven and physics-informed approaches will be discussed.
Treating **discretizations as neural networks** is achieved by constructing a discretization from the basic building blocks of NNs, i.e., linear transformations and non-linear activation functions. Thereby, techniques within deep learning frameworks - such as automatic differentiation, gradient-based optimization, and efficient GPU-based parallelization - can be leveraged to improve classical simulation techniques.
**Generative approaches** deal with creating new content based on a data set. The goal is not, however, to recreate the data, but to generate statistically similar data. This is useful in diversifying the design space or enhancing a data set to train surrogate models.
Finally, in **deep reinforcement learning**, an agent learns how to interact with an environment in order to maximize rewards provided by the environment. In the case of deep reinforcement learning, the agent is modeled with NNs. In the context of computational mechanics, the environment is modeled by the governing physical equations. Reinforcement learning provides an alternative to gradient-based optimization, which is useful when gradient information is not available.
### Deep Learning
Before continuing with the topics specific to computational mechanics, NNs3 and the notation used throughout this work are briefly introduced. In essence, NNs are function approximators that are capable of approximating any continuous function [26]. The NN parametrized by the parameters \(\mathbf{\theta}\) learns a function \(\hat{y}=f_{NN}(x;\mathbf{\theta})\), which approximates the relation \(y=f(x)\). The NN is constructed with nested linear transformations in combination with non-linear activation functions \(\sigma\). The quality of prediction is determined by a cost function \(C(\hat{y})\), which is to be minimized. Its gradients \(\nabla_{\mathbf{\theta}}C\) with respect to the parameters \(\mathbf{\theta}\) are used within a gradient-based optimization [23, 27, 28] to update the parameters \(\mathbf{\theta}\) and thereby improve the prediction \(\hat{y}\). Supervised learning relies on labeled data \(x^{\mathcal{M}},y^{\mathcal{M}}\) to establish a cost function, while unsupervised learning does not rely on labeled data. The parameters defining the user-defined training algorithm and NN architecture are referred to as hyperparameters.
Footnote 3: see [23] for an in-depth treatment and PyTorch [24] or TensorFlow [25] for deep learning libraries
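As a minimal illustration of these ingredients (nested linear transformations, an activation \(\sigma\), a cost \(C\), and a gradient-based parameter update), the following PyTorch sketch evaluates \(\hat{y}=f_{NN}(x;\mathbf{\theta})\) and performs one plain gradient step; the architecture and learning rate are arbitrary choices for the sketch.

```python
import torch

# A fully-connected NN (FC-NN): nested linear transformations with activations sigma.
f_NN = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

x = torch.tensor([[0.5]])
y_hat = f_NN(x)                          # prediction y_hat = f_NN(x; theta)

C = ((y_hat - 1.0) ** 2).sum()           # some cost C(y_hat)
C.backward()                             # gradients nabla_theta C via automatic differentiation
with torch.no_grad():                    # one plain gradient-descent update of theta
    for theta in f_NN.parameters():
        theta -= 1e-2 * theta.grad
```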
**Notational Remark 1**: Although \(x\) and \(y\) may denote vector-valued quantities, we do not use bold-faced notation for them. Instead, this is reserved for all \(N\) degrees of freedom within a problem, i.e., \(\mathbf{x}=\{x_{i}\}_{i=1}^{N}\), \(\mathbf{y}=\{y_{i}\}_{i=1}^{N}\). This can, for instance, be in the form of a domain \(\Omega\) sampled with \(N\) grid points or systems composed of \(N\) degrees of freedom. Note however, that matrices will still be denoted with capital letters in bold face.
**Notational Remark 2**: A multitude of NN architectures will be discussed throughout this work, for which we introduce abbreviations and subscripts. Most prominent are fully-connected NNs \(F_{FNN}\) (FC-NNs) [29, 23], convolutional NNs \(f_{CNN}\) (CNNs) [30, 31, 32], recurrent NNs \(f_{RNN}\) (RNNs) [33, 34, 35], and graph NNs \(f_{GNN}\) (GNNs) [36, 37, 38]4. If the network architecture is independent of the method, the network is denoted as \(f_{NN}\).
Footnote 4: Another architecture worth mentioning, as it has recently been applied for regression [39, 40] are spiking NNs [41] specialized to run on neuromorphic hardware and thereby reduce memory and energy consumption. These are however not treated in this work.
## 2 Simulation Substitution
In the field of computational mechanics, numerical procedures are developed to solve or find partial differential equations (PDEs). A generic PDE can be written as
\[\mathcal{N}[u;\lambda]=0,\qquad\text{on }\Omega\times\mathcal{T}, \tag{1}\]
where a non-linear operator \(\mathcal{N}\) acts on a solution \(u(x,t)\) of a PDE as well as the coefficients \(\lambda(x,t)\) of the PDE in the spatio-temporal domain \(\Omega\times\mathcal{T}\). In the forward problem, the solution \(u(x,t)\) is to be computed, while the inverse problem considers either the non-linear operator \(\mathcal{N}\) or coefficients \(\lambda(x,t)\) as unknowns.
A further distinction is made between methods treating the temporal dimension \(t\) as a continuum, as in space-time approaches [42] (Sections 2.1.1 and 2.2.1)5, or in discrete sequential time steps, as in time-stepping procedures (Sections 2.1.2 and 2.2.2). For simplicity, but without loss of generality, time-stepping procedures will be presented on PDEs with a first order derivative with respect to time:
Footnote 5: Static problems without time-dependence can only be treated by the space-time approaches.
\[\frac{\partial u}{\partial t}=\mathcal{N}[u;\lambda],\qquad\text{on }\Omega \times\mathcal{T}. \tag{2}\]
Another task in computational mechanics is the forward modeling and identification of systems of ordinary differential equations (ODEs). For this, we will consider systems of the following form:
\[\frac{d\mathbf{x}(t)}{dt}=\mathbf{f}(\mathbf{x}(t)). \tag{3}\]
Here, \(\mathbf{x}(t)\) are the time-dependent degrees of freedom and \(\mathbf{f}\) is the right-hand side defining the system of equations.6 Both the forward problem of computing \(\mathbf{x}(t)\) and the inverse problem of identifying \(\mathbf{f}\) will be discussed in the following.
Footnote 6: Note that a spatial discretization of the PDE Equation (2) can also be written as a system of ODEs.
### 2.1 Data-Driven Modeling
Data-driven modeling relies entirely on labeled data \(x^{\mathcal{M}},y^{\mathcal{M}}\). The NN learns the mapping between \(x^{\mathcal{M}}\) and \(y^{\mathcal{M}}\) with \(\hat{y}_{i}=f_{NN}(x_{i};\mathbf{\theta})\). Thereby an interpolation to yet unseen datapoints is established. A data-driven loss \(\mathcal{L}_{\mathcal{D}}\), such as the mean squared error, for example, can be used as cost function \(C\).
\[C=\mathcal{L}_{\mathcal{D}}=\frac{1}{2N_{\mathcal{D}}}\sum_{i=1}^{N_{\mathcal{ D}}}||\hat{y}_{i}-y_{i}^{\mathcal{M}}||_{2}^{2} \tag{4}\]
#### 2.1.1 Space-Time Approaches
To declutter the notation, but without loss of generality, the temporal dimension \(t\) is dropped in this section, as it is possible to treat it like any other spatial dimension \(x\) in the scope of these methods. The goal of the upcoming methods is to either learn a forward operator \(\hat{u}=F[\lambda;x]\), an inverse operator for the coefficients \(\hat{\lambda}=I[u;x]\), or an inverse operator for the non-linear operator \(\hat{\mathcal{N}}=O[u;\lambda;x]\).7 The methods will be explained using the forward operator, but they apply analogously to the inverse operators. Only the inputs and outputs differ.
Footnote 7: Note that \(u\) might only be partially known on the domain \(\Omega\) for inverse problems.
The solution prediction \(\hat{u}_{i}\) at coordinate \(x_{i}\) or \(\mathbf{\hat{u}}_{i}\) on the entire domain \(\Omega\) is made based on a given set of coefficients \(\mathbf{\lambda}_{i}\). The cost function \(C\) is formulated analogously to Equation (4):
\[C=\mathcal{L}_{\mathcal{D}}=\frac{1}{2N_{\mathcal{D}}}\sum_{i=1}^{N_{\mathcal{ D}}}||\hat{u}_{i}-u_{i}^{\mathcal{M}}||_{2}^{2}\qquad\text{or}\qquad C= \mathcal{L}_{\mathcal{D}}=\frac{1}{2N_{\mathcal{D}}}\sum_{i=1}^{N_{\mathcal{ D}}}||\mathbf{\hat{u}}_{i}-\mathbf{u}_{i}^{\mathcal{M}}||_{2}^{2}. \tag{5}\]
##### 2.1.1.1 Fully-Connected Neural Networks
The simplest procedure is to approximate the operator \(F\) with a FC-NN \(F_{FNN}\).
\[\hat{u}(x)=F_{FNN}(\mathbf{\lambda};x;\mathbf{\theta}) \tag{6}\]
Example applications are flow classification [43, 44], fluid flow in turbomachinery [45], dynamic beam displacements from previous measurements [46], wall velocity predictions in turbulence [47], heat transfer [48], prediction of source terms in turbulence models [49], full waveform inversion [50, 51, 52], and topology optimization based on moving morphable bars [53]. The approach is however limited to simple problems, as an abundance of data is required. Therefore, several improvements have been proposed.
##### 2.1.1.2 Image-To-Image Mapping
One downside of the application of fully-connected NNs to problems in computational mechanics is that they often need to learn spatial relationships with respect to \(x\) from scratch. CNNs inherently account for these spatial relationships due to their kernel-based structure. Therefore, image-to-image mappings using CNNs have been proposed, where an image, i.e., a uniform grid (see Figure 2) of the coefficients \(\mathbf{\lambda}\), is used as input.
\[\mathbf{\hat{u}}=F_{CNN}(\mathbf{\lambda};\mathbf{\theta}) \tag{7}\]
This results in a prediction of the solution \(\mathbf{\hat{u}}\) throughout the entire image, i.e., the domain.
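A minimal sketch of such an image-to-image mapping in PyTorch is given below; the channel counts, kernel sizes, and grid resolution are illustrative assumptions, and the training loop with the data-driven loss from Equation (5) is omitted.

```python
import torch

# image-to-image mapping u_hat = F_CNN(lambda; theta) on a uniform grid
# (channel and layer sizes are illustrative assumptions)
f_cnn = torch.nn.Sequential(
    torch.nn.Conv2d(1, 16, kernel_size=3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 16, kernel_size=3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

lam = torch.rand(8, 1, 64, 64)   # batch of coefficient images lambda(x)
u_hat = f_cnn(lam)               # predicted solution field on the entire domain
print(u_hat.shape)               # torch.Size([8, 1, 64, 64])
```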
Applications include pressure and velocity predictions around airfoils [55, 56, 57, 58], stress predictions from geometries and boundary conditions [59, 60], steady flow predictions [61], detection of manufacturing features [62, 63], full waveform inversion [64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75], and topology optimization [76, 77, 78, 79, 80, 81, 82, 83, 84, 85]. An important choice in the design of the learning algorithm is the encoding of the input data. In the case of geometries and boundary conditions, binary representations are the most straightforward approach. These are however challenging for CNNs, as discussed in [61]. Signed distance functions [61] or simulations on coarse grids provide superior alternatives. For inverse problems, an initial forward simulation of an initial guess of the inverse field can be used to encode the desired boundary conditions [83, 84, 80, 85]. Another possibility for CNNs is a decomposition of the domain. The mapping can be performed on the full domain [86], smaller subdomains [87], or even individual pixels [88]. In the latter two cases, interfaces require special treatment.

Figure 2: Representation of nodes of a Cartesian grid as pixels in an image. Adapted from [54].
##### 2.1.1.3 Model Order Reduction Encoding
A disadvantage of CNN mappings is that they are constrained to uniform grids on rectangular domains. This can be circumvented by using GNNs, such as in [89, 90, 91], or point cloud-based NNs [92, 93], such as in [94]. To further aid the learning, the NN can be applied to a lower-dimensional space that is able to capture the data. For complex problems, mappings \(e\) to low-dimensional spaces (also referred to as latent space or latent vector) \(\mathbf{h}\) can be identified with model order reduction techniques. Thus, in the case of simulation substitution, a low-dimensional encoding \(\mathbf{h}^{\lambda}=e(\mathbf{\lambda})\) of \(\mathbf{\lambda}\) is identified. This is provided as input to a NN to predict the solution field \(\mathbf{h}^{u}\) in a reduced latent space. The full solution field \(\mathbf{u}\) is obtained in a decoding \(d=e^{-1}\) step. The prediction is given as
\[\mathbf{\hat{u}}=d(\mathbf{\hat{h}}^{u})=d\big{(}F_{NN}(\mathbf{h}^{\lambda};\mathbf{\theta}) \big{)}=d\Big{(}F_{NN}\big{(}e(\mathbf{\lambda});\mathbf{\theta}\big{)}\Big{)}. \tag{8}\]
The dimensional reduction can, e.g., be performed with principal components analysis [95, 96], as proposed in [97], proper orthogonal decomposition [98], or reduced manifold learning [99]. These techniques have been applied to learning aortic wall stresses [100], arterial wall stresses [101], flow velocities in viscoplastic flow [102], and the inverse problem of identifying unpressurized geometries from pressurized geometries [103]. Currently, the most impressive results in data-driven surrogate modeling are achieved with model order reduction encodings combined with NNs [104, 105], which can be combined with most other methodologies presented in this work.
Autoencoders [106], where \(e\) and \(d\) are modeled by NNs8, are another dimensionality reduction technique. These are treated in detail in Section 5.1 and enable non-linear encodings. An early investigation is presented in [107], where proper orthogonal decomposition is related to NNs. Application areas are the prediction of designs of acoustic scatterers from the reduced latent space [108], or mappings from dynamic responses of bridges to damage [109]. Furthermore, it has to be stated that many of the image-to-image mapping techniques rely on NN architectures inspired by autoencoders, such as U-nets [110, 111].
Footnote 8: Note that the autoencoder is modified, as it does not perform an identity mapping. Nonetheless, the idea of mapping to a reduced latent state is exploited.
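The following sketch illustrates Equation (8) with a POD/PCA-type encoding obtained from truncated SVDs of snapshot matrices; the snapshot data, latent dimension, and network size are assumptions, and the training of \(F_{NN}\) proceeds as in Section 2.1.

```python
import torch

# snapshot matrices (rows = samples) of coefficients lambda and solutions u, assumed given
Lam = torch.rand(500, 4096)
U = torch.rand(500, 4096)

# POD/PCA bases of dimension r via truncated SVDs of the mean-centered snapshots
r = 16
_, _, Vl = torch.linalg.svd(Lam - Lam.mean(0), full_matrices=False)
_, _, Vu = torch.linalg.svd(U - U.mean(0), full_matrices=False)
Vl, Vu = Vl[:r].T, Vu[:r].T

encode_lam = lambda lam: (lam - Lam.mean(0)) @ Vl      # h^lambda = e(lambda)
decode_u = lambda h: h @ Vu.T + U.mean(0)              # u = d(h^u)

# FC-NN mapping the reduced coefficient code to the reduced solution code
f_nn = torch.nn.Sequential(torch.nn.Linear(r, 64), torch.nn.Tanh(), torch.nn.Linear(64, r))

u_hat = decode_u(f_nn(encode_lam(Lam)))                # prediction as in Equation (8)
```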
##### 2.1.1.4 Neural Operators
The most recent trend in surrogate modeling with NNs is the use of neural operators [112], which map between function spaces instead of between functions. Neural operators rely on the extension of the universal approximation theorem [26] to non-linear operators [113]. The two most prominent neural operators are DeepONets9 [114] and Fourier neural operators [115].
Footnote 9: Originally proposed in [113] with shallow NNs.
**DeepONet**
In DeepONets [114], illustrated in Figure 3, the task of predicting the operator \(\hat{u}(\mathbf{\lambda};x)\) is split up into two sub-tasks:
* the prediction of \(N_{P}\) basis functions \(\mathbf{\hat{t}}(x)\) (TrunkNet),
* the prediction of the corresponding \(N_{P}\) problem-specific coefficients \(\mathbf{\hat{b}}(\mathbf{\lambda})\) (BranchNet).
The basis is predicted by the TrunkNet with parameters \(\mathbf{\theta}^{T}\) via an evaluation at coordinates \(x\). The coefficients \(\mathbf{\hat{b}}\) are estimated from the PDE coefficients \(\mathbf{\lambda}\) using the BranchNet parametrized by \(\mathbf{\theta}^{B}\) and are, thus, specific to the problem being solved. Taking the dot product over the evaluated basis and the coefficients yields the solution prediction \(\hat{u}(\mathbf{\lambda};x)\).
\[\mathbf{\hat{t}}(x) =F_{FNN}^{T}(x;\mathbf{\theta}^{T}) \tag{9}\] \[\mathbf{\hat{b}}(\mathbf{\lambda}) =F_{FNN}^{B}(\mathbf{\lambda};\mathbf{\theta}^{B})\] (10) \[\hat{u}(x) =\mathbf{\hat{b}}(\mathbf{\lambda})\cdot\mathbf{\hat{t}}(x) \tag{11}\]
Applications can be found in [116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128]. DeepONets have also been extended with physics-informed loss functions [129, 130, 131].
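A minimal DeepONet sketch following Equations (9)-(11) could look as follows; the number of sensor locations, basis functions, and layer widths are assumptions for illustration, and training uses a data-driven loss as in Equation (5).

```python
import torch

class DeepONet(torch.nn.Module):
    """Minimal DeepONet sketch: u_hat(x) = b_hat(lambda) . t_hat(x), Equations (9)-(11)."""
    def __init__(self, n_sensors=100, n_basis=32):
        super().__init__()
        # BranchNet: coefficients b_hat from the (discretized) PDE coefficients lambda
        self.branch = torch.nn.Sequential(
            torch.nn.Linear(n_sensors, 64), torch.nn.Tanh(), torch.nn.Linear(64, n_basis))
        # TrunkNet: basis functions t_hat evaluated at the coordinate x
        self.trunk = torch.nn.Sequential(
            torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, n_basis))

    def forward(self, lam, x):
        b = self.branch(lam)                      # (batch, n_basis)
        t = self.trunk(x)                         # (batch, n_basis)
        return (b * t).sum(-1, keepdim=True)      # dot product over the basis

model = DeepONet()
lam = torch.rand(8, 100)    # lambda sampled at 100 sensor locations (assumption)
x = torch.rand(8, 1)        # evaluation coordinates
u_hat = model(lam, x)       # (8, 1)
```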
**Fourier Neural Operators**
Fourier neural operators [115] predict the solution \(\mathbf{\hat{u}}\) on a uniform grid \(\mathbf{x}\) from the spatially varying coefficients \(\mathbf{\lambda}=\lambda(\mathbf{x})\). As the aim is to learn a mapping between functions, sampled on the entire domain, non-local mappings can be performed at each layer [132]. For example, mappings such as integral kernels [133, 134], Laplace transformations [135], and Fourier transforms [115] can be employed. These transformations enhance the non-local expressivity of the NN [132], where Fourier transforms are particularly favorable due to the computational efficiency achievable through fast Fourier transforms.
The Fourier neural operator, as illustrated in Figure 4, consists of Fourier layers, where linear transformations \(\mathbf{K}\) are performed after Fourier transforms \(\mathcal{F}\) along the spatial dimensions \(x\). Subsequently, an inverse Fourier transform \(\mathcal{F}^{-1}\) is applied, which is added to the output of a linear transformation \(\mathbf{W}\) performed outside the Fourier space. Thus, the Fourier transform can be skipped by the NN. The final step is an activation function \(\sigma\). The manipulations within a Fourier layer to predict the next activation on the uniform grid \(\mathbf{a}^{(j+1)}(\mathbf{x})\) can be written as
\[\mathbf{a}^{(j+1)}(\mathbf{x})=\sigma\Big{(}\mathbf{W}\mathbf{a}^{(j)}(\mathbf{x})+\mathbf{b}+\mathcal{F}^{-1}\big{[}\mathbf{K}\mathcal{F}\big{[}\mathbf{a}^{(j)}(\mathbf{x})\big{]}\big{]}\Big{)}, \tag{12}\]
where \(\mathbf{b}\) is the bias. Both the linear transformations \(\mathbf{K},\mathbf{W}\) and the bias \(\mathbf{b}\) are learnable and thereby part of the parameters \(\mathbf{\theta}\). Multiple Fourier layers can be employed, typically used in combination with an encoding network \(P_{NN}\) and a decoding network \(Q_{NN}\).
Applications can be found in [136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146]. An extension relying on the attention mechanisms of transformers [147] is presented in [148]. Analogously to DeepONets, Fourier neural operators have been combined with physics-informed loss functions [149].
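The following sketch shows a single one-dimensional Fourier layer in the spirit of Equation (12); the channel count and the number of retained Fourier modes are assumptions, and a full Fourier neural operator would stack several such layers between the encoding network \(P_{NN}\) and the decoding network \(Q_{NN}\).

```python
import torch

class FourierLayer1d(torch.nn.Module):
    """Sketch of one Fourier layer (Equation (12)) in 1D with truncated modes."""
    def __init__(self, channels=32, modes=16):
        super().__init__()
        self.modes = modes
        # learnable transforms: K acting in Fourier space (complex), W (with bias b) outside it
        self.K = torch.nn.Parameter(
            torch.randn(modes, channels, channels, dtype=torch.cfloat) * 0.02)
        self.W = torch.nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, a):                           # a: (batch, channels, n_grid)
        a_hat = torch.fft.rfft(a, dim=-1)           # Fourier transform along x
        out_hat = torch.zeros_like(a_hat)
        out_hat[..., :self.modes] = torch.einsum(   # apply K to the lowest modes only
            "bcm,mco->bom", a_hat[..., :self.modes], self.K)
        a_fourier = torch.fft.irfft(out_hat, n=a.shape[-1], dim=-1)  # inverse transform
        return torch.nn.functional.gelu(self.W(a) + a_fourier)       # sigma(W a + b + F^-1[K F[a]])

layer = FourierLayer1d()
a = torch.rand(4, 32, 128)       # (batch, channels, grid points)
print(layer(a).shape)            # torch.Size([4, 32, 128])
```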
##### 2.1.1.5 Neural Network Approximation Power
Despite the advancements in NN architectures10, NN surrogates struggle to learn solutions of general PDEs. Typically, successes have only been achieved for parametrized PDEs with relatively small parameter spaces - or in cases where accuracy, reliability, or generalization were disregarded. It has, however, been shown - both for simple architectures such as FC-NNs [150, 151] and for advanced architectures such as DeepONets [152] - that NNs possess an excellent theoretical approximation power which can capture solutions of various PDEs. Currently, there are two obstacles that impede the identification of sufficiently good optima within these desirable NN parameter spaces [150]:

* training data: generalization error,
* training algorithm: optimization error.

Figure 3: DeepONet, operator learning via prediction of the basis functions \(\mathbf{\hat{t}}\) and the corresponding coefficients \(\mathbf{\hat{b}}\) [114].
A lack of sufficient training data leads to poor generalization. This might be alleviated through faster data generation using, e.g., faster and specialized classical methods [153], or improved sampling strategies, i.e., finding the minimum number of required datapoints distributed in a specific manner to train the surrogate. Additionally, current training algorithms only converge to local optima. Research into improved optimization algorithms, such as current trends in computing better initial weights [154] and thereby better local optima, attempts to reduce the optimization error. At the same time, training times are reduced drastically, increasing the competitiveness of such surrogates.
##### 2.1.1.6 Active Learning & Transfer Learning
Finally, an important machine learning technique independent of the NN architecture is active learning [155]. Instead of precomputing a labeled data set, data is only provided when the prediction quality of the NN is insufficient. Furthermore, the data is not chosen arbitrarily, but only in the vicinity of the failed prediction. In computational mechanics, the prediction of the NN can be assessed with an error indicator. For an insufficient result, the results of a classical simulation are used to retrain the NN. Over time, the NN estimates improve in the respective domain of application. Due to the error indicator and the classical simulations, the predictions are reliable. Examples for active learning in computational mechanics can be found in [156, 157, 158].
Another technique, transfer learning [159, 160], aims at accelerating the NN training. Here, the NN is first trained on a similar task. Subsequently, it is applied to the task of interest - where it converges faster than an untrained NN. Applications in computational mechanics can be found in [73, 161].
#### 2.1.2 Time-Stepping Procedures
For the time-stepping procedures, we will consider Equations (2) and (3) in the following.
##### 2.1.2.1 Recurrent Neural Networks
The simplest approach to modeling time series data is to use FC-NNs to predict the next time step \(t_{i+1}\) from the current time step \(t_{i}\):

\[\hat{u}(x,t_{i+1})=F_{FNN}\big{(}x,t_{i};u(x,t_{i});\mathbf{\theta}\big{)}. \tag{13}\]

Figure 4: Fourier neural operator, operator learning in the Fourier space [115].
However, this approach lacks the ability to capture the temporal dependencies between different time steps, as each input is treated independently and without considering more than just the previous time step. Incorporating the sequential nature of the data can be achieved directly with RNNs. RNNs maintain a hidden state which captures information from the previous time steps, to be used for the next time step prediction. By unrolling the RNN, the entire time-history can be predicted.
\[\{\hat{u}(x,t_{2}),\hat{u}(x,t_{3}),\ldots,\hat{u}(x,t_{N})\}=F_{RNN}(x;u(x,t_{ 1});\mathbf{\theta}) \tag{14}\]
Shortcomings of RNNs, such as their tendency to struggle with learning long-term dependencies due to the problem of vanishing or exploding gradients, have been addressed by more sophisticated architectures such as long short-term memory networks (LSTM) [34], gated recurrent unit networks (GRU) [162], and transformers [147]. The concept of recurrent units has also been combined with other architectures, as demonstrated for CNNs [163] and GNNs [164, 165, 166, 167, 89, 90, 168].
Further applications of RNNs are full waveform inversion [169, 170, 171], high-dimensional chaotic systems [172], fluid flow [173, 3], fracture propagation [91], sensor signals in non-linear dynamic systems [174, 175], and settlement field predictions induced by tunneling [176], which was extended to damage prediction in affected structures [177, 178]. RNNs are often combined with reduced order model encodings [179], where the dynamics are predicted on the reduced latent space, as demonstrated in [180, 181, 182, 183, 184, 185, 186]. Further variations employ classical time-stepping schemes on the reduced latent space obtained by autoencoders [187, 188].
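As a sketch of such a recurrent time-stepping surrogate, the following GRU-based model predicts the next state from the time history; the number of degrees of freedom, hidden size, and placeholder data are assumptions.

```python
import torch

class GRUTimeStepper(torch.nn.Module):
    """Sketch of a recurrent next-step predictor; the hidden state carries the time history."""
    def __init__(self, n_dof=64, n_hidden=128):
        super().__init__()
        self.gru = torch.nn.GRU(n_dof, n_hidden, batch_first=True)
        self.out = torch.nn.Linear(n_hidden, n_dof)

    def forward(self, u_history):                   # (batch, n_steps, n_dof)
        h, _ = self.gru(u_history)
        return self.out(h)                          # prediction for the respective next steps

model = GRUTimeStepper()
u_seq = torch.rand(4, 50, 64)                       # measured time series (placeholder data)
u_next = model(u_seq)
loss = torch.mean((u_next[:, :-1] - u_seq[:, 1:]) ** 2)   # next-step prediction loss
```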
##### 2.1.2.2 Dynamic Mode Decomposition
Another approach that was formulated for system dynamics, i.e., Equation (3) is dynamic mode decomposition (DMD) [189, 190]. The aim of DMD is to identify a linear operator \(\mathbf{A}\) that relates two successive snapshot matrices with \(n\) time steps \(\mathbf{X}=[\mathbf{x}(t_{1}),\mathbf{x}(t_{2}),\ldots,\mathbf{x}(t_{n})]^{T},\mathbf{X}^{\prime}=[ \mathbf{x}(t_{2}),\mathbf{x}(t_{3}),\ldots,\mathbf{x}(t_{n+1})]^{T}\):
\[\mathbf{X}^{\prime}\approx\mathbf{A}\mathbf{X}. \tag{15}\]
To solve this, the problem is reframed as a regression task. The operator \(\mathbf{A}\) is approximated by minimizing the Frobenius norm of the difference between \(\mathbf{X}^{\prime}\) and \(\mathbf{A}\mathbf{X}\). This minimization can be performed using the Moore-Penrose pseudoinverse \(\mathbf{X}^{\dagger}\) (see, e.g., [191]):
\[\mathbf{A}=\underset{\mathbf{A}}{\arg\min}||\mathbf{X}^{\prime}-\mathbf{A}\mathbf{X}||_{F}=\mathbf{X}^ {\prime}\mathbf{X}^{\dagger}. \tag{16}\]
Once the operator is identified, it can be used to propagate the dynamics forward in time, approximating the next state \(\mathbf{x}(t_{i+1})\) using the current state \(\mathbf{x}(t_{i})\):
\[\mathbf{x}(t_{i+1})\approx\mathbf{A}\mathbf{x}(t_{i}). \tag{17}\]
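A compact NumPy sketch of Equations (15)-(17), using columns as snapshots and the pseudoinverse for the regression, could look as follows; the trajectory data are generated from an assumed linear system purely for illustration.

```python
import numpy as np

# generate a trajectory of an assumed linear system x_{i+1} = A_true x_i
A_true = np.array([[0.95, 0.05], [-0.05, 0.95]])
X_full = np.empty((2, 51))
X_full[:, 0] = [1.0, 0.0]
for i in range(50):
    X_full[:, i + 1] = A_true @ X_full[:, i]

# snapshot matrices (columns are snapshots) and least-squares fit of A (Equation (16))
X, X_prime = X_full[:, :-1], X_full[:, 1:]
A = X_prime @ np.linalg.pinv(X)

# forward propagation of the identified dynamics (Equation (17))
x = X_full[:, 0]
for _ in range(50):
    x = A @ x
```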
This framework is, however, only valid for linear dynamics. DMD can be extended to handle non-linear systems through the application of Koopman operator theory [192]. According to Koopman operator theory, it is possible to represent a non-linear system as a linear one by using an infinite-dimensional Koopman operator \(\mathcal{K}\) that acts on a transformed state \(e(\mathbf{x}(t_{i}))\):
\[e(\mathbf{x}(t_{i+1}))=\mathcal{K}[e(\mathbf{x}(t_{i}))]. \tag{18}\]
In theory, the Koopman operator \(\mathcal{K}\) is an infinite-dimensional linear transformation. In practice, however, finite-dimensional approximations are employed. This approach is, for example utilized in the extended DMD [193], where the regression from Equation (16) is performed on a higher-dimensional state \(\mathbf{h}(t_{i})=e(\mathbf{x}(t_{i}))\) relying on a dictionary of orthonormal basis functions \(\mathbf{\psi}(\mathbf{x})\). Alternatively, the dictionary can be learned using NNs, i.e., \(\mathbf{\hat{\psi}}(\mathbf{x})=\psi_{NN}(\mathbf{x};\mathbf{\theta})\), as demonstrated
in [194, 195]. The NN is trained by minimizing the mismatch between predicted state \(\mathbf{\psi}(\mathbf{\hat{x}}(t_{i+1}))=\mathbf{A}\mathbf{\hat{\psi}}(\mathbf{x}(t_{i}))\) (Equation (17)) and the true state in the dictionary space. Orthogonality is not required and therefore not enforced.
\[C=\frac{1}{2N}\sum_{i=1}^{N}||\mathbf{\hat{\psi}}(\mathbf{x}(t_{i+1}))-\mathbf{A}\mathbf{\hat{ \psi}}(\mathbf{x}(t_{i}))||_{2}^{2} \tag{19}\]
When the dictionary is learned, the state predictions can be reconstructed using the Koopman mode decomposition, as explained in detail in [194].
Alternatively, the mapping to the augmented state can be performed with autoencoders, which at the same time allows for a direct map back to the original space [196, 197, 198, 199]. Thus, an encoder learns a reduced latent space \(\mathbf{\hat{h}}(\mathbf{x})=e_{NN}(\mathbf{x};\mathbf{\theta}^{e})\) and a decoder learns the inverse mapping \(\mathbf{\hat{x}}(\mathbf{h})=d_{NN}(\mathbf{h};\mathbf{\theta}^{d})\). The networks are trained using three losses: the autoencoder reconstruction loss \(\mathcal{L}_{\mathcal{A}}\), the linear dynamics loss \(\mathcal{L}_{\mathcal{R}}\), and the future state prediction loss \(\mathcal{L}_{\mathcal{F}}\).
\[\mathcal{L}_{\mathcal{A}} =\frac{1}{2(n+1)}\sum_{i=1}^{n+1}||\mathbf{x}(t_{i})-d_{NN}(e_{NN}( \mathbf{x}(t_{i});\mathbf{\theta}^{e});\mathbf{\theta}^{d})||_{2}^{2} \tag{20}\] \[\mathcal{L}_{\mathcal{R}} =\frac{1}{2n}\sum_{i=1}^{n}||e_{NN}(\mathbf{x}(t_{i+1});\mathbf{\theta}^ {e})-\mathbf{A}e_{NN}(\mathbf{x}(t_{i});\mathbf{\theta}^{e})||_{2}^{2}\] (21) \[\mathcal{L}_{\mathcal{F}} =\frac{1}{2n}\sum_{i=1}^{n}||\mathbf{x}(t_{i+1})-d_{NN}(\mathbf{A}e_{NN}( \mathbf{x}(t_{i});\mathbf{\theta}^{e});\mathbf{\theta}^{d})||_{2}^{2}\] (22) \[C =\kappa_{\mathcal{A}}\mathcal{L}_{\mathcal{A}}+\kappa_{\mathcal{R }}\mathcal{L}_{\mathcal{R}}+\kappa_{\mathcal{F}}\mathcal{L}_{\mathcal{F}} \tag{23}\]
The cost function \(C\) is composed of a weighted sum of the loss terms \(\mathcal{L}_{\mathcal{A}},\mathcal{L}_{\mathcal{R}},\mathcal{L}_{\mathcal{F}}\) and weighting terms \(\kappa_{\mathcal{A}},\kappa_{\mathcal{R}},\kappa_{\mathcal{F}}\). Furthermore, [198] allows \(\mathbf{A}\) to vary depending on the state. This is achieved by predicting the eigenvalues of \(\mathbf{A}\) with an auxiliary network and constructing the matrix from these.
### 2.2 Physics-Informed Learning
In supervised learning, as discussed in Section 2.1, the quality of prediction strongly depends on the amount of training data. Acquiring data in computational mechanics may be expensive. To reduce the amount of required data, constraints enforcing the physics have been proposed. Two main approaches exist. The physics can be enforced by modifying the cost function through a penalty term punishing unphysical predictions, thus acting as a regularizer. Possible modifications are discussed in the upcoming section. Alternatively, the physics can be enforced by construction, i.e., by reducing the learnable space to a physically meaningful space. This approach is highly specific to its application and will therefore mainly be explored in Section 3. A brief coverage is provided in Section 2.2.3.
#### 2.2.1 Space-Time Approaches
Once again and without loss of generality, the temporal dimension \(t\) is dropped to declutter the notation. However, in contrast to Section 2.1.1, the following methods are not equally applicable to forward and inverse problems. Thus, the prediction of the solution \(\hat{u}\), the PDE coefficients \(\hat{\lambda}\), and the non-linear operator \(\mathcal{N}\) are treated separately.
##### 2.2.1.1 Differential Equation Solving With Neural Networks
The concept of solving PDEs11 was first proposed in the 1990s [200, 201, 202], but was recently popularized by the so-called physics-informed neural networks (PINNs) [203] (see [204, 205, 206] for recent
review articles and SciANN [207], SimNet [208], DeepXDE [209] for libraries).
To illustrate the idea and variations of PINNs, we will consider the differential equation of a static elastic bar
\[\frac{d}{dx}\left(EA\frac{du}{dx}\right)+p=0,\qquad x\in\Omega. \tag{24}\]
Here, the non-linear operator \(\mathcal{N}\) is given by the left-hand side of the equation, the solution \(u(x)\) is the axial displacement, and the spatially varying coefficients \(\lambda(x)\) are given by the cross-sectional properties \(EA(x)\) and the distributed load \(p(x)\). Additionally, boundary conditions are specified, which can be in terms of Dirichlet (on \(\Gamma_{D}\)) or Neumann boundary conditions (on \(\Gamma_{N}\)):
\[u(x) =g(x),\qquad x\in\Gamma_{D}, \tag{25}\] \[EA(x)\frac{du(x)}{dx} =f(x),\qquad x\in\Gamma_{N}. \tag{26}\]
**Physics-Informed Neural Networks**
PINNs [203] approximate either the solution \(u(x)\), the coefficients \(\lambda(x)\), or both with FC-NNs.
\[\hat{u}(x) =F_{FNN}(x;\mathbf{\theta}^{u}) \tag{27}\] \[\hat{\lambda}(x) =I_{FNN}(x;\mathbf{\theta}^{\lambda}) \tag{28}\]
Instead of training the network with labeled data as in Equation (5), the residual of the PDE is considered. The residual is evaluated at a set of \(N_{\mathcal{N}}\) points, called collocation points. Taking the mean squared error over the residual evaluations yields the PDE loss
\[\mathcal{L}_{\mathcal{N}}=\frac{1}{2N_{\mathcal{N}}}\sum_{i}^{N_{\mathcal{N}} }||\mathcal{N}[u(x_{i});\lambda(x_{i})]||_{2}^{2}=\frac{1}{2N_{\mathcal{N}}} \sum_{i}^{N_{\mathcal{N}}}\left(\frac{d}{dx}\left(EA(x_{i})\frac{du(x_{i})}{ dx}\right)+p(x_{i})\right)^{2}. \tag{29}\]
The gradients of the possible predictions, i.e., \(u,EA\), and \(p\) with respect to \(x\), are obtained with automatic differentiation [210] through the NN approximation. Similarly, the boundary conditions are enforced at the \(N_{\mathcal{B}_{\mathcal{D}}}+N_{\mathcal{B}_{\mathcal{N}}}\) boundary points.
\[\mathcal{L}_{\mathcal{B}}=\frac{1}{2N_{\mathcal{B}_{\mathcal{D}}}}\sum_{i}^{N_{\mathcal{B}_{\mathcal{D}}}}(u(x_{i})-g)^{2}+\frac{1}{2N_{\mathcal{B}_{\mathcal{N}}}}\sum_{i}^{N_{\mathcal{B}_{\mathcal{N}}}}\left(EA(x_{i})\frac{du(x_{i})}{dx}-f\right)^{2} \tag{30}\]
The cost function is composed of the PDE loss \(\mathcal{L}_{\mathcal{N}}\), boundary loss \(\mathcal{L}_{\mathcal{B}}\), and possibly a data-driven loss \(\mathcal{L}_{\mathcal{D}}\)
\[C=\mathcal{L}_{\mathcal{N}}+\mathcal{L}_{\mathcal{B}}+\mathcal{L}_{\mathcal{D }}. \tag{31}\]
Both the deep least-squares method [211] and the deep Galerkin method [212] are closely related. Instead of focusing on the residuals at individual collocation points as in PINNs, these methods consider the \(L^{2}\)-norm of the residuals integrated over the domain \(\Omega\).
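A condensed PyTorch sketch of a PINN for the bar equation (24) is given below; the material data, load, boundary conditions, collocation points, and hyperparameters are all assumptions chosen for illustration, and the gradients in the residual are obtained with automatic differentiation.

```python
import torch

# PINN sketch for the static bar (Equation (24)) with assumed data EA = 1, p = 1 on (0, 1),
# u(0) = 0 (Dirichlet) and EA u'(1) = 0 (Neumann)
u_net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))
optimizer = torch.optim.Adam(u_net.parameters(), lr=1e-3)
EA, p = 1.0, 1.0

def grad(y, x):
    return torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]

x_col = torch.rand(100, 1, requires_grad=True)      # collocation points
x_d = torch.zeros(1, 1)                             # Dirichlet boundary point
x_n = torch.ones(1, 1, requires_grad=True)          # Neumann boundary point

for it in range(5000):
    optimizer.zero_grad()
    u = u_net(x_col)
    du = grad(u, x_col)
    residual = grad(EA * du, x_col) + p                      # d/dx(EA du/dx) + p
    loss_pde = torch.mean(residual ** 2)                     # Equation (29)
    loss_bc = u_net(x_d).pow(2).mean() \
            + (EA * grad(u_net(x_n), x_n)).pow(2).mean()     # Equation (30)
    cost = loss_pde + loss_bc                                # Equation (31), no data term
    cost.backward()
    optimizer.step()
```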
**Variational Physics-Informed Neural Networks**
Computing high-order derivatives for the non-linear operator \(\mathcal{N}\) is expensive. Therefore, variational PINNs [213, 214] consider the weak form of the PDE, which lowers the order of differentiation. In the case of the bar equation, the weak PDE loss is given by
\[\mathcal{L}_{\mathcal{V}}=\int_{\Omega}\frac{dw(x)}{dx}EA(x)\frac{du(x)}{dx}d \Omega-\int_{\Gamma_{N}}w(x)EA(x)\frac{du(x)}{dx}d\Gamma_{N}-\int_{\Omega}w(x )p(x)d\Omega=0,\forall w(x). \tag{32}\]
In [213], trigonometric and polynomial test functions \(w(x)\) are used. The cost function is obtained by replacing the PDE loss \(\mathcal{L}_{\mathcal{N}}\) with the weak PDE loss \(\mathcal{L}_{\mathcal{V}}\) in Equation (31). Note that the Neumann boundary conditions are now not included in the boundary loss \(\mathcal{L}_{\mathcal{B}}\), as they are already incorporated in the weak form in Equation (32). The integrals are evaluated through numerical integration methods, such as Gaussian quadrature, Monte Carlo integration methods [215, 216], or sparse grid quadratures [217]. Severe inaccuracies can be introduced through the numerical integration of the NN output - for which remedies have been proposed in [218].
**Weak Adversarial Networks**
Instead of specifying the test functions \(w(x)\), weak adversarial networks [219] employ a second NN as test function
\[\hat{w}(x)=W_{FNN}(x;\mathbf{\theta}^{w}). \tag{33}\]
The test function is learned through a minimax optimization
\[\min_{\mathbf{\theta}^{u}}\max_{\mathbf{\theta}^{w}}C, \tag{34}\]
where the test function \(w(x)\) continually challenges the solution \(u(x)\).
**Deep Energy Method & Deep Ritz Method**
By minimizing the potential energy \(\Pi=\Pi_{i}+\Pi_{e}\) instead, the need for test functions is overcome by the deep energy method [220] and the deep Ritz method [221]. This results in the following loss term
\[\mathcal{L}_{\mathcal{E}}=\Pi_{i}+\Pi_{e}=\frac{1}{2}\int_{\Omega}EA(x)\left( \frac{du(x)}{dx}\right)^{2}d\Omega-\int_{\Gamma}u(x)EA(x)\frac{du(x)}{dx}d \Gamma-\int_{\Omega}u(x)p(x)d\Omega. \tag{35}\]
Note that the inverse problem generally cannot be solved using the minimization of the potential energy. Consider, for instance, the potential energy of the bar equation in Equation (35), which is not well-posed in the inverse setting. Here, \(EA(x)\) going towards \(-\infty\) in the domain \(\Omega\) and going towards \(\infty\) at \(\Gamma_{N}\) minimizes the potential energy \(\mathcal{L}_{\mathcal{E}}\).
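The following sketch minimizes the potential energy (35) for the same bar problem with the assumed data \(EA=1\), \(p=1\), \(u(0)=0\), and a traction-free end, enforcing the Dirichlet boundary condition by construction and evaluating the integral with trapezoidal quadrature.

```python
import torch

# Deep energy method sketch for the bar with assumed EA = 1, p = 1, u(0) = 0 and a
# traction-free end at x = 1; u(0) = 0 is fulfilled a priori via u = x * NN(x).
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.linspace(0.0, 1.0, 201).unsqueeze(-1).requires_grad_(True)
EA, p = 1.0, 1.0

for it in range(3000):
    optimizer.zero_grad()
    u = x * net(x)                                             # hard Dirichlet BC
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    energy_density = 0.5 * EA * du ** 2 - u * p                # integrand of Equation (35)
    energy = torch.trapz(energy_density.squeeze(), x.squeeze())  # trapezoidal quadrature
    energy.backward()                                          # minimize the potential energy
    optimizer.step()
```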
**Extensions**
A multitude of extensions to the PINN methodology exist. For in-depth reviews, see [204, 205, 206].
**Learning Multiple Solutions**
Currently, PINNs are mainly employed to learn a single solution. As the training effort exceeds the solving effort of classical solvers, the viability of PINNs is questionable [222]. However, PINNs can also be employed to learn multiple solutions. This is achieved by providing the parametrization of the PDE, i.e., \(\lambda\) as an additional input to the network, as discussed in Section 2. This enables a cheap prediction stage without retraining for new solutions12. One possible example for this is [223], where different geometries are captured in terms of point clouds and processed with point cloud-based NNs [92].
Footnote 12: Importantly, the training would be without training data and would only require a definition of the parametrized PDE. Currently, this is only possible for simple PDEs with small parameter spaces.
**Boundary Conditions**
The enforcement of the boundary conditions through a penalty term \(\mathcal{L}_{\mathcal{B}}\) in Equation (30) leads to an unbalanced optimization, due to the competing loss terms \(\mathcal{L}_{\mathcal{N}},\mathcal{L}_{\mathcal{B}},\mathcal{L}_{\mathcal{D}}\) in Equation (31)13. One remedy is to modify the NN output \(F_{FNN}\) by multiplication of a function, such that the Dirichlet boundary conditions are satisfied a priori, i.e., \(\mathcal{L}_{\mathcal{B}}=0\), as demonstrated in [224, 20].
Footnote 13: Consider, for instance, a training procedure in which the PDE loss \(\mathcal{L}_{\mathcal{N}}\) is first minimal, such that the PDE is fulfilled. Without fulfilment of the boundary conditions, the solution is not unique. However, the NN struggles to modify the current boundary values without violating the PDE loss and thereby increasing the total cost function \(C\). The NN is thus stuck in a bad local minimum. Similar scenarios can be formulated for a too rapid minimization of the other loss terms.
\[\hat{u}(x)=G(x)+D(x)F_{FNN}(x;\mathbf{\theta}^{u}) \tag{36}\]
Here, \(G(x)\) is a smooth interpolation of the boundary conditions, and \(D(x)\) is a signed distance function that is zero at the boundary. For Neumann boundary conditions, [225] propose to predict \(u\) and its derivatives \(\partial u/\partial x\) with separate networks, such that the Neumann boundary conditions can be enforced strongly by modifying the derivative network. This requires an additional constraint, ensuring that the derivative predictions match the derivative of \(u\). For complex domains,
\(G(x)\) and \(D(x)\) cannot be found analytically. Therefore, [224] use NNs to learn \(G(x)\) and \(D(x)\) in a supervised manner by prescribing either the boundary values or zero at the boundary and restricting the values within the domain to be non-zero. Similarly [226] proposed using radial basis function networks for \(G(x)\), where \(D(x)=1\) is assumed. The radial basis function networks are determined by solving a linear system of equations constructed with the boundary conditions. On uniform grids, strong enforcement can be achieved through specialized CNN kernels [185] with constant padding terms for Dirichlet boundary conditions and ghost cells for Neumann boundary conditions. Constrained backward propagation [227] has also been proposed to guarantee the enforcement of boundary conditions [228, 229].
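A minimal sketch of Equation (36) for a one-dimensional domain with the assumed boundary values \(u(0)=0\) and \(u(1)=1\) reads:

```python
import torch

# Hard enforcement of Dirichlet BCs (Equation (36)) on Omega = (0, 1) with assumed
# boundary values u(0) = 0 and u(1) = 1.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def u_hat(x):
    G = x                      # smooth interpolation of the boundary values
    D = x * (1.0 - x)          # distance-type function, zero at x = 0 and x = 1
    return G + D * net(x)

x = torch.tensor([[0.0], [1.0]])
print(u_hat(x))                # [[0.], [1.]] regardless of the network parameters
```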
Another possibility is to introduce weighting terms \(\kappa_{\mathcal{N}},\kappa_{\mathcal{B}},\kappa_{\mathcal{D}}\) for each loss term. These are either hyperparameters, or they are learned during the optimization with attention mechanisms [230, 231, 232]. This is achieved by performing a minimax optimization with respect to all weighting terms \(\boldsymbol{\kappa}=\{\kappa_{\mathcal{N}},\kappa_{\mathcal{B}},\kappa_{ \mathcal{D}}\}\)
\[\min_{\boldsymbol{\theta}}\max_{\boldsymbol{\kappa}}C. \tag{37}\]
Expanding on this idea, each collocation point used for the loss terms can be considered an individual equality constraint [233, 234]. Therefore, a weighting term \(\kappa_{\mathcal{N}_{i}}\) is allocated for each collocation point \(x_{i}\), as illustrated for the PDE loss \(\mathcal{L}_{\mathcal{N}}\) from Equation (29)
\[\mathcal{L}_{\mathcal{N}}=\frac{1}{2N_{\mathcal{N}}}\sum_{i}^{N_{\mathcal{N}} }\kappa_{\mathcal{N},i}||\mathcal{N}[u(x_{i});\lambda(x_{i})]||_{2}^{2}. \tag{38}\]
This has the added advantage that greater emphasis is placed on more important collocation points, i.e., points which lead to larger residuals. This approach is strongly related to the approaches relying on the augmented Lagrangian method [235] and competitive PINNs [236], where an additional NN models the penalty weights \(\kappa(x)=K_{FNN}(x;\boldsymbol{\theta}^{\kappa})\). This is similar to weak adversarial networks, but instead formulated using the strong form.
**Ansatz**
Another prominent topic is the question of which ansatz to choose. The type of ansatz is, for example, determined by different NN architectures (see [237] for a comparison) or combinations with classical ansatz formulations. Instead of using FC-NNs, some authors [238, 163] employ CNNs to exploit the spatial structure of the data. Irregular geometries can be handled by embedding the structure in a rectangular domain using binary encodings [239] or signed distance functions [61, 240]. Another option is a coordinate transformation into rectangular grids [241]. The CNN requires a full-grid discretization, meaning that the coordinates \(x\) are analytically independent of the prediction \(\hat{u}=F_{CNN}\). Thus, the gradients of \(u\) are not obtained with automatic differentiation, but with numerical differentiation, i.e., finite differences. Alternatively, the output of the CNN can represent coefficients of an interpolation, as proposed under the name spline-PINNs [242] using Hermite splines. This again allows for an automatic differentiation. This is similarly applied for irregular geometries in [243], where GNNs are used in combination with a piecewise polynomial basis. Using a classical basis has the added advantage that Dirichlet boundary conditions can be satisfied exactly. A further variation is the approximation of the coefficients of classical bases with FC-NNs. This is shown with B-splines in [244] in the sense of isogeometric analysis [245]. This was similarly done for piecewise polynomials in [246]. However, instead of simply minimizing the PDE residual from Equation (29) directly, the finite element discretization [247, 248] is exploited. The loss \(\mathcal{L}_{\mathcal{F}}\) can thus be formulated in terms of the non-linear stiffness matrix \(\boldsymbol{K}\), the force vector \(\boldsymbol{F}\), and the degrees of freedom \(\boldsymbol{u}^{h}\).
\[\mathcal{L}_{\mathcal{F}}=||\boldsymbol{K}(\boldsymbol{u}^{h})\boldsymbol{u} ^{h}-\boldsymbol{F}||_{2}^{2} \tag{39}\]
In the forward problem, \(\boldsymbol{u}^{h}\) is approximated by a FC-NN, whereas for the inverse problem a FC-NN predicts \(\boldsymbol{K}\). Similarly, [249, 250] map a NN onto a finite element space by using the NN evaluations at nodal coordinates as the corresponding basis function coefficients. This also allows a straightforward strong enforcement of Dirichlet boundary conditions, as demonstrated in [54] with CNNs.
The nodes are represented as pixels (see Figure 2).
Prior information on the solution can be incorporated through a feature layer [251]. If, for example, it is known that the solution is composed of trigonometric functions, a feature layer with trigonometric functions can be applied after the input layer. Thus, known features are given to the NN directly to aid the learning. Without known features, the task can also be modified to improve learning. Inspired by adaptivity from finite elements, refinements are progressively learned by additional layers of the NN [252] (see Figure 5). Thus, a coarse solution \(\mathbf{u}_{1}\) is learned to begin with, then refined to \(\mathbf{u}_{2}\) by an additional layer, which again is refined to \(\mathbf{u}_{3}\) until the deepest refinement level is reached.
**Domain Decomposition**
To improve the scalability of PINNs to more complex problems, several domain decomposition methods have been proposed. One approach are hp-variational PINNs [214], where the domain is decomposed into patches. Piecewise polynomial test functions are defined on each patch separately, while the solution is approximated by a globally acting NN. This enables a separate numerical integration of each patch, improving its accuracy.
In an alternative formulation, one NN can be used per subdomain. This was proposed as conservative PINNs [253], where conservation laws are enforced at the interface to ensure continuity. Here, the discrepancies between both solution and flux were penalized at the interface in a least squares manner. The advantages of this approach are twofold: Firstly, parallelization is possible [254] and, secondly, adaptivity can be introduced. Shallower networks can be employed for smooth solutions and deeper networks for more complex solutions. The approach was generalized for any PDE in the context of extended PINNs [255]. Here, the interface condition is formulated in terms of the difference in both the residual and the solution.
**Acceleration Methods**
Analogously to supervised learning, as discussed in Section 2.1, transfer learning can be applied to PINNs [256] as, e.g., demonstrated in phase-field fracture [257] or topology optimization [258]. These are very suitable problems since crack and displacement fields evolve with mostly local changes in phase-field fracture. For topology optimization, only minor updates are expected between each optimization iteration [258].
Figure 5: Refinement expressed with NNs in terms of NN depth. Thick black lines indicate non-learnable connections and gray lines indicate learnable connections. Each added layer is composed of a projection from the coarser level and a correction obtained through the learnable connection.

The poor performance of PINNs in their original form can also be improved with better sampling strategies. In importance sampling [259, 260], the collocation point density is proportional to the value of the cost function. Alternatively, residual-based adaptive refinement [209] adds collocation points in the vicinity of areas with a higher cost function.
Another essential topic for NNs is normalization of the inputs, outputs, and loss terms [261, 262]. For time-dependent problems, it is possible to use time-dependent normalization [263] to ensure that the solution is always in the same range regardless of the time step.
Furthermore, the cost function can be enhanced by including the derivative of the residual [264] as well. The derivative should also be minimized, as both the residual and its derivative should be zero at the correct solution. However, a general problem in the cost function formulation persists. The cost function should correspond to the norm of the error, which is not necessarily the case. This means that a reduction in the cost does not necessarily yield an improvement in the quality of the solution. The error norm can be expressed in terms of the \(H^{-1}\)-norm, which, according to [265], can efficiently be computed on rectangular domains with Fourier transforms. Thus, the \(H^{-1}\)-norm can directly be used as the cost function and minimized.
Another aspect is numerical differentiation, which is advantageous for the residual of the PDE [266], as automatic differentiation may be erroneous due to spurious oscillations between collocation points. Thus, numerical differentiation enforces regularity, which was exploited in [266] by coupling automatic differentiation and numerical differentiation to retain the advantages of automatic differentiation.
Further specialized modifications to NN architectures have been proposed. Adaptive activation functions [267] have shown acceleration in convergence. Extreme learning machines [268, 269] remove the need for iterations altogether. All layers are randomly initialized in extreme learning machines, and only the last layer is learnable. Without a non-linear activation function, the parameters are found with a least-squares regression. This was demonstrated for PINNs in [270]. Instead of only learning the last layer, the problem can be split into a non-linear and a linear regression problem, which are solved separately [271], such that the full expressivity of NNs is retained.
**Applications To Forward Problems**
PINNs have been applied to various PDEs (see [204, 205, 206] for an overview). Forward problems can, for example, be found in solid mechanics [261, 272, 273], fluid mechanics [274, 275, 276, 277, 278, 279, 280, 281], and thermomechanics [282, 283]. Currently, PINNs do not outperform classical solvers such as the finite element method [284, 222] in terms of speed for a given accuracy of engineering relevance. In the author's experience and judgement, this is especially the case for forward problems even if the extensions mentioned above are employed. Often, the mentioned gains compared to classical forward solvers disregard the training effort and only report evaluation times.
Incorporating large parts of the solution in the form of measurements with the data-driven loss \(\mathcal{L}_{\mathcal{D}}\) improves the performance of PINNs, which thereby can become a viable method in some cases. Yet, [285] states that data-driven methods outperform PINNs. Thus PINNs should not be regarded as a replacement for data-driven methods, but rather as a regularization technique for data-driven methods to reduce the generalization error.
**Applications To Inverse Problems**
However, PINNs are in particular useful for inverse problems with full domain knowledge, i.e., the solution is available throughout the entire domain. This has, for example, been shown for the identification of material properties [286, 287, 262, 289]. In contrast, for inverse problems with only partial knowledge, the applicability of PINNs is limited [290], as both forward and inverse solution have to be learned simultaneously. Most applications therefore limit themselves to simpler
inversions such as size and shape optimization. Examples are published, e.g., in [291, 292, 293, 294, 295, 296]. Exceptions that deal with the identification of entire fields can be found in full waveform inversion [297], topology optimization [298], elasticity, and the heat equation [299].
##### 2.2.1.2 Inverse Problems
PINNs are capable of discovering governing equations by either learning the operator \(\mathcal{N}\) or the coefficients \(\lambda\). The resulting operator is, however, not always interpretable, and in the case of identification of the coefficients, the underlying PDE is assumed. To discover interpretable operators, one can apply sparse regression approaches [300]. Here, potential differential operators are assumed as an input to the non-linear operator
\[\hat{\mathcal{N}}\left[x,u,\frac{\partial u}{\partial x},\frac{\partial^{2}u} {\partial x^{2}},\ldots\right]=0. \tag{40}\]
Subsequently, a NN learns the corresponding coefficients using observed solutions inserted into Equation (40). The evaluation of the differential operators is achieved through automatic differentiation by first interpolating the solution with a NN. Sparsity is ensured with an \(L^{1}\)-regularization.
A more sophisticated and complete framework is AI-Feynman [301]. Sequentially, dimensional analysis, polynomial regression, and brute force search algorithms are applied to identify fundamental laws in the data. If unsuccessful, a NN interpolates the data, which can thereby be queried for symmetry and separability. The identification of symmetries leads to a reduction in variables, i.e., a reduction of the input space. In the case of separability, the problem is decomposed into two subproblems. The reduced problems or subproblems are iteratively fed through the framework until an equation is identified. AI-Feynman has been successfully applied to 100 equations from the Feynman lectures [302].
#### 2.2.2 Time-Stepping Procedures
Again Equation (2) and Equation (3) will be considered for the time-stepping procedures.
##### 2.2.2.1 Physics-Informed Neural Networks
In the spirit of domain decomposition, parareal PINNs [303] split up the temporal domain into subdomains \([t_{i},t_{i+1}]\). A rough estimate of the solution \(u\) is provided by a conjugate gradient solver on a simplified form of the PDE starting from \(t_{0}\). PINNs are then independently applied in each subdomain to correct the estimate. Subsequently, the conjugate gradient solver is applied again, starting from \(t_{1}\). This process is repeated until all time steps have been traversed. A closely related approach can be found in [304], where a PINN is retrained on successive time segments. It is however ensured that previous time steps are kept fulfilled through a data-driven loss term for time segments that were already learned.
Another approach is given by the discrete-time PINNs [203], which consider the temporal dimension in a discrete manner. The differential equation from Equation (2) is discretized with the Runge-Kutta method with \(q\) stages [305]:
\[u^{n+c_{i}} =u^{n}+\Delta t\sum_{j=1}^{q}a_{ij}\mathcal{N}[u^{n+c_{j}}],\qquad i =1,\ldots,q, \tag{41}\] \[u^{n+1} =u^{n}+\Delta t\sum_{j=1}^{q}b_{j}\mathcal{N}[u^{n+c_{j}}], \tag{42}\]
where
\[u^{n+c_{j}}(x)=u(t^{n}+c_{j}\Delta t,x),\qquad j=1,\ldots,q. \tag{43}\]
A NN \(F_{NN}\) predicts all stages \(i=1,\ldots,q\) from an input \(x\):
\[\boldsymbol{\hat{u}}=[\hat{u}^{n+c_{1}}(x),\ldots,\hat{u}^{n+c_{q}}(x),\hat{u }^{n+1}(x)]=F_{NN}(x;\boldsymbol{\theta}). \tag{44}\]
The cost is then constructed by rearranging Equations (41) and (42).
\[\hat{u}^{n}=\hat{u}^{n}_{i}=\hat{u}^{n+c_{i}}-\Delta t\sum_{j=1}^{q}a_{ij}\mathcal{N}[\hat{u}^{n+c_{j}}],\qquad i=1,\ldots,q, \tag{45}\]

\[\hat{u}^{n}=\hat{u}^{n}_{q+1}=\hat{u}^{n+1}-\Delta t\sum_{j=1}^{q}b_{j}\mathcal{N}[\hat{u}^{n+c_{j}}]. \tag{46}\]

The \(q+1\) predictions \(\hat{u}^{n}_{i},\hat{u}^{n}_{q+1}\) of \(\hat{u}^{n}\) have to match the initial conditions \(u^{\mathcal{M}^{n}}\), where the mean squared error is used as a loss function to learn all stages \(\mathbf{\hat{u}}\). The approach has been applied to fluid mechanics [306, 307].
##### 2.2.2.2 Inverse Problems
As for inverse problems in the space-time approaches (Paragraph 2.2.1.2), the non-linear operator \({\cal N}\) can be learned. For temporal problems, this corresponds to the right-hand side of Equation (2) for PDEs and to Equation (3) for systems of ODEs. The predicted right-hand side can then be used to predict time series using a classical time-stepping scheme, as proposed in [308]. More sophisticated methods leaning on similar principles are presented in the following. Specifically, we will discuss PDE-Net for discovering PDEs, SINDy for discovering systems of ODEs in an interpretable sense, and an approach relying on multistep methods for systems of ODEs. The multistep approach leads to a non-interpretable, but more expressive approximation of the right-hand side.
**PDE-Net**
PDE-Net [309, 310] is designed to learn both the system dynamics \(u(x,t)\) and the underlying differential equation it follows. Given a problem of the form of Equation (2), the right-hand side can be approximated as a function of coordinates and gradients of the solution.
\[\hat{\cal N}\left[x,u,\frac{\partial u}{\partial x},\frac{\partial^{2}u}{ \partial x^{2}},\cdots\right] \tag{47}\]
The operator \(\hat{\cal N}\) is approximated by NNs. The first step involves estimating spatial derivatives using learnable convolutional filters. The filters are designed to adjust their order of approximation based on the fit to the underlying measurements \(u^{\cal M}\), while the type of gradient is predefined14. Thus, the NN learns how to best approximate spatial derivatives specific to the underlying data. Subsequently, the inputs of \(\hat{\cal N}\) are combined with point-wise CNNs [311] in [309] or a symbolic network in [310]. Both yield an interpretable operator from which the analytical expression can be extracted. In order to construct a loss function, Equations (2) and (47) are discretized using the forward Euler method:
Footnote 14: This is enforced through constraints using moment matrices of the convolutional filters.
\[u(x,t_{n+1})=u(x,t_{n})+\Delta t\hat{\cal N}\left[x,u,\frac{\partial u}{ \partial x},\frac{\partial^{2}u}{\partial x^{2}},\cdots\right]. \tag{48}\]
This temporal discretization is applied iteratively, and the discrepancy between the derived function and the measured data \(u^{\cal M}(x,t_{n})\) serves as the loss function.
**SINDy**
Sparse identification of non-linear dynamic systems (SINDy) [312] deals with the discovery of dynamic systems of the form of Equation (3). The task is posed as a sparse regression problem. Snapshot matrices of the state \(\mathbf{X}=[\mathbf{x}(t_{1}),\mathbf{x}(t_{2}),\ldots,\mathbf{x}(t_{n})]\) and its time derivative \(\dot{\mathbf{X}}=[\dot{\mathbf{x}}(t_{1}),\dot{\mathbf{x}}(t_{2}),\ldots,\dot{\mathbf{x}}(t_{n})]\) are related to one another via candidate functions \(\mathbf{\Theta}(\mathbf{X})\) evaluated at \(\mathbf{X}\) using unknown coefficients \(\mathbf{\Xi}\):
\[\dot{\mathbf{X}}=\mathbf{\Theta}(\mathbf{X})\mathbf{\Xi}. \tag{49}\]
The coefficients \(\mathbf{\Xi}\) are determined through sparse regression, such as sequential thresholded least squares or LASSO regression. By including partial derivatives, SINDy has been extended to the discovery of PDEs [313, 314].
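A compact NumPy sketch of SINDy with sequential thresholded least squares is given below; the dynamical system, candidate library, and threshold are assumptions for illustration.

```python
import numpy as np

# SINDy sketch (Equation (49)) for an assumed linear system x' = A_true x
dt, n = 0.01, 2000
A_true = np.array([[-0.1, 2.0], [-2.0, -0.1]])
X = np.empty((n, 2))
X[0] = [2.0, 0.0]
for i in range(n - 1):                         # generate snapshots with explicit Euler
    X[i + 1] = X[i] + dt * X[i] @ A_true.T
X_dot = np.gradient(X, dt, axis=0)             # time derivatives from the snapshots

# candidate library Theta(X): constant, linear, and quadratic terms
Theta = np.column_stack([np.ones(n), X[:, 0], X[:, 1],
                         X[:, 0]**2, X[:, 0]*X[:, 1], X[:, 1]**2])

# sequential thresholded least squares for the sparse coefficients Xi
Xi, _, _, _ = np.linalg.lstsq(Theta, X_dot, rcond=None)
for _ in range(10):
    Xi[np.abs(Xi) < 0.05] = 0.0                             # threshold small coefficients
    for k in range(X_dot.shape[1]):                         # refit the active terms per state
        active = np.abs(Xi[:, k]) > 0
        Xi[active, k] = np.linalg.lstsq(Theta[:, active], X_dot[:, k], rcond=None)[0]
print(Xi)   # rows correspond to [1, x0, x1, x0^2, x0*x1, x1^2]; approximately recovers A_true
```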
The expressivity of SINDy can further be increased by a coordinate transformation into a space that allows for a simpler representation of the system dynamics. This can be achieved with an autoencoder (consisting of an encoder \(e_{NN}(x;\mathbf{\theta}^{e})\) and a decoder \(d_{NN}(h;\mathbf{\theta}^{d})\)), as proposed in [315], where the dynamics are learned on the reduced latent space \(h\) using SINDy. A simultaneous optimization of the NN parameters \(\mathbf{\theta}^{e},\mathbf{\theta}^{d}\) and SINDy parameters \(\mathbf{\Xi}\) is conducted with gradient descent. The cost is defined in terms of the autoencoder reconstruction loss \(\mathcal{L}_{\mathcal{A}}\) and the residual of Equation (49) at both the reduced latent space \(\mathcal{L}_{\mathcal{R}}\) and the original space \(\mathcal{L}_{\mathcal{F}}\)15. An \(L^{1}\)-regularization for \(\mathbf{\Xi}\) promotes sparsity.
Footnote 15: The encoder and decoder are derived with respect to their inputs to estimate the derivatives \(\dot{\mathbf{x}},\dot{\mathbf{h}}\) using the chain rule.
\[\mathcal{L}_{\mathcal{A}} =\frac{1}{2n}\sum_{i=1}^{n}||\mathbf{x}(t_{i})-d_{NN}\big{(}e_{NN}( \mathbf{x}(t_{i});\mathbf{\theta}^{e});\mathbf{\theta}^{d}\big{)}||_{2}^{2} \tag{50}\] \[\mathcal{L}_{\mathcal{R}} =\frac{1}{2n}\sum_{i=1}^{n}||\underbrace{\Big{(}\nabla_{x}e_{NN} \big{(}\mathbf{x}(t_{i});\mathbf{\theta}^{e}\big{)}\Big{)}\cdot\dot{\mathbf{x}}(t_{i})}_{ \mathbf{h}}-\mathbf{\Theta}\Big{(}e_{NN}\big{(}\mathbf{x}(t_{i});\mathbf{\theta}^{e}\big{)} \Big{)}\mathbf{\Xi}||_{2}^{2}\] (51) \[\mathcal{L}_{\mathcal{F}} =\frac{1}{2n}\sum_{i=1}^{n}||\dot{\mathbf{x}}(t_{i})-\nabla_{h}d_{NN} \big{(}\underbrace{e_{NN}(\mathbf{x}(t_{i});\mathbf{\theta}^{e})}_{\mathbf{h}};\mathbf{\theta} ^{d}\big{)}\cdot\underbrace{\mathbf{\Theta}\Big{(}e_{NN}(\mathbf{x}(t_{i});\mathbf{\theta} ^{e})\Big{)}\mathbf{\Xi}}_{\mathbf{h}}||_{2}^{2}\] (52) \[C =\kappa_{\mathcal{A}}\mathcal{L}_{\mathcal{A}}+\kappa_{\mathcal{ R}}\mathcal{L}_{\mathcal{R}}+\kappa_{\mathcal{F}}\mathcal{L}_{\mathcal{F}} \tag{53}\]
As in Equation (23), a weighted cost function with weights \(\kappa_{\mathcal{A}},\kappa_{\mathcal{R}},\kappa_{\mathcal{F}}\) is employed. The reduced latent space can be exploited for forward simulations of the identified system. By solving the system with classical time-stepping schemes in the reduced latent space, the solution is obtained in the full space through the decoder, as outlined in [316]. Thus, a reduced order model of a previously unknown system is identified. The downside is, that the model is no longer interpretable in the full space.
**Multistep Methods**
Another approach to learning the system dynamics from Equation (3) is to approximate the right-hand side directly with a NN \(\mathbf{\hat{f}}(\mathbf{x}_{i})=O_{NN}(\mathbf{x}_{i};\mathbf{\theta})\), \(\mathbf{x}_{i}=\mathbf{x}(t_{i})\). By considering linear multistep methods [305], a residual can be formulated. In general, these methods take the form:
\[\sum_{m=0}^{M}[\alpha_{m}\mathbf{x}_{n-m}+\Delta t\beta_{m}\mathbf{f}(\mathbf{x}_{n-m})]=0, \tag{54}\]
where \(M\), \(\alpha_{m}\), and \(\beta_{m}\) are parameters specific to a multistep scheme. The scheme can be reformulated as a cost function, given as:
\[C =\frac{1}{N-M+1}\sum_{n=M}^{N}||\mathbf{\hat{y}}_{n}||_{2}^{2} \tag{55}\] \[\mathbf{\hat{y}}_{n} =\sum_{m=0}^{M}[\alpha_{m}\mathbf{x}_{n-m}+\Delta t\beta_{m}\mathbf{\hat {f}}(\mathbf{x}_{n-m})] \tag{56}\]
The idea of the method is strongly linked to the discrete-time PINN presented in Paragraph 2.2.2.1, where a reformulation of the Runge-Kutta method yields the cost function needed to learn the forward solution.
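As a sketch, the trapezoidal rule (\(M=1\), \(\alpha_{0}=1\), \(\alpha_{1}=-1\), \(\beta_{0}=\beta_{1}=-1/2\)) can be inserted into Equations (54)-(56), yielding the following PyTorch training loop; the trajectory data and hyperparameters are placeholders.

```python
import torch

# Sketch of learning f_hat = O_NN with the trapezoidal rule as linear multistep scheme
f_nn = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))
optimizer = torch.optim.Adam(f_nn.parameters(), lr=1e-3)

x_traj = torch.rand(500, 2)      # measured trajectory x(t_1), ..., x(t_N) (placeholder data)
dt = 0.01

for it in range(2000):
    optimizer.zero_grad()
    f_vals = f_nn(x_traj)
    # residual y_n of Equation (56) for every pair of consecutive states
    y = x_traj[1:] - x_traj[:-1] - 0.5 * dt * (f_vals[1:] + f_vals[:-1])
    cost = torch.mean(torch.sum(y ** 2, dim=-1))    # cost as in Equation (55)
    cost.backward()
    optimizer.step()
```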
#### 2.2.3 Enforcement Of Physics By Construction
Up to this point, this review only considered the case where physics are enforced indirectly through penalty terms of the PDE residual. The only exception, and the first example of enforcing physics by construction, was the strong enforcement of boundary conditions [224, 20, 185] by modifying the outputs of the NN - which led to a fulfillment of the boundary conditions independent of the NN parameters. For PDEs, this can be achieved by manipulating the output, such that the solution automatically obeys fundamental physical laws. Examples for this are, e.g., given in [317], where stream functions are predicted and subsequently differentiated to ensure conservation of mass, the incorporation of symmetries [318], or invariances [319] by using integrity bases [320]. Dynamical systems have been treated by learning the Lagrangian or Hamiltonian with correspondingly Lagrangian NNs [321, 322, 323] and Hamiltonian NNs [324]. The quantities of interest are obtained through the differentiable NN and compared to labeled data. Indirectly learning the quantities of interest through the Lagrangian or Hamiltonian guarantees the conservation of energy. Enforcing the physics by construction is also referred to as physics-constrained learning, as the learnable space is constrained. More examples hereof are provided in the context of simulation enhancement in Section 3.2.
## 3 Simulation Enhancement
The category of simulation enhancement deals with any deep learning technique that interacts directly with and, thus, improves a component of a classical simulation. This is the most diverse category and will therefore be subdivided into the individual steps of a classical simulation pipeline:
* pre-processing
* physical modeling
* numerical methods
* post-processing
Both data-driven and physics-informed approaches will be discussed in the following.
### 3.1 Pre-processing
The discussed pre-processing methods are trained in a supervised manner relying on the techniques presented in Section 2.1 and on labeled data.
#### 3.1.1 Data Preparation
Data preparation includes tasks such as geometry extraction. For instance, cracks detected in images by means of segmentation [325, 326, 327] can subsequently be used in simulations to assess the impact of the identified cracks. Also, CNNs have been used to prepare voxel data obtained from computed tomography scans, see [328], where scanning artifacts are removed. Similarly, NNs can be employed to enhance measurement data. This was, for example, demonstrated in [329], where the NN acts as a denoiser for magnetic signals in the scope of non-destructive testing. Similarly, low-frequency extrapolation for full waveform inversion has been performed using NNs [330, 331, 332].
#### 3.1.2 Initialization
Instead of preparing the data, the simulation can be accelerated by an initialization. This can, for example, be achieved through initial guesses by NNs, providing a better starting point for classical iterative solvers [333]16. A tighter integration is achieved by using a pre-trained [256] NN ansatz whose parameters are subsequently tweaked by the classical solver, as demonstrated for full waveform inversion in [161].
#### 3.1.3 Meshing
Finally, many simulation techniques rely on meshes. Mesh generation can be supported indirectly by NNs through the prediction of mesh density functions [334, 335, 336, 337, 338], incorporating either expert knowledge of where small elements are needed or relying on error estimations. Subsequently, a classical mesh generator is employed. However, NNs (specifically let-it-grow NNs [339]) have also been proposed directly as mesh generators [340, 341].
### 3.2 Physical Modeling
Physical models that capture physical phenomena accurately are a core component of mechanics. Deep learning offers three main approaches for physical models. Firstly, a NN is used as the physical model directly (model substitution). Secondly, an underlying model may be assumed where a NN determines its coefficients (identification of model parameters). Lastly, the entire model can be identified by a NN (model identification). In the first approach, the NN is integrated within the simulation pipeline, while the latter two rely on incorporation of the identified models in a classical sense.
For illustration purposes, the approaches are mostly explained on the example of constitutive models. Here, the task is to relate the strain \(\varepsilon\) to a stress \(\sigma\), i.e., find a function \(\sigma=f(\varepsilon)\). This can, for example, be used within a finite element framework to determine the element stiffness, as elaborated in [342].
#### 3.2.1 Model Substitution
In model substitution, a NN \(f_{NN}\) replaces the model, yielding the prediction \(\hat{\sigma}=f_{NN}(\varepsilon;\boldsymbol{\theta})\). The quality of the model is assessed with a data-driven cost function (Equation (4)) using labeled data \(\sigma^{\mathcal{M}},\varepsilon^{\mathcal{M}}\). The approach is applied to a variety of problems, where the key difference lies in the definition of input and output quantities. The same deep learning techniques from data-driven simulation substitution (Section 2.1) can be employed.
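As a concrete illustration, the following minimal sketch (PyTorch; non-dimensional synthetic data and a hypothetical network size) fits such a surrogate constitutive model to strain-stress pairs with the data-driven cost of Equation (4). The trained network can then be queried like any other constitutive routine, e.g., at the integration points of a finite element code.

```python
import torch
import torch.nn as nn

# Minimal sketch of model substitution: a small NN maps strain to stress and
# is fitted to labeled strain-stress pairs (hypothetical, non-dimensional data).
eps = torch.linspace(-0.05, 0.05, 200).reshape(-1, 1)   # strains
sigma = eps + 10.0 * eps ** 3                            # synthetic "measurements"

f_nn = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(f_nn.parameters(), lr=1e-3)

for epoch in range(2000):
    optimizer.zero_grad()
    loss = torch.mean((f_nn(eps) - sigma) ** 2)          # data-driven cost, cf. Equation (4)
    loss.backward()
    optimizer.step()

print(f_nn(torch.tensor([[0.01]])))                      # query the trained surrogate
```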
Applications include predictions of stress from strain [342, 343], flow stresses from temperatures, strain rates and strains [344, 345], yield functions [346], crack opening responses from stresses [347], contact stiffness from penetration and contact pressure [348], point of contact from position of neighboring nodes of finite elements [349], or control points of NURBS surfaces [350]. Source terms of simplified equations or coarser discretizations have also been learned for turbulence [49, 351, 352] and the wave equation [353]. Here, the reference - a high-fidelity model - is to be captured in the best possible way by the source term.
Variations also predict the quantity of interest indirectly. For example, strain energy densities \(\psi\) are predicted by NNs from deformation tensors \(F\) and subsequently differentiated using automatic differentiation to obtain stresses [354, 355]. The approach can also be extended to incorporate uncertainty quantification [356]. By extending the input space with microstructural information, an in-built homogenization is added to the constitutive model [357, 358, 359]. Thus, the macroscale simulation considers the microstructure at the integration points in the sense of FE\({}^{2}\) [360, 361], but without an additional finite element computation. Incorporating microstructures requires a large amount of realistic training data, which can be obtained through generative approaches as discussed in Section 5. Active learning can reduce the required number of simulations on these geometries [158].
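The following minimal sketch illustrates the indirect prediction (PyTorch; restricted to a 1D toy setting for brevity, whereas the cited works use the deformation tensor or its invariants as input): a scalar energy is predicted and the stress is obtained as its derivative via automatic differentiation.

```python
import torch
import torch.nn as nn

# Minimal sketch: predict a scalar strain energy density psi and obtain the
# stress as its derivative via automatic differentiation (1D toy setting;
# network size and data are hypothetical).
psi_nn = nn.Sequential(nn.Linear(1, 32), nn.Softplus(), nn.Linear(32, 1))

def stress(eps):
    eps = eps.requires_grad_(True)
    psi = psi_nn(eps).sum()
    # d(psi)/d(eps) plays the role of the stress
    return torch.autograd.grad(psi, eps, create_graph=True)[0]

eps = torch.linspace(-0.05, 0.05, 100).reshape(-1, 1)
sigma_hat = stress(eps)          # remains differentiable w.r.t. the NN parameters
print(sigma_hat.shape)           # torch.Size([100, 1])
```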
A specialized NN architecture is employed by [362], where a NN first estimates invariants \(I\) of the deformation tensor \(F\) and thereupon predicts the strain energy density, thus mimicking the classical constitutive modeling approach. Another network extension is the use of RNNs to learn history-dependent models. This was shown in [357, 358, 363, 364] for the prediction of the stress increment from the strain-stress history, the strain energy from the strain energy history [365], and
crack patterns based on prior cracks and crystalline orientations [366, 367].
The learned models do not, however, necessarily obey fundamental physical laws. Attempts to incorporate physics as constraints using penalty terms have been made in [368, 369, 370]. Still, physical consistency is not guaranteed. Instead, NN architectures can be chosen such that they satisfy physical requirements by construction. In constitutive modeling, objectivity can be enforced by using only deformation invariants as input [371], and polyconvexity can be enforced through the architecture, such as input-convex NNs [372, 373, 374, 375] and neural ordinary differential equations [371, 376]. It was demonstrated that enforcing fundamental physical properties, such as objectivity via invariants combined with polyconvexity, delivers much better behavior for unseen data, especially if the model is used in extrapolation.
Input-convex NNs [377] enforce the convexity with specialized activation functions such as log-sum-exponential, or softplus functions in combination with constraints on the NN weights to ensure that they are positive, while neural ordinary differential equations [378] (discussed in Section 4) approximate the strain energy density derivatives and ensure non-negative values. Alternatively, a mapping from the NN to a convex function can be defined [379] ensuring a convex function for any NN output. Related are also thermodynamics-based NNs [380, 381], e.g., applied to complex microstructures in [382], which by construction obey fundamental thermodynamic laws. Training of these methods can be performed in a supervised manner, relying on strain-stress data, or unsupervised. In the unsupervised setting, the constitutive model is incorporated in a finite element solver, yielding a displacement field for a specific boundary value problem. The computed field, together with measurement data, yields a residual that is referred to as the modified constitutive relation error (mCRE) [383, 384, 385], which is minimized to improve the constitutive relation [386, 387]. Instead of formulating the mismatch in terms of displacements, [388, 389] formulate it in terms of boundary forces. For an in-depth overview of constitutive model substitution in deep learning, see [15].
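The following is a minimal sketch of an input-convex NN in the spirit of [377] (hypothetical layer sizes): convexity of the output in the input follows from convex, non-decreasing activations and non-negative weights acting on the previous activations. Here, non-negativity is enforced by a softplus reparametrization of those weights; clipping the weights after each update is a common alternative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Minimal input-convex NN sketch (hypothetical sizes).

    The output is convex in the input x because the activations (softplus)
    are convex and non-decreasing, and the weights acting on previous
    activations are constrained to be non-negative.
    """
    def __init__(self, dim_in=1, width=16):
        super().__init__()
        self.Wx0 = nn.Linear(dim_in, width)                 # acts on the input only
        self.Wz1 = nn.Parameter(torch.rand(width, width))   # constrained >= 0 below
        self.Wx1 = nn.Linear(dim_in, width)
        self.Wz2 = nn.Parameter(torch.rand(1, width))       # constrained >= 0 below
        self.Wx2 = nn.Linear(dim_in, 1)

    def forward(self, x):
        z = F.softplus(self.Wx0(x))
        z = F.softplus(z @ F.softplus(self.Wz1).T + self.Wx1(x))
        return z @ F.softplus(self.Wz2).T + self.Wx2(x)

psi = ICNN()
print(psi(torch.linspace(-1.0, 1.0, 5).reshape(-1, 1)))
```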
#### 3.2.2 Identification Of Model Parameters
Identification of model parameters is achieved by assuming an underlying model and training a NN to predict its parameters for a given input. In the constitutive model example, one might assume a linear elastic model expressed in terms of a constitutive tensor \(c\), such that \(\sigma=c\varepsilon\). The constitutive tensor can be predicted from the material distribution defined in terms of a heterogeneous elasticity modulus \(\mathbf{E}\) defined throughout the domain
\[\hat{c}=f_{NN}(\mathbf{E};\mathbf{\theta}). \tag{57}\]
Typical applications are homogenization, where effective properties are predicted from the geometry and material distribution. Examples are CNN-based homogenizations on computed tomography scans [390, 391], predictions of in-vivo constitutive parameters of aortic walls from its geometry [392], predictions of elastoplastic properties [393] from instrumented indentation results relying on a multi-fidelity approach [394], prediction of stress intensity factors from the geometry in microfabricated microcantilevers [395], estimation of effective bone properties from the boundary conditions and applied stresses within a finite element, and incorporating meso-scale information by training a NN on representative volume elements [396].
#### 3.2.3 Model Identification
NN models that replace classical approaches are not interpretable, while merely identifying the parameters of known models restricts the model's capacity. This gap can be bridged by identifying models in terms of parsimonious mathematical expressions.
The typical procedure is to pose the problem in terms of candidate functions and to identify the most relevant terms. The methodology was inspired by SINDy [312] and introduced in the framework for efficient unsupervised constitutive law identification and discovery (EUCLID) [397].
The approach is unsupervised, as the stress-strain data is only indirectly available through the displacement field and corresponding reaction forces. The \(N_{I}\) invariants \(I_{i}\) of the deformation tensor \(F\) are inserted into a candidate library \(Q(\{I_{i}\}_{i=1}^{N_{I}})\) containing the candidate functions. Together with the corresponding weights \(\mathbf{\theta}\), the strain density \(\psi\) is determined:
\[\psi(\{I_{i}\}_{i=1}^{N_{I}})=Q^{T}(\{I_{i}\}_{i=1}^{N_{I}})\mathbf{\theta}. \tag{58}\]
By differentiating the strain energy density \(\psi\) using automatic differentiation, the stresses \(\mathbf{\sigma}\) are determined. The problem is then cast into the weak form, with which the linear momentum balance is enforced. The weak form is then minimized with respect to \(\mathbf{\theta}\) using a fixed-point iteration scheme (inspired by [398]), where an \(L_{p}\)-regularization is used to promote sparsity in \(\mathbf{\theta}\). Despite being relatively recent, the approach has already been applied to plasticity [399], viscoelasticity [400], and combinations thereof [401], and has been extended to incorporate uncertainties through a Bayesian model [402]. Furthermore, the approach has been extended with an ensemble of input-convex NNs [389], yielding a more accurate, but less interpretable model.
A similar effort was recently carried out by [403, 404], where NNs are designed to retain interpretability. This is achieved through sparse connections in combination with specialized activation functions representing candidate functions, such that they are able to capture classical forms of constitutive terms. Through the sparse connections in the network and the specialized activation functions, the NN's weights become physical parameters, yielding an interpretable model. This is best understood by consulting Figure 6, where the strain energy density is expressed as
\[\hat{\psi}=\theta_{0}^{1}e^{\theta_{0}^{0}I_{1}}+\theta_{1}^{1}\ln(\theta_{1}^{0}I_{1})+\theta_{2}^{1}e^{\theta_{2}^{0}I_{1}^{2}}+\theta_{3}^{1}\ln(\theta_{3}^{0}I_{1}^{2})+\theta_{4}^{1}e^{\theta_{4}^{0}I_{2}}+\theta_{5}^{1}\ln(\theta_{5}^{0}I_{2})+\theta_{6}^{1}e^{\theta_{6}^{0}I_{2}^{2}}+\theta_{7}^{1}\ln(\theta_{7}^{0}I_{2}^{2}). \tag{59}\]
Differentiating the predicted strain energy density \(\hat{\psi}\) with respect to the invariants \(I_{i}\) yields the constitutive model, relating stress and strain.
### Numerical Methods
This subsection describes efforts in which NNs are used to replace or enhance classical numerical schemes to solve PDEs.
Figure 6: Automated model discovery through a sparsely connected NN with specialized activation functions acting as candidate functions. The thick black connections are not learnable, while the gray ones represent linearly weighted connections. Figure adapted and simplified from [403].
#### 3.3.1 Algorithm Enhancement
Classical algorithms can be enhanced by NNs, by learning corrections to commonly arising numerical errors, or by estimating tunable parameters within the algorithm. Corrections have, for example, been used for numerical quadrature [405] in the context of finite elements. Therein, NNs are used to predict adjustments to quadrature weights and positions from the nodal positions to improve the accuracy for distorted elements. Similarly, NNs have been applied as correction for strain-displacement matrices for distorted elements [406]. NNs have also been employed to provide improved gradient estimates. Specifically, [407] modify the gradient computation to match a fine scale simulation on a coarse grid:
\[\frac{\partial^{n}u}{\partial x^{n}}\approx\sum_{i}\alpha_{i}^{(n)}u_{i}. \tag{60}\]
The coefficients \(\alpha_{i}\) are predicted by NNs from the current coarse solution. Special constraints are imposed on \(\alpha_{i}\) to guarantee accurate derivatives. Another application is specialized strain mappings for damage mechanics embedded within individual finite elements, learned by PINNs [408]. It has even been suggested to partially replace solvers. For example, [409] replace either the fluid or the structural solver by a surrogate model for fluid-structure interaction problems.
Learning tunable parameters was demonstrated for the estimation of the largest possible time step using a RNN acting on the latent vector of an autoencoder [410]. Also, optimal test functions for finite elements were learned to improve stability [411].
#### 3.3.2 Multiscale Methods
Multiscale methods have been proposed to efficiently integrate and resolve systems acting on multiple scales. One approach is the learned constitutive models from Section 3.2 that incorporate the microstructure. This is essentially achieved through a homogenization at the mesoscale used within a macroscale simulation.
A related approach is element substructuring [412, 413], where superelements mimic the behavior of a conglomerate of classic basic finite elements. In [414], the superelements are enhanced by NNs, which draw on the boundary displacements to predict the displacements and stresses within the element as well as the reaction forces at the boundary. Through assembly of the reaction forces in the global finite element system, an equilibrium is reached with a Newton-Raphson solver. Similarly, the approach in [415] learns the internal forces from the coarse degrees of freedom of the superelements. These approaches are particularly valuable, as they can seamlessly incorporate history-dependent behavior using RNNs.
Finally, multiscale analysis can also be performed by first solving a coarse global model with a subsequent local analysis; such approaches are referred to as zooming methods. In [416], a NN learns the global model and thereby predicts the boundary conditions for the local model. In a similar sense, DeepONets have been applied for the local analysis [417], whereas the global analysis is performed with a finite element solver. Both are conducted in an alternating fashion until convergence is reached.
#### 3.3.3 Optimization
Optimization is a fundamental task within computational mechanics and therefore addressed separately. It is not only used to find optimal structures, but also to solve inverse problems. Generally, the task can be formulated as minimizing a cost function \(C\) with respect to parameters \(\lambda\). In computational mechanics, \(\lambda\) is typically fed to a forward simulation \(u=F(\lambda)\), yielding a solution \(u\) inserted into the cost function \(C\). If the gradients \(\nabla_{\lambda}C\) are available, gradient-based optimization is the state-of-the-art [418], where the gradients are used to update \(\lambda\). In order to access the gradients, the forward simulation \(F\) has to be differentiable. This requirement is, for example, utilized within the branch of deep learning called differentiable physics [19]. Incorporating gradient
information from the numerical solver into the NN improves learning, feedback, and generalization. An overview and introduction to differentiable physics is provided in [19], with applications in [197, 407, 378, 419, 420, 421]17.
Footnote 17: Applications of differentiable physics vary widely and are addressed throughout this work.
The iterative gradient-based optimization procedure is illustrated in Figure 7. For an in-depth treatment of NNs in optimization, see the recent review [5].
Inserting a learned forward operator \(F\), such as those discussed in Section 2.1, into an optimization problem provides two advantages [422, 423, 424, 425, 426]. Firstly, a faster forward operator results in faster optimization iterations. Secondly, the gradient computation is simplified, as automatic differentiation through the forward operator \(F\) is straightforward in contrast to the adjoint state method [427, 428]. Note, however, that for time-stepping procedures, the computational cost might be greater for automatic differentiation, as shown in [290]. Applications include full waveform inversion [290], topology optimization [429, 430, 431], and control problems [47, 45, 419].
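The first advantage is sketched below (PyTorch; all sizes and the target are hypothetical, and the surrogate \(F\) is assumed to have been pre-trained beforehand): the design variables are updated by gradient descent, with the gradient obtained by automatic differentiation through the frozen surrogate.

```python
import torch
import torch.nn as nn

# Minimal sketch: gradient-based optimization through a learned forward
# operator F (assumed pre-trained; here an untrained dummy network).
# lam: design variables, u = F(lam): predicted solution, C(u): cost.
F_nn = nn.Sequential(nn.Linear(10, 64), nn.Tanh(), nn.Linear(64, 50))
for p in F_nn.parameters():
    p.requires_grad_(False)            # the surrogate is frozen

u_target = torch.zeros(50)             # hypothetical target solution
lam = torch.zeros(10, requires_grad=True)
optimizer = torch.optim.Adam([lam], lr=1e-2)

for it in range(500):
    optimizer.zero_grad()
    u = F_nn(lam)                      # fast forward evaluation
    C = torch.mean((u - u_target) ** 2)
    C.backward()                       # autodiff through the surrogate
    optimizer.step()
```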
Similarly, an operator replacing the sensitivity computation can be learned [432, 433, 434, 431]. This can be achieved in a supervised manner with precomputed sensitivities of the cost \(C\) [433, 431], or by intending to maximize the improvement of the cost function after the gradient update [432, 434]. In [432, 434], an evolutionary algorithm was employed for the general case that the sensitivities are not readily available. Training can adaptively be reintroduced during the optimization phase if the cost \(C\) does not decrease [431], improving the NN for the specific problem it is handling. Taking this idea to the extreme, the NN is trained on the initial gradient updates of a specific optimization. Later, solely the NN delivers the sensitivities [435], with supervised updates every \(n\) updates to improve accuracy, where \(n\) is a hyperparameter. The ideas of learning a forward operator and a sensitivity operator are combined in [430], where it is pointed out that the sensitivity from automatic differentiation through the learned forward operator can be inaccurate, despite an accurate forward operator18. Therefore, an additional loss term is added to the cost function, enforcing the correctness of the sensitivity through labels obtained with the adjoint state method. Alternatively, the sensitivity computation can be enhanced by correcting a sensitivity computed on a coarse grid, as proposed in [436] and related to the multiscale techniques discussed in Section 3.3.2. Here, the adjoint field used for the sensitivity computation is reduced by both a proper orthogonal decomposition and a coarser discretization. Subsequently, a NN corrects the coarse estimate through a super-resolution NN [437]. Similarly, [431, 438] map the forward solution on a coarse grid to the design variable sensitivity on a fine grid. A similar application is a correction term within a fixed-point iterator, as outlined in [439].
Footnote 18: Although automatic differentiation in principle has a high accuracy, oscillations between the sampled points may lead to spurious gradients with regard to the sampled points [218].
Related to the sensitivity predictions are approaches that directly predict an updated state. The goal is to decrease the total number of iterations. In practice, a combination of predictions and classical gradient-based updates is performed [86, 88, 87, 440]. The main variations between the methods in the literature are the inputs and how far the forecasting is performed. In [86], the update is obtained from the current state and gradient, while [88] predicts the final state from the history of initial updates. The history is also considered in [87], but the prediction is performed on subpatches which are then stitched together.
Figure 7: Gradient-based optimization.
Another option of introducing NNs to the optimization loop is to use NNs as an ansatz of \(\lambda\), see e.g. [441, 442, 419, 443, 444, 445, 446, 290]. In the context of inverse problems [441, 442, 419, 443, 444, 445, 446, 447, 448, 449], the NN acts as regularizer on a spatially varying inverse quantity \(\lambda(x)=I_{NN}(x;\boldsymbol{\theta})\), providing both smoother and sharper solutions. For topology optimization with a NN parametrization of the density function [446, 447, 448, 449], no regularizing effect was observed. It was however possible to obtain a greater design diversity through different initializations of the NN. Extensions using specialized NN architectures for implicit representations [450, 451, 452, 453, 454, 455] have been presented in the context of topology optimization in [456]. Furthermore, [443, 447, 290] showed how to conduct the gradient computation without automatic differentiation through the solver \(F\). The gradient computation is split up via the chain rule:
\[\nabla_{\boldsymbol{\theta}}C=\nabla_{\lambda}C\cdot\nabla_{\boldsymbol{\theta }}\lambda. \tag{61}\]
The first gradient \(\nabla_{\lambda}C\) is computed with the adjoint state method, such that the solver can be treated as a black box. The second gradient \(\nabla_{\boldsymbol{\theta}}\lambda\) is obtained through automatic differentiation. An additional advantage of the NN ansatz is that, if applied across multiple problems with problem-specific inputs, the NN is effectively trained. Thus, after sufficient inversions, the NN can be used as a predictor, as presented in [457]. The training can also be performed in combination with labeled data, yielding a semi-supervised approach, as demonstrated in [458, 161].
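A minimal sketch of this gradient split is given below (PyTorch; the solver, its adjoint gradient, and all sizes are hypothetical placeholders): the external solver supplies \(\nabla_{\lambda}C\), which is then propagated through the NN ansatz by a vector-Jacobian product, corresponding to Equation (61).

```python
import torch
import torch.nn as nn

# Minimal sketch of Equation (61): the solver is treated as a black box that
# returns dC/dlambda via the adjoint state method, and autodiff only
# propagates this gradient through the NN ansatz lambda = I_NN(x; theta).
I_nn = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
x = torch.linspace(0.0, 1.0, 100).reshape(-1, 1)      # spatial coordinates

def solver_and_adjoint(lam_numpy):
    # placeholder for an external (non-differentiable) solver returning the
    # adjoint gradient dC/dlambda; toy example with C = sum((lam - 1)^2)
    return 2.0 * (lam_numpy - 1.0)

optimizer = torch.optim.Adam(I_nn.parameters(), lr=1e-3)
for it in range(200):
    optimizer.zero_grad()
    lam = I_nn(x)                                      # NN ansatz of the inverse quantity
    dC_dlam = torch.tensor(solver_and_adjoint(lam.detach().numpy()))
    # vector-Jacobian product: (dC/dlambda)^T * (dlambda/dtheta)
    lam.backward(gradient=dC_dlam.float())
    optimizer.step()
```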
### Post-Processing
Post-processing concerns the modification and interpretation of the computed solution. One motivation is to reduce the numerical error of the computed solution. This can for example be achieved with super-resolution techniques relying on specialized CNN architectures from computer vision [459, 460]. Coarse to fine mappings can be obtained in a supervised manner using matching coarse and fine simulations as labeled data, as presented for turbulent flows [461, 437] and topology optimization [462, 463, 464]. The mapping is typically performed from coarse to fine solution fields, but mappings from a posteriori errors have been proposed as well [465]. Further specialized extensions to the cost function have been suggested in the context of de-homogenization [466].
The methods can analogously be applied to temporal data where the solution is refined at each time step, as, e.g., presented with RNNs as correctors of reduced order models [467]. However, coarse discretizations in dynamical models lead to an error accumulation that increases with the number of time steps. Thus, a simple coarse-to-fine post-processing at each time step is not sufficient. To this end, [420, 421] apply a correction at each time step before the coarse solver predicts the next time step. As the correction is propagated through the solver, the sensitivities of the solver must be computed to perform the backward propagation. Therefore, a differentiable solver (i.e., differentiable physics) has to be employed. This significantly outperforms the purely supervised approach, where the entire coarse trajectory is computed without corrections in between. The number of steps performed is a hyperparameter, which increases the accuracy but comes with a higher computational effort. This concept is referred to as solver-in-the-loop.
Further variations perform the coarse-to-fine mapping in a patch-based manner, where the interfaces require a special treatment [468]. Another approach uses a NN to map the coarse solution to the closest fine solution stored in a database [469]. The mapping is performed on patches of the domain.
Other post-processing tasks include feature extraction. After a topology optimization, NNs have been used to extract basic shapes to be used in a subsequent shape optimization [470, 471]. Another aspect that can be ensured through post-processing is manufacturability.
Lastly, adaptive mesh refinement falls under the category of post-processing as well. Closely related to the meshing approaches discussed in Section 3.1.3, NNs have been proposed as error indicators [472, 337] that are trained in a supervised manner. The error estimators can subsequently be employed to adapt the mesh based on the error.
## 4 Discretizations As Neural Networks
NNs are composed of linear transformations and non-linear functions, which are also basic building blocks of most PDE discretizations. The motivation to utilize NNs to construct discretizations of PDEs is therefore twofold. Firstly, deep learning techniques can thereby be exploited within classical discretization frameworks. Secondly, novel NN architectures arise, which are more tailored towards many physical problems in computational mechanics, but also find use cases outside of that field.
### Finite Element Method
One method is finite element NNs [473, 474] (see [475, 476, 477, 478, 479, 480] for applications), for which we consider the system of equations from a finite element discretization with the stiffness matrix \(K_{ij}\), degrees of freedom \(u_{j}\), and the body load \(b_{i}\):
\[\sum_{j=1}^{N}K_{ij}u_{j}-b_{i}=0,i=1,2,\ldots,N. \tag{62}\]
Assuming constant material properties along an element and uniform elements, a pre-integration of the local stiffness matrix \(k_{ij}^{e}=\alpha^{e}w_{ij}^{e}\) can be performed, as, e.g., shown in [481]. The goal is to pull out the material coefficients of the integration, leading to the following assembly of the global stiffness matrix:
\[K_{ij}=\sum_{e=1}^{M}\alpha^{e}W_{ij}^{e}\text{ with }W_{ij}^{e}=\begin{cases}w_{ ij}^{e}\text{ if }i,j\in e\\ 0\text{ else }\end{cases}. \tag{63}\]
Inserting the assembly into the system of equations from equation (62) yields
\[\sum_{j=1}^{N}\left(\sum_{e=1}^{M}\alpha^{e}W_{ij}^{e}\right)u_{j}-b_{i}=0,i= 1,2,\ldots,N. \tag{64}\]
The nested summation has a structure similar to that of a FC-NN, \(a_{i}^{(l)}=\sigma(z_{i}^{(l)})=\sigma(\sum_{j=1}^{N^{(l)}}W_{ij}^{(l-1)}a_{j}^{(l-1)}+b_{i}^{(l)})\), without activation and bias (see Figure 8):
\[a_{i}^{(2)}=\sum_{j=1}^{N^{(2)}}W_{ij}^{(1)}a_{j}^{(1)}=\sum_{j=1}^{N^{(2)}}W_ {ij}^{(1)}(\sum_{k=1}^{N^{(1)}}W_{jk}^{(0)}a_{k}^{(0)}). \tag{65}\]
Thus, the stiffness matrix \(K_{ij}\) is the hidden layer. In a forward problem, \(W_{ij}^{e}\) are non-learnable weights, while \(u_{j}\) contains a mixture of learnable weights and non-learnable weights coming from the imposed Dirichlet boundary conditions. A loss can be formulated in terms of body load mismatch, as \(\frac{1}{2}\sum_{i=1}^{N}(\hat{b}_{i}-b_{i})^{2}\). In the inverse setting, \(\alpha^{e}\) becomes learnable - instead of \(u_{j}\), which is then fixed. For partial domain knowledge in the inverse case, \(u_{j}\) becomes partially learnable.
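A simplified sketch in the spirit of this formulation is given below (PyTorch; uniform 1D bar with hypothetical values, skipping the decomposition into material coefficients \(\alpha^{e}\)): the assembled stiffness matrix provides fixed weights, the interior nodal values are the learnable parameters, and the body-load mismatch serves as the loss.

```python
import torch

# Minimal sketch of the finite element NN idea for a uniform 1D bar
# (hypothetical values): K is fixed, the interior nodal values are the
# learnable "weights", and the body-load mismatch is the loss.
n = 11                                           # number of nodes
k = 1.0                                          # element stiffness EA/h
K = torch.zeros(n, n)
for e in range(n - 1):                           # assemble global stiffness
    K[e:e+2, e:e+2] += k * torch.tensor([[1.0, -1.0], [-1.0, 1.0]])

b = torch.ones(n) / (n - 1)                      # nodal body load (toy values)
u_free = torch.zeros(n - 2, requires_grad=True)  # learnable interior DOFs
u_bc = (torch.tensor(0.0), torch.tensor(0.0))    # Dirichlet: u_0 = u_N = 0

optimizer = torch.optim.Adam([u_free], lr=1e-2)
for it in range(2000):
    optimizer.zero_grad()
    u = torch.cat([u_bc[0].reshape(1), u_free, u_bc[1].reshape(1)])
    b_hat = K @ u                                            # "forward pass", cf. Equation (64)
    loss = 0.5 * torch.sum((b_hat[1:-1] - b[1:-1]) ** 2)     # mismatch at interior nodes
    loss.backward()
    optimizer.step()
```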
A different approach is the hierarchical deep-learning NNs (HiDeNNs) [482], with extensions in [483, 484, 485, 486]. Here, shape functions are treated as NNs constructed from basic building blocks. Consider, for example, the one-dimensional linear shape functions
\[N_{1}(x)=\frac{x-x_{2}^{e}}{x_{1}^{e}-x_{2}^{e}} \tag{66}\] \[N_{2}(x)=\frac{x-x_{1}^{e}}{x_{2}^{e}-x_{1}^{e}}, \tag{67}\]
Figure 8: Finite element NNs, prediction of forces \(b_{i}\) from material coefficients \(\alpha^{e}\) via assembly of global stiffness matrix \(K_{ij}\), and evaluations of equations with the displacements \(u_{j}\)[474].
which can be represented as a NN, as shown in Figure 9, where the weights depend on the nodal positions \(x_{1}^{e},x_{2}^{e}\). The interpolated displacement field \(u^{e}\), which is valid in the element domain \(\Omega^{e}\), is obtained by multiplication with the nodal displacements \(u_{1}^{e},u_{2}^{e}\), treated as shared NN weights.
\[u^{e}=N_{1}^{e}(x)u_{1}^{e}+N_{2}^{e}(x)u_{2}^{e} \tag{68}\]
They are shared, as the nodal displacements \(u_{1}^{e},u_{2}^{e}\) are also used for the neighboring elements \(u^{e-1},u^{e+1}\). Finally the displacement over the entire domain \(u\) is obtained by superposition of all elemental displacement fields \(u^{e}\), which are first multiplied by a step function defined as 1 inside the corresponding element domain \(\Omega^{e}\) and 0 outside.
A forward problem is solved with a minimization of the variational loss function, as presented in Section 3.2 with the nodal values \(u_{i}^{e}\) as learnable weights. According to [482], this is equivalent to iterative solution procedures in finite elements. The additional advantage is a seamless integration of \(r\)-refinement, i.e., the shift of nodal positions to optimal positions by making the nodal positions \(x_{i}^{e}\) learnable. Special care has to be taken to avoid element inversion, which is handled by an additional term in the loss function. Inverse problems can similarly be solved by using learnable input parameters, as presented for topology optimization [488].
The method has been combined with reduced order modeling techniques [484]. Furthermore, the shape functions have been extended with convolutions [486, 487]. Specifically, a second weighting field \(W(x)\) is introduced to enhance the finite element space \(u^{c}(x)\) through convolutions:
\[u^{c}(x)=u^{e}(x)*W(x). \tag{69}\]
This introduces a smoothing effect over the elements and can efficiently be implemented using CNNs, thereby obtaining a more favorable data structure that exploits the full parallelization capabilities of GPUs [487]. The enhanced space has been incorporated in the HiDeNN framework. While an independent confirmation is still missing, the authors promise a speedup of several orders of magnitude compared to traditional finite element solvers [488].
Another approach related to finite elements was presented as FEA-Net [489, 490]. Here, the matrix-vector multiplication of the global stiffness matrix \(\mathbf{K}\) and the solution vector \(\mathbf{u}\), including the assembly of the global stiffness matrix, is replaced by a convolution. In other words, the product \(\mathbf{K}\cdot\mathbf{u}\), needed together with the force vector \(\mathbf{f}\) to compute the residual \(\mathbf{r}\), is evaluated by a convolution.
\[\mathbf{r}=\mathbf{f}-\mathbf{K}\cdot\mathbf{u} \tag{70}\]
Figure 9: HiDeNN with one-dimensional linear elements [482].
Assuming a uniform mesh with homogeneous material properties, the mesh is defined by the segment illustrated in Figure 10. The degree of freedom \(u_{j}\) only interacts with the stiffness contributions \(K_{i}^{1},K_{i}^{2},K_{i+1}^{1},K_{i+1}^{2}\) of its neighboring elements \(i\) and \(i+1\). Therefore, the force component \(f_{j}\) acting on node \(j\) can be expressed by a convolution:
\[f_{j}=[K_{i}^{1},K_{i}^{2}+K_{i+1}^{1},K_{i+1}^{2}]*[U_{j-1},U_{j},U_{j+1}] \tag{71}\]
This can analogously be applied to all degrees of freedom, with the same convolution filter \(\mathbf{W}=[K^{1},K^{1}+K^{2},K^{2}]\), assuming the same stiffness contributions for each element.
\[\mathbf{K}\cdot\mathbf{u}=\mathbf{W}*\mathbf{U} \tag{72}\]
The convolution can then be exploited in iterative schemes which minimize the residual \(\mathbf{r}\) from Equation (70). This saves the effort of constructing and storing the global stiffness matrix. By constructing the filter \(\mathbf{W}\) as a function of the material properties of the adjacent elements, heterogeneities can be taken into account [490]. If the same iterative solver is employed, FEA-Net is able to outperform classical finite elements for non-linear problems on uniform grids.
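The underlying equivalence can be illustrated with a minimal NumPy sketch for a uniform 1D bar (hypothetical values): at the interior nodes, the assembled matrix-vector product coincides with a convolution of the nodal values with a three-entry filter, here \([-k, 2k, -k]\).

```python
import numpy as np

# Minimal sketch of the FEA-Net idea (uniform 1D bar, hypothetical values):
# at the interior nodes, the matrix-vector product K @ u equals a
# convolution of the nodal values with the filter [-k, 2k, -k].
n, k = 11, 1.0
K = np.zeros((n, n))
for e in range(n - 1):                             # classical assembly
    K[e:e+2, e:e+2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

u = np.random.rand(n)
f_matrix = K @ u                                   # assembled matrix-vector product
f_conv = np.convolve(u, np.array([-k, 2 * k, -k]), mode="same")

print(np.allclose(f_matrix[1:-1], f_conv[1:-1]))   # True at the interior nodes
```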
### Finite Difference Method
Similar ideas have been proposed for finite differences [491], as employed in [290], for example, where convolutional kernels are used as an implementation of stencils, exploiting the efficient NN libraries with GPU capabilities. Here, the learnable parameters can be the finite difference stencil for inverse problems or the output for forward problems. This has, for example, been presented in the context of full waveform inversion, which is modeled as a RNN [492, 493]. The stencils are written as convolutional filters and repeatedly applied to the current state and the corresponding inputs, i.e., the wave field, the material distribution, and the source, such that the time-stepping can be regarded as a RNN. However, it is computationally expensive to perform automatic differentiation throughout the time steps of full waveform inversion in order to obtain the sensitivities with respect to the material distribution, both regarding memory and wall-clock time. A remedy is to combine automatic differentiation with the adjoint state method, as in [447, 443, 290] and discussed in Section 3.3.3.
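A minimal sketch of the stencil-as-convolution idea is given below (PyTorch; hypothetical grid): the five-point Laplacian is applied to a 2D field with a single convolution call, which maps directly onto GPU-accelerated NN libraries. For inverse problems, the stencil entries, or the material coefficients entering them, can be made learnable.

```python
import torch
import torch.nn.functional as F

# Minimal sketch: a finite difference stencil implemented as a convolution.
# The 5-point Laplacian is applied to a 2D field in one conv2d call
# (hypothetical grid; h is the grid spacing).
h = 0.01
stencil = torch.tensor([[0.0,  1.0, 0.0],
                        [1.0, -4.0, 1.0],
                        [0.0,  1.0, 0.0]]) / h**2

u = torch.rand(1, 1, 64, 64)                        # field as (batch, channel, H, W)
laplace_u = F.conv2d(u, stencil.reshape(1, 1, 3, 3), padding=1)
print(laplace_u.shape)                              # torch.Size([1, 1, 64, 64])
```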
Taking this idea one step further, the discretized wave equation can be regarded as an analog RNN [494] where the weights are the material distribution. Here, a binary material is learned in a trainable region between source and probing location. The input \(x(t)\) is encoded as a signal and emitted as source, which is measured at the probing locations \(y_{i}(t)\) as output. By integrating the outputs, a classification of the input can be performed.
### Material Discretizations
Deep material networks [495, 496] construct a NN from a material distribution. An output is constructed from basic building blocks, inspired by analytical homogenization techniques. Given two materials defined in terms of their compliance tensors \(c_{1}\), \(c_{2}\), and volume fractions \(f_{1},f_{2}\), an analytical effective compliance tensor \(\bar{c}\) is computed. The effective tensor is subsequently rotated with a rotation tensor \(R\), defined in terms of the three rotation angles \(\alpha,\beta,\gamma\), yielding a rotated effective tensor \(\bar{c}_{r}\). Thus, the building block takes as input two compliance tensors \(c_{1},c_{2}\) and outputs a rotated effective compliance tensor \(\bar{c}_{r}\), where \(f_{1},f_{2},\alpha,\beta,\gamma\) are the learnable parameters (see Figure 12). By connecting these building blocks, a large network can be created. The network is applied to homogenization tasks of RVEs [495, 496], where the material of the phases is varied during evaluation.
Figure 10: Segment of one-dimensional finite element mesh with degrees of freedom (left). Local element definition with stiffness contributions (right).
### Neural Differential Equations
In a more general setting, neural ordinary differential equations [378] consider the forward Euler discretization of ordinary differential equations. Specifically, RNNs are viewed as Euler discretizations of continuous transformations [497, 498, 499]. Consider the iterative update rule of the hidden states \(y_{t+1}=y(t+\Delta t)\) of a RNN.
\[y_{t+1}=y_{t}+f(y_{t};\mathbf{\theta}) \tag{73}\]
Here, \(f\) is the evaluation of one recurrent unit in the RNN. In the limit of a vanishing time step size, \(\Delta t\to 0\), the dynamics of the hidden units \(y_{t}\) can be parametrized by an ordinary differential equation
\[\frac{dy(t)}{dt}=f(y(t),t;\mathbf{\theta}) \tag{74}\]
The input to the network is the initial condition \(y(0)\), and the output is the solution \(y(T)\) at time \(T\). The output of the NN, \(y(T)\), is obtained by solving Equation (74) with a differential equation solver. The sensitivity computation for the weight update is obtained using the adjoint state method [500, 428], as backpropagating through each time step of the solver leads to a high memory cost. This also makes it possible to treat the solver as a black box. Similar extensions to PDEs [498] have been proposed by considering recurrent CNNs with residual connections, where the CNNs act as spatial gradients.
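A minimal sketch of this view is given below (PyTorch; hypothetical network and step count): the hidden state is advanced by forward Euler steps of an NN-parameterized vector field, corresponding to Equations (73) and (74). In [378], this explicit loop is replaced by a black-box ODE solver combined with the adjoint method.

```python
import torch
import torch.nn as nn

# Minimal sketch of Equations (73)/(74): the hidden state is advanced by
# forward Euler steps of an NN-parameterized vector field f(y; theta).
f = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))

def odeint_euler(y0, T=1.0, steps=100):
    y, dt = y0, T / steps
    for _ in range(steps):
        y = y + dt * f(y)          # Equation (73) with step size dt
    return y

y0 = torch.tensor([[1.0, 0.0]])    # initial condition y(0)
yT = odeint_euler(y0)              # output y(T), differentiable w.r.t. the parameters of f
print(yT)
```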
Similarly, [501] establish a connection between deep residual RNNs and iterative solvers. Residual connections in NNs allow information to bypass NN layers. Consider the estimation of the next state of a PDE with a classical solver \(u_{t+1}=u(t+\Delta t)=F[u(t)]\). The residual \(r_{t+1}=r(t+\Delta t)\) is determined in terms of the ground truth \(u_{t+1}^{\mathcal{M}}\):
\[r_{t+1}=u_{t+1}^{\mathcal{M}}-u_{t+1}. \tag{75}\]
An iterative correction scheme is formulated with a NN. The iterations are indicated with the superindex (\(k\)).
\[u_{t+1}^{(k+1)} =u_{t+1}^{(k)}+f_{NN}(r_{t+1}^{(k+1)};\mathbf{\theta}) \tag{76}\] \[r_{t+1}^{(k+1)} =u_{t+1}^{\mathcal{M}}-u_{t+1}^{(k)} \tag{77}\]
Figure 11: Analog RNN.
Figure 12: A single building block of the deep material network [495].
Note that the residual connection, i.e., \(u_{t+1}^{(k)}\) as directly used in the prediction of \(u_{t+1}^{(k+1)}\), allows information to pass past the recurrent unit \(f_{NN}\). A related approach can be found in [502], where an autoencoder iteratively acts on a solution until convergence. In the first iteration, a random initial solution is used as input.
## 5 Generative Approaches
Generative approaches (see [16] for an in-depth review in the field of design and [503] for a hands-on textbook) aim to model the underlying probability distribution of a data set to generate new data that resembles the training data. Three main methodologies exist:
* autoencoders,
* generative adversarial networks (GANs),
* diffusion models.
Currently, there are two prominent areas of application in computational mechanics. One area of focus is microstructure generation (Section 5.4.1), which aims to produce a sufficient quantity of realistic training data for surrogate models, as described in Section 2.1. The second key application area is generative design (Section 5.4.2), which relies on algorithms to efficiently explore the design space within the constraints established by the designer.
### Autoencoders
Autoencoders facilitate data generation by mapping high-dimensional training data \(\{\mathbf{x}_{i}\}_{i=1}^{N}\) to a lower-dimensional latent space \(\{\mathbf{h}_{i}\}_{i=1}^{N}\) which can be sampled efficiently. Specifically, an encoder \(\mathbf{\hat{h}}=E_{NN}(\mathbf{x};\mathbf{\theta}^{c})\) transforms an input sample \(\mathbf{x}\) to a reduced latent vector \(\mathbf{\hat{h}}\). A corresponding decoder \(\mathbf{\hat{x}}=D_{NN}(\mathbf{\hat{h}};\mathbf{\theta}^{d})\) reconstructs the original sample \(\mathbf{x}\) from this latent vector \(\mathbf{\hat{h}}\). As mentioned in Paragraph 2.1.1.3, the encoder can serve as a tool for dimensionality reduction, whereas the decoder, within the scope of generative approaches, operates as a generator. By emulating the probability distribution of the latent space \(\{\mathbf{\hat{h}}_{i}\}_{i=1}^{N}\), variational autoencoders [504, 505] are able to generate new data that resembles the training data.
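A minimal autoencoder sketch is given below (PyTorch; hypothetical sizes and placeholder data). For generation, samples are drawn in the latent space and decoded; in a variational autoencoder, the latent distribution is explicitly shaped, which justifies such sampling.

```python
import torch
import torch.nn as nn

# Minimal sketch of an autoencoder (hypothetical sizes): the encoder maps a
# sample to a low-dimensional latent vector, the decoder reconstructs it.
encoder = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 4))
decoder = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 100))
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.randn(256, 100)                  # training samples (placeholder)
for epoch in range(100):
    optimizer.zero_grad()
    x_hat = decoder(encoder(x))
    loss = torch.mean((x_hat - x) ** 2)    # reconstruction loss
    loss.backward()
    optimizer.step()

# generation: decode samples drawn from (a model of) the latent distribution
h_new = torch.randn(10, 4)
x_new = decoder(h_new)
```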
### Generative Adversarial Networks
GANs [506] emulate data distributions by setting up a two-player adversarial game between two NNs:
* the generator \(G_{NN}\),
* the discriminator \(D_{NN}\).
The generator creates predictions \(\mathbf{\hat{y}}=G_{NN}(\mathbf{\xi};\mathbf{\theta}_{G})\) from random noise \(\mathbf{\xi}\), while the discriminator attempts to distinguish these generated predictions \(\mathbf{\hat{y}}\) from real data \(\mathbf{y}^{\mathcal{M}}\). The discriminator assigns a probability score \(\hat{p}=D_{NN}(\mathbf{y};\mathbf{\theta}_{D})\), which evaluates the likelihood of a datapoint \(\mathbf{y}\) being real or generated. The quality of both the generator and the discriminator can be expressed via the following cost function:
\[C=\frac{1}{N_{D}}\sum_{i=1}^{N_{D}}\log\Bigl{[}D_{NN}(\mathbf{y}_{i};\mathbf{\theta}_ {D})\Bigr{]}+\frac{1}{N_{G}}\sum_{i=1}^{N_{G}}\log\Bigl{[}1-D_{NN}\bigl{(}G_{ NN}(\mathbf{\xi}_{i};\mathbf{\theta}_{G});\mathbf{\theta}_{D}\bigr{)}\Bigr{]}. \tag{78}\]
Here, \(N_{D}\) real samples and \(N_{G}\) generated samples are used for training. The goal for the generator is to minimize the cost function, implying that the discriminator fails to distinguish between real and generated samples. However, the discriminator strives to maximize the cost. Therefore, this is formulated as a minimax optimization problem
\[\min_{\mathbf{\theta}_{G}}\max_{\mathbf{\theta}_{D}}C. \tag{79}\]
Convergence is ideally reached at the Nash equilibrium [507], where the discriminator always outputs a probability of \(1/2\), signifying its inability to distinguish between real and generated samples. However, GANs can be challenging to train. Problems like mode collapse [508] can arise. Here, the generator learns only a few modes from the training data. In the extreme case, only a single sample from the training data is learned, yielding a low discriminator score, yet an undesirable outcome. To combat mode collapse, design diversity can be either promoted in the learning algorithm or the cost [509, 508]. Another challenge lies in balancing the training of the two NNs. If the discriminator learns too quickly and manages to distinguish all generated samples, the gradient of the cost function (Equation (78)) with respect to the weights becomes zero, halting further progress. A possible remedy is to use the Wasserstein distance in the cost function [510].
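A minimal sketch of one adversarial update following Equations (78) and (79) is given below (PyTorch; hypothetical 1D data and network sizes). In practice, binary cross-entropy with logits and a non-saturating generator loss are typically used for numerical stability.

```python
import torch
import torch.nn as nn

# Minimal sketch of one adversarial update following Equations (78)-(79)
# (hypothetical 1D data and network sizes).
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())    # discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)

y_real = torch.randn(64, 1) * 0.5 + 2.0          # "measured" samples (placeholder)
xi = torch.randn(64, 8)                          # random noise input

# discriminator step: maximize the cost of Equation (78)
opt_D.zero_grad()
loss_D = -(torch.log(D(y_real)).mean()
           + torch.log(1.0 - D(G(xi).detach())).mean())
loss_D.backward()
opt_D.step()

# generator step: minimize the cost of Equation (78)
opt_G.zero_grad()
loss_G = torch.log(1.0 - D(G(xi))).mean()
loss_G.backward()
opt_G.step()
```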
Additionally, GANs can be modified to include inputs that control the generated data. This can be achieved in a supervised manner with conditional GANs [511]. The conditional GAN does not just receive random noise, but also an additional input. This supplementary input is considered by the discriminator, which assesses whether the input-output pair is real or generated. An unsupervised alternative is InfoGANs [512], which disentangle the input information, i.e., the random input \(\xi\), defining the generated data. This is achieved by introducing an additional parameter \(c\), a latent code, to the generator \(G_{NN}(\xi,c;\mathbf{\theta}_{G})\). To ensure that the parameter is used by the NN, the cost (Equation (78)) is extended by a mutual information term [513] \(I(c,G_{NN}(\xi,c;\mathbf{\theta}_{G}))\), ensuring that the generated data varies meaningfully based on the input latent code \(c\).
In comparison to variational autoencoders, GANs typically generate higher quality data. However, the advantage of autoencoders lies in their ability to construct a well-structured latent space, where proper sampling leads to smooth interpolations in the generated space. In other words, small changes in the latent space correspond to small changes in the generated space - a characteristic not inherent to GANs. To achieve smooth interpolations, autoencoders can be combined with GANs [514], where the autoencoder acts as generator in the GAN framework, employing both an autoencoder loss and a GAN loss.
### Diffusion Models
Diffusion models enhanced by NNs [515, 516, 517] convert random noise \(\mathbf{x}\) into a sample resembling the training data through a series of transformations. Given a data set \(\{\mathbf{y}_{i}^{0}\}_{i=1}^{N}\) that corresponds to the distribution \(q(\mathbf{x}^{0})\), a forward noising process \(q(\mathbf{x}^{t}|\mathbf{x}^{t-1})\) is introduced. This process adds Gaussian noise to \(\mathbf{x}^{t-1}\) at each time step \(t-1\). The process is applied iteratively
\[q(\mathbf{x}^{0},\mathbf{x}^{1},\dots,\mathbf{x}^{T})=q(\mathbf{x}^{0})\prod_{t=1}^{T}q(\mathbf{ x}^{t}|\mathbf{x}^{t-1}). \tag{80}\]
After a sufficient number of iterations \(T\), the resulting distribution approximates a Gaussian distribution. Consequently, a random sample from a Gaussian distribution \(\mathbf{x}^{T}\) can be denoised with the reverse denoising process \(q(\mathbf{x}^{t-1}|\mathbf{x}^{t})\), resulting in a sample \(\mathbf{x}^{0}\) that matches the original distribution \(q(\mathbf{x}^{0})\). The reverse denoising process is, however, unknown and therefore modeled as a Gaussian distribution, where the mean and covariance are learned by a NN. With the learned denoising process, data can be generated by denoising samples drawn from a Gaussian distribution. Note the similarity to autoencoders. Instead of learning a mapping to a hidden random state \(\mathbf{h}_{i}\), the encoding is prescribed as the iterative application of Gaussian noise [503].
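A minimal sketch of the forward noising process of Equation (80) is given below (NumPy; hypothetical noise schedule and placeholder data); the learned reverse process is not shown.

```python
import numpy as np

# Minimal sketch of the forward noising process (hypothetical schedule):
# Gaussian noise is added iteratively, cf. Equation (80); after T steps the
# samples are approximately standard normal.
T, beta = 200, 0.02
x = np.random.rand(1000, 2)            # data set drawn from q(x^0) (placeholder)
for t in range(T):
    noise = np.random.randn(*x.shape)
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise

print(x.mean(), x.std())               # approximately 0 and 1
```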
A related approach is normalizing flows [518] (see [519] for an introduction and extensive review). Here, a basic probability distribution is transformed through a series of invertible transformations, i.e., flows. The goal is to model distributions of interest. The individual transformations can be modeled by NNs. A normalization is required, such that each intermediate probability distribution integrates to one.
### Applications
#### 5.4.1 Data Generation
The most straightforward application of variational autoencoders and GANs in computational mechanics is the generation of new data based on existing examples. This has been demonstrated in [520, 521, 522, 523, 524] for microstructures, in [68] for velocity models used in full waveform inversion, and in [525] for optimized structures using GANs. Variational autoencoders have also been used to model the crossover operation in evolutionary algorithms to create new designs from parent designs [526]. Applications of diffusion models for microstructure generation can be found in [527, 528, 529].
Microstructures pose a unique challenge due to their inherent three-dimensional nature, while often only two-dimensional reference images are available. This has led to the development of specialized architectures that are capable of creating three-dimensional structures from representative two-dimensional slices [530, 531, 532]. The approach typically involves treating three-dimensional voxel data as a sequence of two-dimensional slices of pixels. Sequences of images are predicted from individual slices, ultimately forming a three-dimensional microstructure. In [533], a RNN is applied to a two-dimensional reference image, yielding an additional dimension, and consequently creating a three-dimensional structure. The RNN is applied at the latent vector inside an encoder decoder architecture, such that the inputs and outputs of the RNN have a relatively small size. Similarly, [534, 535] apply a transformer [147] to the latent vector. An alternative formulation using variational autoencoder GANs is presented in [536] to reconstruct three-dimensional voxel models of porous media from two-dimensional images.
The generated data sets can subsequently be leveraged to train surrogate models, as demonstrated in [537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549] where CNNs were used to verify the physical properties of designs, and in the study by [540] on the homogenization of microstructures with CNNs. Similarly, [541, 68] generate realistic material distributions, such as velocity distributions, to train an inverse operator for full waveform inversion.
#### 5.4.2 Generative Design & Design Optimization
Within generative design, the generator can also be considered as a reparametrization of the design space that reduces the number of design variables. With autoencoders, the latent vector serves as the design parameter [542, 543], which is then optimized19. In the context of GANs, the optimization task is aimed at the random input \(\boldsymbol{\xi}\) provided to the generator. This approach is demonstrated in various studies, such as ship hull design parameterized by NURBS surfaces [545], airfoil shapes expressed with Bezier curves [546, 547], structural optimization [548], and full waveform inversion [549]. For optimization, variational autoencoder GANs are particularly important, as the GAN ensures high quality designs, while the autoencoder ensures well-behaving gradients. This was shown for microstructure optimization in [550].
Footnote 19: It is worth noting, that to ensure designs that are physically meaningful, a style transfer technique can be implemented [544]. Here, the training data is perceived as a style, and the Gram matrices’ difference, characterizing the distribution of visual patterns or textures in the generated designs, is minimized.
An important requirement for generative design is design diversity. Achieving this involves ensuring that the entire design space is spanned by the generated data. For this, the cost function can be extended, as presented in [551], using determinantal point processes [552] or in [545] with a space-filling term [553].
Other strategies are specifically focused on promoting design diversity. This involves identifying novel designs via a novelty score [554]. The novelty within these designs is segmented and used to modify the GAN using methods outlined in [555]. An alternative approach proposed by [556] quantifies creativity and maximizes it. This is achieved by performing a classification in pre-determined categories by the discriminator. If the classification is unsuccessful, the design must lie outside
the categories and is therefore deemed creative. The generator thus seeks to minimize the classification accuracy.
However, some applications necessitate a resemblance to prior designs due to factors such as aesthetics [557] or manufacturability [558]. In [557], a pixel-wise \(L^{1}\)-distance to previous designs is included in the loss20. A complete workflow with generative design enforcing resemblance of previous designs and surrogate model training for the quantification of mechanical properties is described in [559]. Another option is the use of style transfer techniques [544], which in [560] is incorporated into a conventional topology optimization scheme [561] as a constraint in the loss. These are tools with the purpose of incorporating vague constraints based on previous designs for topology optimization.
Footnote 20: Similarly, this loss can be used to filter out designs that are too similar.
GANs can also be applied to inverse problems, as presented in [562] for full waveform inversion. The generator predicts the material distribution, which is used in a differentiable simulation providing the forward solution in the form of a seismogram. The discriminator attempts to distinguish between the seismogram indirectly coming from the generator and the measured seismograms. The underlying material distribution is determined through gradient descent.
#### 5.4.3 Conditional Generation
As stated earlier, GANs can take specific inputs to dictate the output's nature. The key difference to data-driven surrogate models from Section 2.1 is that GANs provide a tool to generate multiple outputs given the same conditional input. They are thus applicable to problems with multiple solutions, such as design optimization or data generation.
Examples of conditional generation are rendered cars from car sketches [563], hierarchical shape generation [564], where the child shape considers its parent shape, and topology optimization with predictions of optimal structures from initial fields, e.g., the strain energy, of the unoptimized structure [565, 566]. Physical properties can also be used as input. The properties are computed by a differentiable solver after generation and are incorporated in the loss. This was, e.g., presented in [567] for airplane shapes, and in [568] for inverse homogenization. For full waveform inversion, [569] trains a conditional GAN with seismograms as input to predict the corresponding velocity distributions. A similar effort is made by [570] with CycleGANs [571] to circumvent the need for paired data. Here, one generator generates a seismogram \(\hat{y}=G_{y}(x)\) and another a corresponding velocity distribution \(\hat{x}=G_{x}(y)\). The predictions are judged by two separate discriminators. Additionally, a cycle-consistency loss ensures that a prediction from a prediction, i.e., \(G_{y}(\hat{x})\) or \(G_{x}(\hat{y})\), matches the initial input \(x\) or \(y\). In other words, the learned transformations preserve the essential features and structures of the original seismograms or velocity distributions when they are transformed from seismogram to velocity distribution and back again.
Lastly, coarse-to-fine mappings as previously discussed in Section 3.4, can also be learned by GANs. This was, for example, demonstrated in topology optimization, where a conditional GAN refines coarse designs obtained from classical optimizations [572, 565] or CNN predictions [77]. For temporal problems, such as fluid flows, the temporal coherence between time steps poses an additional challenge. Temporal coherence can be ensured by a second discriminator, which receives three consecutive frames of either the generator or the real data and decides if they are real or generated. The method is referred to as tempoGAN [573].
#### 5.4.4 Anomaly Detection
Finally, a last application of generative models is anomaly detection, see [574] for a review. This is particularly valuable for non-destructive testing, where flawed specimens can be identified in terms of anomalies. The approach relies on generative models and attempts to reconstruct the
geometry. At first, the generative model is trained on structures without flaws. During evaluation, the structures to be tested are then fed through the NN. In case of an autoencoder, as in [575], it is fed through the encoder and decoder. For a GAN, as discussed, e.g., in [576, 577, 578], the input of the generator is optimized to fit the output as well as possible. The mismatch in reconstruction then provides a spatially dependent measure of where an anomaly, i.e., defect is located.
Another approach is to use the discriminator directly, as presented in [579]. If a flawed specimen is given to the discriminator, it will be categorized as fake, as it was not part of the undamaged structures during training. The discriminator can also be used to check if the domain of application of a surrogate model is valid. Trained on the same training data as the surrogate model, the discriminator estimates the dissimilarity between the data to be tested and the training data. For large discrepancies, the discriminator detects that the surrogate model becomes invalid.21
Footnote 21: Note however, that the discriminator does not guarantee an accurate assessment of the validity of the surrogate model.
## 6 Deep Reinforcement Learning
In reinforcement learning, an agent interacts with an environment through a sequence of actions \(a_{t}\), as illustrated in Figure 13. Upon executing an action \(a_{t}\), the agent receives an updated state \(s_{t+1}\) and a reward \(r_{t+1}\) from the environment. The agent's objective is to maximize the cumulative reward \(R_{\Sigma}\). The environment can be treated as a black box, which is an advantage in computational mechanics when differentiable physics is not feasible. Reinforcement learning has achieved impressive results, such as human-level performance in games like Atari [580], Go [581], and StarCraft II [582]. Further, reinforcement learning has been successfully demonstrated in robotics [583]. An example hereof is learning complex maneuvers for autonomous helicopter flight [584, 585, 586].
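The interaction loop can be sketched as follows (toy one-dimensional environment with a hypothetical reward; the policy is a random placeholder standing in for a learned deep policy).

```python
import numpy as np

# Minimal sketch of the agent-environment loop (toy 1D environment,
# hypothetical reward): the agent picks actions, receives states and
# rewards, and accumulates the return R.
def step(s, a):                         # environment: move left/right on a line
    s_next = s + (1 if a == 1 else -1)
    r = 1.0 if s_next == 5 else 0.0     # reward for reaching the target state
    return s_next, r

def policy(s):                          # placeholder for a learned (deep) policy
    return np.random.randint(2)         # random action: 0 = left, 1 = right

s, R = 0, 0.0
for t in range(50):
    a = policy(s)
    s, r = step(s, a)
    R += r                              # cumulative reward to be maximized
print(R)
```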
A comprehensive review of reinforcement learning exceeds the scope of this work, since it represents a major branch of machine learning. An introduction is, e.g., given in [8, 21], and an in-depth textbook is [587]. However, at the intersection of these domains lies deep reinforcement learning, which employs NNs to model the agent's actions. In Appendix A, we present the main concepts of deep reinforcement learning and delve into two prominent methodologies: deep policy networks (Appendix A.1) and deep Q-learning (Appendix A.2) in view of applications in computational mechanics.
### Applications
Deep reinforcement learning is mainly used for inverse problems (see [8] for a review within fluid mechanics), where the PDE solver is treated as a black box and assumed not to be differentiable.
The most prominent application is control problems. One example is discovering swimming strategies for fish, with the goal of efficiently minimizing the distance to a leader fish [588, 589]. The environment is given by the Navier-Stokes equations. Another example is balancing rigid bodies
Figure 13: Reinforcement learning in which an agent interacts with an environment with actions \(a_{t}\), states \(s_{t}\), and rewards \(r_{t}\). Figure adapted from [587].
with fluid jets while using as little force as possible [590]. Similarly, [591] control jets in order to reduce the drag around a cylinder. Reducing the drag around a cylinder is also achieved by controlling small rotating cylinders in the wake of the flow [592]. A more complex example is controlling unmanned aerial vehicles [593]. The control schemes are learned by interacting with simulations and, subsequently, applied in experiments.
Further applications in connection with inverse problems are learning filters to perturb flows in order to match target flows [594]. Also, constitutive laws can be identified. The individual arithmetic manipulations within a constitutive law can be represented as graphs. An agent constructs the graph in order to best match simulation and measurement [595], which yields an interpretable law.
Topology optimization has also been tackled by reinforcement learning. Specifically, the ability to predict only binary states (material or no material) is desirable - instead of intermediate states, as in solid isotropic material with penalization [596, 597]. This has been shown with binary truss structures, modeled with graphs in order to minimize the total structural volume under stress constraints. In [598], an agent removes trusses from existing structures, and trusses are added in [599]. Similarly, [600] removes finite elements in solid structures to modify the topology. Instead, [601] pursues design diversity. Here a NN surrogate model predicts near optimal structures from reference designs. The agent then learns to generate reference designs as input, such that the corresponding optimal structures are as diverse as possible.
Also, high-dimensional PDEs have been solved with reinforcement learning [602, 603]. This is achieved by recasting the PDEs as stochastic control problems, which are then solved with reinforcement learning.
Finally, adaptive mesh refinement algorithms have been learned by reinforcement learning [604]. An agent decides whether an element is to be refined based on the current state, i.e., the mesh and solution. The reward is subsequently defined in terms of the error reduction, which is computed with a ground truth solution. The trained agent can thus be applied to adaptive mesh refinement to previously unseen simulations.
#### 6.1.1 Extensions
Each interaction with the environment requires solving the differential equation, which, due to the many interactions, makes reinforcement learning expensive. The learning can, however, be accelerated through some basic modifications: it can be perfectly parallelized by using multiple environments simultaneously [605] or by using multiple agents within the same environment [606]. Another idea is to construct a surrogate model of the environment and thereby exploit model-based approaches [607, 608, 609, 610]. The general procedure consists of three steps:
* model learning: learn surrogate of environment,
* behavior learning: learn policy or value function,
* environment interaction: apply learned policy and collect data.
Most approaches construct the surrogate with data-driven modeling (Section 2.1), but physics-informed approaches have been proposed as well [607, 609] (Section 3.2).
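The three-step loop can be made concrete on a toy problem. The following sketch is purely illustrative and not taken from any of the cited works: it assumes a hypothetical one-dimensional linear "environment", a least-squares surrogate of its dynamics, and a policy obtained by random shooting on the surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

def env_step(x, u):                          # the "expensive" environment (here: a toy linear system)
    x_next = 0.9 * x + 0.5 * u
    return x_next, -(x_next ** 2)            # reward favors keeping the state near zero

data = []                                    # collected (x, u, x_next) samples
theta = np.zeros(2)                          # surrogate parameters [a, b] in x_next ~ a*x + b*u
x = 1.0
for outer in range(5):
    # 1) model learning: least-squares fit of the surrogate to all collected data
    if data:
        X = np.array([[d[0], d[1]] for d in data])
        y = np.array([d[2] for d in data])
        theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # 2) behavior learning: greedy policy on the surrogate (random shooting over candidate actions)
    def policy(state):
        candidates = rng.uniform(-1.0, 1.0, 50)
        predicted = theta[0] * state + theta[1] * candidates
        return candidates[np.argmin(predicted ** 2)]   # maximizes the predicted reward -x_next^2
    # 3) environment interaction: apply the policy to the real environment and collect new data
    for _ in range(10):
        u = policy(x)
        x_next, reward = env_step(x, u)
        data.append((x, u, x_next))
        x = x_next

print("learned surrogate [a, b]:", theta)    # approaches the true dynamics [0.9, 0.5]
```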
## 7 Conclusion
In order to structure the state-of-the-art, an overview of the most prominent deep learning methods employed in computational mechanics was presented. Five main categories were identified: simulation substitution, simulation enhancement, discretizations as NNs, generative approaches, and deep reinforcement learning.
Despite the variety and abundance of the literature, few approaches are competitive in comparison to classical methods. With only a few exceptions, current research is still in its early stages, with a focus on showcasing possibilities without paying too much attention to accuracy and efficiency. Future research must, nevertheless, shift its focus to incorporate more in-depth investigations into the performance of the developed methods - including thorough and meaningful comparisons to classical methods. This is in agreement with the recent review article on deep learning in topology optimization [5], where critical and fair assessments are requested. This includes the determination of generalization capabilities, greater transparency by including, e.g., worst-case performances to illustrate reliability, and computation times that do not disregard the training time.
In line with this, and to the best of our knowledge, we provide a final overview outlining the potentials and limitations of the discussed methods.
* Simulation substitution has potential for surrogate modeling of parameterized models that need to be evaluated many times. This is, however, currently only realizable for small parameter spaces due to the amount of data required. Complex problems can still be solved if they are first reduced to a low-dimensional space through model order reduction techniques. Physics-informed learning further reduces the amount of required data and improves the generalization capabilities. However, enforcing physics through penalty terms increases the computational effort, and the solutions still do not necessarily satisfy the corresponding physical laws. Instead, enforcing physics by construction, which guarantees the enforced physics, seems more favorable.
* Simulation enhancement is currently one of the most useful approaches. It is particularly beneficial for tasks where classical methods show difficulties. An excellent example of this is the formulation of constitutive laws, which are inherently phenomenological and thereby well-suited to be identified from data using tools such as deep learning. In addition, simulation enhancement makes it possible to draw on insights gained from classical methods developed since the inception of computational mechanics. Furthermore, it is currently more realistic to learn smaller components of the simulation chain with NNs rather than the entire model. These components should ideally be expensive and have limited requirements regarding the accuracy. Lastly, it is also easier to assess whether a method enhanced by deep learning outperforms the classical method, as direct and fair comparisons are readily possible.
* Discretizations as NNs make it possible to reuse the infrastructure developed around deep learning - such as NN libraries and GPUs.
* Generative approaches have been shown to be highly versatile in applications of computational mechanics since the accuracy of a specific instance under investigation is less of a concern here. They have been used for realistic data generation to train other machine learning models, incorporate vague constraints based on data within optimization frameworks, and detect anomalies.
* Deep reinforcement learning has already shown impressive results - for example in controlling unmanned vehicles in complex physics environments. It is mainly applicable for problems where efficient differentiable physics solvers are unavailable, which is why it is popular in control problems for turbulence. In the presence of differentiable solvers, gradient-based methods are however still the state-of-the-art [418] and thus preferred.
## Acknowledgements
The authors gratefully acknowledge the funding through the joint research project Geothermal-Alliance Bavaria (GAB) by the Bavarian State Ministry of Science and the Arts (StMWK) as well as the Georg Nemetschek Institut (GNI) under the project DeepMonitor.
## Declarations
### Conflict of interest
No potential conflict of interest was reported by the authors.
## Appendix A Deep Reinforcement Learning
In reinforcement learning, the environment is commonly modeled as a Markov Decision Process (MDP). This mathematical model is defined by a set of all possible states \(S\), actions \(A\), and associated rewards \(R\). Furthermore, the probability of getting to the next state \(s_{t+1}\) from the previous \(s_{t}\) with action \(a_{t}\) is given by \(\mathbb{P}(s_{t+1}|s_{t},a_{t})\). Thus, the environment is not necessarily deterministic. One key aspect of a Markov Decision Process is the Markov property, stating that future states depend solely on the current state and action, and not the history of states and actions.
The goal of a reinforcement learning algorithm is to determine a policy \(\pi(s,a)\) which dictates the next action \(a_{t}\) in order to maximize the cumulative reward \(R_{\Sigma}\). The cumulative reward \(R_{\Sigma}\) is discounted by a discount factor \(\gamma^{t}\) in order to give more importance to immediate rewards.
\[R_{\Sigma}=\sum_{t=0}^{\infty}\gamma^{t}r_{t} \tag{81}\]
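As a minimal illustration, the discounted return of Eq. (81) can be computed directly for a finite episode; the reward values and discount factor below are arbitrary placeholders.

```python
# Discounted cumulative reward (Eq. 81) for a short, finite episode.
gamma = 0.9
rewards = [1.0, 0.0, 0.5, 2.0]                      # r_0, r_1, r_2, r_3
R_sigma = sum(gamma**t * r for t, r in enumerate(rewards))
print(R_sigma)                                      # 1.0 + 0.81*0.5 + 0.729*2.0 = 2.863
```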
The quality of a policy \(\pi(s,a)\) can be assessed by a state-value function \(V_{\pi}(s)\), defined as the expected future reward given the current state \(s\) and following the policy \(\pi\). Similarly, an action-value function \(Q_{\pi}(s,a)\) determines the expected future reward given the current state \(s\) and action \(a\) and then following the policy \(\pi\). The expected value along a policy \(\pi\) is denoted as \(\mathbb{E}_{\pi}\).
\[V_{\pi}(s)=\mathbb{E}_{\pi}\big{[}R_{\Sigma}(t)|s\big{]} \tag{82}\] \[Q_{\pi}(s,a)=\mathbb{E}_{\pi}\big{[}R_{\Sigma}(t)|s,a\big{]} \tag{83}\]
The optimal state-value and action-value functions correspondingly follow the optimal policy:

\[V(s)=\max_{\pi}V_{\pi}(s), \tag{85}\] \[Q(s,a)=\max_{\pi}Q_{\pi}(s,a). \tag{86}\]
The approaches can be subdivided into model-based and model-free. Model-based methods incorporate a model of the environment. In the most general case of a probabilistic environment, this entails the probability distribution of the next state \(\mathbb{P}(s_{t+1}|s_{t},a_{t})\) and of the next reward \(\mathbb{R}(r_{t+1}|s_{t+1},s_{t},a_{t})\). The model of the environment can be cheaply sampled to improve the policy \(\pi\) with model-free reinforcement learning techniques [611, 612, 613, 614] discussed in the sequel (Appendices A.1 and A.2). However, if the model is differentiable, the gradient of the reward can directly be used to update the policy [615, 616, 617, 618, 619, 620]. This is identical to the optimization through differentiable physics solvers discussed in Section 3.3.3. Model-free reinforcement learning techniques can be used to enhance the optimization.
A further distinction is made between policy-based [621, 622, 623, 624, 625] and value-based [626, 627, 628] approaches. Policy-based methods, such as deep policy networks [21] (Appendix A.1), directly optimize the policy. By contrast, value-based methods, such as deep Q-learning [628] (Appendix A.2)
learn the value function from which the optimal policy is selected. Actor-critic methods, such as proximal policy optimization [629] combine the ideas with an actor that performs a policy and a critic that judges its quality. Both can be modeled by NNs.
### Deep Policy Networks
In deep policy networks, the policy, i.e., the mapping of states to actions, is modeled by a NN \(\hat{a}=\pi(s;\mathbf{\theta})\). The quality of the NN is assessed by the expected cumulative reward \(R_{\Sigma}\), formulated in terms of the action-value function \(Q(s,a)\).
\[C=R_{\Sigma}=\mathbb{E}\big{[}Q(s,a)\big{]} \tag{87}\]
Its gradient (see [622, 21, 624] for a derivation), given as:
\[\nabla_{\mathbf{\theta}}R_{\Sigma}=\mathbb{E}\big{[}Q(s,a)\nabla_{\mathbf{\theta}}\log \bigl{(}\pi(s,a;\mathbf{\theta})\bigr{)}\big{]} \tag{88}\]
can be applied within a gradient ascent scheme to learn the optimal policy.
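A minimal sketch of such a gradient-ascent step is given below, assuming a stochastic policy \(\pi(a|s;\mathbf{\theta})\) modeled by a small NN and the sampled return used as a stand-in for \(Q(s,a)\). The network sizes, data, and the use of PyTorch are illustrative assumptions, not prescriptions from the text.

```python
import torch

n_states, n_actions = 4, 2
policy = torch.nn.Sequential(torch.nn.Linear(n_states, 32), torch.nn.Tanh(),
                             torch.nn.Linear(32, n_actions))    # logits of pi(a|s; theta)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def policy_gradient_step(states, actions, returns):
    """One ascent step on E[Q(s,a) log pi(a|s;theta)], with sampled returns in place of Q(s,a)."""
    log_probs = torch.log_softmax(policy(states), dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(returns * chosen).mean()        # minimizing the negative objective = gradient ascent on Eq. (88)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# usage with dummy data standing in for sampled trajectories
states = torch.randn(8, n_states)
actions = torch.randint(0, n_actions, (8,))
returns = torch.randn(8)
policy_gradient_step(states, actions, returns)
```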
### Deep Q-Learning
Deep Q-learning identifies the optimal action-value function \(Q(s,a)\) from which the optimal policy is extracted. Q-Learning relies on the Bellman optimality criterion [630, 631]. By separating the reward \(r_{0}\) at the first step, the recursion formula of the optimal state-value function, i.e., the Bellman optimality criterion, can be established:
\[V(s) =\max_{\pi}\mathbb{E}_{\pi}\Bigl{[}\sum_{t=0}^{\infty}\gamma^{t}r _{t}|s_{0}=s\Bigr{]} \tag{89}\] \[=\max_{\pi}\mathbb{E}_{\pi}\Bigl{[}r_{0}+\sum_{t=1}^{\infty} \gamma^{t}r_{t}|s_{1}=s^{\prime}\Bigr{]}\] (90) \[=\max_{\pi}\mathbb{E}_{\pi}\bigl{[}r_{0}+\gamma V(s^{\prime}) \bigr{]}. \tag{91}\]
Here, \(s^{\prime}\) represents the next state after \(s\). This can be done analogously for the action-value function.
\[Q(s,a)=\max_{\pi}\mathbb{E}_{\pi}\bigl{[}r_{0}+\gamma Q(s^{\prime},a^{\prime}) \bigr{]} \tag{92}\]
The recursion enables an update formula, referred to as temporal difference (TD) learning [632, 633]. Specifically, the current estimate \(Q^{(m)}\) at state \(s_{t}\) is compared to the more accurate estimate at the next state \(s_{t+1}\) using the obtained reward \(r_{t}\), referred to as the TD target estimate. The difference is the TD error, which in combination with a learning rate \(\alpha\) is used to update the function \(Q^{(m)}\):
\[Q^{(m+1)}(s_{t},a_{t})=Q^{(m)}(s_{t},a_{t})+\alpha\overbrace{\Bigl(\underbrace{r_{t}+\gamma\max_{a}Q(s_{t+1},a)}_{\text{TD target estimate}}-\underbrace{Q^{(m)}(s_{t},a_{t})}_{\text{model prediction}}\Bigr)}^{\text{TD error}}. \tag{93}\]
Here, the TD target estimate only looks one step ahead - and is therefore referred to as TD(0). The generalization is called TD(N). In the limit \(N\to\infty\), the method is equivalent to Monte Carlo learning, where all steps are performed and a true target is obtained.
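In tabular form, the TD(0) update (93) can be written in a few lines; the toy dimensions, learning rate, and the single transition below are illustrative assumptions only.

```python
import numpy as np

n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))              # tabular action-value function Q(s, a)
alpha, gamma = 0.1, 0.9

def td0_update(s, a, r, s_next):
    td_target = r + gamma * Q[s_next].max()      # TD target estimate
    td_error = td_target - Q[s, a]               # TD error
    Q[s, a] += alpha * td_error                  # update of Eq. (93)

# one illustrative transition: state 0, action 1, reward 1.0, next state 1
td0_update(s=0, a=1, r=1.0, s_next=1)
print(Q)                                         # Q[0, 1] is now 0.1
```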
Deep Q-learning introduces a NN for the action-value function \(Q(s,a;\mathbf{\theta})\). Its quality is assessed with a loss composed of the mean squared error of the TD error.
\[C=\mathbb{E}\Big{[}\frac{1}{2}\bigl{(}r_{t}+\gamma\max_{a}Q(s_{t+1},a;\mathbf{ \theta})-Q(s_{t},a_{t};\mathbf{\theta})\bigr{)}^{2}\Big{]} \tag{94}\]
Lastly, the optimal policy \(\pi(s)\) maximizing the action-value function \(Q(s,a;\mathbf{\theta})\) is extracted:
\[\pi(s)=\operatorname*{arg\,max}_{a}\,Q(s,a;\mathbf{\theta}) \tag{95}\] |
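A compact sketch of the loss in Eq. (94) and the policy extraction in Eq. (95), assuming a small NN for \(Q(s,a;\mathbf{\theta})\), could look as follows. The target is treated as a constant here (a common semi-gradient choice, not required by Eq. (94) itself), and all dimensions and data are hypothetical.

```python
import torch

n_states, n_actions, gamma = 4, 3, 0.99
q_net = torch.nn.Sequential(torch.nn.Linear(n_states, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, n_actions))

def dqn_loss(s, a, r, s_next):
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)        # Q(s_t, a_t; theta)
    with torch.no_grad():                                       # target treated as a constant (semi-gradient)
        target = r + gamma * q_net(s_next).max(dim=1).values    # r_t + gamma * max_a Q(s_{t+1}, a; theta)
    return 0.5 * torch.mean((target - q_sa) ** 2)               # mean squared TD error, Eq. (94)

def greedy_policy(s):
    return torch.argmax(q_net(s), dim=1)                        # pi(s) = argmax_a Q(s, a; theta), Eq. (95)

# usage with a dummy batch of transitions
s, s_next = torch.randn(8, n_states), torch.randn(8, n_states)
a, r = torch.randint(0, n_actions, (8,)), torch.randn(8)
dqn_loss(s, a, r, s_next).backward()                            # gradients for a descent step on theta
print(greedy_policy(s))
```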
2301.00168 | Blowup dynamics for equivariant critical Landau--Lifshitz flow | The existence of finite time blowup solutions for the two-dimensional
Landau--Lifshitz equation is a long-standing problem, which exists in the
literature at least since 2001 (E, Mathematics Unlimited--2001 and Beyond,
Springer, Berlin, P.410, 2001). A more refined description in the equivariant
class is given in (van den Berg and Williams, European J. Appl. Math., 24(6),
912--948, 2013). In this paper, we consider the blowup dynamics of the
Landau--Lifshitz equation $$ \partial_tu=\mathfrak{a}_1u\times\Delta
u-\mathfrak{a}_2u\times(u\times\Delta u),\quad x\in\mathbb{R}^2, $$ where
$u\in\mathbb{S}^2$, $\mathfrak{a}_1+i\mathfrak{a}_2\in\mathbb{C}$ with
$\mathfrak{a}_2\geq0$ and $\mathfrak{a}_1+\mathfrak{a}_2=1$. We prove the
existence of 1-equivariant Krieger--Schlag--Tataru type blowup solutions near
the lowest energy steady state. More precisely, we prove that for any $\nu>1$,
there exists a 1-equivariant finite-time blowup solution of the form $$
u(x,t)=\phi(\lambda(t)x)+\zeta(x,t),\quad \lambda(t)=t^{-1/2-\nu}, $$ where
$\phi$ is a lowest energy steady state and $\zeta(t)$ is arbitrary small in
$\dot{H}^1\cap\dot{H}^2$. The proof is accomplished by renormalizing the blowup
profile and a perturbative analysis in the spirit of (Krieger, Schlag and
Tataru, Invent. Math., 171(3), 543--615, 2008), (Perelman, Comm. Math. Phys.,
330(1), 69--105, 2014) and (Ortoleva and Perelman, Algebra i Analiz, 25(2),
271--294, 2013). | Fangyu Han, Zhong Tan | 2022-12-31T09:41:19Z | http://arxiv.org/abs/2301.00168v1 | # Blowup dynamics for equivariant critical Landau-Lifshitz flow
###### Abstract.
The existence of finite time blowup solutions for the two-dimensional Landau-Lifshitz equation is a long-standing problem, which exists in the literature at least since 2001 (E, Mathematics Unlimited-2001 and Beyond, Springer, Berlin, P.410, 2001). A more refined description in the equivariant class is given in (van den Berg and Williams, European J. Appl. Math., 24(6), 912-948, 2013). In this paper, we consider the blowup dynamics of the Landau-Lifshitz equation
\[\partial_{t}u=\mathfrak{a}_{1}u\times\Delta u-\mathfrak{a}_{2}u\times(u\times \Delta u),\quad x\in\mathbb{R}^{2},\]
where \(u\in\mathbb{S}^{2}\), \(\mathfrak{a}_{1}+i\mathfrak{a}_{2}\in\mathbb{C}\) with \(\mathfrak{a}_{2}\geq 0\) and \(\mathfrak{a}_{1}+\mathfrak{a}_{2}=1\). We prove the existence of \(1\)-equivariant Krieger-Schlag-Tataru type blowup solutions near the lowest energy steady state. More precisely, we prove that for any \(\nu>1\), there exists a \(1\)-equivariant finite-time blowup solution of the form
\[u(x,t)=\phi(\lambda(t)x)+\zeta(x,t),\quad\lambda(t)=t^{-1/2-\nu},\]
where \(\phi\) is a lowest energy steady state and \(\zeta(t)\) is arbitrarily small in \(\dot{H}^{1}\cap\dot{H}^{2}\). The proof is accomplished by renormalizing the blowup profile and a perturbative analysis in the spirit of (Krieger, Schlag and Tataru, Invent. Math., 171(3), 543-615, 2008), (Perelman, Comm. Math. Phys., 330(1), 69-105, 2014) and (Ortoleva and Perelman, Algebra i Analiz, 25(2), 271-294, 2013).
Key words and phrases: Landau-Lifshitz flow; equivariant solution; critical energy; blowup dynamics.

2010 Mathematics Subject Classification: Primary 35Q55, 35Q60, 35B44; Secondary 35K45, 82D40, 58J35.

Corresponding author: Z. Tan ([email protected])
## 1. Introduction and main result
### Introduction
The Landau-Lifshitz flow from the \(m\)-dimensional Riemannian manifold \((\mathcal{M},g)\) to the two-sphere \(\mathbb{S}^{2}\) is given by
\[\begin{cases}\partial_{t}u=\mathfrak{a}_{1}u\times\Delta_{\mathcal{M}}u- \mathfrak{a}_{2}u\times(u\times\Delta_{\mathcal{M}}u),\quad x\in\mathcal{M}, \ t\in\mathbb{R},\\ u|_{t=0}=u_{0}\in\mathbb{S}^{2},\quad x\in\mathcal{M},\end{cases} \tag{1.1}\]
where \(\mathfrak{a}_{1}+i\mathfrak{a}_{2}\in\mathbb{C}\) with \(\mathfrak{a}_{2}\geq 0\) and \(\mathfrak{a}_{1}+\mathfrak{a}_{2}=1\), \(u=(u_{1},u_{2},u_{3})\) is a three-dimensional vector of unit length, i.e., \(u(x,t):\mathcal{M}\times\mathbb{R}\to\mathbb{S}^{2}\), \((g_{ij})\) is the Riemannian metric with \(g=|\det(g_{ij})|\), and \(\Delta_{\mathcal{M}}\) is the Laplace-Beltrami operator defined by \(\Delta_{\mathcal{M}}u=\frac{1}{\sqrt{g}}\partial_{x_{i}}(g^{ij}\sqrt{g} \partial_{x_{j}}u)\), where \((g^{ij})\) is the inverse of \((g_{ij})\). This is an important model, first developed by Landau and Lifshitz [35] to model the effects of magnetic fields on ferromagnetic materials and to describe the evolution of continuous spin fields in ferromagnets.
In fact, the Landau-Lifshitz flow is closely related to some other important geometric flows, for instance, the harmonic map heat flow and the Schrodinger map flow.
#### 1.1.1. Harmonic heat flow
When \(\mathfrak{a}_{1}=0\) and \(\mathfrak{a}_{2}=1\), (1.1) becomes a parabolic harmonic heat flow:
\[\text{(Harmonic heat flow)}\quad\begin{cases}\partial_{t}u=\Delta_{\mathcal{M}} u+|\nabla u|^{2}u,\quad x\in\mathcal{M},\,t\in\mathbb{R},\\ u|_{t=0}=u_{0},\quad x\in\mathcal{M},\end{cases} \tag{1.2}\]
where \(u(x,t)\in\mathbb{S}^{2}\) and \(|\nabla u|^{2}=\sum_{i,j}\sum_{k}g^{ij}\partial_{x_{i}}u_{k}\partial_{x_{j}}u_ {k}\). This is an important model in liquid crystal flow and ferromagnetism (see, e.g., [3][4]). In addition, it is also related to the harmonic map. The harmonic map \(u\) satisfies the Euler-Lagrange equation: \(\Delta_{\mathcal{M}}u+|\nabla u|^{2}u=0\), the theory of which was first established in 1964 by Eells and Sampson [26], who proved that any map can be deformed into a harmonic map in a certain geometric context.
When \(\mathcal{M}\) is a Riemann surface, Struwe [53] proved the existence and uniqueness of weak solutions with at most finitely many singularities. For a further extension of this conclusion and for the higher dimensional case, see [27][17][52]. Chang, Ding and Ye [13] constructed the first example of finite-time blowup solutions for the harmonic heat flow. For the case where the initial value is defined on \(\mathbb{D}^{2}\subset\mathbb{R}^{2}\) and the target manifold is \(\mathbb{S}^{2}\), van den Berg, Hulshof and King [4] used formal asymptotic analysis to predict the existence of blowup solutions with quantifiable blowup rate
\[\lambda_{L}(t)\approx C\frac{|T-t|^{L}}{|\ln(T-t)|^{\frac{3L}{2L-1}}},\quad L \in\mathbb{N}^{*}.\]
Since the heat flow in two dimensions is energy critical, the formation of singularities by energy concentration is possible. It is well known that concentration implies bubbling of non-trivial harmonic maps at a finite number of blowup points; see for instance [14][16][21][39][47][48][53][55][57] for more details. For the case of \(\mathcal{M}=\mathbb{R}^{2}\), Gustafson, Nakanishi and Tsai [31] proved the asymptotic stability of the \(k\)-equivariant harmonic map for \(k\geqslant 3\) and gave a class of infinite-time equivariant blowup solutions near the \(2\)-equivariant harmonic map. Raphael and Schweyer [50][51] selected a family of initial values which are arbitrarily close to the lowest energy harmonic map under the energy critical topology, and proved that the corresponding solutions blow up in finite time with rate \(\lambda_{L}(t)\), where \(L\geqslant 1\) is arbitrary. The case of \(L=1\) corresponds to a stable regime. When there is no assumption of symmetry, Davila, del Pino and Wei [19] constructed a solution in a bounded region in \(\mathbb{R}^{2}\), which blows up exactly at a prescribed finite number of points, at each of which the blowup profile is close to the asymptotic singularity expansion of the \(1\)-corotational harmonic map and the blowup rate is \(\lambda_{L}(t)\) with \(L=1\). This rate is similar to that expected in the \(1\)-corotational heat flow, see [4]. For the existence and uniqueness results, please refer to [12][16][26][39][40] and the references therein.
#### 1.1.2. Schrodinger map flow
When \(\mathfrak{a}_{1}=1\) and \(\mathfrak{a}_{2}=0\), (1.1) becomes the Schrodinger map flow, which is a fundamental object of study in differential geometry, see [15][18][23][56]. By the action of the complex structure \(u\times\), the Schrodinger map can be written as
\[\text{(Schr\"{o}dinger map)}\quad\begin{cases}u\times\partial_{t}u=-\Delta u-| \nabla u|^{2}u,\quad x\in\mathcal{M},\,t\in\mathbb{R},\\ u|_{t=0}=u_{0},\quad x\in\mathcal{M},\end{cases} \tag{1.3}\]
where \(u(x,t)\in\mathbb{S}^{2}\).
The local well-posedness of the Schrodinger map can be found in [22][41][54]. When the target manifold is \(\mathbb{S}^{2}\), Bejenaru et al. [8] proved the global well-posedness with small data in the critical space. Their results were generalized by Li [36][38] to the case of Kahler manifold targets. Static solutions of the Schrodinger flow are harmonic maps. When the energy is less than \(4\pi\), the \(1\)-equivariant solutions are global in time and scatter (see [7]). Gustafson et al. [29][30][31] proved that the harmonic map is asymptotically stable with respect to the Schrodinger map in the \(k\)-equivariant class for \(k\geq 3\), which shows that such solutions do not blow up near the harmonic map. The case of \(k=2\) is still an important open problem. However, in the \(1\)-equivariant class, Bejenaru and Tataru [9] proved that the harmonic map is stable under a smooth well-localized perturbation, but unstable in the \(\dot{H}^{1}\) topology. Merle, Raphael and Rodnianski [43] proved the existence of a codimension one set of smooth well-localized initial data arbitrarily close to the ground state harmonic map, which generates finite time type II blowup solutions. They also gave a sharp description of the corresponding singularity formation. Perelman [46] proved the existence of another class of type II blowup solutions with a different blowup behavior. For more results on the global well-posedness of solutions near the ground state, see [5][6][8][9][32] and the references therein.
#### 1.1.3. Landau-Lifshitz flow
Landau-Lifshitz flow (1.1) was first proposed in the study of classical continuous isotropic Heisenberg ferromagnetic chains. It describes the evolution of magnetic moments in classical ferromagnetic and anti-ferromagnetic chains, which is an important basis for understanding non-stationary magnetism (see, e.g., [35][60]).
For the global existence and partial regularity of weak solutions, see for instance [2][11][28][33][42][58]. In particular, when \(\mathcal{M}\) is a Riemannian surface, Guo and Hong [28] proved uniqueness of weak solutions and regularity except for at most finitely many points. When \(\mathcal{M}=\mathbb{R}^{2}\), Ko [33] constructed a smooth solution away from a two-dimensional locally finite Hausdorff measure set by using the discretization approximation method. In general, for high-dimensional weak solutions, one would like better partial regularity results (i.e., results requiring no further assumptions of regularity or minimal energy); in this context, there is a well-known example constructed by Riviere [49]: there exists a weak harmonic map from the ball \(B^{3}\subset\mathbb{R}^{3}\) to \(\mathbb{S}^{2}\) whose singular set is the closure \(\overline{B^{3}}\), and this conclusion also holds in higher dimensions. Following the idea of this example, Chen and Struwe [17] proved the existence of partially regular solutions for high-dimensional harmonic heat flows, and Melcher [42] proved the existence of global weak solutions for the Landau-Lifshitz flow in \(\mathbb{R}^{3}\), where the singular set has finite three-dimensional parabolic Hausdorff measure. Wang [58] generalized Melcher's result to the case of \(\mathcal{M}=\mathbb{R}^{m}\), \(m\leq 4\). If an assumption on the stability of weak solutions is attached, Moser [44] obtained a better estimate for the singular set.
Although the Landau-Lifshitz flow has been studied extensively, there are few studies on its dynamical behavior. In the \(m\)-equivariant class (\(m\geq 3\)), Gustafson, Nakanishi and Tsai [31] proved the stability of the harmonic map for the Landau-Lifshitz flow. In the \(1\)-equivariant class, Li and Zhao [37] proved that the solutions with energy less than \(4\pi\) converge to a constant map in the energy space. van den Berg and Williams [10] obtained equivariant blowup solutions by formal expansion and verified them experimentally, but as the authors stated in [10]: "mathematically
rigourous justification is required". The blowup dynamics of the \(1\)-equivariant Landau-Lifshitz equation near the equivariant harmonic map is an important open problem. In 2020, Xu and Zhao [61] proved the existence of a codimension one set of smooth well localized initial data arbitrarily close to the ground state harmonic map, which generates finite time type II blowup \(1\)-equivariant solutions. They also gave a sharp description of the corresponding singularity formation. Recently, based on the inner-outer gluing method and the distorted Fourier transform, Wei, Zhang and Zhou [59] constructed a finite-time blowup solution in \(\mathbb{R}^{2}\) without any symmetry.
The purpose of this paper is to consider the Landau-Lifshitz flow (1.1) with \(\mathcal{M}=\mathbb{R}^{2}\), and to prove the existence of type II blowup solutions in the \(1\)-equivariant class. This blowup solution has a continuous blowup rate; therefore, it is different from those constructed in [61] and [59].
### Model and main result
#### 1.2.1. Setting of the problem
In this paper, we consider the initial value problem of the Landau-Lifshitz flow from \(\mathbb{R}^{2}\) to \(\mathbb{S}^{2}\):
\[\begin{cases}\partial_{t}u=\mathfrak{a}_{1}u\times\Delta u-\mathfrak{a}_{2}u \times(u\times\Delta u),\quad x=(x_{1},x_{2})\in\mathbb{R}^{2},\ t\in\mathbb{R},\\ u|_{t=0}=u_{0},\end{cases} \tag{1.4}\]
where \(u(t,x)=(u_{1}(x,t),u_{2}(x,t),u_{3}(x,t))\in\mathbb{S}^{2}\subset\mathbb{R}^{3}\) and \(\mathfrak{a}_{1}+i\mathfrak{a}_{2}\in\mathbb{C}\) with \(\mathfrak{a}_{2}\geq 0\) and \(\mathfrak{a}_{1}+\mathfrak{a}_{2}=1\).
Equation (1.4) conserves the energy
\[E(u)=\frac{1}{2}\int_{\mathbb{R}^{2}}|\nabla u(x,t)|^{2}dx. \tag{1.5}\]
The two-dimensional problem (1.4) is critical in the sense that (1.5) is invariant with respect to the scaling \(u(x,t)\to u(\lambda x,\lambda^{2}t)\), where \(\lambda\in\mathbb{R}_{+}=\{x\in\mathbb{R}|x>0\}\).
For a finite energy map \(u:\mathbb{R}^{2}\to\mathbb{S}^{2}\), we can define its topological degree as
\[\deg(u)=\frac{1}{4\pi}\int_{\mathbb{R}^{2}}u_{x_{1}}\cdot J_{u}u_{x_{2}}dx,\]
where \(J_{u}\) is a complex structure on \(\mathbb{S}^{2}\) defined by
\[J_{u}v=u\times v,\quad v\in\mathbb{R}^{3}.\]
According to (1.5), we get
\[E(u)\geq 4\pi|\deg(u)|, \tag{1.6}\]
where the equality is achieved at the harmonic map \(\phi_{m}\) (see [31]):
\[\begin{split}&\phi_{m}(x)=e^{m\theta R}Q^{m}(r),\quad Q^{m}=(h_{1}^ {m},0,h_{3}^{m})\in\mathbb{S}^{2},\\ & h_{1}^{m}(r)=\frac{2r^{m}}{r^{2m}+1},\quad h_{3}^{m}(r)=\frac{ r^{2m}-1}{r^{2m}+1}.\end{split} \tag{1.7}\]
Here \(m\in\mathbb{Z}^{+}\), \((r,\theta)\) is the polar coordinate in the plane \(\mathbb{R}^{2}:x_{1}+ix_{2}=e^{i\theta}r\) and \(R\) is the generator of horizontal rotations:
\[R=\Bigg{(}\begin{array}{ccc}0&-1&0\\ 1&0&0\\ 0&0&0\end{array}\Bigg{)},\]
which can also be equivalently written as
\[Ru=\mathbf{k}\times u,\quad\mathbf{k}=(0,0,1).\]
A direct calculation gives
\[\deg(\phi_{m})=m,\quad E(\phi_{m})=4\pi m.\]
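For \(m=1\), the energy identity can be checked with a short symbolic computation, using the radial form of the Dirichlet energy for \(1\)-equivariant maps, \(E=\pi\int_{0}^{\infty}\bigl(|Q_{r}|^{2}+h_{1}^{2}/r^{2}\bigr)r\,dr\) (the same expression reappears in Section 2). The following sympy snippet is only a verification sketch of this arithmetic.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
h1 = 2*r/(r**2 + 1)                       # h_1^1 from (1.7)
h3 = (r**2 - 1)/(r**2 + 1)                # h_3^1 from (1.7)
density = sp.diff(h1, r)**2 + sp.diff(h3, r)**2 + h1**2/r**2
E = sp.pi * sp.integrate(sp.simplify(density * r), (r, 0, sp.oo))
print(E)                                  # 4*pi
```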
Up to the symmetries, \(\phi_{m}\) are the only energy minimizers in their homotopy classes.

Since \(\phi_{1}\) is crucial in the rest of this paper, we write \(\phi=\phi_{1}\), \(Q=Q^{1}\), \(h_{1}=h_{1}^{1}\) and \(h_{3}=h_{3}^{1}\).
#### 1.2.2. Main result
Based on renormalizing the blowup profile and a perturbative analysis, Krieger, Schlag and Tataru [34] proved that the energy of solutions for the equivariant critical wave map concentrates in the cuspidal region:

\[0\leqslant r\lesssim\frac{1}{\lambda_{0}(t)},\quad\lambda_{0}(t)=\frac{1}{t^{1+\nu_{0}}},\ \nu_{0}>\frac{1}{2},\]

thus they obtained a class of solutions that blow up at \(r=t=0\). We call blowup solutions of this type Krieger-Schlag-Tataru type blowup solutions. The aim of this paper is to prove that (1.4) also admits \(1\)-equivariant Krieger-Schlag-Tataru type blowup solutions, with initial data of the form
\[u_{0}=\phi+\zeta_{0},\]
where \(\zeta_{0}\) is 1-equivariant and is arbitrarily small in \(\dot{H}^{1}\cap\dot{H}^{3}\).
The main result of this paper is as follows.
**Theorem 1.1** (Existence of the Krieger-Schlag-Tataru type blowup solution).: _For any \(\nu>1\) and \(\alpha_{0}\in\mathbb{R}\), let \(\delta>0\) be sufficiently small; then there exists \(t_{0}>0\) such that (1.4) admits a 1-equivariant solution \(u\in C((0,t_{0}],\dot{H}^{1}\cap\dot{H}^{3})\) of the form:_
\[u(x,t)=e^{\alpha(t)R}\phi\left(\lambda(t)x\right)+\zeta(x,t), \tag{1.8}\]
_where_
\[\lambda(t)=t^{-\frac{1}{2}-\nu},\quad\alpha(t)=\alpha_{0}\ln t, \tag{1.9}\]
\[\|\zeta(t)\|_{\dot{H}^{1}\cap\dot{H}^{2}}\leqslant\delta,\quad\|\zeta(t)\|_{ \dot{H}^{3}}\leqslant C_{\nu,\alpha_{0}}\frac{1}{t},\quad\forall t\in(0,t_{0}]. \tag{1.10}\]
_Furthermore, \(\zeta(t)\to\zeta^{*}\) in \(\dot{H}^{1}\cap\dot{H}^{2}\) as \(t\to 0\), where \(\zeta^{*}\in H^{1+2\nu-}\), \(\nu-\) means any positive number less than \(\nu\)._
Here are some comments on the result.
_Remark 1.2_.: Singularity formation of finite energy solutions of the two-dimensional Landau-Lifshitz flow (1.4) is an open problem, which was proposed by E (see Subsection 2.1 in [25]), Ding and Wang (see Remark 1.6 in [20]), Guo and Ding (see Preface in [24]) and Gustafson, Nakanishi and Tsai (see Section 1 in [31]), etc. For a more refined description of this problem in the equivariant class see [10]. In this paper, we prove the existence of a continuous family of blowup solutions for the two-dimensional Landau-Lifshitz flow (1.4) in the 1-equivariant class. The idea of the proof is based on renormalizing the blowup profile and a perturbative analysis, which is very similar to the studies of Krieger, Schlag and Tataru [34], Perelman [46] and Ortoleva and Perelman [45].
_Remark 1.3_.: Note that Zhao and Xu [61] proved the existence of finite time \(1\)-equivariant type II blowup solutions with codimension one, and Wei, Zhang and Zhou [59] constructed a finite time type II blowup solution without symmetric assumption. Compared with the results of [61] and [59], we give a class of \(1\)-equivariant type II blowup solutions with continuous blowup rate. Therefore, our solution has a different singularity regime from theirs.
_Remark 1.4_.: In fact, similar to the discussion in this paper, the result also holds when \(\dot{H}^{3}\) is replaced by \(\dot{H}^{1+2s}\) in Theorem 1.1, where \(1\leqslant s<\nu\).
_Remark 1.5_.: When \(\mathfrak{a}_{1}=1\) and \(\mathfrak{a}_{2}=0\), (1.4) becomes the Schrodinger map flow, in this case Theorem 1.1 also holds, see [46] for more details. However, due to the appearance of \(\mathfrak{a}_{2}\neq 0\) and \(\mathfrak{a}_{1}\), the equation behaved as parabolic heat flow property, which is characterized particularly by the corresponding complex coefficients in the profiles of the self-similar and remote regions. This makes it difficult to match the self-similar and remote regions with the inner region. Fortunately, in the self-similar region, we found that the coefficients consisting of \(\mathfrak{a}_{1}\) and \(\mathfrak{a}_{2}\) have some elimination regime. Indeed, we observe that there exists a basis \(\{f_{j}^{1},f_{j}^{2}\}\) of solutions for equation \((\mathcal{L}-\tilde{\mu}_{j})f=0\), and \(f_{j}^{1}\) and \(f_{j}^{2}\) have asymptotic expansions at infinity, which do not contain \(\mathfrak{a}_{1}\) and \(\mathfrak{a}_{2}\) in the power of \(y\) (see Lemma 2.9 in Subsection 2.3). This allows us to construct solutions in the remote region that match the asymptotic expansion of the solutions in the inner region at the origin (see Subsection 2.4).
### Strategy of the proof
The proof of Theorem 1.1 consists of two steps, which are in Sections 2 and 3, respectively.
In Section 2, we construct approximate solutions \(u^{(N)}\) that have the form (1.8), (1.9) and (1.10), and satisfy (1.4) up to an arbitrarily high-order error \(O(t^{N})\).
In Section 3, by solving a time-forward problem with zero initial value at \(t=0\) with respect to the remainder (see Proposition 3.1), we solve the equation (1.4) exactly. The control of the remainder is obtained by energy estimates (see Section 3 for details), where the assumption \(\nu>1\) ensures that the approximate solutions we construct belong to \(\dot{H}^{1}\cap\dot{H}^{3}\), so that we can work in the framework of \(H^{3}\) well-posedness theory.
## 2. Approximate solutions
### Preliminaries and main result of present section
We consider the \(1\)-equivariant solutions of (1.4), i.e.,
\[u(x,t)=e^{\theta R}v(r,t),\quad v=(v_{1},v_{2},v_{3})\in\mathbb{S}^{2}\subset \mathbb{R}^{3}. \tag{2.1}\]
Thus, (1.4) restricted to the \(1\)-equivariant class yields
\[v_{t}=\mathfrak{a}_{1}v\times\left(\Delta v+\frac{R^{2}}{r^{2}}v\right)- \mathfrak{a}_{2}v\times\left[v\times\left(\Delta v+\frac{R^{2}}{r^{2}}v\right) \right], \tag{2.2}\]
and the corresponding energy is
\[E(u)=\pi\int_{0}^{\infty}\left(|v_{r}|^{2}+\frac{v_{1}^{2}+v_{2}^{2}}{r^{2}} \right)rdr.\]
Note that \(Q=(h_{1},0,h_{3})\) is a static solution of (2.2) satisfying the following identities:
\[\begin{split}&\partial_{r}h_{1}=-\frac{h_{1}h_{3}}{r},\quad \partial_{r}h_{3}=\frac{h_{1}^{2}}{r},\\ &\Delta Q+\frac{R^{2}}{r^{2}}Q=\kappa(r)Q,\quad\kappa(r)=-\frac{ 2h_{1}^{2}}{r^{2}}.\end{split} \tag{2.3}\]
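The identities (2.3) are elementary; the following sympy check confirms them and is included only as a sanity check of the algebra. Since \(R^{2}=\operatorname{diag}(-1,-1,0)\) and \(Q=(h_{1},0,h_{3})\), the second component of \(\Delta Q+\frac{R^{2}}{r^{2}}Q-\kappa Q\) vanishes trivially, so only the first and third components are checked.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
h1 = 2*r/(r**2 + 1)
h3 = (r**2 - 1)/(r**2 + 1)
lap = lambda f: sp.diff(f, r, 2) + sp.diff(f, r)/r       # radial Laplacian in two dimensions
kappa = -2*h1**2/r**2

print(sp.simplify(sp.diff(h1, r) + h1*h3/r))             # d_r h1 + h1*h3/r           -> 0
print(sp.simplify(sp.diff(h3, r) - h1**2/r))             # d_r h3 - h1^2/r            -> 0
print(sp.simplify(lap(h1) - h1/r**2 - kappa*h1))         # first component of (2.3)   -> 0
print(sp.simplify(lap(h3) - kappa*h3))                   # third component of (2.3)   -> 0
```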
The main purpose of this section is to prove the following proposition.
**Proposition 2.1**.: For any \(\delta>0\) sufficiently small and any \(N\) sufficiently large, there is an approximate solution \(u^{(N)}:\mathbb{R}^{2}\times\mathbb{R}^{*}_{+}\to\mathbb{S}^{2}\) of (1.4), where \(\mathbb{R}^{*}_{+}=\{x\in\mathbb{R}|x\geq 0\}\). Moreover, \(u^{(N)}\) satisfies the following estimates:
**(i):**: \(u^{(N)}\) is a \(C^{\infty}\) \(1\)-equivariant profile of the form

\[u^{(N)}=e^{\alpha(t)R}\left[\phi(\lambda(t)x)+\chi^{(N)}(\lambda(t)x,t)\right], \tag{2.4}\]
where \(\chi^{(N)}(y,t)=e^{\theta R}Z^{(N)}(\rho,t)\), \(\rho=|y|\), and \(Z^{(N)}\) satisfies that for any \(0<t\leq T(N,\delta)\) with some \(T(N,\delta)>0\),
\[\left\|\partial_{\rho}Z^{(N)}(t)\right\|_{L^{2}(\rho d\rho)},\ \left\|\rho^{-1}Z^{(N)}(t)\right\|_{L^{2}(\rho d\rho)},\ \left\|\rho\partial_{\rho}Z^{(N)}(t)\right\|_{L^{\infty}}\leq C\delta^{2\nu}, \tag{2.5}\] \[\left\|\rho^{-l}\partial_{\rho}^{k}Z^{(N)}(t)\right\|_{L^{2}(\rho d\rho)}\leq C\delta^{2\nu-1}t^{\frac{1}{2}+\nu},\quad k+l=2, \tag{2.6}\] \[\left\|\rho^{-l}\partial_{\rho}^{k}Z^{(N)}(t)\right\|_{L^{2}(\rho d\rho)}\leq Ct^{2\nu},\quad k+l=3, \tag{2.7}\] \[\left\|\rho\partial_{\rho}Z^{(N)}(t)\right\|_{L^{\infty}},\ \left\|\rho^{-1}Z^{(N)}(t)\right\|_{L^{\infty}}\leq C\delta^{2\nu-1}t^{\nu}, \tag{2.8}\] \[\left\|\rho^{-l}\partial_{\rho}^{k}Z^{(N)}(t)\right\|_{L^{\infty}}\leq Ct^{2\nu},\quad 2\leq k+l\leq 3. \tag{2.9}\]
The constants \(C\) here and in what follows do not depend on \(N\) and \(\delta\).
In addition, there holds
\[\left\|\chi^{(N)}(t)\right\|_{\dot{W}^{4,\infty}}+\left\|\langle y\rangle^{-1}\chi^{(N)}(t)\right\|_{\dot{W}^{5,\infty}}\leq Ct^{2\nu}, \tag{2.10}\] \[\langle x\rangle^{2(\nu-1)}\nabla^{4}u^{(N)}(t),\ \langle x\rangle^{2(\nu-1)}\nabla^{2}u^{(N)}_{t}(t)\in L^{\infty}(\mathbb{R}^{2}), \tag{2.11}\]
where \(\langle x\rangle=\sqrt{1+x^{2}}\).
Furthermore, there exists \(\zeta^{*}_{N}\in\dot{H}^{1}\cap\dot{H}^{1+2\nu-}\) such that \(e^{\alpha(t)R}\chi^{(N)}(\lambda(t)\cdot,t)\to\zeta^{*}_{N}\) in \(\dot{H}^{1}\cap\dot{H}^{2}\) as \(t\to 0\).
**(ii):**: The corresponding error
\[r^{(N)}=-u^{(N)}_{t}+\mathfrak{a}_{1}u^{(N)}\times\Delta u^{(N)}-\mathfrak{a} _{2}u^{(N)}\times(u^{(N)}\times\Delta u^{(N)}) \tag{2.12}\]
satisfies the estimate:
\[\left\|r^{(N)}(t)\right\|_{H^{3}}+\left\|\partial_{t}r^{(N)}(t)\right\|_{H^{3} }+\left\|\langle x\rangle r^{(N)}(t)\right\|_{L^{2}}\leq t^{N},\quad 0<t\leq T( \delta,N). \tag{2.13}\]
Here are some comments on the proposition.
_Remark 2.2_.: Note that (2.5) and (2.6) imply
\[\left\|u^{(N)}(t)-e^{\alpha(t)R}\phi(\lambda(t)\cdot)\right\|_{\dot{H}^{1}\cap \dot{H}^{2}}\leq\delta^{2\nu-1},\quad\forall t\in(0,T(N,\delta)]. \tag{2.14}\]
_Remark 2.3_.: According to the construction, for any \(s<\nu\), \(\chi^{(N)}(t)\in\dot{H}^{1+2s}\) satisfies the following estimate:
\[\left\|\chi^{(N)}(t)\right\|_{\dot{H}^{1+2s}(\mathbb{R}^{2})}\leq C\left(t^{2 \nu}+t^{s(1+2\nu)}\delta^{2\nu-2s}\right). \tag{2.15}\]
_Remark 2.4_.: In fact, the remainder \(r^{(N)}\) satisfies that for any \(m,l,k\), if \(N\geq C_{l,m,k}\), then
\[\left\|\langle x\rangle^{l}\partial_{t}^{m}r^{(N)}(t)\right\|_{H^{k}}\leq C_{ l,m,k}t^{N-C_{l,m,k}}. \tag{2.16}\]
Next, we prove Proposition 2.1. For convenience, we consider only the case when \(\nu\) is an irrational number, and it is natural to extend it to the case when \(\nu\) is a rational number.
To construct an arbitrarily good approximate solution, we analyze three regions corresponding to three different spatial scales: the inner region with the scale \(r\lambda(t)\lesssim 1\), the self-similar region with scale \(r=O(t^{1/2})\) and the remote region with scale \(r=O(1)\). The inner region is the region where the blowup concentrates, in which we construct the solutions by perturbing the profile \(e^{\alpha(t)R}Q(\lambda(t)r)\). In the self-similar and remote regions, we construct solutions that are close to \(\mathbf{k}\). These solutions are essentially described by their corresponding linearized equations. More precisely, in the self-similar region, the profile of the solutions is uniquely determined by the matching conditions in the inner region, while in the remote region, the profile remains essentially a free parameter of the construction and can only be matched through its limit behavior at the origin; see Subsections 2.3 and 2.4 for more details. There are closely related studies for other equations, such as the critical harmonic map heat flow [1][4], the critical Schrodinger map flow [46], and the critical Schrodinger equation [45].
### Inner region \(r\lambda(t)\lesssim 1\)
First, we consider the inner region \(0\leq r\lambda(t)\leq 10t^{-\nu+\varepsilon_{1}}\), where \(0<\varepsilon_{1}<\nu\) is to be determined. Writing \(v(r,t)\) as
\[v(r,t)=e^{\alpha(t)R}V(\lambda(t)r,t),\quad V=(V_{1},V_{2},V_{3}),\]
and using (2.2), we get
\[\begin{split}& t^{1+2\nu}V_{t}+\alpha_{0}t^{2\nu}RV-t^{2\nu} \left(\nu+\frac{1}{2}\right)\rho V_{\rho}\\ &\quad=\mathfrak{a}_{1}V\times\left(\Delta V+\frac{R^{2}}{\rho^{ 2}}V\right)-\mathfrak{a}_{2}V\times\left[V\times\left(\Delta V+\frac{R^{2}}{ \rho^{2}}V\right)\right],\quad\rho=\lambda(t)r.\end{split} \tag{2.17}\]
We construct the solution of (2.17), which is a perturbation of the harmonic map \(Q(\rho)\):
\[V=Q+Z.\]
We further decompose \(Z\) as
\[Z(\rho,t)=z_{1}(\rho,t)f_{1}(\rho)+z_{2}(\rho,t)f_{2}+\gamma(\rho,t)Q(\rho),\]
where \(\{f_{1},f_{2}\}\) is an orthogonal frame on the tangent space \(T_{Q}\mathbb{S}^{2}\):
\[f_{1}(\rho)=\left(\begin{array}{c}h_{3}(\rho)\\ 0\\ -h_{1}(\rho)\end{array}\right),\quad f_{2}=\left(\begin{array}{c}0\\ 1\\ 0\end{array}\right).\]
Therefore, we obtain that
\[\gamma=\sqrt{1-|z|^{2}}-1=O(|z|^{2}),\quad z=z_{1}+iz_{2},\]
and the identities:
\[\begin{split}&\partial_{\rho}Q=-\frac{h_{1}}{\rho}f_{1},\quad \partial_{\rho}f_{1}=\frac{h_{1}}{\rho}Q,\quad f_{2}=Q\times f_{1},\\ &\Delta f_{1}+\frac{R^{2}}{\rho^{2}}f_{1}=-\frac{1}{\rho^{2}}f_{1 }-\frac{2h_{3}h_{1}}{\rho^{2}}Q.\end{split} \tag{2.18}\]
Now we write (2.17) as an equation with respect to \(z\). A direct calculation shows that
\[\begin{split} RV&=-h_{3}z_{2}f_{1}+[h_{3}z_{1}+h_{1}( 1+\gamma)]f_{2}-h_{1}z_{2}Q,\\ \rho\partial_{\rho}V&=[\rho\partial_{\rho}z_{1}-h_{ 1}(1+\gamma)]f_{1}+\rho\partial_{\rho}z_{2}f_{2}+(h_{1}z_{1}+\rho\partial_{ \rho}\gamma)Q.\end{split} \tag{2.19}\]
Next, we calculate the nonlinear term
\[\mathfrak{a}_{1}V\times\left(\Delta V+\frac{R^{2}}{\rho^{2}}V\right)- \mathfrak{a}_{2}V\times\left[V\times\left(\Delta V+\frac{R^{2}}{\rho^{2}}V \right)\right].\]
In the basis \(\{f_{1},f_{2},Q\}\), \(\Delta V+\frac{R^{2}}{\rho^{2}}V\) can be expressed as
\[\begin{split}\Delta V+\frac{R^{2}}{\rho^{2}}V=& \left(\Delta z_{1}-\frac{z_{1}}{\rho^{2}}-2\frac{h_{1}}{\rho} \gamma_{\rho}\right)f_{1}+\left(\Delta z_{2}-\frac{z_{2}}{\rho^{2}}\right)f_{ 2}\\ &+\left[\Delta\gamma+\kappa(\rho)(1+\gamma)+2\frac{h_{1}}{\rho} \partial_{\rho}z_{1}-2\frac{h_{1}h_{3}}{\rho^{2}}z_{1}\right]Q,\end{split}\]
Thus, we obtain that
\[V\times\left(\Delta V+\frac{R^{2}}{\rho^{2}}V\right)=\left[(1+\gamma)Lz_{2}+F _{1}(z)\right]f_{1}-\left[(1+\gamma)Lz_{1}+F_{2}(z)\right]f_{2}+F_{3}(z)Q,\]
and
\[\begin{split}& V\times\left[V\times\left(\Delta V+\frac{R^{2}}{ \rho^{2}}V\right)\right]\\ =&\left\{z_{2}F_{3}(z)+(1+\gamma)\left[(1+\gamma)Lz_ {1}+F_{2}(z)\right]\right\}f_{1}\\ &+\left\{(1+\gamma)\left[(1+\gamma)Lz_{2}+F_{1}(z)\right]-z_{1}F_ {3}(z)\right\}f_{2}\\ &-\left\{z_{1}\left[(1+\gamma)Lz_{1}+F_{2}(z)\right]+z_{2}\left[( 1+\gamma)Lz_{2}+F_{1}(z)\right]\right\}Q.\end{split}\]
Therefore,
\[\begin{split}&\mathfrak{a}_{1}V\times\left(\Delta V+\frac{R^{2}}{ \rho^{2}}V\right)-\mathfrak{a}_{2}V\times\left[V\times\left(\Delta V+\frac{R^ {2}}{\rho^{2}}V\right)\right]\\ =&\left(\mathfrak{a}_{1}\left[(1+\gamma)Lz_{2}+F_{1} (z)\right]-\mathfrak{a}_{2}\{z_{2}F_{3}(z)+(1+\gamma)\big{[}(1+\gamma)Lz_{1}+ F_{2}(z)\big{]}\}\right)f_{1}\\ &+\left(-\mathfrak{a}_{1}\left[(1+\gamma)Lz_{1}+F_{2}(z)\right]- \mathfrak{a}_{2}\{(1+\gamma)\big{[}(1+\gamma)Lz_{2}+F_{1}(z)\big{]}-z_{1}F_{3 }(z)\}\right)f_{2}\\ &+\left(\mathfrak{a}_{1}F_{3}(z)+\mathfrak{a}_{2}\{z_{1}\big{[}( 1+\gamma)Lz_{1}+F_{2}(z)\big{]}+z_{2}\left[(1+\gamma)Lz_{2}+F_{1}(z)\right]\} \right)Q,\end{split} \tag{2.20}\]
where
\[\begin{split} L&=-\Delta+\frac{1-2h_{1}^{2}}{\rho^{2}}, \\ F_{1}(z)&=z_{2}\left(\Delta\gamma-2\frac{h_{1}h_{3}}{ \rho^{2}}z_{1}+2\frac{h_{1}}{\rho}\partial_{\rho}z_{1}\right),\\ F_{2}(z)&=z_{1}\left(\Delta\gamma-2\frac{h_{1}h_{3}} {\rho^{2}}z_{1}+2\frac{h_{1}}{\rho}\partial_{\rho}z_{1}\right)+\frac{2h_{1}}{ \rho}(1+\gamma)\gamma_{\rho},\\ F_{3}(z)&=z_{1}\Delta z_{2}-z_{2}\Delta z_{1}+ \frac{2h_{1}}{\rho}z_{2}\gamma_{\rho}.\end{split} \tag{2.21}\]
By projecting (2.17) onto the plane \(\operatorname{span}\{f_{1},f_{2}\}\) and using (2.19), (2.20) and (2.21), we can rewrite (2.17) as
\[\begin{split} it^{1+2\nu}z_{t}-\alpha_{0}t^{2\nu}h_{3}z& -i\left(\frac{1}{2}+\nu\right)t^{2\nu}\rho z_{\rho}\\ &=(\mathfrak{a}_{1}-i\mathfrak{a}_{2})Lz+\mathfrak{a}_{1}F(z)+ \mathfrak{a}_{1}dt^{2\nu}h_{1}-\mathfrak{a}_{2}\widetilde{F}(z),\end{split} \tag{2.22}\]
where
\[d=\alpha_{0}-i\left(\frac{1}{2}+\nu\right),\] \[F(z) =\gamma Lz+z\left(\Delta\gamma+\frac{2h_{1}}{\rho}\partial_{\rho }z_{1}-\frac{2h_{1}h_{3}z_{1}}{\rho^{2}}\right)+\frac{2h_{1}}{\rho}(1+\gamma) \gamma_{\rho}+dt^{2\nu}\gamma h_{1},\] \[\widetilde{F}(z) =zF_{3}(z)-i|z|^{2}Lz+i(1+\gamma)\left[z\left(\Delta\gamma+ \frac{2h_{1}}{\rho}\partial_{\rho}z_{1}-\frac{2h_{1}h_{3}z_{1}}{\rho^{2}} \right)+\frac{2h_{1}}{\rho}(1+\gamma)\gamma_{\rho}\right].\]
Note that \(F\) and \(\widetilde{F}\) are functions at least quadratic with respect to \(z\).
Expand the solutions of (2.22) as a power series of \(t^{2\nu}\):
\[z(\rho,t)=\sum_{k\geq 1}t^{2\nu k}z^{k}(\rho). \tag{2.23}\]
Substituting (2.23) into (2.22), we obtain a system with respect to \(z^{k}\), \(k\geq 1\):
\[\begin{cases}Lz^{1}=&-\frac{\mathfrak{a}_{1}d}{\mathfrak{a}_{1}-i\mathfrak{a }_{2}}h_{1},\\ Lz^{k}=&\mathcal{F}_{k},\quad k\geq 2,\end{cases} \tag{2.24}\]
where \(\mathcal{F}_{k}\) depends only on \(z^{j}\), \(j=1,2,\cdots,k-1\), and (2.24) satisfies the following conditions at \(\rho=0\):
\[z^{k}(0)=\partial_{\rho}z^{k}(0)=0. \tag{2.25}\]
**Lemma 2.5**.: _There exists a unique solution \((z^{k})_{k\geq 1}\) to the problem (2.24), (2.25), where \(z^{k}\in C^{\infty}(\mathbb{R}_{+})\), \(\forall k\geq 1\). Furthermore,_
**(i):**: \(z^{k}\) _has an odd Taylor expansion at_ \(\rho=0\)_, and the leading term is of order_ \(2k+1\)_;_
**(ii):**: _As_ \(\rho\to\infty\)_,_ \(z^{k}\) _has the following expansion:_
\[z^{k}(\rho)=\sum_{l=0}^{2k}\sum_{j\leq k-\frac{l-1}{2}}c_{j,l}^{k}\rho^{2j-1}( \ln\rho)^{l}, \tag{2.26}\]
_where_ \(c_{j,l}^{k}=c_{j,l}^{k}(d,\mathfrak{a}_{1},\mathfrak{a}_{2})\) _is constant, and the asymptotic expansion (_2.26_) can be differentiated with respect to_ \(\rho\) _any number of times._
Proof.: Note that the equation \(Lf=0\) has the following two explicit exact solutions:
\[h_{1}(\rho),\quad h_{2}(\rho)=\frac{\rho^{4}+4\rho^{2}\ln\rho-1}{\rho(\rho^{2}+1 )}. \tag{2.27}\]
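That \(h_{1}\) and \(h_{2}\) indeed solve \(Lf=0\), with \(L=-\Delta+\frac{1-2h_{1}^{2}}{\rho^{2}}\) and \(\Delta=\partial_{\rho}^{2}+\rho^{-1}\partial_{\rho}\) the radial Laplacian, can be verified symbolically; the following sympy snippet is only a sanity check of this fact.

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
h1 = 2*rho/(rho**2 + 1)
h2 = (rho**4 + 4*rho**2*sp.log(rho) - 1)/(rho*(rho**2 + 1))
L = lambda f: -(sp.diff(f, rho, 2) + sp.diff(f, rho)/rho) + (1 - 2*h1**2)/rho**2 * f

print(sp.simplify(L(h1)))    # -> 0
print(sp.simplify(L(h2)))    # -> 0
```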
For the case of \(k=1\):
\[\begin{cases}&Lz^{1}=-\frac{\mathfrak{a}_{1}d}{\mathfrak{a}_{1}-i\mathfrak{a}_ {2}}h_{1},\\ &z^{1}(0)=\partial_{\rho}z^{1}(0)=0.\end{cases}\]
We get
\[\begin{split} z^{1}(\rho)=&-\frac{\mathfrak{a}_{1}d}{4( \mathfrak{a}_{1}-i\mathfrak{a}_{2})}\int_{0}^{\rho}\big{[}h_{1}(\rho)h_{2}(s)- h_{1}(s)h_{2}(\rho)\big{]}h_{1}(s)sds\\ =&-\frac{\mathfrak{a}_{1}d\rho}{\left(\mathfrak{a}_{1}-i \mathfrak{a}_{2}\right)\left(1+\rho^{2}\right)}\int_{0}^{\rho}\frac{s\left(s^ {4}+4s^{2}\ln s-1\right)}{\left(1+s^{2}\right)^{2}}ds\\ &+\frac{\mathfrak{a}_{1}d\left(\rho^{4}+4\rho^{2}\ln\rho-1\right) }{(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\rho(\rho^{2}+1)}\int_{0}^{\rho}\frac{s ^{3}}{\left(1+s^{2}\right)^{2}}ds.\end{split} \tag{2.28}\]
Note that \(h_{1}\) is a \(C^{\infty}\) function with an odd Taylor expansion at \(\rho=0\) whose leading term is linear, so we can expand \(z^{1}\) as an odd Taylor series with a cubic leading term. This proves \((\mathbf{i})\) for \(k=1\).
The asymptotic behavior of \(z^{1}\) at infinity can be obtained directly from (2.28):
\[z^{1}(\rho)=c^{1}_{1,0}\rho+c^{1}_{1,1}\rho\ln\rho+\sum_{j\leqslant 0}\sum_{ l=0,1,2}c^{1}_{j,l}\rho^{2j-1}(\ln\rho)^{l},\]
where \(c^{1}_{1,0}=-c^{1}_{1,1}=-\frac{\mathfrak{a}_{1}d}{\mathfrak{a}_{1}-i \mathfrak{a}_{2}}\). Thus, \((\mathbf{ii})\) holds for \(k=1\).
For the case of \(k>1\), we prove it by induction. Suppose that \(z^{j}\), \(j\leqslant k-1\), satisfy \((\mathbf{i})\) and \((\mathbf{ii})\). According to (2.22), we get that \(\mathcal{F}_{k}\) is an odd \(C^{\infty}\) function, moreover, its asymptotic expansion at \(\rho=0\) is zero at order \(2k-1\), and the asymptotic expansion as \(\rho\to\infty\) is
\[\begin{split}\mathcal{F}_{k}=&\sum_{j=1}^{k-1}\sum _{l=0}^{2k-2j-1}\alpha^{k}_{j,l}\rho^{2j-1}(\ln\rho)^{l}+\sum_{l=0}^{2k-2} \alpha^{k}_{0,l}\rho^{-1}(\ln\rho)^{l}\\ &+\sum_{l=0}^{2k-1}\alpha^{k}_{-1,l}\rho^{-3}(\ln\rho)^{l}+\sum_{ j\leqslant-2}\sum_{l=0}^{2k}\alpha^{k}_{j,l}\rho^{2j-1}(\ln\rho)^{l}.\end{split}\]
Thus, we obtain that
\[z^{k}(\rho)=\frac{1}{4}\int_{0}^{\rho}\big{[}h_{1}(\rho)h_{2}(s)-h_{1}(s)h_{2} (\rho)\big{]}\mathcal{F}_{k}(s)sds\]
is a \(C^{\infty}\) function, meanwhile, it can be expanded to an odd Taylor series at \(\rho=0\) with leading term of order \(2k+1\), and has the following asymptotic expansion as \(\rho\to\infty\):
\[z^{k}(\rho)=\sum_{l=0}^{2k}\sum_{j\leqslant k-\frac{l-1}{2}}c^{k}_{j,l}(\ln \rho)^{l}\rho^{2j-1}.\]
This proves Lemma 2.5.
By (2.23), we obtain a formal solution of (2.2):
\[v(r,t)=e^{\alpha(t)R}V\left(\lambda(t)r,t\right),\quad V(\rho,t)=Q+\sum_{k\geq 1 }t^{2\nu k}Z^{k}(\rho), \tag{2.29}\]
where \(Z^{k}=(Z_{1}^{k},Z_{2}^{k},Z_{3}^{k})\). Here \(Z_{i}^{k}\), \(i=1,2\), are smooth odd functions with respect to \(\rho\), and their asymptotic expansion at \(\rho=0\) is zero at order \(2k+1\), meanwhile, \(Z_{3}^{k}\) is an even function, and its asymptotic expansion at \(\rho=0\) is zero at order \(2k+2\). As \(\rho\to\infty\), we have
\[\begin{split} Z_{i}^{k}(\rho)=&\sum_{l=0}^{2k}\sum _{j\leq k-\frac{l-1}{2}}c_{j,l}^{k,i}(\ln\rho)^{l}\rho^{2j-1},\quad i=1,2,\\ Z_{3}^{k}(\rho)=&\sum_{l=0}^{2k}\sum_{j\leq k+1- \frac{l}{2}}c_{j,l}^{k,3}(\ln\rho)^{l}\rho^{2j-2},\end{split} \tag{2.30}\]
where the coefficients \(c_{j,l}^{k,i}=c_{j,l}^{k,i}(d,\mathfrak{a}_{1},\mathfrak{a}_{2})\) satisfy \(c_{k+1,0}^{k,3}=0\), \(\forall k\geq 1\). The asymptotic expansion (2.30) can be differentiated with respect to \(\rho\) any number of times.
According to (2.29) and (2.30), as \(\rho\to\infty\), each component of \(V(\rho,t)\) (i.e., \(V_{i}(\rho,t)\), \(i=1,2,3\)) has the following asymptotic expansion:
\[\begin{split} V_{i}(\rho,t)=&\sum_{j\leq 0}c_{j,0}^{0, 1}\rho^{2j-1}+\sum_{k\geq 1}t^{2\nu k}\sum_{l=0}^{2k}\sum_{j\leq k-\frac{l-1}{2}}c_{ j,l}^{k,i}(\ln\rho)^{l}\rho^{2j-1},\quad i=1,2\\ V_{3}(\rho,t)=& 1+\sum_{j\leq 0}c_{j,0}^{0,3}\rho^{2j-2}+ \sum_{k\geq 1}t^{2\nu k}\sum_{l=0}^{2k}\sum_{j\leq k+1-\frac{l}{2}}c_{j,l}^{k,3 }(\ln\rho)^{l}\rho^{2j-2}.\end{split}\]
Taking \(\rho\to\infty\) and \(y\equiv rt^{-\frac{1}{2}}\to 0\), the above expansions can be formally rewritten as the new expansions with respect to \(y\):
\[\begin{split} V_{i}(\lambda(t)r,t)=&\sum_{j\geq 0}t^{ \nu(2j+1)}\sum_{l=0}^{2j+1}\left(\ln y-\nu\ln t\right)^{l}V_{i}^{j,l}(y),\quad i =1,2,\\ V_{3}(\lambda(t)r,t)=& 1+\sum_{j\geq 1}t^{2\nu j} \sum_{l=0}^{2j}\left(\ln y-\nu\ln t\right)^{l}V_{3}^{j,l}(y),\\ V_{i}^{j,l}(y)=&\sum_{k\geq-j+\frac{l}{2}}c_{k,l}^{k +j,i}y^{2k-1},\quad i=1,2,\\ V_{3}^{j,l}(y)=&\sum_{k\geq-j+\frac{l}{2}}c_{k+1,l}^ {k+j,3}y^{2k},\end{split} \tag{2.31}\]
where \(c_{j,l}^{k,i}\) defined by (2.30) for \(k\neq 0\), and \(c_{j,0}^{0,i}\) is derived from the expansion of \(Q\) as \(\rho\to\infty\):
\[h_{1}(\rho)=\sum_{j\leq 0}c_{j,0}^{0,1}\rho^{2j-1},\quad c_{j,0}^{0,2}=0,\quad h _{3}(\rho)=1+\sum_{j\leq 0}c_{j,0}^{0,3}\rho^{2j-2}.\]
The expression (2.31) expanded with respect to \(y\) is crucial in the matching between the self-similar region and the inner region below.
For \(N\geq 2\), we define
\[z_{\text{in}}^{(N)}=\sum_{k=1}^{N}t^{2\nu k}z^{k},\quad z_{\text{in}}^{(N)}=z_{ \text{in},1}^{(N)}+iz_{\text{in},2}^{(N)}.\]
Substituting \(z_{\text{in}}^{(N)}\) into (2.22), we get the error
\[X_{N}=-it^{1+2\nu}\partial_{t}z_{\text{in}}^{(N)} +\alpha_{0}t^{2\nu}h_{3}z_{\text{in}}^{(N)}+i\left(\frac{1}{2}+ \nu\right)t^{2\nu}\rho\partial_{\rho}z_{\text{in}}^{(N)}\] \[+(\mathfrak{a}_{1}-i\mathfrak{a}_{2})Lz_{\text{in}}^{(N)}+ \mathfrak{a}_{1}F\left(z_{\text{in}}^{(N)}\right)+\mathfrak{a}_{1}dt^{2\nu}h_ {1}-\mathfrak{a}_{2}\widehat{F}\left(z_{\text{in}}^{(N)}\right).\]
According to the definition of \(z^{k}\), \(\rho<\langle\rho\rangle\) and \(\ln\rho<\ln(2+\rho)\), it is easy to verify that the error \(X_{N}\) satisfies the following estimate: There exists \(T(N)>0\), for any \(k,m\in\mathbb{N}\), \(0\leq l\leq(2N+1-k)_{+}\), \(0\leq\rho\leq 10t^{-\nu+\varepsilon_{1}}\) and \(0<t\leq T(N)\), we have
\[\left|\rho^{-l}\partial_{\rho}^{k}\partial_{t}^{m}X_{N}\right|\leq C_{k,l,m}t^{2\nu N-m}\langle\rho\rangle^{2(N+1)-1-l-k}\ln(2+\rho), \tag{2.32}\]
where \(N_{+}=\max\{N,0\}\).
Let
\[\gamma_{\text{in}}^{(N)}= \sqrt{1-\left|z_{\text{in}}^{(N)}\right|^{2}}-1,\] \[Z_{\text{in}}^{(N)}= z_{\text{in},1}^{(N)}f_{1}+z_{\text{in},2}^{(N)}f_{2}+\gamma_{ \text{in}}^{(N)}Q,\] \[V_{\text{in}}^{(N)}= Q+Z_{\text{in}}^{(N)}\in\mathbb{S}^{2}.\]
Then \(V_{\text{in}}^{(N)}\) satisfies
\[t^{1+2\nu}\partial_{t}V_{\text{in}}^{(N)}+\alpha_{0}t^{2\nu}RV_{\text{in}}^{(N)}-t^{2\nu}\left(\nu+\frac{1}{2}\right)\rho\partial_{\rho}V_{\text{in}}^{(N)}\] \[= \mathfrak{a}_{1}V_{\text{in}}^{(N)}\times\left(\Delta V_{\text{in}}^{(N)}+\frac{R^{2}}{\rho^{2}}V_{\text{in}}^{(N)}\right)\] \[-\mathfrak{a}_{2}V_{\text{in}}^{(N)}\times\left[V_{\text{in}}^{(N)}\times\left(\Delta V_{\text{in}}^{(N)}+\frac{R^{2}}{\rho^{2}}V_{\text{in}}^{(N)}\right)\right]+\mathcal{R}_{\text{in}}^{(N)}, \tag{2.33}\]
where
\[\mathcal{R}_{\text{in}}^{(N)}=\text{Im}\left(X_{N}\right)f_{1}-\text{Re} \left(X_{N}\right)f_{2}+\frac{\text{Im}\left(\bar{X}_{N}z^{(N)}\right)}{1+ \gamma^{(N)}}Q\]
has the same estimate as the error \(X_{N}\). According to the analysis, for any \(0\leq\rho\leq 10t^{-\nu+\varepsilon_{1}}\) and \(0<t\leq T(N)\), we have
\[\left|\rho^{-l}\partial_{\rho}^{k}Z_{\text{in}}^{(N)}\right|\leq C_{k,l}t^{2 \nu}\langle\rho\rangle^{1-l-k}\ln(2+\rho),\quad k\in\mathbb{N},\quad l\leq(3-k )_{+}. \tag{2.34}\]
Thus, we obtain the following estimates.
**Lemma 2.6**.: _There exists \(T(N)>0\) such that for any \(0<t\leq T(N)\), the following holds._
**(i):**__\(Z_{\text{in}}^{(N)}(\rho,t)\) _satisfies_
\[\left\|\partial_{\rho}Z_{\text{in}}^{(N)}(t)\right\|_{L^{2}(\rho d\rho,0\leq\rho\leq 10t^{-\nu+\varepsilon_{1}})}\leq Ct^{\nu}, \tag{2.35}\] \[\left\|\rho^{-1}\partial_{\rho}Z_{\text{in}}^{(N)}(t)\right\|_{L^{2}(\rho d\rho,0\leq\rho\leq 10t^{-\nu+\varepsilon_{1}})}\leq Ct^{\nu}, \tag{2.36}\]

\[\left\|Z_{\text{in}}^{(N)}(t)\right\|_{L^{\infty}(0\leq\rho\leq 10t^{-\nu+\varepsilon_{1}})}+\left\|\rho\partial_{\rho}Z_{\text{in}}^{(N)}(t)\right\|_{L^{\infty}(0\leq\rho\leq 10t^{-\nu+\varepsilon_{1}})}\leq Ct^{\nu}, \tag{2.37}\] \[\left\|\rho^{-l}\partial_{\rho}^{k}Z_{\text{in}}^{(N)}(t)\right\|_{L^{2}(\rho d\rho,0\leq\rho\leq 10t^{-\nu+\varepsilon_{1}})}\leq Ct^{2\nu}\left(1+|\ln t|\right),\quad k+l=2, \tag{2.38}\] \[\left\|\rho^{-l}\partial_{\rho}^{k}Z_{\text{in}}^{(N)}(t)\right\|_{L^{2}(\rho d\rho,0\leq\rho\leq 10t^{-\nu+\varepsilon_{1}})}\leq Ct^{2\nu},\quad k+l\geq 3,\,l\leq(3-k)_{+}, \tag{2.39}\] \[\left\|\partial_{\rho}Z_{\text{in}}^{(N)}(t)\right\|_{L^{\infty}(0\leq\rho\leq 10t^{-\nu+\varepsilon_{1}})}+\left\|\rho^{-1}Z_{\text{in}}^{(N)}(t)\right\|_{L^{\infty}(0\leq\rho\leq 10t^{-\nu+\varepsilon_{1}})}\leq Ct^{2\nu}\left(1+|\ln t|\right), \tag{2.40}\] \[\left\|\rho^{-l}\partial_{\rho}^{k}Z_{\text{in}}^{(N)}(t)\right\|_{L^{\infty}(0\leq\rho\leq 10t^{-\nu+\varepsilon_{1}})}\leq Ct^{2\nu},\quad 2\leq l+k,\,l\leq(3-k)_{+}. \tag{2.41}\]
**(ii):**: _The error \(\mathcal{R}_{in}^{(N)}\) has the estimate: If \(N>\varepsilon_{1}^{-1}\), then_
\[\left\|\rho^{-l}\partial_{\rho}^{k}\mathcal{R}_{in}^{(N)}(t)\right\|_{L^{2} (\rho d\rho,0\leq\rho\leq 10t^{-\nu+\varepsilon_{1}})}\leq t^{N\varepsilon_{1}}, \quad 0\leq l+k\leq 3, \tag{2.42}\]
### Self-similar region \(rt^{-\frac{1}{2}}\lesssim 1\)
Next, we consider the self-similar region \(10t^{\varepsilon_{1}}\leq rt^{-\frac{1}{2}}\leq 10t^{-\varepsilon_{2}}\), where \(0<\varepsilon_{2}<\frac{1}{2}\) is to be determined. In this region, we want the solution to be close to \(\mathbf{k}\). Using the stereographic projection:
\[(v_{1},v_{2},v_{3})=v\to w=\frac{v_{1}+iv_{2}}{1+v_{3}}\in\mathbb{C}\cup\{ \infty\},\]
equation (2.2) is equivalently transformed into
\[iw_{t}=\left(\mathfrak{a}_{1}-i\mathfrak{a}_{2}\right)\left[-\Delta w+\frac{1 }{r^{2}}w+G(w,\bar{w},w_{r})\right], \tag{2.43}\]
where
\[G(w,\bar{w},w_{r})=\frac{2\bar{w}}{1+|w|^{2}}\left(w_{r}^{2}-\frac{1}{r^{2}}w^ {2}\right).\]
Let
\[w(r,t)=e^{i\alpha(t)}W(y,t),\quad y=rt^{-\frac{1}{2}}. \tag{2.44}\]
Then (2.43) becomes
\[itW_{t}-\alpha_{0}W=\left(\mathfrak{a}_{1}-i\mathfrak{a}_{2}\right)\left[ \mathcal{L}W+G\left(W,\bar{W},W_{y}\right)\right], \tag{2.45}\]
where
\[\mathcal{L}=-\Delta+\frac{1}{y^{2}}+\frac{i}{2(\mathfrak{a}_{1}-i\mathfrak{a} _{2})}y\partial_{y}.\]
Thus, as \(y\to 0\), it follows from (2.31) that \(W\) has an expansion of the form:
\[W(y,t)=\sum_{j\geq 0}\sum_{l=0}^{2j+1}\sum_{\tilde{i}\geq-j+\frac{l}{2}}\alpha(j, \tilde{i},l)t^{\nu(2j+1)}\left(\ln y-\nu\ln t\right)^{l}y^{2\tilde{i}-1}, \tag{2.46}\]
where the coefficients \(\alpha(j,\tilde{i},l)\) can be precisely expressed in terms of \(c_{j^{\prime},l^{\prime}}^{k,i^{\prime}}\), \(1\leq k\leq j+\tilde{i}\), \(j^{\prime}\leq\tilde{i}\), \(0\leq l^{\prime}\leq l\), where \(c_{j,l}^{k,i}\) are defined by (2.30). This inspires us to assume
that \(W\) has the following form:
\[W(y,t)=\sum_{j\geqslant 0}\sum_{l=0}^{2j+1}t^{\nu(2j+1)}\left(\ln y-\nu\ln t \right)^{l}W_{j,l}(y). \tag{2.47}\]
Substituting (2.47) into (2.45), we get a system with respect to \(W_{j,l}\):
\[\begin{cases}(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{L}W_{0,1}=\mu_{0}W_ {0,1},\\ (\mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{L}W_{0,0}=\mu_{0}W_{0,0}-i\left( \frac{1}{2}+\nu\right)W_{0,1}+2(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\frac{1}{ y}\partial_{y}W_{0,1},\end{cases} \tag{2.48}\]
\[\begin{cases}(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{L}W_{j,2j+1}=\mu_{j} W_{j,2j+1}+(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{G}_{j,2j+1},\\ (\mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{L}W_{j,2j}=\mu_{j}W_{j,2j}+( \mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{G}_{j,2j}-\frac{1}{2}i(2j+1)W_{ j,2j+1}\\ \qquad\qquad-i\nu(2j+1)W_{j,2j+1}+2(2j+1)(\mathfrak{a}_{1}-i\mathfrak{a}_{2}) \frac{1}{y}\partial_{y}W_{j,2j+1},\\ (\mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{L}W_{j,l}=\mu_{j}W_{j,l}+( \mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{G}_{j,l}-\frac{1}{2}i(l+1)W_{j,l+ 1}\\ \qquad\qquad-i\nu(l+1)W_{j,l+1}+2(l+1)(\mathfrak{a}_{1}-i\mathfrak{a}_{2}) \frac{1}{y}\partial_{y}W_{j,l+1}\\ \qquad\qquad+(l+1)(l+2)(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\frac{1}{y^{2}}W_{ j,l+2},\quad 0\leqslant l\leqslant 2j-1,\end{cases} \tag{2.49}\]
where \(0\leqslant l\leqslant 2j+1\), \(j\geqslant 0\), \(\mu_{j}=-\alpha_{0}+i\nu(2j+1)\), and \(\mathcal{G}_{j,l}\) come from the nonlinear term \(G(W,\bar{W},W_{y})\), which depends only on \(W_{k,n}\), \(k\leqslant j-1\):
\[G(W,\bar{W},W_{y})=-\sum_{j\geqslant 1}\sum_{l=0}^{2j+1}t^{(2j+1)\nu }(\ln y-\nu\ln t)^{l}\mathcal{G}_{j,l}(y),\] \[\mathcal{G}_{j,l}(y)=\mathcal{G}_{j,l}\left(y;W_{k,n},\,0\leqslant n \leqslant 2k+1,\,0\leqslant k\leqslant j-1\right).\]
Thus, we obtain the following lemma.
**Lemma 2.7**.: _Given coefficients \(a_{j}\) and \(b_{j}\), \(j\geqslant 0\), there exists a unique solution \(W_{j,l}\in C^{\infty}(\mathbb{R}_{+}^{\ast})\), \(0\leqslant l\leqslant 2j+1\), \(j\geqslant 0\), to the system (2.48), (2.49) such that \(W_{j,l}\) has the following asymptotic expansion as \(y\to 0\):_
\[W_{j,l}(y)=\sum_{i\geqslant-j+\frac{l}{2}}d_{i}^{j,l}y^{2i-1}, \tag{2.50}\]
_where_
\[d_{1}^{j,1}=a_{j},\quad d_{1}^{j,0}=b_{j}. \tag{2.51}\]
_Furthermore, the asymptotic expansion (2.50) can be differentiated with respect to \(y\) any number of times._
Proof.: Let
\[\tilde{\mu}_{j}=\frac{\mu_{j}}{(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}.\]
Note that the equation \((\mathcal{L}-\tilde{\mu}_{j})f=0\) has a basis of solutions \(\{e_{j}^{1},e_{j}^{2}\}\) satisfying
**(i):**: \(e_{j}^{1}\) is an odd \(C^{\infty}\) function, and \(e_{j}^{1}(y)=y+O(y^{3})\) as \(y\to 0\);
**(ii):**: \(e_{j}^{2}\in C^{\infty}(\mathbb{R}_{+}^{\ast})\) and it can be expressed as follows:
\[e_{j}^{2}(y)=\frac{1}{y}+\kappa_{j}e_{j}^{1}(y)\ln y+\tilde{e}_{j}^{2}(y), \quad\kappa_{j}=-\frac{i}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}-\frac{1}{2} \tilde{\mu}_{j},\]
where \(\tilde{e}_{j}^{2}\) is an odd \(C^{\infty}\) function, and \(\tilde{e}_{j}^{2}(y)=O\left(y^{3}\right)\) as \(y\to 0\).
We consider the system (2.48). From \((\mathcal{L}-\tilde{\mu}_{0})W_{0,1}=0\) together with (2.50) and (2.51), we get
\[W_{0,1}=a_{0}e_{0}^{1}.\]
Consider the equation for \(W_{0,0}\):
\[(\mathcal{L}-\tilde{\mu}_{0})\,W_{0,0}=-\frac{i}{(\mathfrak{a}_{1}-i\mathfrak{ a}_{2})}\left(\frac{1}{2}+\nu\right)W_{0,1}+2\frac{1}{y}\partial_{y}W_{0,1}. \tag{2.52}\]
Notice that the right hand side of (2.52) has the following form: \(2a_{0}\frac{1}{y}+\) an odd \(C^{\infty}\) function, where the odd function is \(O(y)\) as \(y\to 0\). Thus, (2.52) has a unique solution \(W_{0,0}^{0}\), which has the following form:
\[W_{0,0}^{0}(y)=d_{0}\frac{1}{y}+\tilde{W}_{0,0}^{0}(y),\]
where \(d_{0}=\frac{a_{0}}{k_{0}}\), \(\tilde{W}_{0,0}^{0}\) is an odd \(C^{\infty}\) function, and \(\tilde{W}_{0,0}^{0}(y)=O(y^{3})\) as \(y\to 0\). Combining (2.50) and (2.51), we obtain
\[W_{0,0}=W_{0,0}^{0}+b_{0}e_{0}^{1}.\]
For the case of \(j\geq 1\), we have
\[(\mathcal{L}-\tilde{\mu}_{j})\,W_{j,l}=\mathcal{F}_{j,l},\quad 0\leq l\leq 2j+1, \tag{2.53}\]
where
\[\mathcal{F}_{j,2j+1}= \mathcal{G}_{j,2j+1},\] \[\mathcal{F}_{j,2j}= \mathcal{G}_{j,2j}-\frac{i}{2(\mathfrak{a}_{1}-i\mathfrak{a}_{2} )}(2j+1)W_{j,2j+1}\] \[-\frac{i}{\mathfrak{a}_{1}-i\mathfrak{a}_{2}}\nu(2j+1)W_{j,2j+1} +2(2j+1)\frac{1}{y}\partial_{y}W_{j,2j+1},\] \[\mathcal{F}_{j,l}= \mathcal{G}_{j,l}+\frac{i}{2(\mathfrak{a}_{1}-i\mathfrak{a}_{2} )}(l+1)W_{j,l+1}-\frac{i}{(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}\nu(l+1)W_{j,l +1}\] \[+2(l+1)\frac{1}{y}\partial_{y}W_{j,l+1}+(l+1)(l+2)\frac{1}{y^{2}}W _{j,l+2},\quad 0\leq l\leq 2j-1. \tag{2.54}\]
The resolution of (2.53) is based on the following ODE lemma.
**Lemma 2.8** (Lemma 2.8 in [46]).: _Let \(F\) be a \(C^{\infty}(\mathbb{R}_{+}^{*})\) function of the form:_
\[F(y)=\sum_{j=k}^{0}F_{j}y^{2j-1}+\tilde{F}(y),\]
_where \(\tilde{F}\) is an odd \(C^{\infty}\) function, \(k\leq-1\). Then there exists a unique constant \(A\) such that the equation_
\[(\mathcal{L}-\tilde{\mu}_{j})u=F+A\frac{1}{y^{3}}\]
_has a solution \(u\in C^{\infty}(\mathbb{R}_{+}^{*})\), which has the following asymptotic behavior as \(y\to 0\):_
\[u(y)=\sum_{j\geq k+1}u_{j}y^{2j-1},\quad u_{1}=0.\]
More precisely, if \(W_{i,n}\) has the asymptotic behavior described in (2.50) and (2.51), where \(0\leq n\leq 2i+1\), \(i\leq j-1\), then it is not difficult to verify that \(\mathcal{G}_{j,l}\) has the following expansion as \(y\to 0\):
\[\begin{split}&\mathcal{G}_{j,2j+1}(y)=\sum_{i\geq 1}g_{j,2j+1} ^{i}y^{2i-1},\\ &\mathcal{G}_{j,2j}(y)=\sum_{i\geq 0}g_{j,2j}^{i}y^{2i-1},\\ &\mathcal{G}_{j,l}=\sum_{i\geq-j+\frac{l}{2}-1}g_{j,l}^{i}y^{2i- 1},\quad l\leq 2j-1.\end{split} \tag{2.55}\]
Consider the equation for \(W_{j,2j+1}\): \((\mathcal{L}-\tilde{\mu}_{j})W_{j,2j+1}=\mathcal{G}_{j,2j+1}\). We get
\[W_{j,2j+1}=W_{j,2j+1}^{0}+c_{0}e_{j}^{1}, \tag{2.56}\]
where \(W_{j,2j+1}^{0}\) is the unique odd \(C^{\infty}\) solution of \((\mathcal{L}-\tilde{\mu}_{j})f=\mathcal{G}_{j,2j+1}\) with \(W_{j,2j+1}^{0}(y)=O(y^{3})\) as \(y\to 0\); the constant \(c_{0}\) is to be determined.
Note that \(\mathcal{F}_{j,2j}\) has the following form:
\[\left(g_{j,2j}^{0}+2(2j+1)c_{0}\right)\frac{1}{y}+\text{ an odd }C^{\infty}\text{ function}.\]
Thus, we get
\[W_{j,2j}=W_{j,2j}^{0}+c_{1}e_{j}^{1}, \tag{2.57}\]
where \(W_{j,2j}^{0}\) is the unique solution of \((\mathcal{L}-\tilde{\mu}_{j})f=\mathcal{F}_{j,2j}\) satisfying
\[W_{j,2j}^{0}=d_{1}\frac{1}{y}+O\left(y^{3}\right),\quad d_{1}=\frac{g_{j,2j}^ {0}+2(2j+1)c_{0}}{2k_{j}}. \tag{2.58}\]
as \(y\to 0\), and \(c_{1}\) is a constant to be determined, analogous to \(c_{0}\).
For \(\mathcal{F}_{j,2j-1}\), by (2.54), (2.55), (2.56), (2.57) and (2.58), we get
\[\mathcal{F}_{j,2j-1}=\left(g_{j,2j-1}^{-1}-4jd_{1}\right)\frac{1}{y^{3}}+const \cdot\frac{1}{y}+\text{an odd }C^{\infty}\text{ function},\]
The constant here can be calculated exactly:
\[const=g_{j,2j}^{0}-\frac{i}{\mathfrak{a}_{1}-i\mathfrak{a}_{2}}j(1-2\nu)d_{1}+2(2j+1)\tilde{c}_{0},\quad\tilde{c}_{0}=O(1).\]
However, the precise value of this constant has no effect on the proof of our main result, so we keep the notation \(const\) for constants that may change from line to line but can, in principle, be computed exactly.
Notice that \(\frac{1}{y^{3}}\) does not appear in \((\mathcal{L}-\tilde{\mu}_{j})W_{j,2j-1}\) when \(W_{j,2j-1}\) is of the form (2.50). Thus, the equation \((\mathcal{L}-\tilde{\mu}_{j})W_{j,2j-1}=\mathcal{F}_{j,2j-1}\) has a solution of the form (2.50) if and only if
\[g_{j,2j-1}^{-1}-4jd_{1}=0.\]
By (2.58), we get
\[c_{0}=\frac{k_{j}g_{j,2j-1}^{-1}-2jg_{j,2j}^{0}}{4j(2j+1)}.\]
By the choice of \(c_{0}\), we get
\[W_{j,2j-1}=W_{j,2j-1}^{0}+c_{2}e_{j}^{1},\]
where \(W^{0}_{j,2j-1}\) is the unique solution of \((\mathcal{L}-\tilde{\mu}_{j})f=\mathcal{F}_{j,2j-1}\) that satisfies
\[W^{0}_{j,2j-1}=const\cdot\frac{1}{y}+O\left(y^{3}\right)\]
as \(y\to 0\). Continuing the above process, we obtain \(W_{j,2j-2},\cdots,W_{j,0}\), which have the form \(W_{j,2j+1-k}=W^{0}_{j,2j+1-k}+c_{k}e^{1}_{j}\), \(k\leqslant 2j+1\), where \(W^{0}_{j,2j+1-k}\) is the unique solution of \((\mathcal{L}-\tilde{\mu}_{j})f=\mathcal{F}_{j,2j+1-k}\) that has an expansion of the form (2.50) with zero coefficient \(d^{j,l}_{1}\) as \(y\to 0\), and the constants \(c_{k}\), \(k\leqslant 2j-1\), are uniquely determined by the solvability conditions on the equation for \(W_{j,2j-k-1}\) (see Lemma 2.8). Finally, \(c_{2j}\) and \(c_{2j+1}\) are given by (2.51), i.e., \(c_{2j}=a_{j}\) and \(c_{2j+1}=b_{j}\).
Let \(W^{ss}_{j,l}(y)\) be the solution of the system (2.48)-(2.49) (see Lemma 2.7), where \(0\leqslant l\leqslant 2j+1\), \(j\geqslant 0\), \(a_{j}=\alpha(j,1,1)\), \(b_{j}=\alpha(j,1,0)\). Here \(\alpha(i,j,k)\) is defined by (2.46). Since the expansion (2.46) is a solution of (2.45), the uniqueness of Lemma 2.7 guarantees that
\[W^{ss}_{j,l}(y)=\sum_{i\geqslant-j+\frac{l}{2}}\alpha(j,i,l)y^{2i-1},\quad \text{as $y\to 0$.} \tag{2.59}\]
Next, we study the asymptotic behavior of \(W^{ss}_{j,l}\) at infinity, where \(0\leqslant l\leqslant 2j+1\) and \(j\geqslant 0\). We get
**Lemma 2.9**.: _Given coefficients \(a_{j,l}\) and \(b_{j,l}\), where \(0\leqslant l\leqslant 2j+1\), \(j\geqslant 0\), the system (2.48)-(2.49) has a unique solution of the form:_
\[W_{0,l}= W^{0}_{0,l}+W^{1}_{0,l},\quad l=0,1, \tag{2.60}\] \[W_{j,l}= W^{0}_{j,l}+W^{1}_{j,l}+W^{2}_{j,l},\quad 0\leqslant l\leqslant 2j+1,\ j\geqslant 1, \tag{2.61}\]
_where \((W^{\tilde{i}}_{j,l})_{\begin{subarray}{c}0\leqslant l\leqslant 2j+1\\ j\geqslant 1\end{subarray}}\), \(\tilde{i}=0,1\), are two solutions of the system (2.48) and (2.49) that have the following asymptotic behavior as \(y\to\infty\):_
\[\sum_{l=0}^{2j+1}(\ln y-\nu\ln t)^{l}W^{\tilde{i}}_{j,l}(y)=\sum_ {l=0}^{2j+1}\left(\ln y+(-1)^{\tilde{i}}\frac{\ln t}{2}\right)^{l}\hat{W}^{ \tilde{i}}_{j,l}(y),\quad\tilde{i}=0,1,\] \[\hat{W}^{0}_{j,l}(y)=y^{2i\alpha_{0}+2\nu(2j+1)}\sum_{k\geqslant 0 }\hat{w}^{j,l,0}_{k}y^{-2k},\] \[\hat{W}^{1}_{j,l}(y)=e^{\frac{iy^{2}}{4(a_{1}-ia_{2})}}y^{-2i \alpha_{0}-2\nu(2j+1)-2}\sum_{k\geqslant 0}\hat{w}^{j,l,-1}_{k}y^{-2k}, \tag{2.62}\]
_where_
\[\hat{w}^{j,l,0}_{0}=a_{j,l},\quad\hat{w}^{j,l,-1}_{0}=b_{j,l}. \tag{2.63}\]
_Finally, the interaction part \(W^{2}_{j,l}\) can be written as_
\[W^{2}_{j,l}(y)=\sum_{-j-1\leqslant m\leqslant j}e^{-\frac{imy^{2}}{4(a_{1}- ia_{2})}}W_{j,l,m}(y), \tag{2.64}\]
_where \(W_{j,l,m}\) has the following asymptotic behavior as \(y\to\infty\):_
\[\begin{split}& W_{j,l,m}(y)=\sum_{k\geqslant m+2}\sum_{\begin{subarray}{c}m-j\leqslant i\leqslant j-m\\ j-m-i\in 2\mathbb{Z}\end{subarray}}\sum_{s=0}^{2j+1-l}w_{k,i,s}^{j,l,m}y^{2\nu(2i+1)-2k}(\ln y)^{s},\quad m\geqslant 1,\\ & W_{j,l,m}(y)=\sum_{k\geqslant-m}\sum_{\begin{subarray}{c}-j-m-2\leqslant i\leqslant j+m\\ j-m-i\in 2\mathbb{Z}\end{subarray}}\sum_{s=0}^{2j+1-l}w_{k,i,s}^{j,l,m}y^{2\nu(2i+1)-2k}(\ln y)^{s},\quad m\leqslant-2,\\ & W_{j,l,0}(y)=\sum_{k\geqslant 1}\sum_{\begin{subarray}{c}-j\leqslant i\leqslant j-2\\ j-i\in 2\mathbb{Z}\end{subarray}}\sum_{s=0}^{2j+1-l}w_{k,i,s}^{j,l,0}y^{2\nu(2i+1)-2k}(\ln y)^{s},\\ & W_{j,l,-1}(y)=\sum_{k\geqslant 1}\sum_{\begin{subarray}{c}-j+1\leqslant i\leqslant j-1\\ j-i\in 2\mathbb{Z}\end{subarray}}\sum_{s=0}^{2j+1-l}w_{k,i,s}^{j,l,-1}y^{2\nu(2i+1)-2k}(\ln y)^{s}.\end{split} \tag{2.65}\]
_Furthermore, the asymptotic expansions (2.62) and (2.65) can be differentiated with respect to \(y\) any number of times. In addition, any solution of the system (2.48), (2.49) has the form (2.60), (2.61), (2.62), (2.64) and (2.65)._
Proof.: Note that the equation \((\mathcal{L}-\tilde{\mu}_{j})f=0\) has a basis of solutions \(\{f_{j}^{1},f_{j}^{2}\}\), which have the following asymptotic behavior at infinity:
\[\begin{split}& f_{j}^{1}(y)=y^{-2i\tilde{\mu}_{j}(\mathfrak{a}_{1 }-i\mathfrak{a}_{2})}\sum_{k\geqslant 0}f_{j,1}^{k}y^{-2k},\\ & f_{j}^{2}(y)=e^{\frac{iy^{2}}{4(\mathfrak{a}_{1}-i\mathfrak{a} _{2})}}y^{2i\tilde{\mu}_{j}(\mathfrak{a}_{1}-i\mathfrak{a}_{2})-2}\sum_{k \geqslant 0}f_{j,2}^{k}y^{-2k},\end{split}\]
where \(f_{j,1}^{0}=f_{j,2}^{0}=1\). Thus, the solutions of the homogeneous equations
\[\begin{cases}\mathcal{L}g_{2j+1}=\tilde{\mu}_{j}g_{2j+1},\\ \mathcal{L}g_{2j}=\tilde{\mu}_{j}g_{2j}-\frac{i}{2(\mathfrak{a}_{1}-i \mathfrak{a}_{2})}(2j+1)g_{2j+1}\\ \qquad-\frac{i}{\mathfrak{a}_{1}-i\mathfrak{a}_{2}}\nu(2j+1)g_{2j+1}+2(2j+1) \frac{1}{y}\partial_{y}g_{2j+1},\\ \mathcal{L}g_{l}=\tilde{\mu}_{j}g_{l}-\frac{i}{2(\mathfrak{a}_{1}-i \mathfrak{a}_{2})}(l+1)g_{l+1}-\frac{i}{\mathfrak{a}_{1}-i\mathfrak{a}_{2}}\nu (l+1)g_{l+1}\\ \qquad+2(l+1)\frac{1}{y}\partial_{y}g_{l+1}+(l+1)(l+2)\frac{1}{y^{2}}g_{l+2}, \quad 0\leqslant l\leqslant 2j-1.\end{cases} \tag{2.66}\]
have a basis \(\{\tilde{\mathbf{g}}_{j}^{\tilde{i},m}\}_{m=0,\cdots,2j+1}^{\tilde{i}=1,2}\), where
\[\mathbf{g}_{j}^{\tilde{i},m}=\left(\tilde{g}_{j,0}^{\tilde{i},m},\cdots, \tilde{g}_{j,2j+1}^{\tilde{i},m}\right),\quad 0\leqslant m\leqslant 2j+1,\ \tilde{i}=1,2.\]
Here each component is defined as
\[\sum_{l=0}^{2j+1}(\ln y-\nu\ln t)^{l}g_{j,l}^{\tilde{i},m}(y)=\sum_{l=0}^{2j+1} \left(\ln y+\frac{(-1)^{\tilde{i}-1}}{2}\ln t\right)^{l}\xi_{j,l}^{\tilde{i},m}(y), \tag{2.67}\]
where \((\xi_{j,l}^{\tilde{i},m})_{l=0,\cdots,2j+1}\) is the unique solution of the following system:
\[\begin{cases}\mathcal{L}\xi_{2j+1}=\tilde{\mu}_{j}\xi_{2j+1},\\ \mathcal{L}\xi_{2j}=\tilde{\mu}_{j}\xi_{2j}-\frac{i}{\mathfrak{a}_{1}-i \mathfrak{a}_{2}}(2j+1)\left(\tilde{i}-1\right)\xi_{2j+1}+2(2j+1)\frac{1}{y} \partial_{y}\xi_{2j+1},\\ \mathcal{L}\xi_{l}=\tilde{\mu}_{j}\xi_{l}-\frac{i}{\mathfrak{a}_{1}-i \mathfrak{a}_{2}}(l+1)\left(\tilde{i}-1\right)\xi_{l+1}+2(l+1)\frac{1}{y} \partial_{y}\xi_{l+1}\\ \qquad+(l+1)(l+2)\frac{1}{y^{2}}\xi_{l+2},\quad 0\leqslant l\leqslant 2j-1,\end{cases} \tag{2.68}\]
that satisfies
\[\xi^{\tilde{i},m}_{j,l}(y)=0,\quad l>2j+1-m,\] \[\xi^{\tilde{i},m}_{j,2j+1-m}(y)=f^{\tilde{i}}_{j}(y),\] \[\xi^{1,m}_{j,l}(y)=y^{2i\alpha_{0}+2\nu(2j+1)}\sum_{k\geq 2j+1-l-m }\xi^{1}_{l,k}y^{-2k},\quad y\to+\infty,\] \[\xi^{2,m}_{j,l}(y)=e^{\frac{iy^{2}}{4(\mathfrak{a}_{1}-i\mathfrak{ a}_{2})}}y^{-2i\alpha_{0}-2\nu(2j+1)-2}\sum_{k\geq 2j+1-l-m}\xi^{2}_{l,k}y^{-2k}, \quad y\to+\infty. \tag{2.69}\]
For \(W_{0,l}\), \(l=0,1\), we have
\[\mathcal{L}W_{0,1} =\tilde{\mu}_{0}W_{0,1},\] \[\mathcal{L}W_{0,0} =\tilde{\mu}_{0}W_{0,0}-\frac{i}{\mathfrak{a}_{1}-i\mathfrak{a}_{ 2}}\left(\frac{1}{2}+\nu\right)W_{0,1}+2\frac{1}{y}\partial_{y}W_{0,1}.\]
Thus,
\[W_{0,l}(y)=\sum_{\stackrel{{ i=1,2}}{{m=0,1}}}A_{\tilde{i},m}g^{ \tilde{i},m}_{0,l}(y),\quad l=0,1,\]
where \(A_{\tilde{i},m}\) are constants, \(\tilde{i}=1,2\), \(m=0,1\). According to (2.67) and (2.69), we obtain that \(W_{0,l}\), \(l=0,1\), have the form (2.60) and (2.62), where \(\hat{w}^{j,l,0}_{0}=A_{1,1-l}\) and \(\hat{w}^{j,l,-1}_{0}=A_{2,1-l}\), \(l=0,1\). This combined with (2.63) yields \(A_{1,m}=a_{0,1-m}\) and \(A_{2,m}=b_{0,1-m}\), \(m=0,1\).
Next, we consider the case \(j\geq 1\). Suppose that \(W_{\tilde{i},n}\), \(0\leq n\leq 2\tilde{i}+1\), \(\tilde{i}\leq j-1\), have the asymptotic behavior prescribed in (2.61), (2.62), (2.64) and (2.65); then it can be verified that \((\mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{G}_{j,l}\) has the following form:
\[(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{G}_{j,l}(y)=\sum_{-j-1\leq m\leq j}e^{-\frac{imy^{2}}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}}\mathcal{G}^{m}_{j,l}(y), \tag{2.70}\]
where \(\mathcal{G}^{m}_{j,l}\), \(m=0,-1\), satisfy
\[\mathcal{G}^{m}_{j,l}(y)=\mathcal{G}^{m,0}_{j,l}(y)+\mathcal{G}^ {m,1}_{j,l}(y),\quad m=0,-1,\] \[\mathcal{G}^{0,0}_{j,l}(y)=\mathcal{G}_{j,l}\left(y;W^{0}_{\tilde {i},n},0\leq n\leq 2\tilde{i}+1,0\leq\tilde{i}\leq j-1\right),\] \[e^{\frac{iy^{2}}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}}\mathcal{ G}^{-1,0}_{j,l}(y)=\mathcal{G}_{j,l}\left(y;W^{1}_{\tilde{i},n},0\leq n\leq 2 \tilde{i}+1,0\leq\tilde{i}\leq j-1\right), \tag{2.71}\]
and have the following asymptotic behavior as \(y\to\infty\):
\[\begin{split}&\mathcal{G}^{0,0}_{j,l}(y)=\sum_{k\geqslant 1}\sum_{s=0}^{2j+1-l}T^{j,l,0}_{k,j,s}y^{2\nu(2j+1)-2k}(\ln y)^{s},\\ &\mathcal{G}^{0,1}_{j,l}(y)=\sum_{k\geqslant 2}\sum_{\begin{subarray}{c}-j+1\leqslant i\leqslant j-2\\ j-i\in 2\mathbb{Z}\end{subarray}}\sum_{s=0}^{2j+1-l}T^{j,l,0}_{k,i,s}y^{2\nu(2i+1)-2k}(\ln y)^{s},\end{split} \tag{2.72}\]
\[\begin{split}&\mathcal{G}_{j,l}^{-1,0}(y)=\sum_{k\geqslant 2}\sum_{s=0}^{2j+1-l}T_{k,-j-1,s}^{j,l,-1}y^{-2\nu(2j+1)-2k}(\ln y)^{s},\\ &\mathcal{G}_{j,l}^{-1,1}(y)=\sum_{k\geqslant 1}\sum_{\begin{subarray}{c}-j+1\leqslant i\leqslant j-1\\ j-i+1\in 2\mathbb{Z}\end{subarray}}\sum_{s=0}^{2j+1-l}T_{k,i,s}^{j,l,-1}y^{2\nu(2i+1)-2k}(\ln y)^{s}.\end{split} \tag{2.73}\]
Finally, \(\mathcal{G}_{j,l}^{m}\), \(m\neq 0,-1\), have the following asymptotic behavior as \(y\to\infty\):
\[\begin{split}&\mathcal{G}_{j,l}^{m}(y)=\sum_{k\geqslant-m-1}\sum_{\begin{subarray}{c}-j-m-2\leqslant i\leqslant j+m\\ j-i+m\in 2\mathbb{Z}\end{subarray}}\sum_{s=0}^{2j+1-l}T_{k,i,s}^{j,l,m}y^{2\nu(2i+1)-2k}(\ln y)^{s},\quad m\leqslant-2.\end{split} \tag{2.74}\]
Thus, integrating (2.49) yields
\[\begin{split}& W_{j,l}=\tilde{W}_{j,l}+\sum_{ \begin{subarray}{c}i=1,2\\ m=0,\cdots,2j+1\end{subarray}}A_{\tilde{i},m}\tilde{\mathfrak{g}}_{j,l}^{ \tilde{i},m},\\ &\tilde{W}_{j,l}(y)=\sum_{-j-1\leqslant m\leqslant j}e^{-\frac{ imy^{2}}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}}\tilde{W}_{j,l}^{m}(y),\end{split} \tag{2.75}\]
where \(e^{-\frac{imy^{2}}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}}\tilde{W}_{j,l}^{m}(y)\) is the unique solution of (2.49) with \(\mathcal{G}_{j,l}\) replaced by \(e^{-\frac{imy^{2}}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}}\mathcal{G}_{j,l}^{ m}(y)\) that has the following asymptotic behavior as \(y\to+\infty\):
\[\begin{split}&\tilde{W}_{j,l}^{m}(y)=\sum_{k\geqslant m+2}\sum_{\begin{subarray}{c}m-j\leqslant i\leqslant j-m\\ j-m-i\in 2\mathbb{Z}\end{subarray}}\sum_{s=0}^{2j+1-l}\tilde{w}_{k,i,s}^{j,l,m}y^{2\nu(2i+1)-2k}(\ln y)^{s},\quad m\geqslant 1,\\ &\tilde{W}_{j,l}^{m}(y)=\sum_{k\geqslant-m}\sum_{\begin{subarray}{c}-j-m-2\leqslant i\leqslant j+m\\ j-m-i\in 2\mathbb{Z}\end{subarray}}\sum_{s=0}^{2j+1-l}\tilde{w}_{k,i,s}^{j,l,m}y^{2\nu(2i+1)-2k}(\ln y)^{s},\quad m\leqslant-2.\end{split} \tag{2.76}\]
Finally, for the case of \(m=0,-1\), we have
\[\begin{split}&\tilde{W}_{j,l}^{0}=\tilde{W}_{j,l}^{0,0}(y)+ \tilde{W}_{j,l}^{0,1}(y),\\ &\tilde{W}_{j,l}^{-1}=\tilde{W}_{j,l}^{-1,0}(y)+\tilde{W}_{j,l}^{ -1,1}(y),\end{split} \tag{2.77}\]
where \(\tilde{W}_{j,l}^{0,\tilde{i}}\) and \(e^{\frac{iy^{2}}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}}\tilde{W}_{j,l}^{-1, \tilde{i}}\) are solutions of (2.49) with \(\mathcal{G}_{j,l}\) replaced by \(\mathcal{G}_{j,l}^{0,\tilde{i}}\) and \(e^{\frac{iy^{2}}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}}\mathcal{G}_{j,l}^{-1, \tilde{i}}\), respectively. They have the following asymptotic behavior as
\(y\to\infty\):
\[\tilde{W}^{0,0}_{j,l}(y)=\sum_{k\geqslant 1}\sum_{s=0}^{2j+1-l}\tilde{w}^{j,l,0}_{k,j,s}y^{2\nu(2j+1)-2k}(\ln y)^{s},\] \[\tilde{W}^{0,1}_{j,l}(y)=\sum_{k\geqslant 1}\sum_{\begin{subarray}{c}-j+1\leqslant i\leqslant j-2\\ j-i\in 2\mathbb{Z}\end{subarray}}\sum_{s=0}^{2j+1-l}\tilde{w}^{j,l,0}_{k,i,s}y^{2\nu(2i+1)-2k}(\ln y)^{s},\] \[\tilde{W}^{-1,0}_{j,l}(y)=\sum_{k\geqslant 1}\sum_{s=0}^{2j+1-l}\tilde{w}^{j,l,-1}_{k,-j-1,s}y^{-2\nu(2j+1)-2k}(\ln y)^{s},\] \[\tilde{W}^{-1,1}_{j,l}(y)=\sum_{k\geqslant 1}\sum_{\begin{subarray}{c}-j+1\leqslant i\leqslant j-1\\ j-i\in 2\mathbb{Z}\end{subarray}}\sum_{s=0}^{2j+1-l}\tilde{w}^{j,l,-1}_{k,i,s}y^{2\nu(2i+1)-2k}(\ln y)^{s}. \tag{2.78}\]
Note that \(W^{0}_{j,l}=\tilde{W}^{0,0}_{j,l}+\sum_{m=0}^{2j+1}A_{1,m}\mathbf{g}^{1,m}_{j,l}\) and \(W^{1}_{j,l}=e^{\frac{iy^{2}}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}}\tilde{W}^{-1,0}_{j,l}+\sum_{m=0}^{2j+1}A_{2,m}\mathbf{g}^{2,m}_{j,l}\) are solutions of (2.49) with \(\mathcal{G}_{j,l}\) replaced by \(\mathcal{G}^{0,0}_{j,l}=\mathcal{G}_{j,l}(W^{0}_{\tilde{i},n};\tilde{i}\leqslant j-1)\) and \(e^{\frac{iy^{2}}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}}\mathcal{G}^{-1,0}_{j,l}=\mathcal{G}_{j,l}(W^{1}_{\tilde{i},n},\tilde{i}\leqslant j-1)\), respectively. Thus, \(W^{\tilde{i}}_{j,l}\), \(\tilde{i}=0,1\), \(0\leqslant l\leqslant 2j+1\), have the form (2.62), where \(\hat{w}^{j,l,0}_{0}=A_{1,2j+1-l}\) and \(\hat{w}^{j,l,-1}_{0}=A_{2,2j+1-l}\). This combined with (2.63) gives \(A_{1,m}=a_{j,2j+1-m}\) and \(A_{2,m}=b_{j,2j+1-m}\), where \(m=0,\cdots,2j+1\).
Let \(W^{(N)}_{\text{in}}(y,t)\) be the stereographic projection of
\[V^{(N)}_{\text{in}}\left(t^{-\nu}y,t\right)=\left(V^{(N)}_{\text{in},1}\left( t^{-\nu}y,t\right),V^{(N)}_{\text{in},2}\left(t^{-\nu}y,t\right),V^{(N)}_{ \text{in},3}\left(t^{-\nu}y,t\right)\right),\]
i.e.,
\[W^{(N)}_{\text{in}}(y,t)=\frac{V^{(N)}_{\text{in},1}\left(t^{-\nu}y,t\right)+iV^{(N)}_{\text{in},2}\left(t^{-\nu}y,t\right)}{1+V^{(N)}_{\text{in},3}\left(t^{-\nu}y,t\right)}\in\mathbb{C}\cup\{\infty\}.\]
Recalling the coordinate transformation (2.44), for \(N\geqslant 2\), we define
\[W^{(N)}_{ss}(y,t) =\sum_{j=0}^{N}\sum_{l=0}^{2j+1}t^{\nu(2j+1)}\left(\ln\rho\right) ^{l}W^{ss}_{j,l}(y),\] \[A^{(N)}_{ss} =-it\partial_{t}W^{(N)}_{ss}+\alpha_{0}W^{(N)}_{ss}+\left(\mathfrak{ a}_{1}-i\mathfrak{a}_{2}\right)\left[\mathcal{L}W^{(N)}_{ss}+G\left(W^{(N)}_{ss}, \bar{W}^{(N)}_{ss},\partial_{y}W^{(N)}_{ss}\right)\right],\] \[V^{(N)}_{ss}(\rho,t) =\left(\frac{2\operatorname{Re}\left(W^{(N)}_{ss}\right)}{1+\left| W^{(N)}_{ss}\right|^{2}},\frac{2\operatorname{Im}\left(W^{(N)}_{ss}\right)}{1+ \left|W^{(N)}_{ss}\right|^{2}},\frac{1-\left|W^{(N)}_{ss}\right|^{2}}{1+ \left|W^{(N)}_{ss}\right|^{2}}\right),\quad\rho=t^{-\nu}y,\] \[Z^{(N)}_{ss}(\rho,t) =V^{N}_{ss}(\rho,t)-Q(\rho).\]
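For orientation, note that \(A^{(N)}_{ss}\) is precisely the residual of \(W^{(N)}_{ss}\) in the self-similar equation: by the definition above,
\[A^{(N)}_{ss}=\left(\mathfrak{a}_{1}-i\mathfrak{a}_{2}\right)\left[\mathcal{L}W^{(N)}_{ss}+G\left(W^{(N)}_{ss},\bar{W}^{(N)}_{ss},\partial_{y}W^{(N)}_{ss}\right)\right]-\left(it\partial_{t}W^{(N)}_{ss}-\alpha_{0}W^{(N)}_{ss}\right),\]
so it would vanish identically if the truncated sum \(W^{(N)}_{ss}\) were an exact solution of (2.45).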
Fixing \(\varepsilon_{1}=\frac{\nu}{2}\), according to the previous analysis, we obtain
**Lemma 2.10**.: _For \(0<t\leqslant T(N)\), there exists a positive constant \(C=C(\mathfrak{a}_{1},\mathfrak{a}_{2},\nu)\), such that_
**(i):** _For any_ \(k,l\) _and_ \(\frac{1}{10}t^{\varepsilon_{1}}\leqslant y\leqslant 10t^{\varepsilon_{1}}\)_,_
\[\left|y^{-l}\partial_{y}^{k}\partial_{t}^{i}\left(W^{(N)}_{ss}-W^{(N)}_{in} \right)\right|\leqslant C_{k,l,i}t^{\nu\left(N+1-\frac{l+k}{2}\right)-i}, \quad i=0,1. \tag{2.79}\]
**(ii):**: _The profile_ \(Z^{(N)}_{ss}\) _satisfies_
\[\left\|\partial_{\rho}Z^{(N)}_{ss}(t)\right\|_{L^{2}\left(\rho d\rho,\,\frac{1}{10}t^{-\nu+\varepsilon_{1}}\leqslant\rho\leqslant 10t^{-\nu-\varepsilon_{2}}\right)}\leqslant Ct^{\eta}, \tag{2.80}\]
\[\left\|\rho^{-1}Z^{(N)}_{ss}(t)\right\|_{L^{2}\left(\rho d\rho,\,\frac{1}{10}t^{-\nu+\varepsilon_{1}}\leqslant\rho\leqslant 10t^{-\nu-\varepsilon_{2}}\right)}\leqslant Ct^{\eta}, \tag{2.81}\]
\[\left\|Z^{(N)}_{ss}(t)\right\|_{L^{\infty}\left(\frac{1}{10}t^{-\nu+\varepsilon_{1}}\leqslant\rho\leqslant 10t^{-\nu-\varepsilon_{2}}\right)}\leqslant Ct^{\eta}, \tag{2.82}\]
\[\left\|\rho\partial_{\rho}Z^{(N)}_{ss}(t)\right\|_{L^{\infty}\left(\frac{1}{10}t^{-\nu+\varepsilon_{1}}\leqslant\rho\leqslant 10t^{-\nu-\varepsilon_{2}}\right)}\leqslant Ct^{\eta}, \tag{2.83}\]
\[\left\|\rho^{-l}\partial_{\rho}^{k}Z^{(N)}_{ss}(t)\right\|_{L^{2}\left(\rho d\rho,\,\frac{1}{10}t^{-\nu+\varepsilon_{1}}\leqslant\rho\leqslant 10t^{-\nu-\varepsilon_{2}}\right)}\leqslant Ct^{\nu+\frac{1}{2}+\eta},\quad k+l=2, \tag{2.84}\]
\[\left\|\rho^{-l}\partial_{\rho}^{k}Z^{(N)}_{in}(t)\right\|_{L^{2}\left(\rho d\rho,\,\frac{1}{10}t^{-\nu+\varepsilon_{1}}\leqslant\rho\leqslant 10t^{-\nu-\varepsilon_{2}}\right)}\leqslant Ct^{2\nu},\quad k+l\geqslant 3, \tag{2.85}\]
\[\left\|\rho^{-l}\partial_{\rho}^{k}Z^{(N)}_{ss}(t)\right\|_{L^{\infty}\left(\frac{1}{10}t^{-\nu+\varepsilon_{1}}\leqslant\rho\leqslant 10t^{-\nu-\varepsilon_{2}}\right)}\leqslant Ct^{\nu+\eta},\quad k+l=1, \tag{2.86}\]
\[\left\|\rho^{-l}\partial_{\rho}^{k}Z^{(N)}_{ss}(t)\right\|_{L^{\infty}\left(\frac{1}{10}t^{-\nu+\varepsilon_{1}}\leqslant\rho\leqslant 10t^{-\nu-\varepsilon_{2}}\right)}\leqslant Ct^{2\nu},\quad 2\leqslant k+l. \tag{2.87}\]
_Here (and below) \(\eta\) denotes a positive constant depending on \(\nu\) and \(\varepsilon_{2}\), which may vary from line to line._
**(iii):**: _The error_ \(A^{(N)}_{ss}\) _satisfies the estimate:_
\[\left\|y^{-l}\partial_{y}^{k}\partial_{t}^{i}A^{(N)}_{ss}(t) \right\|_{L^{2}\left(ydy,\frac{1}{10}t^{\varepsilon_{1}}\leqslant y\leqslant 1 0t^{-\varepsilon_{2}}\right)}\leqslant Ct^{\nu N(1-2\varepsilon_{2})-i},\] \[0\leqslant l+k\leqslant 4,\,i=0,1. \tag{2.88}\]
### Remote region \(r\sim 1\)
Next, we consider the remote region \(t^{-\varepsilon_{2}}\leqslant rt^{-\frac{1}{2}}\). We use the formal solution constructed in Subsection 2.3:
\[\sum_{j\geqslant 0}\sum_{l=0}^{2j+1}t^{\nu(2j+1)}(\ln y-\nu\ln t)^{l}W^{ss}_{j,l} (y).\]
According to Lemma 2.9, this solution has the form (2.60), (2.61), (2.62), (2.64) and (2.65), where \(\hat{w}^{j,l,i}_{k}\) and \(w^{j,l,m}_{k,i,s}\) are some coefficients to be determined. Note that, in the limit \(y\to\infty\) with \(r\to 0\), the leading-order terms of the expansion
\[\sum_{j\geqslant 0}\sum_{l=0}^{2j+1}t^{i\alpha_{0}+\nu(2j+1)}(\ln y-\nu\ln t)^{l}W ^{ss}_{j,l}(t^{-\frac{1}{2}}r)\]
is
\[\begin{split}&\sum_{j\geqslant 0}\sum_{l=0}^{2j+1}t^{i\alpha_{0}+\nu(2j+1)}(\ln y-\nu\ln t)^{l}W^{ss}_{j,l}(t^{-\frac{1}{2}}r)\\ &\sim\sum_{k\geqslant 0}\frac{t^{k}}{r^{2k}}\sum_{j\geqslant 0}\sum_{l=0}^{2j+1}\hat{w}^{j,l,0}_{k}(\ln r)^{l}r^{2i\alpha_{0}+2\nu(2j+1)}\\ &\quad+\frac{1}{t}e^{\frac{ir^{2}}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})t}}\sum_{k\geqslant 0}\frac{t^{k}}{r^{2k}}\sum_{j\geqslant 0}\sum_{l=0}^{2j+1}\hat{w}^{j,l,-1}_{k}\left(\frac{r}{t}\right)^{-2i\alpha_{0}-2\nu(2j+1)-2}\left(\ln\left(\frac{r}{t}\right)\right)^{l}.\end{split} \tag{2.89}\]
This inspires us to construct the solutions of (2.43) in the region \(t^{-\varepsilon_{2}}\leq rt^{-\frac{1}{2}}\) by perturbing the time-independent profile:
\[\sum_{j\geq 0}\sum_{l=0}^{2j+1}\beta_{0}(j,l)(\ln r)^{l}r^{2i\alpha_{0}+2\nu(2j+1 )},\]
where \(\beta_{0}(j,l)=\hat{w}_{0}^{j,l,0}\).
Let \(\theta\in C_{0}^{\infty}(\mathbb{R})\) be a cut-off function that satisfies
\[\theta(\xi)=\begin{cases}1,&\text{ if }\,|\xi|\leq 1,\\ 0,&\text{ if }\,|\xi|\geq 2.\end{cases}\]
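Such a cut-off exists by a standard construction; for concreteness (this particular formula is only an illustrative choice and plays no role below), one may take
\[\theta(\xi)=\frac{\psi(2-|\xi|)}{\psi(2-|\xi|)+\psi(|\xi|-1)},\qquad\psi(x)=\begin{cases}e^{-\frac{1}{x}},&x>0,\\ 0,&x\leqslant 0,\end{cases}\]
for which the denominator never vanishes, \(\theta\equiv 1\) on \(\{|\xi|\leq 1\}\) and \(\theta\equiv 0\) on \(\{|\xi|\geq 2\}\), so that \(\theta\in C_{0}^{\infty}(\mathbb{R})\).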
For \(N\geq 2\) and \(\delta>0\), we define
\[f_{0}(r)\triangleq f_{0}^{N}(r)=\theta\left(\frac{r}{\delta}\right)\sum_{j=0} ^{N}\sum_{l=0}^{2j+1}\beta_{0}(j,l)(\ln r)^{l}r^{2i\alpha_{0}+2\nu(2j+1)}.\]
Note that \(e^{i\theta}f_{0}\in H^{1+2\nu-}\) and
\[\left\|e^{i\theta}f_{0}\right\|_{\dot{H}^{s}}\leq C\delta^{1+2\nu-s},\quad \forall 0\leq s<1+2\nu. \tag{2.90}\]
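A heuristic check of (2.90) at the endpoints \(s=0\) and \(s=1\) (the intermediate range then follows by interpolation), ignoring the logarithmic and oscillatory factors in \(f_{0}\): on the support \(\{r\leq 2\delta\}\) one has \(|f_{0}|\lesssim r^{2\nu}\) and \(|\partial_{r}f_{0}|\lesssim r^{2\nu-1}\), so
\[\left\|e^{i\theta}f_{0}\right\|_{L^{2}}^{2}\lesssim\int_{0}^{2\delta}r^{4\nu}\,r\,dr\approx\delta^{2(1+2\nu)},\qquad\left\|\nabla\left(e^{i\theta}f_{0}\right)\right\|_{L^{2}}^{2}\lesssim\int_{0}^{2\delta}r^{4\nu-2}\,r\,dr\approx\delta^{4\nu},\]
in agreement with the right-hand side \(C\delta^{1+2\nu-s}\) for \(s=0,1\).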
Let \(w(r,t)=f_{0}(r)+\chi(r,t)\); then \(\chi\) solves
\[i\chi_{t}=\left(\mathfrak{a}_{1}-i\mathfrak{a}_{2}\right)\left[-\Delta\chi+ \frac{1}{r^{2}}\chi+\mathcal{V}_{0}\partial_{r}\chi+\mathcal{V}_{1}\chi+ \mathcal{V}_{2}\bar{\chi}+\mathcal{N}+\mathcal{D}_{0}\right], \tag{2.91}\]
where
\[\mathcal{V}_{0}= \frac{4\bar{f}_{0}\partial_{r}f_{0}}{1+|f_{0}|^{2}},\] \[\mathcal{V}_{1}= -\frac{2|f_{0}|^{2}\left(2+|f_{0}|^{2}\right)}{r^{2}\left(1+|f_{0 }|^{2}\right)^{2}}-\frac{2\bar{f}_{0}^{2}(\partial_{r}f_{0})^{2}}{\left(1+|f_ {0}|^{2}\right)^{2}},\] \[\mathcal{V}_{2}= \frac{2\left(r^{2}(\partial_{r}f_{0})^{2}-f_{0}^{2}\right)}{r^{2} \left(1+|f_{0}|^{2}\right)^{2}},\] \[\mathcal{D}_{0}= \left(-\Delta+\frac{1}{r^{2}}\right)f_{0}+G\left(f_{0},\bar{f}_{ 0},\partial_{r}f_{0}\right).\]
Here \(\mathcal{N}\) collects the terms that are at least quadratic in \(\chi\); it has the following form:
\[\mathcal{N}= N_{0}\left(\chi,\bar{\chi}\right)+\chi_{r}N_{1}\left(\chi,\bar{ \chi}\right)+\chi_{r}^{2}N_{2}\left(\chi,\bar{\chi}\right),\] \[N_{0}(\chi,\bar{\chi})= G\left(f_{0}+\chi,\bar{f}_{0}+\bar{\chi},\partial_{r}f_{0} \right)-G\left(f_{0},\bar{f}_{0},\partial_{r}f_{0}\right)-\mathcal{V}_{1} \chi-\mathcal{V}_{2}\bar{\chi}, \tag{2.92}\] \[N_{1}(\chi,\bar{\chi})= \frac{4\partial_{r}f_{0}\left(\bar{f}_{0}+\bar{\chi}\right)}{1+|f _{0}+\chi|^{2}}-\mathcal{V}_{0},\] \[N_{2}(\chi,\bar{\chi})= \frac{2(\bar{f}_{0}+\bar{\chi})}{1+|f_{0}+\chi|^{2}}.\]
By (2.60), (2.61), (2.62), (2.64) and (2.65), we construct \(\chi\) of the form:
\[\chi(r,t)=\sum_{\substack{q\geq 0\\ k\geq 1}}t^{2\nu q+k}\sum_{m\in\natural}\sum_{s=0}^{q}e^{-im\Phi}(\ln r-\ln t)^{s}g_{k,q,m,s}(r), \tag{2.93}\]
where
\[\Phi(r,t)=2\alpha_{0}\ln t+\frac{r^{2}}{4(\mathfrak{a}_{1}-i \mathfrak{a}_{2})t}+\varphi(r),\] \[\natural=\left\{m:-\min\{k,q\}\leq m\leq\min\left\{(k-2)_{+},q \right\},\ q-m\in 2\mathbb{Z}\right\}.\]
Here \(\varphi(r)\) is to be determined.
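To fix ideas, the smallest instances of \(\natural\) can be read off directly from its definition: for \((k,q)=(1,0)\) the admissible range is \(0\leq m\leq 0\), so \(\natural=\{0\}\); for \((k,q)=(1,1)\) the range is \(-1\leq m\leq 0\) and the parity constraint \(q-m\in 2\mathbb{Z}\) selects \(\natural=\{-1\}\); for \((k,q)=(1,2)\) the same range together with the parity constraint gives \(\natural=\{0\}\).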
By (2.93), we get
\[\chi_{t} =\sum_{\genfrac{}{}{0.0pt}{}{q\geq 0}{k\geq 2}}t^{2\nu q+k-2}\sum_{m \in\natural}\sum_{s=0}^{q}e^{-im\Phi}(\ln r-\ln t)^{s}\] \[\quad\cdot\bigg{[}(2\nu q+k-1-2im\alpha_{0})g_{k-1,q,m,s}+\frac{ imr^{2}}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}g_{k,q,m,s}\] \[\qquad\qquad\qquad-(s+1)g_{k-1,q,m,s+1}\bigg{]},\] \[\chi_{r} =\sum_{\genfrac{}{}{0.0pt}{}{q\geq 0}{k\geq 2}}t^{2\nu q+k-2}\sum_{m \in\natural}\sum_{s=0}^{q}e^{-im\Phi}(\ln r-\ln t)^{s}\bigg{[}\frac{-imr}{2( \mathfrak{a}_{1}-i\mathfrak{a}_{2})}g_{k-1,q,m,s}\] \[\quad+(\partial_{r}-im\varphi_{r})g_{k-2,q,m,s}+\frac{s+1}{r}g_{ k-2,q,m,s+1}\bigg{]},\] \[\chi_{rr} =\sum_{\genfrac{}{}{0.0pt}{}{q\geq 0}{k\geq 2}}t^{2\nu q+k-2}\sum_{m \in\natural}\sum_{s=0}^{q}e^{-im\Phi}(\ln r-\ln t)^{s}\bigg{\{}(-im\varphi_{ rr}-m^{2}\varphi_{r}^{2}\] \[\quad-2im\varphi_{r}\partial_{r}+\partial_{rr})g_{k-2,q,m,s}- \frac{m^{2}r^{2}}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})^{2}}g_{k,q,m,s}\] \[\quad-\frac{2m^{2}r\varphi_{r}+imr\partial_{r}+imr}{2(\mathfrak{ a}_{1}-i\mathfrak{a}_{2})}g_{k-1,q,m,s}\] \[\quad\left(-\frac{im(s+1)}{r}\varphi_{r}-\frac{s+1}{r^{2}}+\frac {2(s+1)}{r}\partial_{r}\right)g_{k-2,q,m,s+1}\] \[\quad+\frac{(s+1)(s+2)}{r^{2}}g_{k-2,q,m,s+2}-\frac{im(s+1)}{2( \mathfrak{a}_{1}-i\mathfrak{a}_{2})}g_{k-1,q,m,s+1}\bigg{\}}. \tag{2.94}\]
Thus, substituting the hypothesis (2.93) into \((\mathfrak{a}_{1}-i\mathfrak{a}_{2})N_{i}\), \(i=0,1,2\), and \(-i\chi_{t}+(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\left[-\Delta\chi+\frac{1}{r^{2} }\chi+\mathcal{V}_{0}\partial_{r}\chi+\mathcal{V}_{1}\chi+\mathcal{V}_{2} \bar{\chi}\right]\), we get
\[-i\chi_{t}+(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\left[-\Delta \chi+\frac{1}{r^{2}}\chi+\mathcal{V}_{0}\partial_{r}\chi+\mathcal{V}_{1}\chi +\mathcal{V}_{2}\bar{\chi}\right]\] \[\qquad=\sum_{\genfrac{}{}{0.0pt}{}{q\geq 0}{k\geq 2}}t^{2\nu q+k-2} \sum_{m\in\natural}\sum_{s=0}^{q}e^{-im\Phi}(\ln r-\ln t)^{s}\Psi_{k,q,m,s}^{ lin},\] \[(\mathfrak{a}_{1}-i\mathfrak{a}_{2})N_{0}(\chi,\bar{\chi})=\sum_{ \genfrac{}{}{0.0pt}{}{q\geq 0}{k\geq 4}}t^{2\nu q+k-2}\sum_{m\in\natural}\sum_{s=0}^{q}e^{ -im\Phi}(\ln r-\ln t)^{s}\Psi_{k,q,m,s}^{nl,0},\] \[(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\chi_{r}N_{1}(\chi,\bar{\chi}) =\sum_{\genfrac{}{}{0.0pt}{}{q\geq 0}{k\geq 3}}t^{2\nu q+k-2}\sum_{m\in\natural} \sum_{s=0}^{q}e^{-im\Phi}(\ln r-\ln t)^{s}\Psi_{k,q,m,s}^{nl,1}, \tag{2.95}\]
\[(\mathfrak{a}_{1}-i\mathfrak{a}_{2})(\chi_{r})^{2}N_{2}(\chi,\bar{\chi})=\sum_{ \genfrac{}{}{0.0pt}{}{g\geq 0}{k\geq 2}}t^{2\nu q+k-2}\sum_{m\in\natural}\sum_{s=0}^{q}e^{-im \Phi}(\ln r-\ln t)^{s}\Psi_{k,q,m,s}^{nl,2},\]
where
\[\Psi_{k,q,m,s}^{lin}=\frac{m(1+m)}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}r^{2} g_{k,q,m,s}+\Psi_{k,q,m,s}^{lin,1}+\Psi_{k,q,m,s}^{lin,2}, \tag{2.96}\]
where \(\Psi_{k,q,m,s}^{lin,1}\) and \(\Psi_{k,q,m,s}^{lin,2}\) depend only on \(g_{k-1,q,m,s^{\prime}}\), \(s^{\prime}=s,s+1\) and \(g_{k-2,q,m,s^{\prime}}\), \(s^{\prime}=s,s+1,s+2\), respectively. In fact, they can be written as
\[\Psi_{k,q,m,s}^{lin,1}= \bigg{[}-i(2\nu q+k-1-2im\alpha_{0})+m^{2}r\varphi_{r}+\frac{im}{ 2}+\frac{imr}{2}\partial_{r}\] \[-\left(\mathcal{V}_{0}-\frac{1}{r}\right)\frac{imr}{2}\bigg{]}g_ {k-1,q,m,s}+i(m+1)(s+1)g_{k-1,q,m,s+1}, \tag{2.98}\] \[\Psi_{k,q,m,s}^{lin,2}= \bigg{[}-(\mathfrak{a}_{1}-i\mathfrak{a}_{2})(-im\varphi_{rr}-m^{ 2}\varphi_{r}^{2}-2im\varphi_{r}\partial_{r}+\partial_{rr})\] \[+(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\left(\mathcal{V}_{0}-\frac {1}{r}\right)(-im\varphi_{r}+\partial_{r})+\mathcal{V}_{1}+\frac{1}{r^{2}} \bigg{]}g_{k-2,q,m,s}\] \[-(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\bigg{[}-\frac{2im(s+1)}{r} \varphi_{r}-\frac{s+1}{r^{2}}+\frac{2(s+1)}{r}\partial_{r}\] \[-\left(\mathcal{V}_{0}-\frac{1}{r}\right)\frac{s+1}{r}\bigg{]}g_ {k-2,q,m,s+1}\] \[-(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\frac{(s+1)(s+2)}{r^{2}}g_ {k-2,q,m,s+2}\] \[+(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{V}_{2}\bar{g}_{k-2,q,-m,s}. \tag{2.97}\]
Here and below we assume that \(g_{k,q,m,s}=0\) if \((k,q,m,s)\notin\Omega\), where
\(\Omega=\left\{(k,q,m,s)|k\geqslant 1,\,q\geqslant 0,\,0\leqslant s\leqslant q, \,q-m\in 2\mathbb{Z},\,-\min\{k,q\}\leqslant m\leqslant\min\{k-1,q\}\right\}.\)
Note that \(\Psi_{k,q,m,s}^{nl,i}\), \(i=0,1\), depend only on \(g_{k^{\prime},q^{\prime},m^{\prime},s^{\prime}}\), where \(k^{\prime}\leqslant k-2\), i.e.,
\[\Psi_{k,q,m,s}^{nl,0} =\Psi_{k,q,m,s}^{nl,0}\left(r;g_{k^{\prime},q^{\prime},m^{\prime},s^{\prime}},\,k^{\prime}\leqslant k-3\right),\] \[\Psi_{k,q,m,s}^{nl,1} =\Psi_{k,q,m,s}^{nl,1}\left(r;g_{k^{\prime},q^{\prime},m^{\prime},s^{\prime}},\,k^{\prime}\leqslant k-2\right).\]
By (2.94), \(\Psi_{k,q,m,s}^{nl,2}\) has the following structure:
\[\Psi_{2,q,m,s}^{nl,2}=-\delta_{m,-2}\frac{r^{2}\bar{f}_{0}}{2(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\left(1+|f_{0}|^{2}\right)}\sum_{\substack{q_{1}+q_{2}=q\\ s_{1}+s_{2}=s}}g_{1,q_{1},-1,s_{1}}g_{1,q_{2},-1,s_{2}},\] \[\Psi_{k,q,m,s}^{nl,2}=\Psi_{k,q,m,s}^{nl,2,0}+\tilde{\Psi}_{k,q,m,s}^{nl,2},\quad k\geqslant 3,\] \[\Psi_{k,q,m,s}^{nl,2,0}=\frac{(m+1)r^{2}\bar{f}_{0}}{\left(\mathfrak{a}_{1}-i\mathfrak{a}_{2}\right)\left(1+|f_{0}|^{2}\right)}\sum_{\substack{q_{1}+q_{2}=q\\ s_{1}+s_{2}=s}}g_{1,q_{1},-1,s_{1}}g_{k-1,q_{2},m+1,s_{2}}, \tag{2.99}\]
where \(\tilde{\Psi}_{k,q,m,s}^{nl,2}\) depends only on \(g_{k^{\prime},q^{\prime},m^{\prime},s^{\prime}}\) (\(k^{\prime}\leqslant k-2\)), i.e.,
\[\tilde{\Psi}_{k,q,m,s}^{nl,2}=\tilde{\Psi}_{k,q,m,s}^{nl,2}\left(r;\,g_{k^{ \prime},q^{\prime},m^{\prime},s^{\prime}},\,k^{\prime}\leqslant k-2\right).\]
Note that
\[\Psi_{k,q,-1,s}^{nl,2,0}=0,\quad\forall k,q,s.\]
Thus, (2.91) is equivalent to
\[\begin{cases}\Psi^{lin}_{2,0,0,0}+(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{D}_ {0}=0,\\ \Psi^{lin}_{k,q,m,s}+\Psi^{nl}_{k,q,m,s}=0,\quad(k,q,m,s)\in\Omega,\ (k,q,m,s)\neq(2,0,0,0), \end{cases} \tag{2.100}\]
where \(\Psi^{nl}_{k,q,m,s}=\Psi^{nl,0}_{k,q,m,s}+\Psi^{nl,1}_{k,q,m,s}+\Psi^{nl,2}_{k, q,m,s}\).
Now we rewrite (2.100) as the following system for \(k\geq 1\):
\[\begin{cases}\Psi^{lin}_{2,0,0,0}+(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{ D}_{0}=0,\\ \Psi^{lin}_{2,2j,0,s}=0,\quad(j,s)\neq(0,0),\\ \Psi^{lin}_{2,2j+1,-1,s}=0,\end{cases} \tag{2.101}\]
and
\[\begin{cases}\Psi^{lin}_{k+1,q,m,s}+\Psi^{nl}_{k+1,q,m,s}=0,\quad m=0,-1,\ k \geq 2,\\ \Psi^{lin}_{k,q,m,s}+\Psi^{nl}_{k,q,m,s}=0,\quad m\neq 0,-1,\ k\geq 2.\end{cases} \tag{2.102}\]
For (2.101), select \(\varphi\) as
\[\varphi(r)=-i\int_{0}^{r}\frac{\bar{f}_{0}(s)\partial_{s}f_{0}(s)-f_{0}(s) \partial_{s}\bar{f}_{0}(s)}{1+\left|f_{0}(s)\right|^{2}}ds. \tag{2.103}\]
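We remark (a direct computation) that \(\varphi\) is real-valued: since \(\bar{f}_{0}\partial_{s}f_{0}-f_{0}\partial_{s}\bar{f}_{0}=2i\operatorname{Im}\left(\bar{f}_{0}\partial_{s}f_{0}\right)\), the definition (2.103) can be rewritten as
\[\varphi(r)=\int_{0}^{r}\frac{2\operatorname{Im}\left(\bar{f}_{0}(s)\partial_{s}f_{0}(s)\right)}{1+\left|f_{0}(s)\right|^{2}}\,ds.\]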
By (2.97), we rewrite (2.101) in the form:
\[\begin{cases}g_{1,0,0,0}=-i(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{D}_{0},\\ (4\nu j+1)g_{1,2j,0,s}-(s+1)g_{1,2j,0,s+1}=0,\quad(j,s)\neq(0,0),\\ \left[2\nu(2j+1)+2i\alpha_{0}+2-r\frac{d}{dr}\ln\left(1+\left|f_{0}\right|^{2 }\right)\right]g_{1,2j+1,-1,s}\\ +r\partial_{r}g_{1,2j+1,-1,s}=0.\end{cases} \tag{2.104}\]
By (2.89), we get a solution of (2.104):
\[\begin{cases}&g_{1,0,0,0}=-i(\mathfrak{a}_{1}-i\mathfrak{a}_{2})\mathcal{D}_ {0},\\ &g_{1,2j,0,s}=0,\quad(j,s)\neq(0,0),\\ &g_{1,2j+1,-1,s}=\beta_{1}(j,s)\left(1+\left|f_{0}\right|^{2}\right)r^{-2i \alpha_{0}-2\nu(2j+1)-2},\\ &0\leq s\leq 2j+1,\,0\leq j\leq N,\\ &g_{1,2j+1,-1,s}=0,\quad j>N,\end{cases} \tag{2.105}\]
where \(\beta_{1}(j,l)=\hat{w}_{0}^{j,l,-1}\).
In order to solve (2.102), we first introduce some new notation: For \(m\in\mathbb{Z}\), let \(\mathcal{A}_{m}\) be the space consisting of all continuous functions \(a:\mathbb{R}_{+}\to\mathbb{C}\) satisfying
**(i):**: \(a\in C^{\infty}(\mathbb{R}_{+}^{*})\) and \(\operatorname{supp}(a)\subset\{r\leq 2\delta\}\);
**(ii):**: For \(0\leq r<\delta\), \(a\) has the following absolutely convergent expansion:
\[a(r)=\sum_{\substack{n\geq K(m)\\ n-m-1\in 2\mathbb{Z}}}\sum_{l=0}^{n}\alpha_{n,l}(\ln r)^{l}r^{2\nu n},\]
where \(K(m)\) is defined by
\[K(m)=\begin{cases}m+1,\quad m\geq 0,\\ \left|m\right|-1,\quad m\leq-1.\end{cases}\]
In addition, for \(k\geq 1\), let \(\mathcal{B}_{k}\) be the space consisting of all continuous functions \(b:\mathbb{R}_{+}\to\mathbb{C}\) satisfying
**(i):**: \(b\in C^{\infty}(\mathbb{R}^{*}_{+})\);
**(ii):**: For \(0\leqslant r<\delta\), \(b\) has the following absolutely convergent expansion:
\[b(r)=\sum_{n=0}^{\infty}\sum_{l=0}^{2n}\beta_{n,l}(\ln r)^{l}r^{4\nu n};\]
**(iii):**: For \(r\geqslant 2\delta\), \(b\) is a polynomial with degree \(k-1\).
Finally, set \(\mathcal{B}^{0}_{k}=\{b\in\mathcal{B}_{k}\,|\,b(0)=0\}\).
Thus, for any \(m\) and \(k\), we have \(r\partial_{r}\mathcal{A}_{m}\subset\mathcal{A}_{m}\), \(r\partial_{r}\mathcal{B}_{k}\subset\mathcal{B}_{k}\) and \(\mathcal{B}_{k}\mathcal{A}_{m}\subset\mathcal{A}_{m}\). In addition, note that
\[f_{0}\in\mathcal{A}_{0},\quad\varphi\in\mathcal{B}^{0}_{1}, \quad g_{1,0,0,0}\in r^{2i\alpha_{0}-2}\mathcal{A}_{0},\] \[g_{1,2j+1,-1,s}\in r^{-2i\alpha_{0}-2\nu(2j+1)-2}\mathcal{B}_{1 },\quad 0\leqslant s\leqslant 2j+1.\]
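The closure properties above can be verified directly on the expansions (a routine check); for instance, for \(a\in\mathcal{A}_{m}\), termwise differentiation of the expansion in (ii) gives
\[r\partial_{r}\left[(\ln r)^{l}r^{2\nu n}\right]=2\nu n\,(\ln r)^{l}r^{2\nu n}+l\,(\ln r)^{l-1}r^{2\nu n},\]
so \(r\partial_{r}a\) admits an expansion of the same type over the same index set, while the support condition in (i) is clearly preserved; this gives \(r\partial_{r}\mathcal{A}_{m}\subset\mathcal{A}_{m}\).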
Furthermore, for any \((k,q,m,s)\in\Omega\), if \(g_{k,q,m,s}\in r^{2(1+2m)i\alpha_{0}-2\nu q-2k}\mathcal{A}_{m}\) (\(m\neq-1\)) and \(g_{k,q,-1,s}\in r^{-2i\alpha_{0}-2\nu q-2k}\mathcal{B}_{k}\), then
\[\begin{array}{l}\Psi^{lin,i}_{k,q,m,s},\;\Psi^{nl,j}_{k,q,m,s},\;\tilde{\Psi}^{nl,2}_{k,q,m,s}\in r^{2(1+2m)i\alpha_{0}-2\nu q-2(k-1)}\mathcal{A}_{m},\quad m\neq-1,\\ \Psi^{lin,2}_{k,q,-1,s},\;\Psi^{nl,j}_{k,q,-1,s},\;\tilde{\Psi}^{nl,2}_{k,q,-1,s}\in r^{-2i\alpha_{0}-2\nu q-2(k-1)}\mathcal{B}_{k-2},\end{array} \tag{2.106}\]
where \(i=1,2\), \(j=0,1,2\).
For (2.102), according to (2.96), (2.97), (2.98), (2.99) and (2.103), we can rewrite it as
\[\begin{cases}(2\nu q+k)g_{k,q,0,s}-(s+1)g_{k,q,0,s+1}=C_{k,q,0,s}+D_{k,q,s},\\ r\partial_{r}g_{k,q,-1,s}+\left(2\nu q+k+1-\frac{r\left(\bar{f}_{0}\partial_{r}f_{0}+f_{0}\partial_{r}\bar{f}_{0}\right)}{1+|f_{0}|^{2}}\right)g_{k,q,-1,s}=C_{k,q,-1,s},\\ \frac{m(1+m)}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}r^{2}g_{k,q,m,s}=B_{k,q,m,s},\quad m\neq 0,-1,\end{cases} \tag{2.107}\]
where \(B_{k,q,m,s}\) and \(C_{k,q,m,s}\) depend only on \(g_{k^{\prime},q^{\prime},m^{\prime},s^{\prime}}\), \(k^{\prime}\leqslant k-1\), i.e.,
\[\begin{array}{l}B_{k,q,m,s}=B_{k,q,m,s}\left(r;\,g_{k^{\prime},q^{\prime},m ^{\prime},s^{\prime}},\,k^{\prime}\leqslant k-1\right),\quad m\neq 0,-1,\\ C_{k,q,m,s}=C_{k,q,m,s}\left(r;\,g_{k^{\prime},q^{\prime},m^{\prime},s^{ \prime}},\,k^{\prime}\leqslant k-1\right),\quad m=0,-1,\end{array}\]
More precisely, they have the form:
\[\begin{array}{l}B_{k,q,m,s}=-\Psi^{lin,1}_{k,q,m,s}-\Psi^{lin,2}_{k,q,m,s}- \Psi^{nl}_{k,q,m,s},\quad m\neq 0,-1,\\ C_{k,q,m,s}=-i\Psi^{lin,2}_{k+1,q,m,s}-i\check{\Psi}^{nl}_{k+1,q,m,s},\quad m= 0,-1.\end{array} \tag{2.108}\]
Finally, \(D_{k,q,s}\) depends only on \(g_{k,q^{\prime},1,s^{\prime}}\) (together with the known coefficients \(g_{1,q^{\prime},-1,s^{\prime}}\)):
\[D_{k,q,s}=-i\Psi^{nl,2,0}_{k+1,q,0,s}=\frac{-ir^{2}\bar{f}_{0}}{\left(\mathfrak{a}_{1}-i\mathfrak{a}_{2}\right)\left(1+\left|f_{0}\right|^{2}\right)}\sum_{\substack{q_{1}+q_{2}=q\\ s_{1}+s_{2}=s}}g_{1,q_{1},-1,s_{1}}g_{k,q_{2},1,s_{2}}, \tag{2.109}\]
where \(D_{2,q,s}=0\).
_Remark 2.11_.: It can be verified that if
\[\begin{array}{l}g_{k,q,m,s}=0,\quad\forall q>(2N+1)(2k-2),\ m\neq 0,-1,\\ g_{k,q,m,s}=0,\quad\forall q>(2N+1)(2k-1),\ m=0,-1,\end{array}\]
then
\[\begin{array}{l}B_{k,q,m,s}=0,\quad\forall q>(2N+1)(2k-2),\ m\neq 0,-1,\\ C_{k,q,m,s}=0,\quad\forall q>(2N+1)(2k-1),\ m=0,-1,\end{array}\]
\[D_{k,q,s}=0,\quad\forall q>(2N+1)(2k-1).\]
Now we prove the following result.
**Lemma 2.12**.: _The system (2.107) has a unique solution \(\left(g_{k,q,m,s}\right)_{(k,q,m,s)\in\Omega}\), which satisfies_
\[g_{k,q,m,s}\in r^{2(2m+1)i\alpha_{0}-2\nu q-2k}\mathcal{A}_{m}, \quad m\neq-1,\] \[g_{k,q,-1,s}\in r^{-2i\alpha_{0}-2\nu q-2k}\mathcal{B}_{k}. \tag{2.110}\]
_Moreover,_
\[g_{k,q,m,s}=0,\quad\forall q>(2N+1)(2k-2),\ m\neq 0,-1,\] \[g_{k,q,m,s}=0,\quad\forall q>(2N+1)(2k-1),\ m=0,-1. \tag{2.111}\]
Proof.: For the case of \(k=2\), by (2.107), (2.108) and (2.99), we get
\[(4\nu j+2)g_{2,2j,0,s}-(s+1)g_{2,2j,0,s+1}=C_{2,2j,0,s},\quad 0\leqslant s\leqslant 2j,\,0\leqslant j, \tag{2.112}\]
\[r\partial_{r}g_{2,2j+1,-1,s}+\left(2\nu(2j+1)+3-\frac{r\left(\bar{f}_{0}\partial_{r}f_{0}+f_{0}\partial_{r}\bar{f}_{0}\right)}{1+\left|f_{0}\right|^{2}}\right)g_{2,2j+1,-1,s}=C_{2,2j+1,-1,s},\quad 0\leqslant s\leqslant 2j+1,\,0\leqslant j, \tag{2.113}\]
\[\frac{1}{2(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}r^{2}g_{2,2j,-2,s}=B_{2,2j,-2,s},\quad 0\leqslant s\leqslant 2j,\,1\leqslant j. \tag{2.114}\]
Note that \(B_{2,q,m,s}\) and \(C_{2,q,m,s}\) depend only on \(g_{1,q^{\prime},m^{\prime},s^{\prime}}\), so they can be regarded as known here. By (2.106), (2.108) and Remark 2.11, they satisfy
\[B_{2,q,-2,s}\in r^{-6i\alpha_{0}-2\nu q-2}\mathcal{A}_{-2},\] \[C_{2,q,0,s}\in r^{2i\alpha_{0}-2\nu q-4}\mathcal{A}_{0},\] \[C_{2,q,-1,s}\in r^{-2i\alpha_{0}-2\nu q-4}\mathcal{B}_{1},\] \[B_{2,q,-2,s}=0,\quad q>2(2N+1),\] \[C_{2,q,m,s}=0,\quad q>3(2N+1),\ m=0,-1.\]
Thus, by (2.112), (2.113) and (2.114), we get
\[g_{2,2j,-2,s}=\frac{2(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}{r^{2 }}B_{2,2j,-2,s}\in r^{-6i\alpha_{0}-2\nu q-4}\mathcal{A}_{-2},\] \[0\leqslant s\leqslant 2j,\ 1\leqslant j,\] \[g_{2,2j,0,2j}=\frac{1}{4j\nu+2}C_{2,2j,0,2j}\in r^{2i\alpha_{0} -4\nu j-4}\mathcal{A}_{0},\quad 0\leqslant j, \tag{2.115}\] \[g_{2,2j,0,s}=\frac{1}{4j\nu+2}C_{2,2j,0,s}+\frac{s+1}{4j\nu+2}g_ {2,2j,0,s+1}\in r^{2i\alpha_{0}-4\nu j-4}\mathcal{A}_{0},\] \[0\leqslant s\leqslant 2j,\] \[g_{2,2j,-2,s}=0,\quad j>2N+1,\] \[g_{2,2j,0,s}=0,\quad j\geqslant 3N+2,\]
For (2.113), we set
\[g_{2,2j+1,-1,s}=r^{-2i\alpha_{0}-3-2\nu(2j+1)}\left(1+\left|f_{0}\right|^{2} \right)\hat{g}_{2,2j+1,-1,s},\]
then \(\hat{g}_{2,2j+1,-1,s}\) satisfies
\[\partial_{r}\hat{g}_{2,2j+1,-1,s}=r^{-2}\hat{C}_{2,2j+1,-1,s}, \tag{2.116}\]
where
\[\hat{C}_{2,2j+1,-1,s}=r^{2i\alpha_{0}+4+2\nu(2j+1)}\frac{1}{1+\left|f_{0}\right|^{ 2}}C_{2,2j+1,-1,s}.\]
Since \(C_{2,2j+1,-1,s}\in r^{-2i\alpha_{0}-2\nu(2j+1)-4}\mathcal{B}_{1}\), we obtain that
**(i):**: For \(0\leq r<\delta\), \(\hat{C}_{2,2j+1,-1,s}\) has an absolutely convergent expansion of the following form:
\[\hat{C}_{2,2j+1,-1,s}=\sum_{n=0}^{\infty}\sum_{l=0}^{2n}\beta_{n,l}r^{4\nu n}( \ln r)^{l},\]
**(ii):**: For \(r\geq 2\delta\), \(\hat{C}_{2,2j+1,-1,s}\) is a constant.
Thus, (2.116) has a unique solution \(\hat{g}_{2,2j+1,-1,s}\in\frac{1}{r}\mathcal{B}_{2}\), which can be written as
\[\hat{g}_{2,2j+1,-1,s}(r)=\int_{0}^{r}\frac{1}{\rho^{2}}\left(\hat{C}_{2,2j+1,-1,s}(\rho)-\beta_{0,0}\right)d\rho-\frac{1}{r}\beta_{0,0},\qquad 0\leq s\leq 2j+1,\ 0\leq j.\]
Here the constant term \(\beta_{0,0}\) of \(\hat{C}_{2,2j+1,-1,s}\) is subtracted so that the integral converges at \(\rho=0\); its antiderivative \(-\beta_{0,0}/r\) accounts for the \(\frac{1}{r}\) leading behavior of \(\hat{g}_{2,2j+1,-1,s}\).
Finally, since \(C_{2,q,-1,s}=0\) when \(q>3(2N+1)\), we have
\[g_{2,2j+1,-1,s}=0,\quad j>3N+1.\]
We now argue by induction: suppose that for \(k=2,\cdots,l-1\) (\(l\geq 3\)), (2.107) has a solution \((g_{k,q,m,s})_{\stackrel{{(k,q,m,s)\in\Omega}}{{2\leq k\leq l-1}}}\) which satisfies (2.110) and (2.111). For the case \(k=l\), according to the last equation of (2.107), we get
\[\frac{m(m+1)}{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}r^{2}g_{l,q,m,s}=B_{l,q,m,s},\quad m\neq 0,-1,\]
where \(B_{l,q,m,s}\) are known, and by (2.106), (2.108) and Remark 2.11, they satisfy
\[B_{l,q,m,s}\in r^{2(2m+1)i\alpha_{0}-2\nu q-2(l-1)}\mathcal{A}_{m},\]
\[B_{l,q,m,s}=0,\quad q>2(2N+1)(2l-2).\]
Thus, we obtain that if \(m\neq 0,-1\), then
\[g_{l,q,m,s}=\frac{4(\mathfrak{a}_{1}-i\mathfrak{a}_{2})}{m(m+1) r^{2}}B_{l,q,m,s}\in r^{2(2m+1)i\alpha_{0}-2\nu q-2l}\mathcal{A}_{m},\] \[g_{l,q,m,s}=0,\quad q>2(2N+1)(2l-2). \tag{2.117}\]
Next, we consider the equation of \(g_{l,2j,0,s}\):
\[(4\nu j+l)g_{l,2j,0,s}-(s+1)g_{l,2j,0,s+1}=C_{l,2j,0,s}+D_{l,2j,s},\quad 0 \leq s\leq 2j,\ 0\leq j. \tag{2.118}\]
Note that the term on the right hand side \(C_{l,2j,0,s}+D_{l,2j,s}\) depends only on \(g_{l,q_{1},1,s_{1}}\) and \(g_{k,q_{2},m_{2},s_{2}}\), \(k\leq l-1\). Moreover, by (2.106), (2.108), (2.117) and Remark 2.11, it satisfies
\[C_{l,2j,0,s}+D_{l,2j,s}\in r^{2i\alpha_{0}-4\nu j-2l}\mathcal{A }_{0},\] \[C_{l,2j,0,s}+D_{l,2j,s}=0,\quad j>(2N+1)(2l-1).\]
Therefore, the solutions of (2.118) satisfy
\[g_{l,2j,0,s}\in r^{2i\alpha_{0}-4\nu j-2l}\mathcal{A}_{0},\quad 0 \leq s\leq 2j,\ 0\leq j,\] \[g_{l,2j,0,s}=0,\quad j>(2N+1)(2l-1).\]
Finally, for \(g_{l,2j+1,-1,s}\), \(0\leqslant s\leqslant 2j+1\), \(0\leqslant j\), by (2.107), we have
\[\left(2\nu(2j+1)+l+1-\frac{r\left(\bar{f}_{0}\partial_{r}f_{0}+f_{0}\partial_{r}\bar{f}_{0}\right)}{1+\left|f_{0}\right|^{2}}\right)g_{l,2j+1,-1,s}+r\partial_{r}g_{l,2j+1,-1,s}=C_{l,2j+1,-1,s}, \tag{2.119}\]
where \(C_{l,2j+1,-1,s}\in r^{-2i\alpha_{0}-2\nu(2j+1)-2l}\mathcal{B}_{l-1}\) satisfies
\[C_{l,2j+1,-1,s}=0,\quad 2j>(2N+1)(2l-1). \tag{2.120}\]
Equation (2.119) has a unique solution \(g_{l,2j+1,-1,s}\in r^{-2i\alpha_{0}-2\nu(2j+1)-2l}\mathcal{B}_{l}\). More precisely, it can be written as
\[g_{l,2j+1,-1,s}=r^{-2i\alpha_{0}-2\nu(2j+1)-l-1}\left(1+\left|f_{0}\right|^{2}\right)\hat{g}_{l,2j+1,-1,s},\] \[\hat{g}_{l,2j+1,-1,s}=\int_{0}^{r}\rho^{-l}\left[\,\hat{C}_{l,2j+1,-1,s}-\sum_{0\leqslant n\leqslant\frac{l-1}{4\nu}}\sum_{p=0}^{2n}\beta_{n,p}\rho^{4\nu n}(\ln\rho)^{p}\,\right]d\rho-\int_{r}^{\infty}\rho^{-l}\sum_{0\leqslant n\leqslant\frac{l-1}{4\nu}}\sum_{p=0}^{2n}\beta_{n,p}\rho^{4\nu n}(\ln\rho)^{p}\,d\rho,\]
where
\[\hat{C}_{l,2j+1,-1,s}=r^{2i\alpha_{0}+2\nu(2j+1)+2l}\frac{1}{1+\left|f_{0}\right|^{2}}C_{l,2j+1,-1,s},\] \[\hat{C}_{l,2j+1,-1,s}=\sum_{n=0}^{\infty}\sum_{p=0}^{2n}\beta_{n,p}r^{4\nu n}(\ln r)^{p},\quad r<\delta.\]
By (2.120), we get
\[g_{l,2j+1,-1,s}=0,\quad 2j+1>(2N+1)(2l-1).\]
We define
\[w_{\rm rem}^{(N)}(r,t)=f_{0}(r)+\sum_{{(k,q,m,s)\in\Omega}\atop{ k\leqslant N}}t^{k+2\nu q}e^{-im\Phi}(\ln r-\ln t)^{s}g_{k,q,m,s}(r),\] \[A_{\rm rem}^{(N)}(r,t)=-i\partial_{t}w_{\rm rem}^{(N)}-\Delta w_{ \rm rem}^{(N)}+\frac{1}{r^{2}}w_{\rm rem}^{(N)}+G\left(w_{\rm rem}^{(N)},\bar {w}_{\rm rem}^{(N)},\partial_{r}w_{\rm rem}^{(N)}\right),\] \[W_{\rm rem}^{(N)}(r,t)=w_{\rm rem}^{(N)}\left(rt^{-\frac{1}{2}}, (\mathfrak{a}_{1}-i\mathfrak{a}_{2})t\right).\]
According to the previous analysis, we obtain
**Lemma 2.13**.: _There exist \(T(N,\delta)>0\) and \(C=C(\mathfrak{a}_{1},\mathfrak{a}_{2},\nu)>0\) such that for any \(0<t\leqslant T(N,\delta)\), the following hold._
**(i):**: _For any \(0\leqslant l\), \(k\leqslant 4\), \(i=0,1\) and \(\frac{1}{10}t^{-\varepsilon_{2}}\leqslant y\leqslant 10t^{-\varepsilon_{2}}\), if \(N\) is sufficiently large (depending on \(\varepsilon_{2}\)), then_
\[\left|y^{-l}\partial_{y}^{k}\partial_{t}^{i}\left(W_{ss}^{(N)}-W_{\rm rem}^{(N )}\right)\right|\leqslant t^{\nu(1-2\varepsilon_{2})N}+t^{\varepsilon_{2}N}. \tag{2.121}\]
**(ii):**: _The profile_ \(w_{\text{rem}}^{(N)}(r,t)\) _satisfies_
\[\left\|r^{-l}\partial_{r}^{k}\left(w_{\text{rem}}^{(N)}(t)-f_{0}\right)\right\|_{L^{2}(rdr,\,r\geqslant\frac{1}{10}t^{\frac{1}{2}-\varepsilon_{2}})}\leqslant Ct^{\eta},\quad 0\leqslant k+l\leqslant 3, \tag{2.122}\]
\[\left\|r\partial_{r}w_{\text{rem}}^{(N)}(t)\right\|_{L^{\infty}(r\geqslant\frac{1}{10}t^{\frac{1}{2}-\varepsilon_{2}})}\leqslant C\delta^{2\nu}, \tag{2.123}\]
\[\left\|r^{-l}\partial_{r}^{k}w_{\text{rem}}^{(N)}(t)\right\|_{L^{\infty}(r\geqslant\frac{1}{10}t^{\frac{1}{2}-\varepsilon_{2}})}\leqslant C\left(\delta^{2\nu-k-l}+t^{\nu-\frac{k+l}{2}+\eta}\right),\quad 0\leqslant k+l\leqslant 4, \tag{2.124}\]
\[\left\|r^{-l-1}\partial_{r}^{k}w_{\text{rem}}^{(N)}(t)\right\|_{L^{\infty}(r\geqslant\frac{1}{10}t^{\frac{1}{2}-\varepsilon_{2}})}\leqslant C\left(\delta^{2\nu-6}+t^{\nu-3+\eta}\right),\quad k+l=5. \tag{2.125}\]
**(iii):**: _If \(N\) is sufficiently large, then the error \(A_{\text{rem}}^{(N)}(r,t)\) satisfies_
\[\left\|r^{-l}\partial_{r}^{k}\partial_{t}^{i}A_{\text{rem}}^{(N)}(t)\right\|_ {L^{2}(rdr,r\geqslant\frac{1}{10}t^{\frac{1}{2}-\varepsilon_{2}})}\leqslant t ^{\varepsilon_{2}N},\quad 0\leqslant k+l\leqslant 3,\,i=0,1. \tag{2.126}\]
### Proof of Proposition 2.1
Now we start to prove Proposition 2.1. Fix \(\varepsilon_{2}\) satisfying \(0<\varepsilon_{2}<\frac{1}{2}\). For \(N\geqslant 2\), we define
\[\hat{W}_{\text{ex}}^{(N)}(\rho,t)= \theta\left(t^{\nu-\varepsilon_{1}}\rho\right)W_{\text{in}}^{(N) }\left(t^{\nu}\rho,t\right)\] \[+\left[1-\theta\left(t^{\nu-\varepsilon_{1}}\rho\right)\right] \theta\left(t^{\nu+\varepsilon_{2}}\rho\right)W_{ss}^{(N)}\left(t^{\nu}\rho,t\right)\] \[+\left(1-\theta\left(t^{\nu+\varepsilon_{2}}\rho\right)\right)w _{\text{rem}}^{(N)}\left(t^{\nu+\frac{1}{2}}\rho,t\right),\] \[V_{\text{ex}}^{(N)}(\rho,t)= \left(\frac{2\operatorname{Re}\left(\hat{W}_{\text{ex}}^{(N)} \right)}{1+\left|\hat{W}_{\text{ex}}^{(N)}\right|^{2}},\frac{2\operatorname{Im }\left(\hat{W}_{\text{ex}}^{(N)}\right)}{1+\left|\hat{W}_{\text{ex}}^{(N)} \right|^{2}},\frac{1-\left|\hat{W}_{\text{ex}}^{(N)}\right|^{2}}{1+\left|\hat{ W}_{\text{ex}}^{(N)}\right|^{2}}\right)\!.\]
Then \(V_{\text{ex}}^{(N)}(\rho,t)\) is well-defined for \(\rho\) sufficiently large. In addition, for \(\rho<t^{-\nu+\varepsilon_{1}}\), we take \(V_{\text{ex}}^{(N)}(\rho,t)\) to be \(V_{\text{in}}^{(N)}(\rho,t)\). That is, we assume
\[V^{(N)}(\rho,t)=V_{\text{in}}^{(N)}(\rho,t),\quad\rho\leqslant \frac{1}{2}t^{-\nu+\varepsilon_{1}},\] \[V^{(N)}(\rho,t)=V_{\text{ex}}^{(N)}(\rho,t),\quad\rho\geqslant \frac{1}{2}t^{-\nu+\varepsilon_{1}},\] \[u^{(N)}(x,t)=e^{(\alpha(t)+\theta)R}V^{(N)}\left(\lambda(t)|x|,t \right).\]
Thus, we obtain a \(1\)-equivariant \(C^{\infty}\) profile \(u^{(N)}:\mathbb{R}^{2}\times\mathbb{R}_{+}^{*}\to\mathbb{S}^{2}\). According to \(\mathbf{(i)}\) in Lemma 2.6, \(\mathbf{(ii)}\) in Lemma 2.10 and \(\mathbf{(ii)}\) in Lemma 2.13, we obtain that for any \(N\geqslant 2\), \(u^{(N)}\) satisfies \(\mathbf{(i)}\) in Proposition 2.1, where \(\zeta_{N}^{*}\) is given by
\[\zeta_{N}^{*}(x)=e^{\theta R}\hat{\zeta}_{N}^{*}\left(|x|\right),\]
here \(\hat{\zeta}_{N}^{*}\) is defined by
\[\hat{\zeta}_{N}^{*}=\left(\frac{2\operatorname{Re}\left(f_{0} \right)}{1+\left|f_{0}\right|^{2}},\frac{2\operatorname{Im}\left(f_{0}\right)}{ 1+\left|f_{0}\right|^{2}},\frac{1-\left|f_{0}\right|^{2}}{1+\left|f_{0}\right|^ {2}}\right).\]
According to \(\mathbf{(ii)}\) in Lemma 2.6, \(\mathbf{(i)}\) in Lemma 2.10 and \(\mathbf{(i)}\) and \(\mathbf{(iii)}\) in Lemma 2.13, we obtain that for any \(N\) sufficiently large, the error \(r^{(N)}=-u_{t}^{(N)}+\mathfrak{a}_{1}u^{(N)}\times\Delta u^{(N)}-\mathfrak{a}_ {2}u^{(N)}\times(u^{(N)}\times\Delta u^{(N)})\) satisfies
\[\left\|r^{(N)}(t)\right\|_{H^{3}}+\left\|\partial_{t}r^{(N)}(t)\right\|_{H^{1}}+\left\|\langle x\rangle r^{(N)}(t)\right\|_{L^{2}}\leqslant t^{\eta N},\quad t\leqslant T(N,\delta),\]
where \(\eta=\eta(\nu,\varepsilon_{2})>0\). Replacing \(N\) by \(N/\eta\), we obtain a family of approximate solutions \(u^{(N)}(t)\) satisfying Proposition 2.1.
## 3. The proof of Theorem 1.1
In this section, we prove Theorem 1.1 by a compactness argument, which relies on the following auxiliary proposition.
### Auxiliary proposition
Let \(u^{(N)}\) and \(T=T(N,\delta)\) be as in Proposition 2.1. We consider the following Cauchy problem:
\[\begin{cases}&u_{t}=\mathfrak{a}_{1}u\times\Delta u-\mathfrak{a}_{2}u\times(u \times\Delta u),\quad t\geq t_{1},\\ &u|_{t=t_{1}}=u^{(N)}(t_{1}),\end{cases} \tag{3.1}\]
where \(0<t_{1}<T\).
We have the following result.
**Proposition 3.1**.: For any \(N\) sufficiently large, there exists \(0<t_{0}<T\) such that for any \(t_{1}\in(0,t_{0})\), (3.1) has a solution \(u(t)\) satisfying
**(i):**: \(u-u^{(N)}\in C\left([t_{1},t_{0}],H^{3}\right)\) and
\[\left\|u-u^{(N)}\right\|_{H^{3}}\leq t^{\frac{N}{2}},\quad\forall t_{1}\leq t \leq t_{0}. \tag{3.2}\]
**(ii):**: \(\langle x\rangle\left(u(t)-u^{(N)}(t)\right)\in L^{2}\) and
\[\left\|\langle x\rangle\left(u(t)-u^{(N)}(t)\right)\right\|_{L^{2}}\leq t^{ \frac{N}{2}},\quad\forall t_{1}\leq t\leq t_{0}. \tag{3.3}\]
Proof.: The proof is a bootstrap argument. Let
\[u^{(N)}(x,t)=e^{\alpha(t)R}U^{(N)}\left(\lambda(t)x,t\right),\] \[r^{(N)}(x,t)=\lambda^{2}(t)e^{\alpha(t)R}R^{(N)}\left(\lambda(t )x,t\right),\] \[u(x,t)=e^{\alpha(t)R}U\left(\lambda(t)x,t\right),\] \[U(y,t)=U^{(N)}(y,t)+S(y,t),\] \[U^{(N)}(y,t)=\phi(y)+\chi^{(N)}(y,t).\]
Then, \(S(t)\) solves
\[\begin{split}&t^{1+2\nu}S_{t}+\alpha_{0}t^{2\nu}RS-t^{2\nu}\left(\nu+\frac{1}{2}\right)y\cdot\nabla S\\ =&\,\mathfrak{a}_{1}\left(S\times\Delta U^{(N)}+U^{(N)}\times\Delta S+S\times\Delta S\right)\\ &-\mathfrak{a}_{2}U\times(U\times\Delta U)+\mathfrak{a}_{2}U^{(N)}\times\left(U^{(N)}\times\Delta U^{(N)}\right)+R^{(N)}\\ =&\,\mathfrak{a}_{1}\left(S\times\Delta U^{(N)}+U^{(N)}\times\Delta S+S\times\Delta S\right)+R^{(N)}\\ &-\mathfrak{a}_{2}U\times(U\times\Delta S)-\mathfrak{a}_{2}S\times\left(U\times\Delta U^{(N)}\right)-\mathfrak{a}_{2}U^{(N)}\times\left(S\times\Delta U^{(N)}\right).\end{split} \tag{3.4}\]
Suppose that
\[\|S\|_{L^{\infty}(\mathbb{R}^{2})}\leq\delta_{1}, \tag{3.5}\]
where \(\delta_{1}\) is sufficiently small. Note that \(S\) is \(1\)-equivariant and satisfies
\[2(\phi,S)+2(\chi^{(N)},S)+|S|^{2}=0, \tag{3.6}\]
where \(\left\|\chi^{(N)}\right\|_{L^{\infty}(\mathbb{R}^{2})}\leqslant C\delta^{2\nu}\) (see (2.5)). Thus, the bootstrap hypothesis (3.5) implies
\[\left\|S\right\|_{L^{\infty}(\mathbb{R}^{2})}\leqslant C\left\|\nabla S\right\|_ {L^{2}(\mathbb{R}^{2})}. \tag{3.7}\]
#### 3.1.1. Energy control
We first derive the bootstrap control for the following energy norm:
\[J_{1}(t)=\int_{\mathbb{R}^{2}}\left(|\nabla S|^{2}+\kappa(\rho)|S|^{2}\right) dy,\quad\rho=|y|.\]
By (3.4), we get
\[t^{1+2\nu}\frac{d}{dt}\int|\nabla S|^{2}dy\] \[= -2\mathfrak{a}_{1}\int(S\times\Delta U^{(N)},\Delta S)dy+2\int( \nabla R^{(N)},\nabla S)dy\] \[+2\mathfrak{a}_{2}\int\left(U\times\left(U\times\Delta S\right), \Delta S\right)dy+2\mathfrak{a}_{2}\int\left(S\times\left(U\times\Delta U^{(N )}\right),\Delta S\right)dy\] \[+2\mathfrak{a}_{2}\int\left(U^{(N)}\times\left(S\times\Delta U^ {(N)}\right),\Delta S\right)dy, \tag{3.8}\]
and
\[t^{1+2\nu}\frac{d}{dt}\int\kappa(\rho)|S|^{2}dy\] \[= -\left(\frac{1}{2}+\nu\right)t^{2\nu}\int\left(2\kappa+\rho \kappa^{\prime}\right)(S,S)dy\] \[+2\mathfrak{a}_{1}\int\kappa\left(U^{(N)}\times\Delta S,S\right) dy+2\int\kappa\left(R^{N},S\right)dy\] \[-2\mathfrak{a}_{2}\int\kappa\left(U\times\left(U\times\Delta S \right),\Delta S\right)dy\] \[-2\mathfrak{a}_{2}\int\kappa\left(S\times\left(U\times\Delta U^ {(N)}\right),\Delta S\right)dy\] \[-2\mathfrak{a}_{2}\int\kappa\left(U^{(N)}\times\left(S\times \Delta U^{(N)}\right),\Delta S\right)dy. \tag{3.9}\]
Since \(U^{(N)}=\phi+\chi^{(N)}\), where \(\phi\) satisfies \(\Delta\phi=\kappa\phi\), we get
\[(S\times\Delta\phi,\Delta S)-\kappa\left(\phi\times\Delta S,S\right)=0.\]
This combined with (3.8) and (3.9) gives
\[t^{1+2\nu}\frac{d}{dt}J_{1}(t)=\sum_{i=1}^{10}\mathcal{E}_{i},\]
where
\[\mathcal{E}_{1}= -2\mathfrak{a}_{1}\int\left(S\times\Delta\chi^{(N)},\Delta S\right)dy,\] \[\mathcal{E}_{2}= 2\mathfrak{a}_{1}\int\kappa\left(\chi^{(N)}\times\Delta S,S\right)dy,\] \[\mathcal{E}_{3}= -\left(\frac{1}{2}+\nu\right)t^{2\nu}\int\left(2\kappa+\rho\kappa^{\prime}\right)(S,S)dy,\]
\[\mathcal{E}_{4}= 2\int\left[\left(\nabla R^{(N)},\nabla S\right)+\kappa\left(R^{(N)}, S\right)\right]dy\] \[\mathcal{E}_{5}= 2\mathfrak{a}_{2}\int\left(U\times\left(U\times\Delta S\right), \Delta S\right)dy=-2\mathfrak{a}_{2}\int\left|\left(U\times\Delta S\right) \right|^{2}dy,\] \[\mathcal{E}_{6}= 2\mathfrak{a}_{2}\int\left(S\times\left(U\times\Delta U^{(N)} \right),\Delta S\right)dy=-2\mathfrak{a}_{2}\int\left(U\times\Delta U^{(N)},S \times\Delta S\right)dy,\] \[\mathcal{E}_{7}= 2\mathfrak{a}_{2}\int\left(U^{(N)}\times\left(S\times\Delta U^{ (N)}\right),\Delta S\right)dy=-2\mathfrak{a}_{2}\int\left(S\times\Delta U^{(N )},U^{(N)}\times\Delta S\right)dy,\] \[\mathcal{E}_{8}= -2\mathfrak{a}_{2}\int\kappa\left(U\times\left(U\times\Delta S \right),\Delta S\right)dy=2\mathfrak{a}_{2}\int\kappa\left|\left(U\times \Delta S\right)\right|^{2}dy,\] \[\mathcal{E}_{9}= -2\mathfrak{a}_{2}\int\kappa\left(S\times\left(U\times\Delta U ^{(N)}\right),\Delta S\right)dy=2\mathfrak{a}_{2}\int\kappa\left(U\times \Delta U^{(N)},S\times\Delta S\right)dy,\] \[\mathcal{E}_{10}= -2\mathfrak{a}_{2}\int\kappa\left(U^{(N)}\times\left(S\times \Delta U^{(N)}\right),\Delta S\right)dy=2\mathfrak{a}_{2}\int\kappa\left(S \times\Delta U^{(N)},U^{(N)}\times\Delta S\right)dy,\]
By Proposition 2.1, we obtain
\[\begin{split}|\mathcal{E}_{j}|&\leqslant Ct^{2\nu} \left\|S\right\|_{H^{1}}^{2},\quad j=1,2,3,\\ |\mathcal{E}_{4}|&\leqslant Ct^{N+\nu+\frac{1}{2}} \left\|\nabla S\right\|_{L^{2}},\\ \mathcal{E}_{5}+\mathcal{E}_{8}&\leqslant 0.\end{split} \tag{3.10}\]
For \(\mathcal{E}_{i}\), \(i=6,7,9,10\), we decompose \(U^{(N)}\) and \(S\) in the basis \(\{f_{1},f_{2},Q\}\):
\[\begin{split}& U^{(N)}(y,t)=e^{\theta R}\left[\left(1+z_{3}^{(N)} (\rho,t)\right)Q(\rho)+z_{1}^{(N)}(\rho,t)f_{1}(\rho)+z_{2}^{(N)}(\rho,t)f_{2} (\rho)\right],\\ & S(y,t)=e^{\theta R}\left[\zeta_{3}(\rho,t)Q(\rho)+\zeta_{1}( \rho,t)f_{1}(\rho)+\zeta_{2}(\rho,t)f_{2}(\rho)\right].\end{split} \tag{3.11}\]
Thus,
\[\mathcal{E}_{6}= -2\mathfrak{a}_{2}\int\left(U\times\Delta U^{(N)},S\times \Delta S\right)dy\] \[= -2\mathfrak{a}_{2}\int\left(\Upsilon_{1}\times\Upsilon_{2},\zeta \times\Upsilon_{3}\right)\rho d\rho,\]
where
\[z^{(N)}=\left(z_{1}^{(N)},z_{2}^{(N)},z_{3}^{(N)}\right),\quad \zeta=\left(\zeta_{1},\zeta_{2},\zeta_{3}\right),\] \[\Upsilon_{1}=\mathbf{k}+z^{(N)}+\zeta,\quad|\Upsilon_{1}|=1,\] \[\Upsilon_{2}=\Delta z^{(N)}+\Upsilon_{21},\] \[\Upsilon_{21}=\left(-\frac{z_{1}^{(N)}}{\rho^{2}}-2\frac{h_{1}}{ \rho}\partial_{\rho}z_{3}^{(N)},-\frac{z_{2}^{(N)}}{\rho^{2}},\kappa\left(1+ z_{3}^{(N)}\right)+2\frac{h_{1}}{\rho}\partial_{\rho}z_{1}^{(N)}-2\frac{h_{1}h_{3}}{ \rho^{2}}z_{1}^{(N)}\right),\] \[\Upsilon_{3}=\Delta\zeta+\Upsilon_{31},\] \[\Upsilon_{31}=\left(-\frac{\zeta_{1}}{\rho^{2}}-2\frac{h_{1}}{ \rho}\partial_{\rho}\zeta_{3},-\frac{\zeta_{2}}{\rho^{2}},\kappa\zeta_{3}+2 \frac{h_{1}}{\rho}\partial_{\rho}\zeta_{1}-2\frac{h_{1}h_{3}}{\rho^{2}}\zeta _{1}\right),\]
which satisfy
\[\begin{split}&|\partial_{\rho}\Upsilon_{1}|\leqslant C\left(\left| \partial_{\rho}z^{(N)}\right|+|\partial_{\rho}\zeta|\right),\\ &|\Upsilon_{21}|\leqslant C\frac{1}{\rho^{2}}\left(\left|\partial_ {\rho}z^{(N)}\right|+\left|z^{(N)}\right|\right),\\ &|\partial_{\rho}\Upsilon_{2}|\leqslant C\left(\left|\partial_{ \rho}^{3}z^{(N)}\right|+\rho^{-3}|z^{(N)}|+\frac{1}{\rho^{2}}|\partial_{\rho}z ^{(N)}|+|z^{(N)}|\right),\\ &|\Upsilon_{31}|\leqslant C\frac{1}{\rho^{2}}\left(|\zeta|+| \partial_{\rho}\zeta|\right).\end{split} \tag{3.12}\]
By Proposition 2.1, we get
\[\begin{split}|\mathcal{E}_{6}|=&\left|2\mathfrak{a}_{2}\int\left(\partial_{\rho}\left(\Upsilon_{1}\times\Upsilon_{2}\right),\zeta\times\partial_{\rho}\zeta\right)\rho d\rho-2\mathfrak{a}_{2}\int\left(\Upsilon_{1}\times\Upsilon_{2},\zeta\times\Upsilon_{31}\right)\rho d\rho\right|\\ \leqslant&\ Ct^{2\nu}\left(\|S\|_{H^{1}}^{2}+\|S\|_{H^{1}}^{2}\|\nabla S\|_{L^{2}}\right).\end{split} \tag{3.13}\]
For \(\mathcal{E}_{7}\), since
\[\begin{split}\mathcal{E}_{7}=&-2\mathfrak{a}_{2} \int\left(S\times\Delta U^{(N)},U^{(N)}\times\Delta S\right)dy\\ =&-2\mathfrak{a}_{2}\int\left(\zeta\times\Upsilon_{2 },\left(\mathbf{k}+z^{(N)}\right)\times\Upsilon_{3}\right)\rho d\rho,\end{split}\]
we get
\[\left|\mathcal{E}_{7}\right|\leqslant Ct^{2\nu}\left\|S\right\|_{H^{1}}^{2}. \tag{3.14}\]
Similarly, we have the estimates:
\[\begin{split}&|\mathcal{E}_{9}|\leqslant Ct^{2\nu}\left(\|S\|_{H^{1}}^{2}+\|S\|_{H^{1}}^{2}\| \nabla S\|_{L^{2}}\right),\\ &|\mathcal{E}_{10}|\leqslant Ct^{2\nu}\left\|S\right\|_{H^{1}}^{ 2}.\end{split} \tag{3.15}\]
Combining (3.10), (3.13), (3.14) and (3.15), we get
\[\left|\frac{d}{dt}J_{1}(t)\right|\leqslant C\frac{1}{t}\left(\|S\|_{H^{1}}^{2} +\|S\|_{H^{1}}^{2}\|\nabla S\|_{L^{2}}\right)+Ct^{2N-2\nu}. \tag{3.16}\]
#### 3.1.2. Control of the \(L^{2}\) norm
Now we consider the control of the following norm:
\[J_{0}(t)=\int_{\mathbb{R}^{2}}|S|^{2}dy.\]
By (3.4), we obtain
\[\begin{split}& t^{1+2\nu}\frac{d}{dt}\int|S|^{2}dy\\ =&-2\left(\frac{1}{2}+\nu\right)t^{2\nu}\int(S,S)dy+2 \mathfrak{a}_{1}\int\left(U^{(N)}\times\Delta S,S\right)dy+2\int\left(R^{N},S \right)dy\\ &-2\mathfrak{a}_{2}\int\left(U\times\left(U\times\Delta S\right),S\right)dy-2\mathfrak{a}_{2}\int\left(U^{(N)}\times\left(S\times\Delta U^{(N )}\right),S\right)dy.\end{split}\]
That is,
\[t^{1+2\nu}\frac{d}{dt}J_{0}(t)=\sum_{i=11}^{15}\mathcal{E}_{i},\]
where
\[\mathcal{E}_{11} =-(1+2\nu)t^{2\nu}J_{0}(t),\] \[\mathcal{E}_{12} =2\mathfrak{a}_{1}\int\left(U^{(N)}\times\Delta S,S\right)dy,\] \[\mathcal{E}_{13} =2\int\left(R^{(N)},S\right)dy\] \[\mathcal{E}_{14} =-2\mathfrak{a}_{2}\int\left(U\times\left(U\times\Delta S\right),S\right)dy=2\mathfrak{a}_{2}\int\left(U\times\Delta S,U^{(N)}\times S\right)dy\] \[\mathcal{E}_{15} =2\mathfrak{a}_{2}\int\left(U^{(N)}\times\left(S\times\Delta U^ {(N)}\right),S\right)dy=2\mathfrak{a}_{2}\int\left(S\times\Delta U^{(N)},S \times U^{(N)}\right)dy.\]
For \(\mathcal{E}_{12}\), we recall (3.11) and decompose \(U^{(N)}\) and \(S\) in the basis \(\{f_{1},f_{2},Q\}\), then \(\mathcal{E}_{12}\) can be rewritten as
\[\mathcal{E}_{12}=\sum_{i=16}^{18}\mathcal{E}_{i},\]
where
\[\mathcal{E}_{16} =-4\mathfrak{a}_{1}\int_{\mathbb{R}^{+}}\frac{h_{1}}{\rho} \zeta_{2}\partial_{\rho}\zeta_{3}\rho d\rho,\] \[\mathcal{E}_{17} =-2\mathfrak{a}_{1}\int_{\mathbb{R}^{+}}\left(\partial_{\rho}z^{ (N)}\times\partial_{\rho}\zeta,\zeta\right)\rho d\rho,\] \[\mathcal{E}_{18} =2\mathfrak{a}_{1}\int_{\mathbb{R}^{+}}\left(z^{(N)}\times l, \zeta\right)\rho d\rho.\]
Here \(l\) is given by
\[l=\left(-\frac{1}{\rho^{2}}\zeta_{1}-\frac{2h_{1}}{\rho}\partial_{\rho}\zeta_ {3},\,-\frac{1}{\rho^{2}}\zeta_{2},\,\kappa(\rho)\zeta_{3}+\frac{2h_{1}}{\rho }\partial_{\rho}\zeta_{1}-\frac{2h_{1}h_{3}}{\rho^{2}}\partial_{\rho}\zeta_{ 1}\right),\]
which satisfies
\[|l|\leqslant C\frac{1}{\rho^{2}}\left(|\zeta|+|\partial_{\rho}\zeta|\right).\]
Thus,
\[|\mathcal{E}_{18}|\leqslant Ct^{2\nu}\|S\|_{H^{1}}^{2}. \tag{3.17}\]
For \(\mathcal{E}_{16}\), since
\[2\left(\zeta,\mathbf{k}+z^{(N)}\right)+|\zeta|^{2}=0, \tag{3.18}\]
we get
\[|\partial_{\rho}\zeta_{3}|\leqslant C\left(\left|\partial_{\rho}z^{(N)} \right|\left|\zeta\right|+\left|z^{(N)}\right|\left|\partial_{\rho}\zeta \right|+\left|\partial_{\rho}\zeta\right|\left|\zeta\right|\right).\]
Thus,
\[|\mathcal{E}_{16}|\leqslant C\left(t^{2\nu}\|S\|_{H^{1}}^{2}+\|\nabla S\|_{L^ {2}}^{3}\right). \tag{3.19}\]
For \(\mathcal{E}_{17}\), let \(e_{0}=\mathbf{k}+z^{(N)}\) and \(\zeta=\zeta^{\perp}+\mu e_{0}\), where \(\mu=(\zeta,e_{0})\). By (3.18), we get
\[|\mu|\leqslant C|\zeta|^{2},\quad|\mu_{\rho}|\leqslant C|\zeta|\left| \partial_{\rho}\zeta\right|.\]
Thus, \(\mathcal{E}_{17}\) can be rewritten as
\[\mathcal{E}_{17}=-2\mathfrak{a}_{1}\int_{\mathbb{R}_{+}}\left(\partial_{\rho} \zeta^{\perp}\times\zeta^{\perp},\partial_{\rho}e_{0}\right)\rho d\rho+O \left(\|S\|_{H^{1}}^{2}\|\nabla S\|_{L^{2}}\right). \tag{3.20}\]
Let \(\{e_{1},e_{2}\}\) be a smooth orthogonal basis of the tangent space \(T_{e_{0}}\mathbb{S}^{2}\) that satisfies \(e_{2}=e_{0}\times e_{1}\); then \(\left(\partial_{\rho}\zeta^{\perp}\times\zeta^{\perp},\partial_{\rho}e_{0}\right)\) can be rewritten as
\[\left(\partial_{\rho}\zeta^{\perp}\times\zeta^{\perp},\partial_{\rho}e_{0} \right)=\left(\zeta^{\perp},\partial_{\rho}e_{0}\right)\left[\left(\zeta^{\perp },e_{2}\right)\left(\partial_{\rho}e_{0},e_{1}\right)-\left(\zeta^{\perp},e_{1 }\right)\left(\partial_{\rho}e_{0},e_{2}\right)\right],\]
which gives
\[\left|\int_{\mathbb{R}_{+}}\left(\partial_{\rho}\zeta^{\perp}\times\zeta^{ \perp},\partial_{\rho}e_{0}\right)\rho d\rho\right|\leq\left\|\partial_{\rho}z^ {(N)}\right\|_{L^{\infty}}^{2}J_{0}(t)\leq t^{2\nu}J_{0}(t). \tag{3.21}\]
Combining (3.17), (3.19), (3.20) and (3.21), we get
\[|\mathcal{E}_{12}|\leq C\left(t^{2\nu}\|S\|_{H^{1}}^{2}+\|\nabla S\|_{L^{2}}^ {3}\right)+2|\mathfrak{a}_{1}|t^{2\nu}J_{0}(t)+O\left(\|S\|_{H^{1}}^{2}\|\nabla S \|_{L^{2}}\right). \tag{3.22}\]
For \(\mathcal{E}_{13}\), by Proposition 2.1, we get
\[|\mathcal{E}_{13}|\leq Ct^{N+\nu+\frac{1}{2}}\|\nabla S\|_{L^{2}}. \tag{3.23}\]
For \(\mathcal{E}_{14}\), combining
\[\mathcal{E}_{14}= 2\mathfrak{a}_{2}\int\left(U\times\Delta S,U^{(N)}\times S \right)dy\] \[= 2\mathfrak{a}_{2}\int\left(\Upsilon_{1}\times\Upsilon_{3},\left( \mathbf{k}+z^{(N)}\right)\times\zeta\right)\rho d\rho\] \[= -2\mathfrak{a}_{2}\int\left(\partial_{\rho}z^{(N)}\times \partial_{\rho}\zeta,\left(\mathbf{k}+z^{(N)}\right)\times\zeta\right)\rho d\rho\] \[-2\mathfrak{a}_{2}\int\left(\Upsilon_{1}\times\partial_{\rho} \zeta,\partial_{\rho}\left[\left(\mathbf{k}+z^{(N)}\right)\times\zeta\right] \right)\rho d\rho\] \[+2\mathfrak{a}_{2}\int\left(\Upsilon_{1}\times\Upsilon_{31}, \left(\mathbf{k}+z^{(N)}\right)\times\zeta\right)\rho d\rho,\]
and (3.12), we get
\[|\mathcal{E}_{14}|\leq Ct^{2\nu}\left(\|S\|_{H^{1}}^{2}+\|S\|_{H^{1}}^{2}\| \nabla S\|_{L^{2}}\right). \tag{3.24}\]
For \(\mathcal{E}_{15}\), similarly, by
\[\mathcal{E}_{15}= 2\mathfrak{a}_{2}\int\left(S\times\Delta U^{(N)},S\times U^{(N) }\right)dy\] \[= 2\mathfrak{a}_{2}\int\left(\zeta\times\Upsilon_{2},\zeta\times \left(\mathbf{k}+z^{(N)}\right)\right)\rho d\rho\] \[= -2\mathfrak{a}_{2}\int\left(\partial_{\rho}\zeta\times\partial_{ \rho}z^{(N)},\zeta\times\left(\mathbf{k}+z^{(N)}\right)\right)\rho d\rho\] \[-2\mathfrak{a}_{2}\int\left(\zeta\times\partial_{\rho}z^{(N)}, \partial_{\rho}\left[\zeta\times\left(\mathbf{k}+z^{(N)}\right)\right] \right)\rho d\rho\] \[+2\mathfrak{a}_{2}\int\left(\zeta\times\Upsilon_{21},\zeta\times \left(\mathbf{k}+z^{(N)}\right)\right)\rho d\rho,\]
we can obtain
\[|\mathcal{E}_{15}|\leq Ct^{2\nu}\|S\|_{H^{1}}^{2}+2|\mathfrak{a}_{2}|t^{2\nu}J _{0}(t). \tag{3.25}\]
Combining (3.22), (3.23), (3.24) and (3.25), we get
\[\left|\frac{d}{dt}J_{0}(t)\right|\leq C\left(\frac{1}{t}\|S\|_{H^{1}}^{2}+t^{- 1-2\nu}\|S\|_{H^{1}}^{2}\|\nabla S\|_{L^{2}}+t^{2N-2\nu}\right). \tag{3.26}\]
#### 3.1.3. Control of the weighted \(L^{2}\) norm
By (3.4), we can calculate \(\frac{d}{dt}\left\||y|S(t)\right\|_{L^{2}}^{2}\) by
\[t^{1+2\nu}\frac{d}{dt}\left\|\left|y|S(t)\right\|_{L^{2}}^{2}\right.\] \[= -4\mathfrak{a}_{1}\int y_{i}\left(U^{(N)}\times\partial_{i}S,S \right)dy-2\mathfrak{a}_{1}\int|y|^{2}\left(\partial_{i}U^{(N)}\times\partial _{i}S,S\right)dy\] \[-2(1+2\nu)t^{2\nu}\left\|\left|y|S(t)\right|_{L^{2}}^{2}+2\int|y |^{2}\left(R^{(N)},S\right)dy\right.\] \[-2\mathfrak{a}_{2}\int\left(U\times\left(U\times\Delta S\right), \left|y\right|^{2}S\right)dy\] \[-2\mathfrak{a}_{2}\int\left(U^{(N)}\times\left(S\times\Delta U^ {(N)}\right),\left|y\right|^{2}S\right)dy\] \[= -4\mathfrak{a}_{1}\int y_{i}\left(U^{(N)}\times\partial_{i}S,S \right)dy-2\mathfrak{a}_{1}\int|y|^{2}\left(\partial_{i}U^{(N)}\times\partial _{i}S,S\right)dy\] \[-2(1+2\nu)t^{2\nu}\left\|\left|y|S(t)\right|_{L^{2}}^{2}+2\int|y |^{2}\left(R^{(N)},S\right)dy\right.\] \[-2\mathfrak{a}_{2}\int|y|^{2}\left(\partial_{i}U^{(N)}\times \partial_{i}S,U^{(N)}\times S\right)dy-2\mathfrak{a}_{2}\int|y|^{2}\left(U^{( N)}\times\partial_{i}S,\partial_{i}U^{(N)}\times S\right)dy\] \[-4\mathfrak{a}_{2}\int y_{i}\left(U^{(N)}\times\partial_{i}S,U^{ (N)}\times S\right)dy-2\mathfrak{a}_{2}\int|y|^{2}\left|U^{(N)}\times\partial _{i}S\right|^{2}dy\] \[-2\mathfrak{a}_{2}\int|y|^{2}\left(\partial_{i}S\times\partial_{ i}U^{(N)},U^{(N)}\times S\right)dy+2\mathfrak{a}_{2}\int|y|^{2}\left|S\times \partial_{i}U^{(N)}\right|^{2}dy\] \[+4\mathfrak{a}_{2}\int y_{i}\left|S\times\partial_{i}U^{(N)} \right|^{2}dy-2\mathfrak{a}_{2}\int|y|^{2}\left(S\times\partial_{i}U^{(N)}, \partial_{i}U^{(N)}\times\partial_{i}S\right)dy,\]
where \(\partial_{j}\) denotes \(\partial_{y_{j}}\). Here and below we use the convention of implicit summation over repeated indices.
Thus, we obtain
\[\left|\frac{d}{dt}\left\||y|S(t)\right\|_{L^{2}}^{2}\right|\leqslant C\left(\left\||y|S(t)\right\|_{L^{2}}^{2}+t^{-4\nu}\|S\|_{H^{1}}^{2}+t^{2N-4\nu}\right). \tag{3.27}\]
#### 3.1.4. Control of the higher order derivatives
In addition to the assumption (3.5), we also assume
\[\|S(t)\|_{H^{3}}+\left\||y|S(t)\right\|_{L^{2}}^{2}\leqslant t^{\frac{2N}{5}}. \tag{3.28}\]
We next obtain the control of the \(\dot{H}^{3}\) norm of solutions by estimating \(\|\nabla S_{t}\|_{L^{2}}\). More precisely, we consider the functional
\[J_{3}(t)=t^{2+4\nu}\int\left|\nabla s_{t}(x,t)\right|^{2}dx+t^{1+2\nu}\int \kappa\left(t^{-\frac{1}{2}-\nu}x\right)\cdot\left|s_{t}(x,t)\right|^{2}dx,\]
where \(s(x,t)\) is defined by
\[s(x,t)=e^{\alpha(t)R}S\left(\lambda(t)x,t\right).\]
Let \(s_{t}(x,t)=e^{\alpha(t)R}\lambda^{2}(t)g(\lambda(t)x,t)\), then \(J_{3}\) can be expressed by a functional of \(g\):
\[J_{3}(t)=\int|\nabla g(y,t)|^{2}\,dy+\int\kappa(\rho)|g(y,t)|^{2}dy.\]
Now we calculate \(\frac{d}{dt}J_{3}(t)\). Note that \(g(y,t)\) satisfies
\[t^{1+2\nu}g_{t}+\alpha_{0}t^{2\nu}Rg-\left(\nu+\frac{1}{2}\right)t ^{2\nu}\left(2+y\cdot\nabla\right)g\] \[= \mathfrak{a}_{1}\left(S+U^{(N)}\right)\times\Delta g+\mathfrak{a }_{1}g\times\left(\Delta U^{N}+\Delta S\right)\] \[+\mathfrak{a}_{1}\left(U^{(N)}\times\Delta U^{(N)}-R^{(N)} \right)\times\Delta S\] \[+\mathfrak{a}_{1}S\times\Delta\left(U^{(N)}\times\Delta U^{(N)}- R^{(N)}\right)\] \[-\mathfrak{a}_{2}\left(U^{(N)}\times\Delta U^{(N)}-R^{(N)} \right)\times\left[\left(S+U^{(N)}\right)\times\Delta S\right]\] \[-\mathfrak{a}_{2}g\times\left[\left(S+U^{(N)}\right)\times\Delta S \right]-\mathfrak{a}_{2}\left(S+U^{(N)}\right)\times\left(g\times\Delta S\right)\] \[-\mathfrak{a}_{2}\left(S+U^{(N)}\right)\times\left[\left(U^{(N) }\times\Delta U^{(N)}-R^{(N)}\right)\times\Delta S\right]\] \[-\mathfrak{a}_{2}\left(S+U^{(N)}\right)\times\left[\left(S+U^{(N )}\right)\times\Delta g\right]\] \[-\mathfrak{a}_{2}g\times\left[\left(S+U^{(N)}\right)\times\Delta U ^{(N)}\right]\] \[-\mathfrak{a}_{2}g\times\left[\left(S+U^{(N)}\right)\times\Delta U ^{(N)}\times\Delta U^{(N)}-R^{(N)}\right)\right]\] \[-\mathfrak{a}_{2}g\times\left[\left(S+U^{(N)}\right)\times\Delta \left(U^{(N)}\times\Delta U^{(N)}-R^{(N)}\right)\right]\] \[-\mathfrak{a}_{2}\left(U^{(N)}\times\Delta U^{(N)}-R^{(N)} \right)\times\left(S+\Delta U^{(N)}\right)\] \[-\mathfrak{a}_{2}U^{(N)}\times\left(g\times\Delta U^{(N)}\right)\] \[-\mathfrak{a}_{2}U^{(N)}\times\left[S\times\Delta\left(U^{(N)} \times\Delta U^{(N)}-R^{(N)}\right)\right]+t^{2+4\nu}r_{t}^{(N)}. \tag{3.29}\]
Thus,
\[t^{1+2\nu}\frac{d}{dt}J_{3}(t)= (2+4\nu)t^{2\nu}\|\nabla g\|_{L^{2}}^{2}\] \[+\left(\frac{1}{2}+\nu\right)t^{2\nu}\int\left(2\kappa-\rho \kappa^{\prime}\right)|g|^{2}dy+\sum_{i=16}^{32}\mathcal{E}_{i}, \tag{3.30}\]
where
\[\mathcal{E}_{16}= -2\mathfrak{a}_{1}\int\left(g\times\Delta\chi^{(N)},\Delta g \right)dy+2\mathfrak{a}_{1}\int\kappa\left(\chi^{(N)}\times\Delta g,g\right)dy,\] \[\mathcal{E}_{17}= -2\mathfrak{a}_{1}\int\left(\left(U^{(N)}\times\Delta U^{(N)}-R^ {(N)}\right)\times\Delta S,\Delta g\right)dy\] \[+2\mathfrak{a}_{1}\int\kappa\left(\left(U^{(N)}\times\Delta U^{ (N)}-R^{(N)}\right)\times\Delta S,g\right)dy\] \[-2\mathfrak{a}_{1}\int\kappa\left(\Delta\left(U^{(N)}\times \Delta U^{(N)}-R^{(N)}\right)\times S,g\right)dy,\] \[\mathcal{E}_{18}= -2\mathfrak{a}_{1}\int\left(g\times\Delta S,\Delta g\right)dy,\]
\[\mathcal{E}_{19}= 2\mathfrak{a}_{1}\int\kappa\left(S\times\Delta g,g\right)dy,\] \[\mathcal{E}_{20}= -2t^{2+4\nu}\int(r_{t},\Delta g)dy+2t^{2+4\nu}\int\kappa(r_{t},g)dy,\] \[\mathcal{E}_{21}= 2\mathfrak{a}_{2}\int\left(\left(U^{(N)}\times\Delta U^{(N)}-R^{ (N)}\right)\times\left[\left(S+U^{(N)}\right)\times\Delta S\right],\Delta g \right)dy\] \[-2\mathfrak{a}_{2}\int\kappa\left(\left(U^{(N)}\times\Delta U^{(N )}-R^{(N)}\right)\times\left[\left(S+U^{(N)}\right)\times\Delta S\right],g \right)dy,\] \[\mathcal{E}_{22}= 2\mathfrak{a}_{2}\int\left(g\times\left[\left(S+U^{(N)}\right) \times\Delta S\right],\Delta g\right)dy,\] \[\mathcal{E}_{23}= 2\mathfrak{a}_{2}\int\left(\left(S+U^{(N)}\right)\times\left(g \times\Delta S\right),\Delta g\right)dy\] \[-2\mathfrak{a}_{2}\int\kappa\left(\left(S+U^{(N)}\right)\times \left(g\times\Delta S\right),g\right)dy,\] \[\mathcal{E}_{24}= 2\mathfrak{a}_{2}\int\left(\left(S+U^{(N)}\right)\times\left[ \left(U^{(N)}\times\Delta U^{(N)}-R^{(N)}\right)\times\Delta S\right],\Delta g \right)dy\] \[-2\mathfrak{a}_{2}\int\kappa\left(\left(S+U^{(N)}\right)\times \left[\left(U^{(N)}\times\Delta U^{(N)}-R^{(N)}\right)\times\Delta S\right],g \right)dy,\] \[\mathcal{E}_{25}= 2\mathfrak{a}_{2}\int\left(\left(S+U^{(N)}\right)\times\left[ \left(S+U^{(N)}\right)\times\Delta g\right],\Delta g\right)dy\] \[-2\mathfrak{a}_{2}\int\kappa\left(\left(S+U^{(N)}\right)\times \left[\left(S+U^{(N)}\right)\times\Delta g\right],g\right)dy,\] \[\mathcal{E}_{26}= 2\mathfrak{a}_{2}\int\left(g\times\left[\left(S+U^{(N)}\right) \times\Delta U^{(N)}\right],\Delta g\right)dy,\] \[\mathcal{E}_{27}= 2\mathfrak{a}_{2}\int\left(S\times\left[\left(U^{(N)}\times \Delta U^{(N)}-R^{(N)}\right)\times\Delta U^{(N)}\right],\Delta g\right)dy\] \[-2\mathfrak{a}_{2}\int\kappa\left(S\times\left[\left(U^{(N)} \times\Delta U^{(N)}-R^{(N)}\right)\times\Delta U^{(N)}\right],g\right)dy,\] \[\mathcal{E}_{28}= 2\mathfrak{a}_{2}\int\left(S\times\left(g\times\Delta U^{(N)} \right),\Delta g\right)dy-2\mathfrak{a}_{2}\int\kappa\left(S\times\left(g \times\Delta U^{(N)}\right),g\right)dy,\] \[\mathcal{E}_{29}= 2\mathfrak{a}_{2}\int\left(S\times\left[\left(S+U^{(N)}\right) \times\Delta\left(U^{(N)}\times\Delta U^{(N)}-R^{(N)}\right)\right],\Delta g \right)dy\] \[-2\mathfrak{a}_{2}\int\kappa\left(S\times\left[\left(S+U^{(N)} \right)\times\Delta\left(U^{(N)}\times\Delta U^{(N)}-R^{(N)}\right)\right],g \right)dy,\] \[\mathcal{E}_{30}= 2\mathfrak{a}_{2}\int\left(\left(U^{(N)}\times\Delta U^{(N)}-R^ {(N)}\right)\times\left(S+\Delta U^{(N)}\right),\Delta g\right)dy\] \[-2\mathfrak{a}_{2}\int\kappa\left(\left(U^{(N)}\times\Delta U^{(N )}-R^{(N)}\right)\times\left(S+\Delta U^{(N)}\right),g\right)dy,\] \[\mathcal{E}_{31}= 2\mathfrak{a}_{2}\int\left(U^{(N)}\times\left(g\times\Delta U^{( N)}\right),\Delta g\right)dy\] \[-2\mathfrak{a}_{2}\int\kappa\left(U^{(N)}\times\left(g\times \Delta U^{(N)}\right),g\right)dy,\] \[\mathcal{E}_{32}= 2\mathfrak{a}_{2}\int\left(U^{(N)}\times\left[S\times\Delta\left( U^{(N)}\times\Delta U^{(N)}-R^{(N)}\right)\right],\Delta g\right)dy\]
\[-2\mathfrak{a}_{2}\int\kappa\left(U^{(N)}\times\left[S\times\Delta\left(U^{(N)} \times\Delta U^{(N)}-R^{(N)}\right)\right],g\right)dy.\]
For \(\mathcal{E}_{j}\), \(j=16,19,20\), if \(N\) is sufficiently large, then there exists \(t_{0}=t_{0}(N)>0\) such that for \(t\leq t_{0}\), the following estimates hold:
\[\begin{split}|\mathcal{E}_{16}|\leq& Ct^{2\nu}\|g \|_{H^{1}}^{2},\\ |\mathcal{E}_{19}|\leq& C\|g\|_{H^{1}}^{2}\|S\|_{H^{ 3}}\leq Ct^{2\nu}\|g\|_{H^{1}}^{2},\\ |\mathcal{E}_{20}|\leq& C\left(t^{2\nu}\|g\|_{H^{1} }^{2}+t^{2N+3+4\nu}\right).\end{split} \tag{3.31}\]
For \(\mathcal{E}_{17}\), we have
\[\begin{split}|\mathcal{E}_{17}|\leq& C\left(\left\| \Delta\chi^{(N)}\right\|_{W^{2,\infty}}+\left\|R^{(N)}\right\|_{H^{3}}\right) \|g\|_{H^{1}}\|S\|_{H^{3}}\\ &+C\left\|\langle y\rangle^{-1}\nabla\Delta^{2}\chi^{(N)}\right\| _{L^{\infty}}\|\nabla g\|_{L^{2}}\|\langle y\rangle S\|_{L^{2}}.\end{split}\]
Thus,
\[|\mathcal{E}_{17}|\leq Ct^{2\nu}\left(\|g\|_{H^{1}}\|S\|_{H^{3}}+\|\nabla g\| _{L^{2}}\|\langle y\rangle S\|_{L^{2}}\right). \tag{3.32}\]
Note that
\[g=\left(U^{(N)}+S\right)\times\Delta S+S\times\Delta U^{(N)}+R^{(N)}, \tag{3.33}\]
and by the bootstrap hypothesis (3.28), we get
\[\begin{split}&\|g\|_{L^{2}}\leq C\left(\|S\|_{H^{2}}+\left\|R^{(N)} \right\|_{L^{2}}\right),\\ &\|\nabla g\|_{L^{2}}\leq C\left(\|S\|_{H^{3}}+\left\|\nabla R^{ (N)}\right\|_{L^{2}}\right).\end{split} \tag{3.34}\]
Therefore, combining (3.31) and (3.32), we get
\[\begin{split}&|\mathcal{E}_{16}|+|\mathcal{E}_{17}|+|\mathcal{E}_ {19}|+|\mathcal{E}_{20}|\\ \leq& Ct^{2\nu}\left[\|S\|_{H^{3}}^{2}+\left(\|S\|_{H ^{3}}+t^{N+1+2\nu}\right)\|\langle y\rangle S\|_{L^{2}}\right]+Ct^{2N+1+4\nu}. \end{split} \tag{3.35}\]
For \(\mathcal{E}_{18}\), note that
\[\begin{split}& g\times\Delta S=\left(U^{(N)}+S,\Delta S\right) \Delta S-|\Delta S|^{2}\left(U^{(N)}+S\right)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\left(S\times \Delta U^{(N)}+R^{(N)}\right)\times\Delta S,\\ &\Delta g=\left(U^{(N)}+S\right)\times\Delta^{2}S+Y,\end{split} \tag{3.36}\]
where
\[Y=2\left(\partial_{j}U^{(N)}+\partial_{j}S\right)\times\Delta\partial_{j}S+S \times\Delta^{2}U^{(N)}+2\partial_{j}S\times\Delta\partial_{j}U^{(N)}+\Delta R ^{(N)}.\]
Thus, \(\mathcal{E}_{18}\) can be rewritten as
\[\mathcal{E}_{18}=\mathcal{E}_{18,1}+\mathcal{E}_{18,2}+\mathcal{E}_{18,3},\]
where
\[\begin{split}&\mathcal{E}_{18,1}=-2\mathfrak{a}_{1}\int\left(U^{(N)} +S,\Delta S\right)\left(S,\Delta g\right)dy,\\ &\mathcal{E}_{18,2}=2\mathfrak{a}_{1}\int|\Delta S|^{2}\left(U^{N} +S,\Delta g\right)dy=2\mathfrak{a}_{1}\int|\Delta S|^{2}\left(U^{(N)}+S,Y \right)dy,\\ &\mathcal{E}_{18,3}=-2\mathfrak{a}_{1}\int\left(\left(S\times \Delta U^{(N)}+R^{(N)}\right)\times\Delta S,\Delta g\right)dy.\end{split}\]
For \(\mathcal{E}_{18,1}\), note that
\[-\left(U^{(N)}+S,\Delta S\right)=\left(\Delta U^{(N)},S\right)+2\left(\partial_{j }U^{(N)},\partial_{j}S\right)+\left(\partial_{j}S,\partial_{j}S\right),\]
thus,
\[\mathcal{E}_{18,1}= -2\mathfrak{a}_{1}\int\left[\left(\Delta U^{(N)},S\right)+2\left( \partial_{j}U^{(N)},\partial_{j}S\right)+\left(\partial_{j}S,\partial_{j}S \right)\right]\left(\Delta\partial_{k}S,\partial_{k}g\right)dy\] \[-2\mathfrak{a}_{1}\int\partial_{k}\left[\left(\Delta U^{(N)},S \right)+2\left(\partial_{j}U^{(N)},\partial_{j}S\right)+\left(\partial_{j}S, \partial_{j}S\right)\right]\left(\Delta S,\partial_{k}g\right)dy,\]
which gives
\[|\mathcal{E}_{18,1}|\leqslant C\|S\|_{H^{3}}^{2}\|g\|_{H^{1}}\leqslant Ct^{2 \nu}\|S\|_{H^{3}}^{2}. \tag{3.37}\]
For \(\mathcal{E}_{18,2}\), by (3.36), we get
\[\|Y\|_{L^{2}}\leqslant C\left(\|S\|_{H^{3}}+t^{N}\right).\]
Thus,
\[|\mathcal{E}_{18,2}|\leqslant Ct^{2\nu}\|S\|_{H^{3}}^{2}. \tag{3.38}\]
Finally, \(\mathcal{E}_{18,3}\) satisfies
\[|\mathcal{E}_{18,3}|\leqslant C\|g\|_{H^{1}}\left(\|S\|_{H^{3}}^{2}+t^{N}\|S\| _{H^{3}}\right)\leqslant Ct^{2\nu}\|S\|_{H^{3}}^{2}+Ct^{3N}. \tag{3.39}\]
Combining (3.37), (3.38) and (3.39), we get
\[|\mathcal{E}_{18}|\leqslant C\left(t^{2\nu}\|S\|_{H^{3}}^{2}+t^{3N}\right). \tag{3.40}\]
For \(\mathcal{E}_{21}\), note that
\[\left|\left(S+U^{(N)}\right)\times\Delta S\right|^{2}=|\Delta S|^{2}-\left| \left(S+U^{(N)},\Delta S\right)\right|^{2},\]
\[\left|\partial_{j}\left[\left(S+U^{(N)}\right)\times\Delta S\right]\right|^{2}\] \[= \left|\partial_{j}S+\partial_{j}U^{(N)}\right|^{2}|\Delta S|^{2} +|\Delta\partial_{j}S|^{2}-\left|\left(\partial_{j}S+\partial_{j}U^{(N)}, \Delta S\right)\right|^{2}\] \[-\left|\left(S+U^{(N)},\Delta\partial_{j}S\right)\right|^{2}-2 \left(\partial_{j}S+\partial_{j}U^{(N)},\Delta\partial_{j}S\right)\left(S+U^ {(N)},\Delta S\right),\]
and
\[\left(S+U^{(N)},\Delta S\right)= -\left(S+U^{(N)},\Delta U^{(N)}\right)-\left|\partial_{j}U^{(N)} +\partial_{j}S\right|^{2},\] \[\left(S+U^{(N)},\Delta\partial_{j}S\right)= -\left(\Delta S+\Delta U^{(N)},\partial_{j}S\right)-\Delta\left( \partial_{j}U^{(N)},S\right)\] \[-2\left(\partial_{k}U^{(N)}+\partial_{k}S,\partial_{jk}^{2}S \right). \tag{3.41}\]
This combined with (3.28) gives
\[\left\|\left(S+U^{(N)}\right)\times\Delta S\right\|_{H^{1}} \leqslant C\|S\|_{H^{2}},\] \[\left\|\partial_{j}\left[\left(S+U^{(N)}\right)\times\Delta S \right]\right\|_{H^{1}}\leqslant C\|S\|_{H^{2}}. \tag{3.42}\]
Thus,
\[|\mathcal{E}_{21}|\leqslant Ct^{2\nu}\|g\|_{H^{1}}\|S\|_{H^{3}}. \tag{3.43}\]
For \(\mathcal{E}_{22}\), since
\[g\times\left[\left(U^{(N)}+S\right)\times\Delta S\right]=\left(S\times\Delta U^{( N)}+R^{(N)}\right)\times\left[\left(U^{(N)}+S\right)\times\Delta S\right], \tag{3.44}\]
we obtain that \(\mathcal{E}_{22}\) can be written as
\[\mathcal{E}_{22}=2\mathfrak{a}_{2}\int\left(\left(S\times\Delta U^{(N)}+R^{(N )}\right)\times\left[\left(U^{(N)}+S\right)\times\Delta S\right],\Delta g \right)dy.\]
By (3.42), we can obtain the estimate of \(\mathcal{E}_{22}\):
\[|\mathcal{E}_{22}|\leq C\|g\|_{H^{1}}\left(\|S\|_{H^{3}}^{2}+t^{N}\|S\|_{H^{3} }\right)\leq Ct^{2\nu}\|S\|_{H^{3}}^{2}+Ct^{3N}. \tag{3.45}\]
For \(\mathcal{E}_{23}\), by (3.36), we rewrite \(\mathcal{E}_{23}\) as
\[\mathcal{E}_{23}=\mathcal{E}_{23,1}+\mathcal{E}_{23,2}+\mathcal{E}_{23,3},\]
where
\[\mathcal{E}_{23,1} =2\mathfrak{a}_{2}\int\left(\left(S+U^{(N)}\right)\times\Delta S,\Delta g\right)\left(U^{(N)}+S,\Delta S\right)dy,\] \[\mathcal{E}_{23,2} =-2\mathfrak{a}_{2}\int\left(\left(S+U^{(N)}\right)\times\left[ \left(S\times\Delta U^{(N)}+R^{(N)}\right)\times\Delta S\right],\Delta g \right)dy,\] \[\mathcal{E}_{23,3} =-2\mathfrak{a}_{2}\int\kappa\left(\left(S+U^{(N)}\right)\times \left(g\times\Delta S\right),g\right)dy.\]
For \(\mathcal{E}_{23,1}\), we have
\[\mathcal{E}_{23,1}= -2\mathfrak{a}_{2}\int\left(\left(S+U^{(N)}\right)\times\Delta S,\Delta g\right)\] \[\qquad\qquad\cdot\left[\left(\Delta U^{(N)},S\right)+2\left( \partial_{j}U^{(N)},\partial_{j}S\right)+\left(\partial_{j}S,\partial_{j}S \right)\right]dy\] \[= 2\mathfrak{a}_{2}\int\left(\partial_{k}\left[\left(S+U^{(N)} \right)\times\Delta S\right],\partial_{k}g\right)\] \[\qquad\qquad\cdot\left[\left(\Delta U^{(N)},S\right)+2\left( \partial_{j}U^{(N)},\partial_{j}S\right)+\left(\partial_{j}S,\partial_{j}S \right)\right]dy\] \[\qquad+2\mathfrak{a}_{2}\int\left(\left(S+U^{(N)}\right)\times \Delta S,\partial_{k}g\right)\] \[\qquad\qquad\cdot\partial_{k}\left[\left(\Delta U^{(N)},S\right) +2\left(\partial_{j}U^{(N)},\partial_{j}S\right)+\left(\partial_{j}S,\partial_ {j}S\right)\right]dy.\]
Thus
\[|\mathcal{E}_{23,1}|\leq C\|S\|_{H^{3}}^{2}\|g\|_{H^{1}}\leq Ct^{2\nu}\|S\|_{H ^{3}}^{2}. \tag{3.46}\]
For \(\mathcal{E}_{23,2}\), by (3.36), we get
\[|\mathcal{E}_{23,2}|\leq Ct^{2\nu}\|S\|_{H^{3}}^{2}. \tag{3.47}\]
Finally, \(\mathcal{E}_{23,3}\) can be estimated as
\[|\mathcal{E}_{23,3}|\leq C\|g\|_{H^{1}}\left(\|S\|_{H^{3}}^{2}+t^{N}\|S\|_{H^{ 3}}\right)\leq Ct^{2\nu}\|S\|_{H^{3}}^{2}+Ct^{3N}. \tag{3.48}\]
Combining (3.46), (3.47) and (3.48), we obtain
\[|\mathcal{E}_{23}|\leq C\left(t^{2\nu}\|S\|_{H^{3}}^{2}+t^{3N}\right). \tag{3.49}\]
For \(\mathcal{E}_{i}\), \(i=24,25,\cdots,28,31\), note that
\[\mathcal{E}_{25}= -2\mathfrak{a}_{2}\int\kappa\left(\left(S+U^{(N)}\right)\times \left[\left(S+U^{(N)}\right)\times\Delta g\right],g\right)dy,\]
\[\mathcal{E}_{31}\leqslant -2\mathfrak{a}_{2}\int\kappa\left(U^{(N)}\times\left(g\times\Delta U ^{(N)}\right),g\right)dy,\]
we get
\[\begin{split}&|\mathcal{E}_{24}|\leqslant C\|g\|_{H^{1}}\|S\|_{H^{3 }}^{2},\\ &|\mathcal{E}_{25},\mathcal{E}_{26}|\leqslant C\|g\|_{H^{1}}^{2} \|S\|_{H^{3}}\leqslant Ct^{2\nu}\|g\|_{H^{1}}^{2},\\ &|\mathcal{E}_{27}|\leqslant Ct^{2\nu}\|g\|_{H^{1}}\|S\|_{H^{3} }+Ct^{3N},\\ &|\mathcal{E}_{28}|\leqslant Ct^{2\nu}\|g\|_{H^{1}}^{2},\\ &|\mathcal{E}_{31}|\leqslant Ct^{2\nu}\|g\|_{L^{2}}^{2}.\end{split} \tag{3.50}\]
For \(\mathcal{E}_{i}\), \(i=29,30,32\), we get
\[\begin{split}|\mathcal{E}_{29},\mathcal{E}_{30},\mathcal{E}_{32 }|\leqslant& C\left(\left\|\Delta\chi^{(N)}\right\|_{W^{2,\infty}} +\left\|R^{(N)}\right\|_{H^{3}}\right)\|g\|_{H^{1}}\|S\|_{H^{3}}\\ &+C\left\|\langle y\rangle^{-1}\nabla\Delta^{2}\chi^{(N)}\right\| _{L^{\infty}}\|\nabla g\|_{L^{2}}\|\langle y\rangle S\|_{L^{2}}.\end{split}\]
Thus,
\[|\mathcal{E}_{29},\mathcal{E}_{30},\mathcal{E}_{32}|\leqslant Ct^{2\nu} \left(\|g\|_{H^{1}}\|S\|_{H^{3}}+\|\nabla g\|_{L^{2}}\|\langle y\rangle S\|_{ L^{2}}\right). \tag{3.51}\]
Combining (3.35), (3.40), (3.43), (3.45), (3.49), (3.50) and (3.51), we get
\[\left|\frac{d}{dt}J_{3}(t)\right|\leqslant C\frac{1}{t}\left[\|S\|_{H^{3}}^{2}+\left(\|S\|_{H^{3}}+t^{N+1+2\nu}\right)\left\||y|S\right\|_{L^{2}}\right]+Ct^{2N+2\nu}. \tag{3.52}\]
#### 3.1.5. The proof of Proposition 3.1
To prove Proposition 3.1, it is sufficient to prove that the bootstrap hypotheses (3.5) and (3.28) imply (3.2) and (3.3).
According to the bootstrap hypothesis (3.28) and the estimates (3.16) and (3.26), we obtain that for any \(N\) sufficiently large and \(t_{0}\) sufficiently small,
\[\sum_{i=0}^{1}\left|\frac{d}{dt}J_{i}(t)\right|\leqslant C\frac{1}{t}\|S\|_{H^ {1}}^{2}+Ct^{2N-2\nu},\quad\forall t\leqslant t_{0}. \tag{3.53}\]
Note that for any \(c_{0}>0\) sufficiently large, we have
\[\|S\|_{H^{1}}^{2}\leqslant J_{1}+c_{0}J_{0}.\]
Let
\[J(t)=J_{1}(t)+c_{0}J_{0}(t),\]
Then (3.53) can be written as
\[\left|\frac{d}{dt}J(t)\right|\leqslant C\frac{1}{t}J(t)+Ct^{2N-2\nu}. \tag{3.54}\]
Integrating both sides of (3.54) from \(t_{1}\) to \(t\) and using the zero initial condition at \(t_{1}\), we obtain that for sufficiently large \(N\),
\[J(t)\leqslant\frac{C}{N}t^{2N+1-2\nu},\quad\forall t\in[t_{1},t_{0}]. \tag{3.55}\]
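For completeness, here is a brief sketch of how (3.55) follows from (3.54), under the assumption that the constant \(C\) in (3.54) does not depend on \(N\) and that \(2N+1-2\nu-C>0\): multiplying (3.54) by the integrating factor \(t^{-C}\) and using the zero initial condition at \(t_{1}\),
\[\frac{d}{dt}\left(t^{-C}J(t)\right)\leqslant Ct^{2N-2\nu-C},\qquad t^{-C}J(t)\leqslant\frac{C}{2N+1-2\nu-C}\,t^{2N+1-2\nu-C},\]
so that \(J(t)\leqslant\frac{C}{2N+1-2\nu-C}\,t^{2N+1-2\nu}\leqslant\frac{C}{N}\,t^{2N+1-2\nu}\) once \(N\geqslant C+2\nu-1\).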
Thus,
\[\|S\|_{H^{1}}^{2}\leqslant\frac{C}{N}t^{2N+1-2\nu},\quad\forall t\in[t_{1},t _{0}], \tag{3.56}\]
For \(\left\||y|S(t)\right\|_{L^{2}}\), by (3.27) and (3.56), we get
\[\left|\frac{d}{dt}\left\||y|S(t)\right\|_{L^{2}}^{2}\right|\leqslant C\frac{1}{t}\left(\left\||y|S(t)\right\|_{L^{2}}^{2}+t^{2N+1-6\nu}\right). \tag{3.57}\]
Integrating both sides of (3.57), we obtain that for \(N\) sufficiently large,
\[\left\|{|y|S(t)}\right\|_{L^{2}}^{2}\leq\frac{C}{N}t^{2N+1-6\nu},\quad\forall t \in[t_{1},t_{0}], \tag{3.58}\]
thus,
\[\left\|{|x|s(t)}\right\|_{L^{2}}^{2}\leq t^{\frac{N}{2}},\quad\forall t\in[t_{1 },t_{0}]. \tag{3.59}\]
Next, we consider \(\left\|{\nabla\Delta s(t)}\right\|_{L^{2}(\mathbb{R}^{2})}\). By (3.33) and (3.28), we obtain that for any \(j=1,2\),
\[\left\|{\partial}_{j}g-\left(U^{(N)}+S\right)\times\Delta{\partial}_{j}S \right\|_{L^{2}}\leq C\left(\left\|S\right\|_{H^{2}(\mathbb{R}^{2})}+t^{N+1+2 \nu}\right). \tag{3.60}\]
Note that \(\left|{U^{(N)}+S}\right|=1\). Thus,
\[\left|{\left(U^{(N)}+S\right)\times\Delta{\partial}_{j}S}\right|^{2}=\left|{ \Delta{\partial}_{j}S}\right|^{2}-\left(U^{(N)}+S,\Delta{\partial}_{j}S \right)^{2}.\]
This combined with (3.28) and (3.41) gives
\[\left\|{\Delta{\partial}_{j}g}\right\|_{L^{2}}^{2}-\left\|{\left(U^{(N)}+S \right)\times\Delta{\partial}_{j}S}\right\|_{L^{2}}^{2}\leq C\|S\|_{H^{2}( \mathbb{R}^{2})}^{2}. \tag{3.61}\]
Now we consider the functional \(\tilde{J}_{3}(t)=J_{3}+c_{1}J_{0}(t)\). By (3.34), (3.60) and (3.61), we obtain that for \(c_{1}>0\) sufficiently large, there exists \(c_{2}>0\) such that
\[c_{2}\|S\|_{H^{3}}^{2}-Ct^{2N+1+2\nu}\leq\tilde{J}_{3}(t)\leq C\left(\|S\|_{H^ {3}(\mathbb{R}^{2})}^{2}+t^{2N+1+2\nu}\right). \tag{3.62}\]
By (3.52), (3.53) and (3.58), we get
\[\begin{split}\left|\frac{d}{dt}\tilde{J}_{3}(t)\right|\leq& C\left[\frac{1}{t}\left(\|S\|_{H^{3}(\mathbb{R}^{2})}^{2}+\left\||y|S\right\|_{L^{2}(\mathbb{R}^{2})}\right)+t^{2N-2\nu}\right]\\ \leq& C\frac{1}{t}\tilde{J}_{3}(t)+Ct^{2N-6\nu}. \end{split} \tag{3.63}\]
Integrating the two sides of (3.63) with respect to \(t\) from \(t_{1}\) to \(t\), and noting that
\[\tilde{J}_{3}(t_{1})=t_{1}^{2+4\nu}\int\left|\nabla r^{(N)}(x,t_{1})\right|^{2}dx+t_{1}^{1+2\nu}\int\kappa\left(t_{1}^{-\frac{1}{2}-\nu}x\right)\left|r^{(N)}(x,t_{1})\right|^{2}dx,\]
we get
\[\left|{\tilde{J}_{3}(t_{1})}\right|\leq Ct_{1}^{2N+1+2\nu}.\]
Thus,
\[\tilde{J}_{3}(t)\leq Ct^{2N+1-6\nu},\quad\forall t\in[t_{1},t_{0}].\]
This combined with (3.62) gives
\[\|S\|_{H^{3}(\mathbb{R}^{2})}^{2}\leq Ct^{2N+1-6\nu},\quad\forall t\in[t_{1}, t_{0}],\]
thus,
\[\|s\|_{H^{3}(\mathbb{R}^{2})}\leq t^{\frac{N}{2}},\quad\forall t\in[t_{1},t_{0 }].\]
This completes the proof of Proposition 3.1.
### Proof of the main theorem
Now, we start to prove the main theorem of this paper. Fix \(N\) such that Proposition 3.1 holds. Select the sequence \(\{t^{j}\}\), \(0<t^{j}<t_{0}\), satisfying \(t^{j}\to 0\) as \(j\to\infty\). Let \(u_{j}(x,t)\) be a solution of the following problem:
\[\left\{\begin{array}{rl}&\partial_{t}u_{j}=\mathfrak{a}_{1}u_{j}\times \Delta u_{j}-\mathfrak{a}_{2}u_{j}\times(u_{j}\times\Delta u_{j}),\quad t\geq t ^{j},\\ &u_{j}|_{t=t^{j}}=u^{(N)}(t^{j}).\end{array}\right. \tag{3.64}\]
By Proposition 3.1, for any \(j\), we have \(u_{j}-u^{(N)}\in C([t^{j},t_{0}],H^{3})\), and it satisfies
\[\left\|u_{j}(t)-u^{(N)}(t)\right\|_{H^{3}}+\left\|\langle x\rangle\left(u_{j} (t)-u^{(N)}(t)\right)\right\|_{L^{2}}\leq 2t^{\frac{N}{2}},\quad\forall t\in \left[t^{j},t_{0}\right]. \tag{3.65}\]
Thus, the sequence \(u_{j}(t_{0})-u^{(N)}(t_{0})\) is precompact in \(H^{2}\). This ensures that we can select a subsequence and pass to the limit such that \(u_{j}(t_{0})-u^{(N)}(t_{0})\) converges in \(H^{2}\) to some \(1\)-equivariant function \(w\in H^{3}\), where \(\|w\|_{H^{3}}\leq\delta^{2\nu}\) and \(\left|u^{(N)}(t_{0})+w\right|=1\).
For the Cauchy problem:
\[\left\{\begin{array}{rl}&\partial_{t}u=\mathfrak{a}_{1}u\times\Delta u- \mathfrak{a}_{2}u\times(u\times\Delta u),\quad t\geq t_{0},\\ &u|_{t=t_{0}}=u^{(N)}(t_{0})+w,\end{array}\right. \tag{3.66}\]
by the classical local well-posedness theory, (3.66) admits a unique solution \(u\in C((t^{*},t_{0}],\dot{H}^{1}\cap\dot{H}^{3})\) for some \(0\leq t^{*}<t_{0}\). According to the \(H^{1}\) continuity of the Landau-Lifshitz flow, we have \(u_{j}\to u\) in \(C((t^{*},t_{0}],\dot{H}^{1})\). This combined with (3.65) gives
\[\left\|u(t)-u^{(N)}(t)\right\|_{H^{3}}\leq 2t^{\frac{N}{2}},\quad\forall t\in \left(t^{*},t_{0}\right]. \tag{3.67}\]
Thus, \(t^{*}=0\). This combined with Proposition 2.1 gives Theorem 1.1.
## Acknowledgements
This work was supported by National Natural Science Foundation of China (Grant Nos. 12231016 and 12071391) and Guangdong Basic and Applied Basic Research Foundation (Grant No. 2022A1515010860).
|
2309.08316 | How to Handle Different Types of Out-of-Distribution Scenarios in
Computational Argumentation? A Comprehensive and Fine-Grained Field Study | The advent of pre-trained Language Models (LMs) has markedly advanced natural
language processing, but their efficacy in out-of-distribution (OOD) scenarios
remains a significant challenge. Computational argumentation (CA), modeling
human argumentation processes, is a field notably impacted by these challenges
because complex annotation schemes and high annotation costs naturally lead to
resources barely covering the multiplicity of available text sources and
topics. Due to this data scarcity, generalization to data from uncovered
covariant distributions is a common challenge for CA tasks like stance
detection or argument classification. This work systematically assesses LMs'
capabilities for such OOD scenarios. While previous work targets specific OOD
types like topic shifts or OOD uniformly, we address three prevalent OOD
scenarios in CA: topic shift, domain shift, and language shift. Our findings
challenge the previously asserted general superiority of in-context learning
(ICL) for OOD. We find that the efficacy of such learning paradigms varies with
the type of OOD. Specifically, while ICL excels for domain shifts, prompt-based
fine-tuning surpasses for topic shifts. To sum up, we navigate the
heterogeneity of OOD scenarios in CA and empirically underscore the potential
of base-sized LMs in overcoming these challenges. | Andreas Waldis, Yufang Hou, Iryna Gurevych | 2023-09-15T11:15:47Z | http://arxiv.org/abs/2309.08316v3 | # Bridging Topic, Domain, and Language Shifts:
###### Abstract
Language models (LMs) excel in in-distribution (ID) scenarios where train and test data are independent and identically distributed. However, their performance often degrades in real-world applications like argument mining. Such degradation happens when new topics emerge or when other text domains and languages become relevant. To assess LMs' generalization abilities in such out-of-distribution (OOD) scenarios, we simulate distribution shifts by deliberately withholding specific instances for testing, such as those from the _social media_ domain or the topic _Solar Energy_.
Unlike prior studies focusing on specific shifts and metrics in isolation, we comprehensively analyze OOD generalization. We define three metrics to pinpoint generalization flaws and propose eleven classification tasks covering topic, domain, and language shifts. Overall, we find superior performance of prompt-based fine-tuning, notably when train and test splits primarily differ semantically. Simultaneously, in-context learning is more effective than prompt-based or vanilla fine-tuning for tasks where the training data exhibits heavy discrepancies in label distribution compared to the testing data. This reveals a crucial drawback of gradient-based learning: it biases LMs regarding such structural obstacles.
## 1 Introduction
Previous evaluations (Wang et al., 2019, 2019) of fine-tuned language models (LMs) (Devlin et al., 2019; Liu et al., 2019; Radford et al., 2019; He et al., 2021) assume training and testing data are independent and identically distributed - referred to as in-distribution (ID). However, out-of-distribution (OOD) evaluations address more practical scenarios in which these assumptions do not hold, introducing specific distribution shifts between training and testing data. For example, we expect LMs to generalize to new topics, text domains, and languages when used in argument mining tasks (Slonim et al., 2021). Although OOD generalization is well-studied in NLP, existing evaluation studies have primarily focused on a single type of distribution shift (K et al., 2020; Yang et al., 2022; Yuan et al., 2023), such as text domains by training and testing on reviews from different product groups (Blitzer et al., 2007) or statements from social media or debating platforms (Hardalov et al., 2021). Owing to this reliance on single shift types, OOD generalization encompassing multiple such shift types (as shown in Figure 1) remains understudied. As a result, methodological and analytical advancements have been limited; for instance, existing methods do not necessarily transfer to other shift types because of their overreliance on shift-specific features (Chen et al., 2021; Liang et al., 2022).
To address this research gap, we consider tasks covering multiple types of distribution shifts to evaluate OOD generalization comprehensively, together with fine-grained evaluation metrics that allow detecting crucial generalization flaws. Starting with the latter and keeping known generalization
Figure 1: Example of the topic (_Solar Energy_ vs. _Universal Health Care_ from Shnarch et al. (2018)), domain (_social media_ vs. _debating platform_ from Mohammad et al. (2016); Walker et al. (2012)), and language shifts (_German_ vs. _French_ from Vamvas and Sennrich (2020))
issues in mind (Reimers and Gurevych, 2017; Mosbach et al., 2021), we define three relevant metrics (§ 2): absolute performance (_Applicability_), such as \(F_{1}\) score; correlation between dev loss and performance (_Reliability_); deviations across multiple runs (_Stability_). Next, and unlike previous work (Yang et al., 2022; Yuan et al., 2023), we go beyond solely considering domain shifts and introduce eleven diverse OOD classification tasks covering topic, domain, and language shifts (§ 3). These tasks originate from fields in which such shifts are commonly treated: argument mining, stance detection, sentiment analysis, and text entailment (Blitzer et al., 2007; Vamvas and Sennrich, 2020; Slonim et al., 2021; Hardalov et al., 2021; Yang et al., 2022). During our experiments (§ 4), we explore and analyze a variety of LMs and different learning paradigms, such as vanilla (**FT**) and prompt-based fine-tuning (**P+FT**), or in-context learning (**ICL**) using ChatGPT. Overall, we make the following contributions:
**1)** We introduce eleven diverse OOD classification tasks that cover topic, domain, and language distribution shifts. With this, we establish for the first time a basis for comprehensively evaluating the variety of OOD scenarios beyond domain shifts.
**2)** We propose three fine-grained evaluation metrics and show that they allow us to identify crucial generalization flaws, such as the misalignment of loss and performance.
**3)** We conduct an extensive evaluation of the introduced tasks. We demonstrate the superior performance of P+FT in terms of Applicability (\(+2.3\)) and Reliability (\(+5.8\)). Additionally, ICL with ChatGPT lags behind gradient-based methods on average but excels particularly when distribution shifts induce structural generalization obstacles, such as differences in label distributions.
**4)** We carry out an in-depth analysis and discover that P+FT and ICL provide more robust predictions and P+FT retains significantly more semantics in its encoder layers than FT.
In contrast to findings from studies focusing exclusively on domain shifts (Yuan et al., 2023), fine-tuning methods outperform ICL using ChatGPT on average when considering different shifts - even with minimal trainable parameters. While recognizing the out-of-the-box effectiveness of ICL for domain shifts, it is essential to note that information from the testing data in simulated OOD scenarios might already have been incorporated during the pre-training of such large LMs. Consequently, assessing the challenges posed by distribution shifts in the context of specific LMs is essential when comparing them and drawing conclusions regarding real-world OOD performance with truly unseen data.
## 2 Generalization
In the following, we define the investigated generalization scenarios (§ 2.1) and elaborate on metrics for fine-grained generalization measurement (§ 2.2).
### Generalization Scenarios
To comprehensively analyze out-of-distribution (OOD) generalization, we compare OOD and in-distribution (ID) scenarios. While we assume that train and test instances are independent and identically distributed in the ID setting, we deliberately introduce different covariate distribution shifts for OOD. More precisely, we withhold instances of specific **topics** (like _Nuclear Energy_), **domains** (such as _social media_), and **languages** (like _French_) for testing. By considering these shift types, we aim to examine crucial capabilities of language models (LMs) for real-world scenarios, such as generalization towards upcoming topics, changing text domains, and different languages. Ideally, no gap between ID and OOD exists, and training data does not bias LMs beyond the features relevant to solving the tasks. Such unwanted bias can originate from semantic features about specific topics or differently distributed labels. However, we anticipate noticeable performance gaps based on prior work (Stab et al., 2018; Kumar et al., 2022).
### Generalization Success
Although generalization success is often assessed using a single metric (like \(F_{1}\) score), previous works tend to ignore known stability issues, such as apparent deviations regarding randomness (Mosbach et al., 2021). To identify such generalization flaws and provide a comprehensive evaluation, we define three fine-grained measurements that consider multiple runs with different folds and seeds:
_Applicability_ evaluates how well LMs perform on tasks measured as the average performance (\(\mu_{F_{1}}\)) across the multiple runs.
_Reliability_ quantifies how consistent predictions of LMs are, measuring the average Kendall correlation \(\mu_{\tau}\) between loss and \(F_{1}\) for dev instances.
Ideally, we expect an inverse linear correlation with \(\mu_{\tau}\) equal to -1. However, given that we determine final labels as the class with the highest probability, we allow dev performance and loss to increase simultaneously. When class probabilities change from \((95\%,5\%)\) to \((90\%,10\%)\), the cross-entropy changes from \(0.074\) to \(0.15\). While predicting the same class (\(\hat{y}=k_{0}\)), we infer that the LM is becoming less sure about the prediction. This becomes particularly relevant for OOD generalization, where overfitting to distributional properties of training data, such as unique vocabulary, is likely to introduce uncertainty when processing test data.
_Stability_ examines the impact of data and randomness on the _Applicability_ and _Reliability_ properties. As recommended by Reimers and Gurevych (2017), we measure the standard deviation of these two properties (\(\sigma_{F_{1}}\) and \(\sigma_{\tau}\)) based on a set of multiple runs.
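To make these three measurements concrete, the following minimal sketch (not the original implementation) shows how _Applicability_, _Reliability_, and _Stability_ could be computed from a collection of runs; the dictionary keys are assumptions about how per-run results are stored.

```python
import numpy as np
from scipy.stats import kendalltau

def generalization_metrics(runs):
    """`runs`: one entry per fold/seed combination, each holding the final
    test F1 and the per-epoch dev losses / dev F1 scores of that run."""
    f1s = np.array([r["test_f1"] for r in runs])
    taus = np.array([kendalltau(r["dev_losses"], r["dev_f1s"])[0] for r in runs])
    return {
        "applicability": f1s.mean(),  # mu_F1
        "reliability": taus.mean(),   # mu_tau, ideally close to -1
        "stability_f1": f1s.std(),    # sigma_F1
        "stability_tau": taus.std(),  # sigma_tau
    }
```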
## 3 Out-of-Distribution Tasks
In this section, we outline the selection (§ 3.1) and composition (§ 3.2) of the tasks used to verify OOD capabilities comprehensively. Additionally, we characterize (§ 3.3) the induced distribution shifts regarding semantic, surface, and label properties.
### Task Selection
Our primary focus lies on datasets from argument mining, stance detection, and sentiment analysis, which meet our criteria of having annotations for at least one type of shift: topic, domain, or language. These fields traditionally explore the influence of these properties in their studies. Moreover, in contrast to English-only datasets, multilingual datasets often provide multiple annotations -- such as language and domain (_x-review_) or topic and language (_x-stance_). This enables us to formulate two tasks to address both types of shifts. In total, we consider eleven tasks (Table 1) that cover shifts in topic, domain, and/or language:
**Argument Quality (_arg-qua_).** Toledo et al. (2019) provides 9,100 argument pairs annotated for whether the first or the second argument has the higher quality, covering **22 topics**.
**Argument Similarity (_arg-sim_).** Reimers et al. (2019) annotated 3,595 argument pairs from **28 topics** as similar or not.
**Argument Classification (_arg-cls_).** Stab et al. (2018) annotated arguments for their argumentative stance (_pro_, _con_, _neutral_) regarding one of **eight topics**.
**Evidence Classification (_evi-cls_).** Shnarch et al. (2018) presented 5,785 sentences annotated as relevant or not for one out of **118 topics**.
**Sentiment Classification (_review_).** Blitzer et al. (2007) collected 8,000 reviews annotated as positive or negative for **four domains** (Amazon product groups).
**Multi-Dataset Stance Detection (_stance_).** Following Hardalov et al. (2021), we use the _semeval_ (Mohammad et al., 2016), _emergent_ (Ferreira and Vlachos, 2016), and _iac_ (Walker et al., 2012) datasets to evaluate stance detection across **three domains** (social media, news, and debating). All of them are annotated with the same labels (_pro_, _con_, _neutral_).
**Multi-Dataset Entailment (_entail_).** Following Yang et al. (2022), we consider three medium-sized datasets (_rte_ (Wang et al., 2018), _SciTail_ (Khot et al., 2018), _hans_ (McCoy et al., 2019)) to evaluate textual entailment across **three domains**.
**Multi-Lingual Stance Detection (_x-stance_).** This dataset (Vamvas and Sennrich, 2020) includes around 63,000 comments annotated either as _favor_ or _against_ regarding **12 topics** and covering **three languages** (_de_, _fr_, _it_).
**Multi-Lingual Sentiment Classification (_x-review_).** Prettenhofer and Stein (2010) presents a set of 43,000 positive or negative reviews covering **four languages** (_de_, _en_, _fr_, _jp_) and **three domains** (Amazon product groups).
### Task Composition
We enforce distribution shifts for OOD evaluation by composing train/dev/test splits which include instances with distinct distributional properties, such as unique topics or text domains (Figure 1). We utilize multiple folds to ensure each distinct distributional property (like a unique topic) is tested precisely once. To ensure a fair comparison, we create the same number of ID folds with the same-sized train/dev/test splits as for OOD.
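As an illustration of this composition step (a sketch, not the released code), grouping on the withheld property with scikit-learn's `GroupKFold` yields folds in which every topic or domain is tested exactly once; the column name and the dev-set fraction below are assumptions.

```python
import pandas as pd
from sklearn.model_selection import GroupKFold

def ood_folds(df: pd.DataFrame, group_col: str = "topic", n_folds: int = 4):
    """Yield (train, dev, test) frames such that every value of `group_col`
    (e.g. a topic or a text domain) ends up in exactly one test split."""
    for train_idx, test_idx in GroupKFold(n_splits=n_folds).split(df, groups=df[group_col]):
        train = df.iloc[train_idx]
        dev = train.sample(frac=0.1, random_state=0)  # assumed dev fraction
        yield train.drop(dev.index), dev, df.iloc[test_idx]
```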
### Distribution Shifts
The nature of induced distribution shifts influences the difficulties and, consequently, the amount of
generalization expected to solve a specific OOD classification task. We describe induced shifts in terms of semantic, surface, and label properties in Table 1. For all metrics, a higher value indicates increased difficulty. We start analyzing the semantic separability of train and test instances. Following Sun et al. (2022), we first embed1 all instances. Subsequently, we employ k-means clustering Lloyd (1982); MacQueen (1967) to identify two clusters. We then measure the alignment between the identified clusters and the train/test assignments using the adjusted rand index Hubert and Arabie (1985). A higher score indicates a more significant semantic divergence between train and test instances.
Footnote 1: We follow Reimers and Gurevych (2019) and embed instances with _paraphrase-multilingual-mpnet-base-v2_.
Next, we examine surface-level text features to identify generalization challenges of the text beyond semantics. To this end, we calculate the differences in average readability Flesch (1948) and word count (\(\Delta\) Flesch, \(\Delta\) Words) between train and test instances. Lastly, we assess distributional disparities between the class labels of train and test instances using Kullback-Leibler (KL) divergence Boyd and Vandenberghe (2004). A higher KL value denotes more significant imbalances, adding an extra layer of complexity, as LMs tend to overfit the training label distribution.
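The following sketch shows how these three shift characteristics could be computed with the cited tools; it is a simplified illustration, and details such as preprocessing or the exact per-fold aggregation are not taken from the paper.

```python
import numpy as np
import textstat
from scipy.stats import entropy
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def shift_characteristics(train_texts, test_texts, train_labels, test_labels, classes):
    # Semantic separability: cluster embeddings and compare clusters to the train/test split.
    encoder = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")
    embeddings = encoder.encode(list(train_texts) + list(test_texts))
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
    split_ids = [0] * len(train_texts) + [1] * len(test_texts)
    separability = adjusted_rand_score(split_ids, clusters)

    # Surface features: differences in mean readability and text length.
    d_flesch = abs(np.mean([textstat.flesch_reading_ease(t) for t in train_texts])
                   - np.mean([textstat.flesch_reading_ease(t) for t in test_texts]))
    d_words = abs(np.mean([len(t.split()) for t in train_texts])
                  - np.mean([len(t.split()) for t in test_texts]))

    # Label shift: KL divergence between train and test label distributions
    # (in practice, add a small epsilon to avoid division by zero).
    p = np.array([list(train_labels).count(c) for c in classes]) / len(train_labels)
    q = np.array([list(test_labels).count(c) for c in classes]) / len(test_labels)
    return separability, d_flesch, d_words, entropy(p, q)
```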
Based on these metrics, we categorize tasks into different groups. First, _arg-qua_, _arg-sim_, and _stance_ exhibit high semantic differences with separability scores ranging from \(75.8\) to \(86.7\). Next, _review_, _stance_, _entail_, and _x-review_ embody surface-level difficulties due to the varying readability (\(\Delta\) Flesch) or text lengths (\(\Delta\) Word Count). Finally, _evi-cls_, _stance_, and _entail_ exhibit unequal label distributions, resulting in high KL divergence values, thereby posing additional challenges. Among these tasks, _stance_ appears to be the most challenging, featuring distinct semantic domains, text structure variations, and unequal class distributions.
## 4 Experiments
In this section, we start by outlining our experimental setup (§ 4.1) before discussing the results (§ 4.2) and concluding with an in-depth analysis (§ 4.3).
### Setup
**Models.** We primarily experiment with base-sized LMs, including **BERT** (Devlin et al., 2019), **RoBERTa** (Liu et al., 2019), and **DeBERTa-v3** (He et al., 2021), as well as their multilingual counterparts (Devlin et al., 2019; Conneau et al., 2020; He et al., 2021). For additional experiments, we consider **ALBERT** (Lan et al., 2020), **DeBERTa** (He et al., 2021), **ELECTRA** (Clark et al., 2020), and gpt-3.5-turbo (Ouyang et al., 2022), denoted as **ChatGPT**.
**Learning Paradigms.** We assess the generalization capabilities of LMs under various learning paradigms. These include linear probing (**LP**), vanilla fine-tuning (**FT**), prompting (\(P\)), prompt-based fine-tuning (**P+FT**), and in-context learning (**ICL**). In LP and FT, we use newly initialized classification heads atop the LM, which remains either frozen (LP) or trainable (FT)2. For P and P+FT, we rely on the pre-trained MLM head and keep the LM frozen (P) or trainable (P+FT). Finally, we employ in-context learning (**ICL**) to verify the capabilities of large LMs. Further details on the ICL and P+FT paradigms can be found in Appendix § A.5 and § A.4, respectively.
Footnote 2: We use [SEP] to concatenate the input with its topic, if available
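For illustration, a minimal sketch of the prompt-based setup (P and P+FT) is given below. The template and verbalizer are placeholders (the actual patterns are specified in Appendix A.4); only the general mechanism of scoring verbalizer tokens at the [MASK] position is shown.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "microsoft/deberta-v3-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Hypothetical template and verbalizer (the real patterns are in Appendix A.4).
template = "{topic} [SEP] {sentence} This statement is {mask}."
verbalizer = ["relevant", "irrelevant"]  # one word per class

def class_scores(topic, sentence):
    text = template.format(topic=topic, sentence=sentence, mask=tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    logits = model(**inputs).logits[0, mask_pos]  # vocabulary logits at [MASK]
    # use the first sub-token of each verbalizer word as its class token
    label_ids = [tokenizer.convert_tokens_to_ids(tokenizer.tokenize(w)[0]) for w in verbalizer]
    return logits[label_ids]  # restricted to the verbalizer tokens
# For P the encoder stays frozen; for P+FT these scores are trained with cross-entropy.
```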
**Evaluation.** We conduct evaluations on all tasks and learning paradigms and take LP and \(P\) as a lower bound and ID fine-tuning (FT-ID) as an upper bound. To account for data variability and randomness, we use three different seeds and employ a multifold setup, either three- or four-fold. Using these runs, we report fine-grained performance, including the average _Applicability_ (\(\mu_{F_{1}}\)) and _Reliability_ (\(\mu_{\tau}\)) as well as the _Stability_ (\(\sigma_{F_{1}}\),\(\sigma_{\tau}\)) - as defined in § 2.2.
\begin{table}
\begin{tabular}{l|c|c|c c|c} \hline \hline & **Shift** & **Separability** & \(\Delta\)**Flesch** & \(\Delta\)**Words** & **KL** \\ \hline _arg-qua_ & Top. & 78.6 & 1.5 & 2.2 & 0.1 \\ _arg-sim_ & Top. & 75.8 & 4.6 & 0.27 & 0.4 \\ _arg-cls_ & Top. & 28.7 & 2.0 & 0.6 & 1.6 \\ _evi-cls_ & Top. & 56.3 & 2.4 & 0.7 & 7.1 \\ \hline _review_ & Dom. & 52.7 & 6.5 & 60.5 & 0.0 \\ _stance_ & Dom. & 86.7 & 2.7 & 60.8 & 70.8 \\ _entail_ & Dom. & 40.4 & 5.1 & 31.2 & 12.8 \\ _x-stance_ & Lang./Top. & 0.05/19.8 & 16.6/1.3 & 6.6/0.3 & 0.6/0.4 \\ _x-review_ & Lang./Dom. & 0.07/72.4 & 11.0/1.8 & 60.0/6.5 & 0.0/0.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Distribution shift characteristics between train and test splits of the eleven tasks - averaged across all folds: Separability (Sun et al., 2022), differences in Flesch (Flesch, 1948) score and word count between train and test instances, and how the class distributions of these splits differ (KL divergence).
### Results
In the following, we report and discuss the results of evaluating the proposed OOD tasks for different LMs and learning paradigms. First, we present the aggregated results before turning to a more detailed level. Finally, we focus on model specialties and parameter-efficient tuning methods, and compare gradient-based methods with in-context learning using ChatGPT on the English-only tasks.
**Aggregated Results.** Figure 2 shows the aggregated results of evaluating all eleven tasks for three LMs in different evaluation scenarios. Overall, we note superior generalization capabilities of DeBERTa-v3 compared to BERT and RoBERTa. Comparing ID and OOD vanilla fine-tuning (FT) using all three generalization measurements reveals a fine-grained picture of their generalization gap and potential flaws. OOD shows degraded _Applicability_ (\(F_{1}\) score), lower _Reliability_ (correlation \(\tau\) between loss and \(F_{1}\)), and lower _Stability_ (higher \(\sigma_{F_{1}}\) and \(\sigma_{\tau}\)) than ID. In particular, we see the misalignment of loss and performance - a violation of a fundamental generalization assumption - as a crucial contributor to these gaps. Simultaneously, we observe for DeBERTa-v3 and RoBERTa that prompt-based fine-tuning (P+FT) not only improves the absolute performance (_Applicability_) but also leads to better _Reliability_ and fewer deviations (_Stability_).
**Detailed Results.** After finding fundamental generalization flaws for FT at the aggregated level, we continue to analyze results in more detail, considering additional learning paradigms (Table 2). Comparing them, LP provides better results than P for all LMs, while both underperform the others. As in the aggregated results, P+FT performs better than FT regarding all metrics for DeBERTa-v3 and RoBERTa, where DeBERTa-v3 provides \(2.3\) better _Applicability_ (\(\mu_{F_{1}}\)), \(5.8\) higher _Reliability_ (\(\mu_{\tau}\)), and better _Stability_ with \(0.2\) (\(\sigma_{F_{1}}\)) and \(5.0\) (\(\sigma_{\tau}\)). At the same time, it outperforms FT in ten out of eleven tasks and reaches ID performance for two tasks (_arg-sim_ and _review_). In contrast, BERT using P+FT performs slightly worse compared to FT. Note that we will discuss the role of the selected LM in more detail in the next paragraph.
Regarding the different tasks, we see _stance_ and _entail_ are the most difficult OOD tasks because of the most significant gap between OOD results (FT
\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c|c c} \hline \hline & **arg-qua** & **arg-sim** & **arg-cls** & **evi-cls** & **review** & **stance** & **entail** & **x-stance** & **x-review** & **_Applicability_ & _Reliability_ \\ & _Top._ & _Top._ & _Top._ & _Top._ & _Top._ & _Dom._ & _Dom._ & _Lung/Top._ & _Lung/Top._ & _Lung/Dom._ & \(\mu_{F_{1}}\pm\sigma_{F1}\) & \(\mu_{\tau}\pm\sigma_{\tau}\) \\ \hline \(\mathbf{LP}_{\text{BERT}}\) & 48.4 & 57.1 & 42.7 & 65.6 & 81.0 & 27.9 & 46.3 & 52.5/56.7 & 67.5/73.3 & \(56.3\pm 0.8\) & \(-58.4\pm 6.2\) \\ \(\mathbf{P}_{\text{BERT}}\) & 40.5 & 50.4 & 40.1 & 49.2 & 72.9 & 25.0 & 41.2 & 34.5/48.6 & 45.6/54.5 & \(45.7\pm 0.2\) & - \\ \(\mathbf{FT}_{\text{BERT}}\) & 75.5 & 68.4 & 57.5 & 74.7 & 89.3 & 31.1 & 50.7 & 62.0/63.9 & 77.7/84.4 & \(66.8\pm 0.9\) & \(-56.8\pm 12.3\) \\ \(\mathbf{P+FT}_{\text{BERT}}\) & 76.2 & 66.0 & 59.8 & 75.7 & 89.3 & 28.5 & 48.0 & 59.5/63.6 & 79.6/83.9 & \(66.4\pm 1.1\) & \(-61.7\pm 12.4\) \\ \hline \(\mathbf{FLD}_{\text{BERT}}\) & 87.9 & 76.4 & 67.3 & 78.9 & 90.4 & 61.1 & 93.6 & 67.6 & 87.0 & \(78.9\pm 0.4\) & \(-96.1\pm 6.5\) \\ \hline \(\mathbf{LP}_{\text{BERT}-3}\) & 53.0 & 70.0 & 55.1 & 67.9 & 88.6 & 23.4 & 58.0 & 55.4/59.7 & 78.7/83.6 & \(63.0\pm 0.5\) & \(-64.3\pm 4.3\) \\ \(\mathbf{P}_{\text{BERT}-3}\) & 54.2 & 58.6 & 40.3 & 57.2 & 61.9 & 26.5 & 54.6 & 51.1/51.2 & 49.5/52.0 & \(50.6\pm 1.0\) & \(-57.5\pm 8.1\) \\ \(\mathbf{FT}_{\text{BERT}-4}\) & 78.4 & 75.4 & 64.0 & 77.3 & 93.4 & 29.6 & 55.6 & **69.8**/69.3 & 91.3/90.9 & \(72.3\pm 1.1\) & \(-72.6\pm 13.4\) \\ \(\mathbf{P+FT}_{\text{BERT}-5}\) & **78.5** & **79.1** & **74.6** & **78.6** & **94.2** & **33.0** & **60.2** & 69.7/69.9 & **11.8**/ **91.4** & \(74.6\pm 0.9\) & \(-78.4\pm 8.4\) \\ \(\mathbf{FT}_{\text{ID}-\text{BERT}-5}\) & 89.0 & 78.4 & 75.2 & 80.6 & 93.9 & 63.3 & 95.4 & 72.2 & 92.1 & \(82.2\pm 0.4\) & \(-97.7\pm 6.5\) \\ \hline \(\mathbf{LP}_{\text{RoBERTa}}\) & 51.8 & 55.3 & 41.6 & 62.5 & 85.7 & 28.7 & 39.2 & 55.1/57.5 & 82.8/82.5 & \(58.4\pm 0.6\) & \(-56.3\pm 6.2\) \\ \(\mathbf{P}_{\text{RoBERTa}}\) & 48.3 & 55.3 & 42.9 & 51.8 & 80.5 & 24.0 & 40.9 & 42.4/48.7 & 67.2/73.4 & \(52.3\pm 0.0\) & - \\ \(\mathbf{FT}_{\text{RoBERTa}}\) & 70.9 & 73.0 & 56.9 & 77.5 & 92.2 & 30.0 & 51.3 & 62.6/66.8 & 89.6/90.1 & \(69.1\pm 2.5\) & \(-69.7\pm 10.4\) \\ \(\mathbf{FT}_{\text{RoBERTa}}\) & 77.6 & 74.3 & 66.0 & 77.9 & 92.0 & 29.1 & 52.4 & 67.4/67.5 & 82.7/90.0 & \(71.3\pm 0.5\) & \(-75.5\pm 8.1\) \\ \(\mathbf{FT}_{\text{RoBERTa}}\) & 84.0 & 79.4 & 71.0 & 80.9 & 92.9 & 64.7 & 94.1 & 66.3 & 91.0 & \(80.5\pm 1.9\) & \(-96.6\pm 4.7\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results for BERT, DeBERTa-v3, and RoBERTa using linear probing (**LP**), prompting (**P**), fine-tuning (**FT**), and prompt-based fine-tuning (**P+FT**), as well as ID fine-tuning (**FT**-**ID**). We report average _Applicability_ (\(\mu_{F_{1}}\)), _Reliability_ (\(\mu_{\tau}\)), _Stability_ (\(\sigma_{F_{1}},\sigma_{\tau}\)). Best OOD performance within one LM are underlined, **bold** highlights best OOD performance across LMs, and \(\dagger\) indicates better OOD than ID performance.
Figure 2: Analysis of generalization gaps between ID and OOD (FT) and OOD prompt-based fine-tuning (P+FT) across eleven tasks (§ 3) on three models regarding _Applicability_ (\(F_{1}\)), _Reliability_ (\(\tau\)), and _Stability_ - deviation of \(F_{1}\) and \(\tau\).
and P+FT) and FT-ID. Further, a small performance gap between LP and P is a good indicator for identifying such cases. These difficulties correlate with their previously measured high KL divergences (Table 1). Nevertheless, P+FT with DeBERTa-v3 provides improvements of \(3.4\) (_stance_) and \(4.6\) (_entail_) compared to FT. Overall, we observe higher gains of P+FT and bigger differences between ID and OOD for topic and domain than language shifts.
**Model Specialities.** Previously, we found that P+FT is highly effective with DeBERTa-v3 and RoBERTa but not with BERT. To gain more insights into the influence of model specialties like pre-training objectives, we evaluate additional LMs on the English-only tasks involving topic and domain shift. From Figure 3, we note that mixed pre-training (token and sentence objectives) makes LMs less suited for P+FT, as ALBERT exhibits no clear gains (\(0.3\)). Next, considering DeBERTa and ELECTRA does not provide grounds to confirm that the ELECTRA-style objective of DeBERTa-v3 contributes to its superiority. Instead, since DeBERTa outperforms RoBERTa, we see the disentangled attention as a crucial factor in the success of DeBERTa-v3. By comparing with DeBERTa, we assume the significantly more extensive vocabulary of 120k tokens is another reason for the success of DeBERTa-v3. Overall, DeBERTa-v3 is the best choice for P+FT due to its token-level-only pre-training objectives and its various other specialties.
**Parameter-Efficient Tuning.** Next, we turn to parameter-efficient tuning - **LoRA** (Hu et al., 2022), **P-Tuning** (Liu et al., 2021), and **Prompt-Tuning** (Lester et al., 2021) - to evaluate the importance of full-model tuning. Following Table 3, we see that LoRA with \(r=4\) performs on par with full-parameter tuning and outperforms P-Tuning and Prompt-Tuning on most tasks. These results indicate that OOD fine-tuning still leaves a large part of the LMs' potential unexploited.
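As a concrete illustration, the snippet below sketches how a rank-4 LoRA adapter could be attached to a sequence-classification model with the HuggingFace `peft` library. Only the rank \(r=4\) is taken from the text; the base checkpoint, target modules, and remaining hyperparameters are placeholder assumptions rather than the authors' actual configuration.

```python
# Hypothetical sketch of LoRA-based parameter-efficient tuning; only r=4 comes from the text,
# everything else (checkpoint, target modules, alpha, dropout, label count) is assumed.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base",  # assumed base model
    num_labels=4,                 # depends on the task at hand
)
lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_CLS,                   # keeps the classification head trainable
    r=4,                                          # low-rank dimension used in the comparison
    lora_alpha=16,                                # assumed scaling factor
    lora_dropout=0.1,                             # assumed dropout
    target_modules=["query_proj", "value_proj"],  # assumed attention projections for DeBERTa-v2/v3
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only a small fraction of all weights is updated
```

The wrapped model can then be trained with the same fine-tuning loop as FT, while gradients only update the adapter weights and the classification head.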
**In-Context Learning.** Concluding, we evaluate in Figure 4 the English-only tasks using in-context learning (**ICL**) with ChatGPT.3 On average, ICL does not reach the performance level of the gradient-based approaches (FT and P+FT). Regarding the different types of shifts, ICL underperforms them on topic shifts while performing on par or better for tasks exhibiting a domain shift. From the results of these latter tasks, we distinguish two situations in which ICL currently outperforms gradient-based learning. First, when a task is presumably already well covered within the enormous pre-training of ChatGPT, such as sentiment analysis (_review_). Second, ICL succeeds when the nature of the distribution shift prevents gradient-based paradigms from learning adequately - as for _stance_ or _entail_, where the shifts induce heavy differences in class distributions (high **KL** divergence). This is also visible in the relatively small gains of prompt-based fine-tuning (P+FT) over the prompting-only paradigm (P) - from Table 2, \(+6.5\) for _stance_ and \(+5.6\) for _entail_ with DeBERTa-v3, compared to \(+34.3\) for _arg-cls_.
Footnote 3: Please find details in the Appendix (§ A.5)
tions. However, prompt-based fine-tuning (P+FT) relies less on surface correlations and shows similar patterns as FT-ID. These results provide further evidence that P+FT leads to better generalization capabilities. In addition, ICL using _ChatGPT_ shows fewer, but not none, deviations from the dataset average. Therefore, we assume a general bias of LMs regarding such surface features.
**Model Insights.** While we previously considered LMs as black boxes, we next focus on the internals of LMs tuned using FT and P+FT. From Table 4, we see that P+FT predicts with a similar certainty as ID fine-tuning (FT-ID) and a higher one than FT. Thus, P+FT appears less confused by the distribution shift than FT. Further, we note a more evident correlation between certainty and surface features (Flesch score or word count) for FT than for P+FT. For example, FT tends to be less confident for inputs with a higher length or a lower Flesch score. This observation aligns with the previous analysis regarding the dependence on surface features.
Next, we analyze token-level attributions of LMs. Following Kobayashi et al. (2020), we calculate the attribution between input and classification tokens - _CLS_ for FT and FT-ID, and _MASK_ for P+FT. Note that we consider only the attribution of the input text and make the embeddings of _CLS_ and _MASK_ comparable using the Euclidean norm. We see that, on average, FT-ID and FT have a higher attribution than P+FT. Therefore, attributions are more evenly distributed for P+FT. We can observe this pattern already when comparing the token attributions of the pre-trained model (_raw_ vs. _P+raw_). Apparent differences are also visible when we compare how the attributions of correctly and wrongly predicted instances differ. While P+FT shows differences of at most 0.4, this rises to 1.0 for FT-ID. With these results, we assume that LMs tuned with prompt-based and vanilla fine-tuning fundamentally differ in how they process inputs. An analysis of the layer-wise embeddings of the _CLS_ and _MASK_ tokens further supports this assumption. As shown in Figure 6, we observe that by relying on the _MASK_ token as classification proxy, P+FT does not touch semantic information (about topics) until the last layers. In contrast, FT eliminates this information across all layers during training.
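To make this type of analysis more tangible, the sketch below computes a crude token-attribution proxy for a fine-tuned classifier: the attention mass flowing from the classification proxy token (here the _CLS_ position) to each input token in the final layer. This is a deliberately simplified stand-in, not the norm-based attribution of Kobayashi et al. (2020) used above, and the checkpoint and example sentence are arbitrary placeholders.

```python
# Simplified attention-weight proxy for token attribution (not the norm-based method of
# Kobayashi et al., 2020). Model checkpoint and input sentence are arbitrary placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-uncased"  # for a P+FT model one would locate the MASK token position instead
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tok("Nuclear energy should be phased out.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)

last_layer = out.attentions[-1][0]               # shape: (num_heads, seq_len, seq_len)
cls_to_tokens = last_layer[:, 0, :].mean(dim=0)  # attention from position 0 ([CLS]) to all tokens
for token, score in zip(tok.convert_ids_to_tokens(inputs["input_ids"][0]), cls_to_tokens):
    print(f"{token:>12s}  {score.item():.3f}")
```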
## 5 Related Work
**Out-of-Distribution Generalization.** Mitigating generalization gaps between OOD and ID is a broadly studied aspect in _Computer Vision_ (Zhou et al., 2023), covering data augmentation (Liu et al., 2019; Yao et al., 2022), representation learning (Khosla et al., 2012; Laskin et al., 2020; Zhang et al., 2022), optimizing model structure (Wortsman et al., 2022; Balaji et al., 2018), or specialized benchmark datasets (Koh et al., 2021; Yang et al., 2022). Simultaneously, generalization in OOD scenarios is less studied in NLP. Works primarily focus on the robustness of PLMs (Hendrycks et al., 2019; Jin et al., 2020; Zhou et al., 2020; Wang et al., 2021) or OOD detection (Zhou et al., 2021; Koner et al., 2021; Cho et al., 2023). Similar to _Computer Vision_ (Tseng et al., 2020), works in NLP consider single types of distribution shift, like either domains (Blitzer et al., 2007; Yang et al., 2022; Yuan et al., 2023), languages (K et al., 2020; Conneau et al., 2020), or topics (Stab et al., 2018; Allaway and McKeown, 2020). Such approaches
\begin{table}
\begin{tabular}{l|c c c|c c} \hline \hline & **FT-ID** & **FT** & **P+FT** & **raw** & **P+raw** \\ \hline Average Certainty & 97.6 & 95.9 & 97.8 & - & - \\ Certainty \(\times\) Flesch & 5.1 & 8.6 & 4.1 & - & - \\ Certainty \(\times\) Word Count & -10.3 & -13.2 & -6.3 & - & - \\ \hline Average Attribution & 16.2 & 15.5 & 13.0 & 16.3 & 13.2 \\ Correct Attribution & 16.4 & 15.8 & 13.1 & - & - \\ Wrong Attribution & 15.2 & 14.9 & 12.7 & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 4: Analysis of model certainty and attribution for FT-ID, FT, and P+FT with DeBERTa-v3. With _raw_ and _P+raw_, we analyze how the model attributes input tokens after pre-training.
Figure 5: Comparison of the correlation of correct and wrong predictions with word count and Flesch score across different learning paradigms.
Figure 6: Overview of the t-SNE (van der Maaten and Hinton, 2008) reduced embeddings of the classification proxy tokens for every second layer. We use the _CLS_ token for FT and _MASK_ for P+FT.
(Liang et al., 2022; Xu et al., 2018; Peng et al., 2018; Rietzler et al., 2020) often rely on the specific distribution shift (like the topic shift) to optimize the training procedure or the model architecture. Therefore, they are not universally applicable to OOD tasks covering other distribution shifts - like generalization towards upcoming topics and unseen languages.
**Prompting.** A common practice in NLP is to tune pre-trained language models by providing a natural language input and optimizing them to predict an arbitrary label (Devlin et al., 2019; Liu et al., 2019; He et al., 2021; Radford et al., 2019). Thus, we use pre-training abilities for processing the input but not for the final predictions. Prompting (Liu et al., 2021) allows us to rely on the competencies acquired during pre-training for both stages. To do so, we reformulate a task as a cloze and solve it with a textual label. This allows the use of language models without any further tuning by embedding inputs in task-specific templates for zero- or few-shot settings (Brown et al., 2020; Chowdhery et al., 2022; Wang et al., 2022; Biderman et al., 2023). Further, prompting allowed base-sized language models to reach performance comparable to their large-sized counterparts (Schick and Schutze, 2021, 2021) when fine-tuning them with the same limited data as in few-shot settings. Apart from such few-shot scenarios, little work has examined prompt-based fine-tuning on complete datasets. As an exception, Raman et al. (2023) show its robustness against adversarial attacks.
Unlike previous work, we comprehensively address OOD generalization with a heterogeneous set of eleven tasks covering topic, domain, and language shifts. At the same time, we point out the importance of fine-grained performance measurement to identify crucial generalization flaws.
## 6 Discussion
In the following, we discuss the reported results and the insights of the in-depth analysis. The exhaustive evaluations across the proposed OOD tasks demonstrate the superior performance and robustness of prompt-based learning methods (P+FT and ICL) compared to traditional fine-tuning (FT).4 We see that relying on the pre-trained MLM head and avoiding freshly initialized layers allows for a more focused learning process and significantly contributes to the success of P+FT. This is also evident from the fact that P+FT preserves more semantic information in the encoder layers during tuning. However, even such approaches still leave latent potential of LMs untapped, as LoRA-based tuning using only a fraction of all parameters achieves almost as good results as training all parameters of the LM. Therefore, we still see their inability to avoid biases induced by structural obstacles within the training data as one crucial drawback when applying them in OOD scenarios. This is particularly visible in the small performance gains when comparing linear probing and fine-tuning for tasks embodying heavy discrepancies between training and testing labels. For such scenarios, including tasks with domain shifts, ICL using a large LM outperforms P+FT, while underperforming on average.
Footnote 4: These findings extend to ID scenarios — see Appendix § B.1.
## 7 Conclusion
With this work, we comprehensively address OOD generalization from multiple perspectives. We emphasize the necessity of utilizing fine-grained performance metrics, as they expose key weaknesses in generalization. We lay a foundation for comprehensively evaluating OOD generalization by proposing eleven OOD classification tasks covering topic, domain, and language shifts. Finally, we show that more recent methods, like prompt-based fine-tuning, narrow the generalization gap between ID and OOD evaluation and provide more robust and confident predictions.
**Outlook.** With our findings and the increasing prevalence of large LMs, it is indispensable to work towards assessing the challenges induced by deliberate distribution shifts in the context of specific LMs. For instance, despite the strong out-of-the-box performance of ICL using ChatGPT, we anticipate that certain aspects (like semantic or task-specific information) of the test instances in a simulated OOD scenario have been part of the pre-training of these large LMs. Quantifying such aspects enables us to compare LMs and draw more profound and practical conclusions about the suitability of LMs for handling truly unseen data in future applications.
## Acknowledgements
We thank Cecilia Liu, Thy Thy Tran, and Kexin Wang for their valuable feedback. This work has been funded by the Hasler Foundation Grant No. 21024.
### Ethical Considerations and Limitations
### Higher Input Length
By embedding the input into a prompt, we sacrifice potential input tokens. Since the used tasks have relatively short inputs, this is not crucial for this work. However, this can be an essential limitation for other tasks when inputs get longer.
### Efficiency
When discussing efficient methods in this work, we always refer to efficient fine-tuning. Therefore, we did not consider efficient methods for making inference with larger LMs more feasible. We see this as another crucial and essential aspect of real-world applications. Simultaneously, we expect the focus to alternate between performance-oriented and efficiency-oriented considerations in the future. Therefore, we leave this for future work.
### Large Language Models
We show the competitive performance of ChatGPT compared to gradient-based approaches while relying on only four demonstrations and without any tuning. Simultaneously, we need to assume that the pre-training corpus of ChatGPT leaks crucial aspects - e.g., it broadly covers controversially discussed topics like _Nuclear Energy_ or includes instances of popular datasets (like _RTE_ (Wang et al., 2018) or _SemEval2016_ (Mohammad et al., 2016)) word-by-word. Keeping in mind that we use OOD evaluation to verify the generalization capabilities required for upcoming scenarios, we need to examine the performance of ChatGPT carefully and ask whether it was able to learn the task or just remembered some semantic aspects from pre-training.
|
2309.10559 | Instrument for the assessment of road user automated vehicle acceptance:
A pyramid of user needs of automated vehicles | This study proposed a new methodological approach for the assessment of
automated vehicle acceptance (AVA) from the perspective of road users inside
and outside of AVs pre- and post- AV experience. Users can be drivers and
passengers, but also external road users, such as pedestrians,
(motor-)cyclists, and other car drivers, interacting with AVs. A pyramid was
developed, which provides a hierarchical representation of user needs.
Fundamental user needs are organized at the bottom of the pyramid, while
higher-level user needs are at the top of the pyramid. The pyramid
distinguishes between six levels of needs, which are safety, trust, efficiency,
comfort and pleasure, social influence, and well-being. Some user needs
universally exist across users, while some are user-specific needs. These needs
are translated into operationalizable indicators representing items of a
questionnaire for the assessment of AVA of users inside and outside AVs. The
formulation of the questionnaire items was derived from established technology
acceptance models. As the instrument was based on the same model for all road
users, the comparison of AVA between different road users is now possible. We
recommend future research to validate this questionnaire, administering it in
studies to contribute to the development of a short, efficient, and
standardized metric for the assessment of AVA. | Sina Nordhoff, Marjan Hagenzieker, Esko Lehtonen, Michael Oehl, Marc Wilbrink, Ibrahim Ozturk, David Maggi, Natacha Métayer, Gaëtan Merlhiot, Natasha Merat | 2023-09-19T12:11:48Z | http://arxiv.org/abs/2309.10559v1 | # Instrument for the assessment of road user automated
###### Abstract
This study proposed a new methodological approach for the assessment of automated vehicle acceptance (AVA) from the perspective of road users inside and outside of AVs pre- and post- AV experience. Users can be drivers and passengers, but also external road users, such as pedestrians, (motor-)cyclists, and other car drivers, interacting with AVs. A pyramid was developed, which provides a hierarchical representation of user needs. Fundamental user needs are organized at the bottom of the pyramid, while higher-level user needs are at the top of the pyramid. The pyramid distinguishes between six levels of needs, which are safety trust, efficiency, comfort and pleasure, social influence, and well-being. Some user needs universally exist across users, while some are user-specific needs. These needs are translated into operationalizable indicators representing items of a questionnaire for the assessment of AVA of users inside and outside AVs. The formulation of the questionnaire items was derived from established technology acceptance models. As the instrument was based on the same model for all road users, the comparison of AVA between different road users is now possible. We recommend future research to validate this questionnaire, administering it in studies to contribute to the development of a short, efficient, and standardized metric for the assessment of AVA.
**Keywords:** Automated vehicles (AVs); automated vehicle acceptance (AVA); multi-user phenomenon; standardized questionnaire; pyramid
Introduction
The field of automated vehicle acceptance (AVA) has gained enormous interest in the past few years. Establishing user acceptance of automated vehicles (AVs) is of utmost importance because if AVs are not accepted, the safety, efficiency, and equity benefits of road automation will not be realised, and the large investments in this technology will not materialize (Nordhoff, Van Arem, & Happee, 2016; Van Der Laan, Heino, & De Waard, 1997).
AVA is a multi-user phenomenon. It covers drivers of automated passenger vehicles, truck drivers, passengers, safety drivers, or external road users, such as pedestrians, (motor-)cyclists, and other car drivers, communicating and interacting with AVs on public roads (Kaye, Li, Oviedo-Trespalacios, & Pooyan Afghari, 2022; Merat & Lee, 2012). Vulnerable road users, such as pedestrians and (motor-)cyclists, have been disproportionally involved in fatal accidents (WHO, 2022). For this reason, it is important to consider the (safety) needs and preferences of not only users inside but also users outside AVs. In line with Maslow and Lewis (1987), the present paper argues that user groups share some fundamental and basic needs towards AVs, such as the need for safety, efficiency, and comfort. However, each user group also has unique needs. For example, passengers in AVs may be prone to motion sickness, while this aspect may be less relevant for other external user groups.
The field of AVA has flourished from the application of technology acceptance models in recent years. However, several limitations can be identified, which provide important imperatives for this work.
First, previous studies have mainly investigated AVA of AV users in isolation, with a main focus on drivers of automated passenger vehicles or passengers of automated shuttles (Kaye, Lewis, Forward, & Delhomme, 2020; Madigan, Louw, Wilbrink, Schieben, & Merat, 2017). Recently, studies have started to investigate the acceptance of passengers of automated passenger vehicles (Pascale et al., 2021). In addition, other works on external road users integrate some dimensions of UTAUT1/2 into their studies (Deb et al., 2017; Koustanai et al., 2022). Deb et al. (2017) have shown that safety and interactions are key factors influencing the willingness of pedestrians to cross in front of an AV. Koustanai et al. (2022) has shown that trust had a direct effect on the behavioral intention to share the road with an AV, while perceived behavioral control, reliability, perceived safety, attitudes and experience had indirect effects on behavioral intention.
Second, a common way to investigate technology acceptance has been the use of technology acceptance models, such as the Unified Theory of Acceptance and Use of Technology (UTAUT1/2) (Venkatesh, Morris, Davis, & Davis, 2003; Venkatesh, Thong, & Xu, 2012), which is a synthesis of eight influential
technology acceptance models, including the Technology Acceptance Model (TAM) (Davis, 1985). These models posit that the behavioral intention to use technology is directly influenced by cognitive domain-specific and emotional-affective factors. Cognitive domain-specific factors include the perceived usefulness (i.e., performance expectancy), ease of use (i.e., effort expectancy), and conditions supporting the use of AVs (i.e., facilitating conditions). Emotional affective components include the perceived enjoyment of AVs (i.e., hedonic motivation), and the support of the use of AVs in the individual's social networks (Venkatesh et al., 2012). These models were specifically developed for the investigation of technology acceptance in general. When they are administered in studies investigating technology acceptance, the factors of these models are translated into measurable or operationalizable questionnaire items distributed to and rated by respondents. The wording of the items has to be adjusted to the context of road vehicle automation every time researchers aim to implement the models in their studies. The items have not been translated into measurable items for multiple road users being pivotal for AVA. Validity issues may be the result, i.e., to what extent can researchers warrant that they measure what they intend to measure when the translation to the specific research context deviates to a large extent from the original meaning? To compare AVA between different road users, we need instruments based on the same model for all the road users.
Third, another limitation of common technology acceptance models is that they do not theorize relationships between perceived safety and trust and behavioral intention, respectively. Perceived safety and trust are pivotal for AVA for all road users as they have to put their lives into the hands of a robot. If they don't feel safe and trust the AV, they may be less likely to accept and interact with them.
Fourth, no standardized instrument for measuring AVA before and after the experience with AVs exists. AVA can be measured before and after experience with AVs. Schade and Schlag (2003) defined acceptance as the _"respondents' attitudes, including their behavioral responses, after the introduction of a measure, and acceptability as the prospective judgement before such future introduction"_. According to this definition, the term 'acceptance' is applied when respondents had actual experience with AVs, whereas acceptability is assessed prior to experience with automated vehicles. Typically, researchers investigating AVA applied the term 'acceptance' for research studies surveying respondents with and without physical experience with AVs. Another definition of acceptance is proposed by Adell (2010) who defined acceptance as the _"degree to which an individual intends to use a system and, when available, incorporates his system in his / her driving"_ (p. 477).
The main objective of the present paper is to develop a standardized instrument for the assessment of AVA of road users pre- and post- AV experience. The instrument consists of a standard part that can be implemented in studies across user groups. It also consists of a variable part accounting for the unique needs of each user group. The user needs were derived from the literature, and organized in a pyramid, which serves as hierarchical representation of these user needs.
## 2 Literature review
### Safety
One of the most proclaimed AV benefits pertains to safety: AVs are expected to improve traffic safety (Pyrialakou, Gkartzonikas, Gatlin, & Gkritza, 2020). Safety has an objective and subjective dimension (Nilsen et al., 2004). The objective dimension of AV safety has been commonly investigated in simulation studies by the number of AV crashes in relation to mileage and safety-critical AV behavior (Kalra & Paddock, 2016). The safety of AVs is particularly important for truck drivers and fleet owners who are ultimately responsible for third-party goods (Othman, 2021). The subjective dimension captures the individual's subjective feelings (Nordhoff, Stapel, He, Gentner, & Happee, 2021; Xu et al., 2018). Recently, the attention of scientific scholars has shifted from the objective dimension to the consideration of perceived safety for AVA. Studies mainly investigated the perceived safety of users inside rather than users outside AVs (Pammer, Gauld, McKerral, & Reeves, 2021; Parkin et al., 2022; Pyrialakou et al., 2020; Vlakveld, van der Kint, & Hagenzieker, 2020). In our previous study with users of partially automated cars (Nordhoff et al., 2021), perceived safety influenced automation use indirectly through trust. In other studies, perceived safety did have a direct impact on the intention to use AVs (Montoro et al., 2019; Xu et al., 2018). Whether external road users are and feel safe around AVs will depend to a large extent on how they communicate and interact with AVs. External road users can communicate via internal and external communication means with AVs. Pedestrians reported to rely on vehicle kinematics, such as vehicle speed or gap distance, to inform their decision to cross the road in front of AVs (Wang et al., 2021). To address the lack of hand gestures and eye contact by human drivers in driverless vehicles, external Human Machine Interfaces (eHMIs) as external communication displays located on the outside of AVs indicating vehicle intent have been proposed. Studies currently count to around 70 eHMI concepts, which were mainly designed from the perspective of pedestrians (Berge, Hagenzieker, Farah, & de Winter, 2022; Dey et al., 2020). It is unclear whether eHMIs serve as 'gimmicks' or 'necessity' for enabling safe interactions between AVs and external road users (de Winter & Dodou, 2022). External road users, such as pedestrians and cyclists, may rely more on implicit communication forms, such as vehicle kinematics, rather than eye contact or body gestures in their interactions with human drivers (de Winter & Dodou, 2022; Fridman et al., 2017). To inform their crossing decisions in front of an AV, (motor-)cyclists preferred to receive
instructions from the AV (e.g., go ahead) to the status of the AV (Pammer et al., 2021). Currently, driverless automated shuttle services (e.g., Waymo, Cruise) operate on public roads in e.g., San Francisco, without any external communication interfaces. It is plausible that external communication interfaces are not needed to enable safe and acceptable interactions with external road users. A study with cyclists / motorcyclists conducted by Pammer et al. (2021) revealed that respondents expected 'fewer crashes' and 'reduced severity of crashes' to be a perceived benefit of AVs. In this study, cycling near an AV was considered the least unsafe scenario, followed by walking and driving near an AV. Xing, Zhou, Han, Zhang, and Lu (2022) observed that vulnerable road users had more positive perceptions of AV safety in 2019 than in 2017 (an increase by around 10% to 30%). In the study of Berge et al. (2022), respondents mentioned the potential of on-bike eHMIs to increase the safety of cyclists. We posit here that (perceived) safety is a fundamental human need at the bottom of the pyramid as shown by Figure 1, which can be translated into measurable indicators for safety for all road users ("arrive more safely", "feel safer") (see Table 1).
### Trust
Trust in technology has been considered a fundamental factor impacting how humans interact with technology (Lee & See, 2004). Previous studies supported the role of trust as positive predictor of the behavioral intention to use AVs (Benleulmi & Ramdani, 2022; Du, Zhu, & Zheng, 2021; Foroughi et al., 2023; Kaur & Rampersad, 2018; Kenesei et al., 2022; Kettles & Van Belle, 2019; Meyer-Waarden & Cloarec, 2022; Panagiotopoulos & Dimitrakopoulos, 2018; Waung, McAuslan, & Lakshmanan, 2021; Xu et al., 2018; Zhang et al., 2020). Reliability of automation is a key factor impacting trust, with an increase in reliability contributing to an increase in trust (Carsten & Martens, 2019). Drivers failing to monitor automation (i.e., complacency) has been associated with overtrust in automation (Banks, Eriksson, O'Donoghue, & Stanton, 2018; Nordhoff et al., 2023; Wilson, Yang, Roady, Kuo, & Lenne, 2020). Failing to monitor automation is not only a concern for users inside AVs: Research indicates that pedestrians intentionally stepped in front of an AV to test its capabilities and limitations (Madigan et al., 2019), reported an intention to bully AVs (Liu, Du, Wang, & Da Young, 2020), or showed other types of aggressive behaviors towards AVs (Haue, Merlhiot, Koustanar, Barre, & Moneger), such as choosing shorter gap distances in comparison to conventional vehicles (Dommes et al., 2021). Scholars observed that cyclists / motorcyclists had higher trust in human drivers than general trust in AVs, but reported a higher trust in AVs rather than human drivers to have their own personal safety as a priority (Pammer et al., 2021). In the study of Hagenzieker et al. (2020), cyclists indicated to have more confidence in human-driven than automated cars. They were more confident of being noticed by the AV rather than traditional car when they had priority over the car, while they were more confident of being noticed by the traditional car when they did not have
priority over the car. The need of being noticed or detected by an AV may not only be relevant for external road users: Passengers of automated shuttle services may want to be noticed by someone in an remote control room. In the study of Vlakveld et al. (2020), cyclists were more inclined to slow down in conflict situations at intersections with an AV rather than a traditional car approaching. In line with Parkin et al. (2022), we posit that trust represents a fundamental basic human need, which can be hierarchically organized at the bottom of the pyramid as shown by Figure 1. The need for trust can be translated into operationalizable indicators for all road users to be administered in questionnaires for the assessment of AVA ("I can trust the AV", "more attentive driver", "become complacent", "AV is reliable", "feel comfortable trusting life to beloved others", "fear loss of control", "being detected by AV") (see Table 1).
### Efficiency
Studies have revealed that efficiency, such as performance expectancy (or the perceived usefulness), and facilitating conditions (or the support of facilitating conditions supporting the use of AVs), influenced the behavioral intention to use AVs (Lehtonen et al., 2022; Nordhoff et al., 2020). The effect of perceived ease of use (i.e., effort expectancy) on the intention to use automated cars was ambiguous, with some studies reporting positive (Chen, Li, Gan, Fu, & Yuan, 2020; Madigan et al., 2016), or no effects (Benleulmi & Ramdani, 2022; Kettles & Van Belle, 2019; Madigan et al., 2017; Nordhoff et al., 2020). Reductions in travel time, travel costs, and fuel or energy consumption are other key aspects of efficiency, and key expected benefits of travelling with an AV (Merat & Lee, 2012; Szimba & Hartmann, 2020). These aspects may be particularly important for truck drivers, especially fleet owners, who perceive the AV as an opportunity to create a mobile workplace, promoting productivity by performing work-related tasks (Frohlich et al., 2018). The acceptance of this new workplace by professional truck drivers is still unclear. Studies have shown that a large proportion of truck drivers is unaware of their built-in AV technology (Richardson, Doubek, Kuhn, & Stumpf, 2017). Another aspect of efficiency pertains to travel cost savings (e.g., fuel consumption and insurance costs), which mainly arise with a higher penetration rates of AVs (Xie & Liu, 2022). (Motor-)cyclists rated'shorter travel times' unlikely to be a perceived benefit of AVs, and were undecided about the decrease in traffic congestion as a result of AVs (Pammer et al., 2021). As shown by Figure 1, we posit that efficiency follows safety and trust as basic human need. In other words, we propose the hypothesis that once manufacturers satisfy the need for safety and trust, users will strive for the satisfaction of the efficiency of AVs following the reasoning of Maslow and Lewis (1987). As fundamental need, efficiency can be translated into operationalizable indicators ("better driver", "more useful", "make travelling easier", "reach destination faster", "reduce travel time in congestion", "reduce travel costs", "help with parking / on (congested) motorways / in urban traffic", "better for the environment") (see Table 1).
### Comfort & pleasure
Comfort is another key factor impacting AVA (Peng et al., 2023). Peng et al. (2023) proposed in their conceptual framework that comfort is directly influenced by trust, and perceived safety. Motion comfort is a key aspect of comfort. Insufficient levels of motion comfort can lead to motion sickness (de Winkel, Irmak, Happee, & Shyrokau, 2023; Irmak, de Winkel, Pool, Bulthoff, & Happee, 2021), decrease in cognitive task performance, an increase in subjective workload, or discomfort. Ease of use, physical comfort, and engagement in secondary tasks were suggested as additional factors impacting comfort during automated driving (Peng et al., 2023). Other studies have revealed that respondents expressed an interest to use AVs while being impaired from alcohol, drug, medication use or tiredness (Cunningham, Regan, Horberry, Weeratunga, & Dixit, 2019; Lehtonen et al., 2022; Payre, Cestac, & Delhomme, 2014). This reflects our recent study in which drivers of partially automated cars reported to travel tired, impaired and in inclement weather conditions (Nordhoff et al., 2023). In the study of Lehtonen et al. (2022) travelling in darkness was a positive predictor of travelling more with an AV. Therefore, we postulate that 'travelling tired or impaired', 'travelling in inclement weather and visibility conditions' represents a need or preference of AV drivers. Similarly, it could be posited that 'travelling tired or impaired', and 'travelling in inclement weather and visibility conditions' also represents a need of external road users. External road users may be more prone to travelling tired or impaired or in inclement weather and visibility conditions with AVs on public roads given the programmed cautiousness of AVs (see Nordhoff et al., 2023). An online survey with truck drivers revealed that the expected driving pleasure was a primary motive for choosing the profession, with some truck drivers fearing the loss of driving pleasure due to automating the driving task (Richardson et al., 2017). We posit that the need for comfort represents a need that exists universally across user groups. It hierarchically follows the need for efficiency, which implies that the need for comfort will appear once the need for efficiency has been satisfied. The need for comfort and pleasure can be translated into measurable indicators for all road users ("arrive more comfortably", "more enjoyable", "driving tired or impaired", "using AV in adverse weather conditions", "use travel time for leisure activities", "use travel time for non-leisure activities", "reduce motion sickness"). The indicator "reduce motion sickness" is only applicable for AV passengers (see Table 1).
### Social influence
The role of social influence for technology adoption has been acknowledged by technology acceptance models (Ajzen, 1991; Venkatesh et al., 2012). Studies investigating AVA have shown that social influence did impact the behavioral intention to use AVs (Chen et al., 2020; Nordhoff et al., 2020; Zhang et al., 2019). We posit that the need for social appreciation from user's important social networks hierarchically follows
the need for efficiency. It can be translated into operationalizable indicators for all road users ("People who are important to me would think that I should use an AV", "I would drive an expensive AV, because I can") (see Table 1).
### Well-being
The topic of mental health has entered the transportation arena, with the World Health Organization (WHO) acknowledging the role of mobility for the prevention and treatment of mental disorders (M. Conceicao et al., 2022; WHO, 2019). Mental health is defined as a state of well-being, enabling the realization of one's own abilities, coping with the normal stresses of life, working productively and fruitfully, and contributing to the community (WHO, 2004). Mental health has been commonly measured by affective states (emotions and mood), well-being and satisfaction with life or travel, stress and mental health disorders (M. A. Conceicao et al., 2023). Mobility also has an impact on other dimensions of mental well-being, such as social inclusion, stress, workload, driving anxiety, or even mental disorders such as depression (M. Conceicao et al., 2022). AVs can have a positive impact on the mental and physical well-being of drivers: Automated passenger vehicles can directly reduce mental workload, stress, and aggressive driving, making driving more relaxing and increasing drivers' situational awareness as drivers are no longer required to perform most of the tactical and operational parts of driving (Nordhoff et al., 2023). Conversely, AVs can have a negative impact on its passengers: AV drivers disengaged the automation as a result of passengers' discomfort and lack of trust in the system due to the automation's erratic, harsh, and unpredictable behavior (Nordhoff & De Winter, under review). This study also revealed that other road users interacting with the partially automated vehicles (i.e., Tesla Autopilot, FSD Beta) were confused and angry at the behavior of the automation. Scientific studies provide scientific evidence for the mental health (e.g., loneliness, depression, and anxiety), and physical health issues (e.g., back disorders, heart disease, obesity) of professional drivers due to the demanding and irregular work schedules, and the difficulty to maintain a healthy lifestyle (Dahl et al., 2009; Ji-Hyland & Allen, 2022; Sousa & Ramos, 2018). We posit that well-being hierarchically follows the need for social influence, and arises when the lower-level needs are satisfied. It can be translated into operationalizable indicators for all road users ("better awareness of surroundings", "make driving less stressful"; "make driving more relaxing", "arrive less tired", "reduce aggression on the road") (see Table 1).
## 3 AVA pyramid
The present study organizes the AVA road user needs and preferences hierarchically in the form of a pyramid as shown in Figure 1. The AVA pyramid displays user needs and preferences ordered from basic, fundamental needs at the bottom to higher-level user needs and preferences at the top of the pyramid. The
pyramid assumes that higher-level needs (i.e., needs at higher levels of the pyramid) arise with the satisfaction of the needs at lower levels of the pyramid.
These needs and preferences are translated into operationalizable indicators as shown by Table 1, which shows the applicability of these indicators per road users.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Need**} & \multirow{2}{*}{**Indicator**} & \multicolumn{5}{|c|}{**Road users**} \\ \cline{3-8} & & **Drivers** & **Passengers** & **Truck drivers** & **Other drivers** & **Pedestrians** & **(Motor-) cyclists** \\ \hline \multirow{2}{*}{**Safety**} & Arrive safer & X & X & X & X & X & X \\ \cline{2-8} & Feel safer & X & X & X & X & X & X \\ \hline \multirow{5}{*}{**Trust**} & Trust the AV & X & X & X & X & X & X \\ \cline{2-8} & More attentive & X & X & X & X & X & X \\ \cline{2-8} & Become complacent & X & X & X & X & X & X \\ \cline{2-8} & AV is reliable & X & X & X & X & X & X \\ \cline{2-8} & Feel comfortable & & & & & & \\ \cline{2-8} & trusting life of loves & X & X & _NA_ & X & X & X \\ \cline{2-8} & ones to AV & & & & & & \\ \cline{2-8} & Fear loss of control & X & X & X & X & X & X \\ \cline{2-8} & Being detected by AV & _NA_ & _NA_ & X & X & X & X \\ \hline \end{tabular}
\end{table}
Table 1: Overview of general indicators and relevance per road user
Figure 1: AVA pyramid displaying road user needs and preferences ordered from basic fundamental needs at the bottom to higher-level user needs and preferences at the top of the pyramid.
\begin{tabular}{|l|l|c|c|c|c|c|c|} \hline & Better driver & X & X & X & X & X & X \\ \hline \multicolumn{2}{|l|}{More useful} & X & X & X & X & X & X \\ \hline \multicolumn{2}{|l|}{Make travelling} & X & X & X & X & X & X \\ \multicolumn{2}{|l|}{easier} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \hline \multirow{4}{*}{**Efficiency**} & Reach destination & \multirow{4}{*}{X} & X & X & X & X & X \\ & faster & & & X & X & X & X & X \\ \cline{2-8} & Cope & with & & & X & X & X & _NA_ & X \\ \cline{2-8} & congestion & & X & X & X & X & _NA_ & X \\ \cline{2-8} & Reduce travel costs & & X & X & X & X & _NA_ & X \\ \cline{2-8} & Help with parking / & & & & & & & \\ on (congested) & & X & & & X & & _NA_ & _NA_ \\ \cline{2-8} & motorways / in & & & & & & & \\ urban traffic & & & & & & & \\ \hline \multicolumn{2}{|l|}{Better for the environment} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \hline \multirow{4}{*}{**Comfort \& pleasure**} & Arrive & more & & & & & \\ comfortably & & X & & X & X & X & X & X \\ \cline{2-8} & More enjoyable & & X & X & X & X & X & X \\ \cline{2-8} & Driving tired or impaired & & X & X & X & X & X & X \\ \cline{2-8} & Using AV in & & & & & & & \\ adverse weather & & X & & X & X & X & X & X \\ \cline{2-8} & Use travel time for & & & & & & & \\ leisure activities & & X & & X & X & X & X & X \\ \cline{2-8} & Use travel time for & & & & & & & \\ non-leisure activities & & X & & X & X & X & X & X \\ \cline{2-8} & Reduce motion & & & & & & & \\ sickness & & & X & & & & & \\ \hline \multicolumn{2}{|l|}{**Social influence**} & Social influence & \multirow{4}{*}{X} & X & X & _NA_ & X & X & X \\ \cline{2-8} & Better awareness of & & & & & & & \\ surroundings & & X & & X & X & X & X & X \\ \cline{2-8} & Make driving less & & & & & & & \\ stressful & & X & & X & & X & X & X \\ \cline{2-8} & Make driving more & & & & & & & \\ relaxing & & X & & X & & & \\ \cline{2-8} & Arrive less tired & & X & & X & X & X & X \\ \cline{2-8} & Reduce aggression & & & & & & & \\ on the road & & & & & & & \\ \hline \multirow{4}{*}{**Aceptance**} & Shift from train or & & & & & & & \\ airplane to car on & & & & & & & \\ longer trips & & & & & & & \\ \cline{2-8} & Plan to use & & & & & & \\ \cline{2-8} & Intend to use & X & X & X & X & X & X \\ \cline{2-8} & Buy AV as next car & & X & & X & X & _NA_ & _NA_ \\ \cline{2-8} & Make more daily & & & & & & & \\ trips with AV & & & & & & & \\ \cline{2-8} & Make more long- & & & & & & & \\ distance trips with AV & & & & & & & & \\ \cline{2-8} & & & & & & & \\ \cline{2-8} & & & & & & & & \\ \cline{2-8} & & & & & & & \\ \cline{2-8} & & & & & & & \\ \hline \end{tabular}
## 4 The road ahead
This study proposed a new methodological approach for the assessment of AVA from the perspective of road users inside and outside AVs. A pyramid was developed, which provides a hierarchical organization of user needs. The pyramid posits that safety and trust in AVs represent basic and fundamental needs at the bottom of the pyramid. The need for safety and trust exists universally across road users inside and outside of AVs. After the need for safety and trust is fulfilled, the need for efficiency arises as the third-lowest layer of the pyramid. After the satisfaction of the need for efficiency, users may want to strive for the realization of the need for comfort and pleasure as the fourth-lowest layer of the pyramid. Social influence - the social appreciation of the use of AVs in the networks of AV users - represents the need that users strive to achieve after their fulfilment of their need for comfort and pleasure. At the top of the pyramid is the need for a user's well-being, which represents the highest-level need of AV users, and reflects how AV users feel in their interaction with AVs.
This paper translated these needs into operationalizable indicators or questionnaire items that can be administered in studies for the assessment of AVA pre- and post- AV experience. The questionnaire captures the most important needs of road users. We recommend future studies to validate the questionnaire, and contribute to the development of an efficient and standardized metric for the assessment of AVA.
|
2309.11647 | Potential and limitations of random Fourier features for dequantizing
quantum machine learning | Quantum machine learning is arguably one of the most explored applications of
near-term quantum devices. Much focus has been put on notions of variational
quantum machine learning where parameterized quantum circuits (PQCs) are used
as learning models. These PQC models have a rich structure which suggests that
they might be amenable to efficient dequantization via random Fourier features
(RFF). In this work, we establish necessary and sufficient conditions under
which RFF does indeed provide an efficient dequantization of variational
quantum machine learning for regression. We build on these insights to make
concrete suggestions for PQC architecture design, and to identify structures
which are necessary for a regression problem to admit a potential quantum
advantage via PQC based optimization. | Ryan Sweke, Erik Recio, Sofiene Jerbi, Elies Gil-Fuster, Bryce Fuller, Jens Eisert, Johannes Jakob Meyer | 2023-09-20T21:23:52Z | http://arxiv.org/abs/2309.11647v1 | # Potential and limitations of random Fourier features for
###### Abstract
Quantum machine learning is arguably one of the most explored applications of near-term quantum devices. Much focus has been put on notions of variational quantum machine learning where parameterized quantum circuits (PQCs) are used as learning models. These PQC models have a rich structure which suggests that they might be amenable to efficient dequantization via random Fourier features (RFF). In this work, we establish necessary and sufficient conditions under which RFF does indeed provide an efficient dequantization of variational quantum machine learning for regression. We build on these insights to make concrete suggestions for PQC architecture design, and to identify structures which are necessary for a regression problem to admit a potential quantum advantage via PQC based optimization.
## 1 Introduction
In recent years, the technique of using parameterized quantum circuits (PQCs) to define a model class, which is then optimized over via a classical optimizer, has emerged as one of the primary methods of using near-term quantum devices for machine learning tasks [1, 2]. We will refer to this approach as _variational quantum machine learning_ (variational QML), although it is often also referred to as hybrid quantum/classical optimization. While a large amount of effort has been invested in both understanding the theoretical properties of variational QML, and experimenting on trial datasets, it remains unclear whether variational QML on near-term quantum devices can offer any meaningful advantages over state-of-the-art classical methods.
One approach to answering this question is via _dequantization_. In this context, the idea is to use insights into the structure of PQCs, and the model classes that they define, to design quantum-inspired classical methods which can be proven to match the performance of variational QML. Ultimately, the goal is to understand when and why variational QML can be dequantized, in order to better identify the PQC architectures, optimization algorithms and problem types for which one might obtain a meaningful quantum advantage via variational QML.
In order to discuss notions of dequantization of variational QML, we note that for typical applications variational QML consists of two distinct phases. Namely, a _training_ stage and an _inference_ stage. In the training stage, one uses the available training data to identify an optimal PQC model, and in the inference stage one uses the identified model to make predictions on previously unseen data, or in the case of generative modelling, to generate new samples from the unknown data distribution.
A variety of works have recently proposed dequantization methods for _inference_ with PQC models. The first such work was Ref. [15], which used insights into the functional analytic structure of PQC model classes to show that, given a trained quantum model, one can sometimes efficiently extract a purely classical model - referred to as a _classical surrogate_ - which performs inference just as well as the PQC model. More recently, Ref. [13] used insights from shadow tomography to show that it is sometimes possible to extract a classical shadow of a trained PQC model - referred to as a _shadow model_ - which is again guaranteed to perform as well as the PQC model for inference. Interestingly, however, Ref. [13] also proved that, under reasonable complexity-theoretic assumptions, there exist PQC models whose inference _cannot_ be dequantized by any efficient method - i.e., _all_ efficient methods for the dequantization of PQC inference possess some fundamental limitations.
In this work, we are concerned with dequantization of the _training_ stage of variational QML - i.e., the construction of efficient classical learning algorithms which can be proven to match the performance of variational QML in learning from data. To this end, we start by noting that Ref. [13] constructed a learning problem which admits an efficient variational QML algorithm but, again under complexity-theoretic assumptions, cannot be dequantized by any efficient classical learning algorithm. As such, we know that all methods for the dequantization of variational QML must posses some fundamental limitations, and for any given method we would like to understand its domain of applicability.
With this in mind, one natural idea is to ask whether the effect of _noise_ allows direct efficient classical simulation of the PQC model, and, therefore, of the entire PQC training process and subsequent inference. Indeed, a series of recent works has begun to address this question, and to delineate the conditions under which noise renders PQC models classically simulatable [14, 15, 16, 17]. Another recent result has shown that the presence of _symmetries_, often introduced to improve PQC model performance for symmetric problems [18, 19], can also constrain PQC models in a way which allows for efficient classical simulation [1]. Another natural idea, inspired by the dequantization of inference with PQC models, is to "train the surrogate model". More specifically, Ref. [15] used the insight that the class of functions realizable by PQC models is a subset of trigonometric polynomials with a specific set of frequencies [15], to "match" the trained PQC model to the closest trigonometric polynomial with the correct frequency set (which is then the classical surrogate for inference). This, however, immediately suggests the following approach to dequantizing the _training_ stage of variational QML - simply directly optimize from data over the "PQC-inspired" model class of trigonometric polynomials with the appropriate frequency set. Indeed, this is in some sense what happens during variational QML! This idea was explored numerically in the original work on classical surrogates for inference [15]. Unfortunately, for typical PQC architectures the number of frequencies in the corresponding frequency set grows exponentially with the problem size, which prohibits efficient classical optimization over the relevant class of trigonometric polynomials. However, for the PQC architectures for which numerical experiments were possible, direct classical optimization over the PQC-inspired classical model class yielded trained models which outperformed those obtained from variational QML.
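To make the idea of optimizing directly over such a model class concrete, the following toy sketch fits a one-dimensional trigonometric polynomial with a small, hand-picked frequency set by ordinary least squares. The data, the frequency set, and the absence of regularization are all illustrative choices, not a reproduction of the experiments in Ref. [15].

```python
# Toy illustration of "optimize directly over the PQC-inspired model class":
# least-squares fit of f(x) = a_0 + sum_k [a_k cos(kx) + b_k sin(kx)] over a fixed frequency set.
# All data and frequencies are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, freqs = 200, np.arange(1, 6)                  # frequency set {1,...,5}; a PQC would fix this set
x = rng.uniform(0, 2 * np.pi, size=n)
y = 0.7 * np.sin(2 * x) - 0.3 * np.cos(4 * x) + 0.05 * rng.standard_normal(n)

def features(x):
    # explicit trigonometric feature map [1, cos(kx), sin(kx)]_k
    return np.column_stack([np.ones_like(x)] +
                           [np.cos(k * x) for k in freqs] +
                           [np.sin(k * x) for k in freqs])

Phi = features(x)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # classical empirical risk minimization
print("training MSE:", np.mean((Phi @ coef - y) ** 2))
```

For realistic PQC architectures the relevant frequency set grows exponentially with the problem size, which is exactly the bottleneck discussed in the next paragraph.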
Inspired by Ref. [15], the subsequent work of Ref. [16] introduced a method for addressing the efficiency bottleneck associated with exponentially growing frequency sets. The authors of Ref. [16] noticed that all PQC models are linear models with respect to a trigonometric polynomial feature map. As such, one can optimize over all PQC-inspired models via kernel ridge regression, which will be efficient if one can efficiently evaluate the kernel defined from the feature map. While naively evaluating the appropriate kernel classically will be inefficient - again due to the exponential growth in the size of the frequency set - the clever insight of Ref. [16] was to see that one can gain efficiency improvements by using the technique of _random Fourier features_[14] to _approximate_ the PQC-inspired kernel. Using this technique, Ref. [16] obtained a variety of theoretical results concerning the sample complexity required for RFF-based regression with the PQC-inspired kernel to yield a model whose performance matches that of variational QML. However, the analysis of Ref. [16] applied only to PQC architectures with _universal_ parameterized circuit blocks - i.e., PQC models which can realize _any_ trigonometric polynomial with the appropriate frequencies. This contrasts with the PQC models arising from practically relevant PQC architectures, which due to depth constraints, can only realize a subset of trigonometric polynomials.
In light of the above, the idea of this work is to further explore the potential and limitations of RFF-based linear regression as a method for dequantizing the training stage of variational QML, with the goal of providing an analysis
which is applicable to practically relevant PQC architectures. In particular, we identify a collection of necessary and sufficient requirements - of the PQC architecture, the regression problem, and the RFF procedure - for RFF-based linear regression to provide an efficient classical dequantization method for PQC based regression. This allows us to show clearly that:
1. RFF-based linear regression _cannot_ be a generic dequantization technique. At least, there exist regression problems and PQC architectures for which RFF-based linear regression can _not_ efficiently dequantize PQC based regression. As mentioned before, we already knew from the results of Ref. [13] that _all_ dequantization techniques must posses some limitations, and our results shed light on the specific nature of these limitations for RFF based dequantization.
2. There exist practically relevant PQCs, and regression problems, for which RFF-based linear regression can be _guaranteed_ to efficiently produce output models which perform as well as the best possible output of PQC based optimization. In other words, there exist problems and PQC architectures for which PQC dequantization via RFF is indeed possible.
Additionally, using the necessary and sufficient criteria that we identify, we are able to provide concrete recommendations for PQC architecture design, in order to mitigate the possibility of dequantization via RFF. Moreover, we are able to identify a necessary condition on the structure of a regression problem which ensures that dequantization via RFF is _not_ possible. This, therefore, provides a guideline for the identification of problems which admit a potential quantum advantage (or at least, cannot be dequantized via RFF-based linear regression).
This paper is structured as follows: We begin in Section 2 by providing all the necessary preliminaries and background material. Following this, we proceed in Section 3 to motivate and present RFF-based linear regression with PQC-inspired kernels as a method for the dequantization of variational QML. Given this, we then go on in Section 4 to provide a detailed theoretical analysis of RFF-based linear regression with PQC-inspired kernels. Finally, we conclude in Section 5 with a discussion of the consequences of the previous analysis, and with an overview of natural directions for future research.
## 2 Setting and preliminaries
Here we provide the setting and required background material.
### Statistical learning framework
Let \(\mathcal{X}\) denote a set of all possible data points, and \(\mathcal{Y}\) a set of all possible labels. In this work, we will set \(\mathcal{X}\,:=\,[0,2\pi)^{d}\subset\mathbb{R}^{d}\) for some integer \(d\) and \(\mathcal{Y}=\mathds{R}\). We assume the existence of some unknown probability distribution \(P\) over \(\mathcal{X}\times\mathcal{Y}\), which we refer to as a regression problem. Additionally, we assume a parameterized class of functions \(\mathcal{F}=\{f_{\theta}:\,\mathcal{X}\rightarrow\mathcal{Y}\,|\,\theta\in\Theta\}\), which we call hypotheses. Given access to some finite dataset \(S=\{(x_{i},y_{i})\sim P\}_{i=1}^{n}\), the goal of the regression problem specified by \(P\) is to identify the optimal hypothesis \(f_{\theta^{*}}\), i.e., the hypothesis which minimizes the _true risk_, defined via
\[R(f)\,:=\,\operatorname*{\mathbb{E}}_{(x,y)\sim P}\left[\mathcal{L}(y,f(x)) \right], \tag{1}\]
where \(\mathcal{L}:\,\mathcal{Y}\times\mathcal{Y}\rightarrow\mathds{R}\) is some loss function. In this work, we will consider only the quadratic loss defined via
\[\mathcal{L}(y,y^{\prime})\,:=(y-y^{\prime})^{2}. \tag{2}\]
We also define the _empirical risk_ with respect to the dataset \(S\) as
\[\hat{R}(f)\,:=\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}(y_{i},f(x_{i})). \tag{3}\]
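As a minimal illustration of Eqs. (2) and (3), the snippet below evaluates the empirical risk of a candidate hypothesis under the quadratic loss; the data and the hypothesis are made up purely for demonstration.

```python
# Minimal illustration of Eqs. (2)-(3): empirical risk under the quadratic loss
# on made-up data (d = 3 here); both the data and the hypothesis are illustrative.
import numpy as np

def empirical_risk(f, xs, ys):
    preds = np.array([f(x) for x in xs])
    return np.mean((ys - preds) ** 2)   # (1/n) * sum_i (y_i - f(x_i))^2

rng = np.random.default_rng(0)
xs = rng.uniform(0, 2 * np.pi, size=(50, 3))
ys = np.sin(xs[:, 0]) + 0.1 * rng.standard_normal(50)

print(empirical_risk(lambda x: np.sin(x[0]), xs, ys))   # close to the noise variance of 0.01
```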
### Linear and kernel ridge regression
Linear ridge regression and kernel ridge regression are two popular classical learning algorithms. In linear ridge regression we consider _linear_ functions \(f_{w}(x)=\langle w,x\rangle\), and given a dataset \(S=\{(x_{i},y_{i})\}_{i=1}^{n}\), we proceed by minimizing the empirical risk, regularized via the 2-norm, i.e.,
\[\hat{R}_{\lambda}(f_{w})\,:=\frac{1}{n}\sum_{i=1}^{n}\,(y_{i}-\langle w,x_{i} \rangle)^{2}+\lambda|w|_{2}^{2}. \tag{4}\]
With this regularization, which is added to prevent over-fitting, minimizing Eq. (4) becomes a convex quadratic problem, which admits the closed form solution
\[w=\big{(}\hat{X}^{T}\hat{X}+\lambda n\mathds{1}\big{)}^{-1}\,\hat{X}^{T}\hat{Y}, \tag{5}\]
where \(\hat{X}\) is the \(n\times d\) "data matrix" with \(x_{i}\) as rows, and \(\hat{Y}\) is the \(n\) dimensional "target vector" with \(y_{i}\) as the \(i\)'th component [14]. Linear ridge regression requires \(\mathcal{O}(nd)\) space and \(\mathcal{O}(nd^{2}+d^{3})\) time. As linear functions are often not sufficiently expressive, a natural approach is to consider linear functions in some higher dimensional feature space. More specifically, one assumes a feature map \(\phi:\,\mathds{R}^{d}\to\mathds{R}^{D}\), and then considers linear functions of the form \(f_{\phi}(x)=\langle v,\phi(x)\rangle\), where \(v\) is an element of the feature space \(\mathds{R}^{D}\). Naively, one could do linear regression at a space and time cost of \(\mathcal{O}(nD)\) and \(\mathcal{O}(nD^{2}+D^{3})\), respectively. However, often we would like to consider \(D\) extremely large (or infinite) and this is therefore infeasible. The solution is to use "the kernel trick" and consider instead a kernel function \(K:\,\mathcal{X}\times\mathcal{X}\to\mathds{R}\) which satisfies
\[K(x,x^{\prime})=\langle\phi(x),\phi(x^{\prime})\rangle, \tag{6}\]
but which can ideally be evaluated more efficiently than by explicitly constructing \(\phi(x)\) and \(\phi(x^{\prime})\) and taking the inner product. Given such a function, we know that the minimizer of the regularized empirical risk is given by
\[f_{v}(x) =\sum_{i=1}^{n}\alpha_{i}K(x_{i},x) \tag{7}\] \[=\left\langle\sum_{i=1}^{n}\alpha_{i}\phi(x_{i}),\phi(x)\right\rangle\] (8) \[=\langle v,\phi(x)\rangle, \tag{9}\]
where
\[\alpha=\big{(}\hat{K}+n\lambda\mathds{1}\big{)}^{-1}\,\hat{Y}, \tag{10}\]
with \(\hat{K}\) the kernel matrix (or Gram matrix) with entries \(\hat{K}_{i,j}=K(x_{i},x_{j})\). Solving Eq. (10) is known as _kernel ridge regression_. If one assumes that evaluating \(K(x,x^{\prime})\) requires constant time, then kernel ridge regression has space and time cost \(\mathcal{O}(n^{2})\) and \(\mathcal{O}(n^{3})\), respectively. We note that in practice one hardly ever specifies the feature map \(\phi\), and instead works directly with a suitable kernel function \(K\).
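For concreteness, the following is a minimal NumPy sketch of the two closed-form solutions above, i.e., Eq. (5) and Eq. (10), on a toy one-dimensional dataset. The Gaussian kernel used here is an illustrative assumption and is not tied to any kernel introduced later in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise.
n, d, lam = 50, 1, 1e-2
X = rng.uniform(0, 2 * np.pi, size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# Linear ridge regression, Eq. (5): w = (X^T X + lambda*n*I)^{-1} X^T y.
w = np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

# Kernel ridge regression, Eq. (10), with an illustrative Gaussian kernel.
def gauss_kernel(A, B, sigma=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

K = gauss_kernel(X, X)
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)

# Predictions via f(x) = <w, x> and f(x) = sum_i alpha_i K(x_i, x), Eq. (7).
X_test = np.linspace(0, 2 * np.pi, 5).reshape(-1, 1)
print("linear ridge:", X_test @ w)
print("kernel ridge:", gauss_kernel(X_test, X) @ alpha)
```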
### Random Fourier features
For many applications, in which \(n\) can be extremely large, a space and time cost of \(\mathcal{O}(n^{2})\) and \(\mathcal{O}(n^{3})\), respectively, prohibits the implementation of kernel ridge regression. This has motivated the development of methods which can bypass these complexity bottlenecks. The method of _random Fourier features_ (RFF) is one such method [14]. To illustrate this method we follow the presentation of Ref. [14], and start by assuming that the kernel \(K:\,\mathcal{X}\times\mathcal{X}\to\mathds{R}\), has _an integral representation_. More specifically, we assume there exists some probability space \((\Phi,\pi)\) and some function \(\psi:\,\mathcal{X}\times\Phi\to\mathds{R}\) such that for all \(x,x^{\prime}\in\mathcal{X}\) one has that
\[K(x,x^{\prime})=\int_{\Phi}\psi(x,v)\psi(x^{\prime},v)\,\mathrm{d}\pi(v). \tag{11}\]
The method of random Fourier features is then based around the idea of approximating \(K(x,x^{\prime})\) by using Monte-Carlo type integration to perform the integral in Eq. (11). More specifically, one uses
\[K(x,x^{\prime})\approx\langle\tilde{\phi}_{M}(x),\tilde{\phi}_{M}(x^{\prime})\rangle \tag{12}\]
where \(\tilde{\phi}_{M}:\ \mathcal{X}\rightarrow\mathbb{R}^{M}\) is a randomized feature map of the form
\[\tilde{\phi}_{M}(x)=\frac{1}{\sqrt{M}}\big{(}\psi(x,v_{1}),...,\psi(x,v_{M}) \big{)}, \tag{13}\]
where \(v_{1},...,v_{M}\) are \(M\) features sampled randomly from \(\pi\). Using the approximation in Eq. (12) allows one to replace kernel ridge regression via \(K\) with linear regression with respect to the random feature map \(\tilde{\phi}_{M}\). This yields a learning algorithm with space and time cost \(\mathcal{O}(nM)\) and \(\mathcal{O}(nM^{2}+M^{3})\), respectively, which is more efficient than kernel ridge regression whenever \(M<n\). Naturally, the quality (i.e., true risk) of the output solution will depend heavily on how large \(M\) is chosen. However, we postpone until later a detailed discussion of this issue, which is central to the results and observations of this work.
We stress that for implementation of linear regression with random Fourier features, it is critical that one is able to sample from the probability measure \(w\mapsto\pi(w)\). We note though that for _shift-invariant_ kernels, i.e., kernels of the form \(K(x,x^{\prime})=\overline{K}(x-x^{\prime})\) for some function \(\overline{K}:\mathcal{X}\rightarrow\mathbb{R}\), the integral representation can often be easily derived [14, 13]. Specifically, if the Fourier transform of \(\overline{K}\) exists, then Bochner's theorem ensures that by taking the Fourier transform of \(\overline{K}\), and considering the probability space \(\Phi=\mathcal{X}\times[0,2\pi)\), we can obtain
\[K(x,x^{\prime})=\overline{K}(x-x^{\prime}) =\frac{1}{2\pi}\int_{\omega\in\mathcal{X}}\int_{\gamma\in[0,2\pi) }\sqrt{2}\cos(\langle\omega,x\rangle+\gamma)\sqrt{2}\cos\big{(}\langle\omega,x^{ \prime}\rangle+\gamma\big{)}q(\omega)\,\mathrm{d}\omega\,\mathrm{d}\gamma \tag{14}\] \[:=\int_{\Phi}\psi(x,v)\psi(x^{\prime},v)\,\mathrm{d}\pi(v) \tag{15}\]
where \(\omega\mapsto q(\omega)\) is the Fourier transform of \(\overline{K}\), which is guaranteed to be a well-defined probability distribution, and \(\pi=q\times\mu\), where \(\mu\) is the uniform measure over \([0,2\pi)\). Indeed, this special case of shift-invariant kernels, in which the measure \(\pi\) is proportional to the Fourier transform of \(\overline{K}\), is the reason for the name _random Fourier features_.
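As a simple numerical illustration of Eqs. (11)-(14), the sketch below approximates a Gaussian kernel (an assumed, standard example of a shift-invariant kernel, whose Fourier transform is again Gaussian) with random Fourier features and compares the estimate to the exact kernel value.

```python
import numpy as np

rng = np.random.default_rng(1)
d, M, sigma = 3, 2000, 1.0

# Assumed example: Gaussian kernel K(x, x') = exp(-|x - x'|^2 / (2 sigma^2)).
# Its Fourier transform is a normal density, so omega ~ N(0, (1/sigma^2) I).
omegas = rng.normal(0.0, 1.0 / sigma, size=(M, d))
gammas = rng.uniform(0.0, 2 * np.pi, size=M)

def phi_M(x):
    # Randomized feature map of Eq. (13) with psi(x, v) = sqrt(2) cos(<omega, x> + gamma).
    return np.sqrt(2.0 / M) * np.cos(omegas @ x + gammas)

x, xp = rng.uniform(0, 2 * np.pi, d), rng.uniform(0, 2 * np.pi, d)
exact = np.exp(-np.sum((x - xp) ** 2) / (2 * sigma**2))
approx = phi_M(x) @ phi_M(xp)
print(f"K(x,x') = {exact:.4f}, RFF estimate = {approx:.4f}")
```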
### PQC models for variational QML
As discussed in Section 1, variational QML is based on the classical optimization of models defined via parameterized quantum circuits (PQCs) [13]. In the context of regression, one begins by fixing a parameterized quantum circuit \(C\), whose gates can depend on both data points \(x\in\mathcal{X}\) and components of a vector \(\theta\in\Theta\) of variational parameters, where typically \(\Theta=[0,2\pi)^{c}\) for some \(c\). For each data point \(x\) and each vector of variational parameters \(\theta\), this circuit realizes the unitary \(U(x,\theta)\). Given this, we then choose an observable \(O\), and define the associated PQC model class \(\mathcal{F}_{(C,O)}\) as the set of all functions \(f_{\theta}:\ \mathcal{X}\rightarrow\mathbb{R}\) defined via
\[f_{\theta}(x)=\langle 0|U^{\dagger}(x,\theta)OU(x,\theta)|0\rangle \tag{16}\]
for all \(x\in\mathcal{X}\), i.e.,
\[\mathcal{F}_{(C,O)}=\{f_{\theta}(\cdot)=\langle 0|U^{\dagger}(\cdot,\theta)OU( \cdot,\theta)|0\rangle\,|\,\theta\in\Theta\}. \tag{17}\]
One then proceeds by using a classical optimization algorithm to optimize over the variational parameters \(\theta\). In this work, we consider an important sub-class of PQC models in which the classical data \(x\) enters only via Hamiltonian time evolutions, whose duration is controlled by a single component of \(x\). To be more precise, for \(x=(x_{1},...,x_{d})\in\mathbb{R}^{d}\), we assume that each gate in the circuit \(C\) which depends on \(x\) is of the form
\[V_{(j,k)}(x_{j})=e^{-iH_{k}^{(j)}x_{j}}, \tag{18}\]
for some Hamiltonian \(H_{k}^{(j)}\). We stress that we exclude here more general encoding schemes, such as those which allow for time evolutions parameterized by functions of \(x\), or time evolutions of parameterized linear combinations
of Hamiltonians. We denote by \(\mathbf{D}^{(j)}=\{H_{k}^{(j)}\,|\,k\in[L_{j}]\}\) the set of all \(L_{j}\) Hamiltonians which are used to encode the component \(x_{j}\) at some point in the circuit, and we call the tuple
\[\mathbf{D}\,:=\,\left(\mathbf{D}^{(1)},...,\mathbf{D}^{(d)}\right) \tag{19}\]
the _data-encoding strategy_. It is by now well known that these models admit a succinct "classical" description [14, 15, 16] given by
\[f_{\theta}(x)=\sum_{\omega\in\tilde{\Omega}_{D}}c_{\omega}(\theta)\mathrm{e}^{i\langle\omega,x\rangle}, \tag{20}\]
where
1. the set of frequency vectors \(\tilde{\Omega}_{D}\subseteq\mathbb{R}^{d}\) is completely determined by the data-encoding strategy. We describe the construction of \(\tilde{\Omega}_{D}\) from \(\mathbf{D}\) in Appendix A.
2. the frequency coefficients \(c_{\omega}(\theta)\) depend on the trainable parameters \(\theta\), but in a way which usually does not admit a concise expression.
As described in Ref. [14], we know that \(\omega_{0}\,:=\,(0,...\,,0)\in\tilde{\Omega}_{D}\) and that the non-zero frequencies in \(\tilde{\Omega}_{D}\) come in mirror pairs - i.e., \(\omega\in\tilde{\Omega}_{D}\) implies \(-\omega\in\tilde{\Omega}_{D}\). Additionally, one has \(c_{\omega}(\theta)=c_{-\omega}^{*}(\theta)\) for all \(\omega\in\tilde{\Omega}_{D}\) and all \(\theta\), which ensures the function \(f_{\theta}\) evaluates to a real number. As a result, we can perform an arbitrary splitting of pairs to redefine \(\tilde{\Omega}_{D}\,:=\,\Omega_{D}\cup(-\Omega_{D})\), where \(\Omega_{D}\cap(-\Omega_{D})=\{\omega_{0}\}\). It will also be convenient to define \(\Omega_{D}^{+}\,:=\,\Omega_{D}\setminus\{\omega_{0}\}\). Given this, by defining
\[a_{\omega}(\theta) \,:=\,c_{\omega}(\theta)+c_{-\omega}(\theta), \tag{21}\] \[b_{\omega}(\theta) \,:=\,i(c_{\omega}(\theta)-c_{-\omega}(\theta)) \tag{22}\]
for all \(\omega\in\Omega_{D}^{+}\), and writing \(\Omega_{D}=\{\omega_{0},\omega_{1},...,\omega_{|\Omega_{D}^{+}|}\}\), we can rewrite Eq. (20) as
\[f_{\theta}(x) =c_{\omega_{0}}(\theta)+\sum_{i=1}^{|\Omega_{D}^{+}|}\left(a_{ \omega_{i}}(\theta)\cos(\langle\omega_{i},x\rangle)+b_{\omega_{i}}(\theta) \sin(\langle\omega_{i},x\rangle)\right) \tag{23}\] \[=\langle c(\theta),\phi_{D}(x)\rangle \tag{24}\]
where
\[c(\theta) \,:=\,\sqrt{|\Omega_{D}|}\left(c_{\omega_{0}}(\theta),a_{\omega_{ 1}}(\theta),b_{\omega_{1}}(\theta),...,a_{\omega_{|\Omega_{D}^{+}|}}(\theta),b _{\omega_{|\Omega_{D}^{+}|}}(\theta)\right), \tag{25}\] \[\phi_{D}(x) \,:=\,\frac{1}{\sqrt{|\Omega_{D}|}}\Bigg{(}1,\cos(\langle\omega_ {1},x\rangle),\sin(\langle\omega_{1},x\rangle),...,\cos(\langle\omega_{|\Omega _{D}^{+}|},x\rangle),\sin(\langle\omega_{|\Omega_{D}^{+}|},x\rangle)\Bigg{)}, \tag{26}\]
and the normalization constant has been chosen to ensure that \(\langle\phi_{D}(x),\phi_{D}(x)\rangle=1\), which will be required shortly. The formulation in Eq. (24) makes it clear that \(f_{\theta}\) is a _linear_ model in \(\mathbb{R}^{|\tilde{\Omega}_{D}|}\), taken with respect to the feature map \(\phi_{D}:\,\mathcal{X}\rightarrow\mathbb{R}^{|\tilde{\Omega}_{D}|}\). We can now define the model class of all linear models realizable by the parameterized quantum circuit with data-encoding strategy \(\mathbf{D}\), variational parameter set \(\Theta\) and observable \(O\) via
\[\mathcal{F}_{(\Theta,\mathbf{D},O)} =\{f_{\theta}(\cdot)=\langle 0|U^{\dagger}(\theta,\cdot)OU( \theta,\cdot)|0\rangle\,|\,\theta\in\Theta\} \tag{27}\] \[=\{f_{\theta}(\cdot)=\langle c(\theta),\phi_{D}(\cdot)\rangle\,| \,\theta\in\Theta\}. \tag{28}\]
In what follows, we will use a tuple \((\Theta,\mathbf{D},O)\) to represent a PQC architecture, as we have done above. We note that for all \(f\in\mathcal{F}_{(\Theta,\mathbf{D},O)}\) we have that \(|f|_{\infty}\leq|O|_{\infty}\).
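To make the rewriting in Eqs. (23)-(26) concrete, the following sketch evaluates a PQC-type model purely from its Fourier representation. The frequency set and coefficients below are assumed, illustrative choices (they are not derived from any particular circuit); the point is simply that \(\langle c,\phi_{D}(x)\rangle\) reproduces the trigonometric sum of Eq. (23).

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed frequency set Omega_D for d = 2: omega_0 = 0 plus three nonzero frequencies.
Omega_plus = np.array([[1, 0], [0, 1], [1, 1]])           # Omega_D^+
c0 = 0.3                                                   # coefficient of omega_0
a = rng.normal(size=len(Omega_plus))                       # cos coefficients a_omega
b = rng.normal(size=len(Omega_plus))                       # sin coefficients b_omega

num_freq = len(Omega_plus) + 1                             # |Omega_D|

def phi_D(x):
    # PQC feature map of Eq. (26).
    feats = [1.0]
    for om in Omega_plus:
        feats += [np.cos(om @ x), np.sin(om @ x)]
    return np.array(feats) / np.sqrt(num_freq)

# Coefficient vector of Eq. (25), normalized consistently with phi_D.
c = np.sqrt(num_freq) * np.concatenate([[c0], np.ravel(np.column_stack([a, b]))])

x = rng.uniform(0, 2 * np.pi, 2)
f_linear = c @ phi_D(x)                                    # Eq. (24)
f_trig = c0 + sum(a[i] * np.cos(Omega_plus[i] @ x) + b[i] * np.sin(Omega_plus[i] @ x)
                  for i in range(len(Omega_plus)))         # Eq. (23)
print(f_linear, f_trig)   # the two evaluations agree
```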
It is critical to note that due to the constraints imposed by the circuit architecture \((\Theta,\mathbf{D},O)\), the class \(\mathcal{F}_{(\Theta,\mathbf{D},O)}\) may not contain _all_ possible linear functions with respect to the feature map \(\phi_{\mathbf{D}}\). Said another way, the circuit architecture
gives rise to an _inductive bias_ in the set of functions which can be realized. However, for the analysis that follows, it will be very useful for us to define the set of _all_ linear functions \(\mathcal{F}_{\mathcal{D}}\) with respect to the feature map \(\phi_{\mathcal{D}}\), i.e.,
\[\mathcal{F}_{\mathcal{D}}=\left\{\,f_{v}(\cdot)=\langle v,\phi_{\mathcal{D}}( \cdot)\rangle\,|\,v\in\mathbb{R}^{|\tilde{\Omega}_{\mathcal{D}}|}\,\right\}. \tag{29}\]
As shown in Figure 1, we stress that for any architecture \((\Theta,\mathcal{D},O)\), we have that
\[\mathcal{F}_{(\Theta,\mathcal{D},O)}\subset\mathcal{F}_{\mathcal{D}}. \tag{30}\]
The inclusion is strict due to the fact that for all \(f\in\mathcal{F}_{(\Theta,\mathcal{D},O)}\) we know that \(|f|_{\infty}\leq|O|_{\infty}\), whereas \(\mathcal{F}_{\mathcal{D}}\) contains functions of arbitrary infinity norm. However, we note that if one defines the set
\[\mathcal{F}_{(\mathcal{D},O)}=\left\{f\in\mathcal{F}_{\mathcal{D}}\,|\,|f|_{ \infty}\leq|O|_{\infty}\right\}, \tag{31}\]
then one has \(\mathcal{F}_{(\Theta,\mathcal{D},O)}\subseteq\mathcal{F}_{(\mathcal{D},O)}\) for all architectures \((\Theta,\mathcal{D},O)\), and as proven in Ref. [13], there exist _universal_ architectures for which \(\mathcal{F}_{(\Theta,\mathcal{D},O)}=\mathcal{F}_{(\mathcal{D},O)}\). As such, one can in some sense think of \(\mathcal{F}_{(\mathcal{D},O)}\) as the "closure" of \(\mathcal{F}_{(\Theta,\mathcal{D},O)}\).
### PQC feature map and PQC-kernel
Given the observation from Section 2.4 that all PQC models are linear in some high-dimensional feature space fully defined by the data-encoding strategy, we can very naturally associate to each data-encoding strategy both a feature map and an associated kernel function, which we call the _PQC-kernel_:
Definition 1: (PQC feature map and PQC-kernel) Given a data-encoding strategy \(\mathcal{D}\), we define the PQC feature map \(\phi_{\mathcal{D}}:\,\mathcal{X}\to\mathbb{R}^{|\hat{\Omega}_{\mathcal{D}}|}\) via Eq. (26), i.e.,
\[\phi_{\mathcal{D}}(x)\,:=\frac{1}{\sqrt{|\Omega_{\mathcal{D}}|}}\left(1,\cos( \langle\omega_{1},x\rangle),\sin(\langle\omega_{1},x\rangle),...\,,\cos( \langle\omega_{|\Omega_{\mathcal{D}}^{+}|},x\rangle),\sin(\langle\omega_{|\Omega_{ \mathcal{D}}^{+}|},x\rangle)\right) \tag{32}\]
for \(\omega_{i}\in\Omega_{\mathcal{D}}^{+}\). We then define the PQC-kernel \(K_{\mathcal{D}}\) via
\[K_{\mathcal{D}}(x,x^{\prime})\,:=\langle\phi_{\mathcal{D}}(x),\phi_{\mathcal{ D}}(x^{\prime})\rangle. \tag{33}\]
It is crucial to stress that the classical PQC-kernel defined in Definition 1 is fundamentally _different_ from the so called "quantum kernels" often considered in QML - see, for example, Ref. [10, 11] - which are defined from a data-parameterized unitary \(U(x)\) via \(K(x,x^{\prime})=\operatorname{Tr}[\rho(x)\rho(x^{\prime})]\) with \(\rho(x)=U(x)|0\rangle\langle 0|U^{\dagger}(x)\). Additionally, we note that the feature map \(\phi_{\mathcal{D}}\) defined in Definition 1 is _not_ the unique feature map with the property that all functions in \(\mathcal{F}_{(\Theta,\mathcal{D},O)}\) are linear with respect to the feature map. Indeed, we will see in Section 4.3 that any "re-weighting" of \(\phi_{\mathcal{D}}\) will preserve this property.
## 3 Potential of RFF-based linear regression for dequantizing variational QML
Let us now imagine that we have a regression problem to solve. More precisely, imagine that we have a dataset \(S\), with \(n\) elements drawn from some distribution \(P\), as per Section 2.1. One option is for us to use hybrid quantum classical optimization. More specifically, we choose a PQC circuit architecture \((\Theta,\mathcal{D},O)\) - consisting of data-encoding gates, trainable gates and measurement operator - and then variationally optimize over the trainable parameters. We can summarize this as follows.
Algorithm 1: (Variational QML) Choose a PQC architecture \((\Theta,\mathcal{D},O)\) and optimize over the parameters \(\theta\in\Theta\). The output is some linear function \(f_{\theta}\in\mathcal{F}_{(\Theta,\mathcal{D},O)}\).
We note that Algorithm 1 essentially performs a variational search (typically via gradient based optimization) through \(\mathcal{F}_{(\Theta,D,O)}\), which as per Lemma 4, is some parameterized subset of \(\mathcal{F}_{D}\), the set of all linear functions with respect to the feature map \(\phi_{D}\). But this begs the question: Why run Algorithm 1, when we could just do classical linear regression with respect to the feature map \(\phi_{D}\)? More specifically, instead of running Algorithm 1, why not just run the following purely _classical_ algorithm:
**Algorithm 2** (Classical linear regression over \(\mathcal{F}_{D}\)): Given a PQC architecture \((\Theta,\mathbf{D},O)\), construct the frequency set \(\Omega_{\mathbf{D}}\) and the feature map \(\phi_{\mathbf{D}}\), and then perform linear regression with respect to the feature map \(\phi_{\mathbf{D}}\). The output is some \(f_{v}\in\mathcal{F}_{D}\).
Unfortunately, Algorithm 2 has the following shortcomings:
1. **Exponential complexity**: Recall that \(\phi_{D}:\ \mathcal{X}\to\mathbb{R}^{|\tilde{\Omega}_{D}|}\). As such, the space and time complexity of Algorithm 2 is \(\mathcal{O}(n|\tilde{\Omega}_{D}|)\) and \(\mathcal{O}(n|\tilde{\Omega}_{D}|^{2}+|\tilde{\Omega}_{D}|^{3})\), respectively. Unfortunately, as detailed in Table 1 of Ref. [13] and discussed in Section 4.4, the Cartesian product structure of \(\tilde{\Omega}_{D}\) results in a curse of dimensionality which leads to \(|\tilde{\Omega}_{D}|\) scaling _exponentially_ in the number of data components \(d\) (which gives the size of the problem). For example, if one uses a data encoding strategy consisting only of Pauli Hamiltonians, and if each component is encoded via \(L\) encoding gates, then one obtains \(|\tilde{\Omega}_{D}|=\mathcal{O}\left(L^{d}\right)\).
2. **Potentially poor generalization**: As we have noted in Eq. (30), and illustrated in Figure 1, due to the constrained depth/expressivity of the trainable parts of any PQC architecture which uses the data-encoding strategy \(\mathbf{D}\), we have that \[\mathcal{F}_{(\Theta,\mathbf{D},O)}\subset\mathcal{F}_{\mathbf{D}},\] (34) i.e., that \(\mathcal{F}_{(\Theta,\mathbf{D},O)}\) is a _subset_ of \(\mathcal{F}_{\mathbf{D}}\). Now, let the output of Algorithm 1 be some \(f_{\theta}\in\mathcal{F}_{(\Theta,\mathbf{D},O)}\) and the output of Algorithm 2 be some \(f_{v}\in\mathcal{F}_{\mathbf{D}}\). Due to the fact that linear regression is a perfect empirical risk minimizer, and the inclusion in Eq. (34), we are guaranteed that \[\hat{R}(f_{v})\leq\hat{R}(f_{\theta}),\] (35) i.e., the _empirical risk_ achieved by Algorithm 2 will always be better than the empirical risk achieved by Algorithm 1. However, it could be that Algorithm 2 "overfits" - more precisely, it could be the case that the _true risk_ achieved by \(f_{\theta}\) is better than that achieved by \(f_{v}\), i.e., that \[R(f_{v})\geq R(f_{\theta}).\] (36) Said another way, the PQC architecture results in an _inductive bias_, which constrains the set of linear functions which are accessible to Algorithm 1. As illustrated in Figure 1, it could be the case that this inductive bias leads the output of Algorithm 1 to generalize better than that of Algorithm 2.
In light of the above, the natural question is then as follows.
**Question 1:** (Existence of efficient linear regression): _Can we modify Algorithm 2 (classical linear regression) such that it is both efficient with respect to \(d\), and with high probability outputs a function which is just as good as the output of Algorithm 1 (variational QML), with respect to the true risk?_
As we have already hinted at in the preliminaries, one possible approach - at least for addressing the issue of poor complexity - is to use random Fourier features to approximate the classical PQC-kernel \(K_{D}(x,x^{\prime})=\langle\phi_{D}(x),\phi_{D}(x^{\prime})\rangle\) which is used implicitly in Algorithm 2. Indeed, this has already been suggested and explored in Ref. [10]. More specifically Ref. [10], at a very high level, suggests the following algorithm:
**Algorithm 3:** (Classical linear regression over \(\mathcal{F}_{\mathbf{D}}\) with random Fourier features): Given a data-encoding strategy \(\mathbf{D}\), implement RFF-based \(\lambda\)-regularized linear regression using the classical PQC-kernel \(K_{\mathbf{D}}\), and obtain some function \(h_{v}\in\mathcal{F}_{\mathbf{D}}\). More specifically:
1. Sample \(M\) "features" \(v_{i}=(\omega_{i},\gamma_{i})\in\mathbb{R}^{d}\times[0,2\pi)\) from the distribution \(\pi=p_{D}\times\mu\), which as per Eq. (14) is the product distribution appearing in the integral representation of the shift-invariant kernel \(K_{D}\).
2. Construct the randomized feature map \(\tilde{\phi}_{M}(x)=\frac{1}{\sqrt{M}}\left(\psi(x,v_{1}),...,\psi(x,v_{M})\right)\), where \(\psi(x,v)=\sqrt{2}\cos(\langle\omega,x\rangle+\gamma)\) for \(v=(\omega,\gamma)\).
3. Implement \(\lambda\)-regularized linear regression with respect to the feature map \(\tilde{\phi}_{M}\).
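A minimal NumPy sketch of the three steps above is given below, for a toy problem in which \(p_{D}\) is assumed to be the uniform distribution over a small, hand-picked integer frequency set; for realistic data-encoding strategies, whether this sampling step can be carried out efficiently is precisely the issue discussed below and in Section 4.4.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, M, lam = 2, 200, 50, 1e-3

# Toy regression problem on [0, 2pi)^d.
X = rng.uniform(0, 2 * np.pi, size=(n, d))
y = np.cos(X[:, 0] + X[:, 1]) + 0.05 * rng.standard_normal(n)

# Step 1: sample M features (omega_i, gamma_i) from pi = p_D x mu.
# Assumption: p_D is taken uniform over a small, hand-picked integer frequency set.
Omega_D = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1]])
idx = rng.integers(len(Omega_D), size=M)
omegas = Omega_D[idx]
gammas = rng.uniform(0, 2 * np.pi, size=M)

# Step 2: randomized feature map phi_M(x) = sqrt(2/M) cos(<omega, x> + gamma).
def feats(A):
    return np.sqrt(2.0 / M) * np.cos(A @ omegas.T + gammas)

# Step 3: lambda-regularized linear regression in the random feature space.
Phi = feats(X)
v = np.linalg.solve(Phi.T @ Phi + lam * n * np.eye(M), Phi.T @ y)

X_test = rng.uniform(0, 2 * np.pi, size=(5, d))
print("predictions:", feats(X_test) @ v)
print("targets    :", np.cos(X_test[:, 0] + X_test[:, 1]))
```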
In the above description of the algorithm we have omitted details of how to sample frequencies from the distribution \(p_{D}\), which will depend on the kernel \(K_{D}\). We note that in order for Algorithm 3 to be efficient with respect to \(d\), it is necessary for this sampling procedure to be efficient with respect to \(d\). For simplicity, we assume for now that this is the case, and postpone a detailed discussion of this issue to Section 4.4. As discussed in Section 2.3, the space and time complexity of Algorithm 3 (post-sampling) is \(\mathcal{O}(nM)\) and \(\mathcal{O}(nM^{2}+M^{3})\), respectively, where \(M\) is the number of frequencies which have been sampled. Given this setup, the natural question is then as follows:
**Question 2:** (Efficiency of RFF for PQC dequantization) _Given a regression problem \(P\) and a circuit architecture \((\Theta,\mathcal{D},O)\), how many data samples \(n\) and frequency samples \(M\) are necessary to ensure that, with high probability, the output of Algorithm 3 (RFF) is a good approximation to the output of Algorithm 1 (variational QML), with respect to true risk?_
Said another way, Question 2 is asking when classical RFF-based regression (Algorithm 3) can be used to efficiently _dequantize_ variational QML (Algorithm 1). In Ref. [17] the authors addressed a similar question, but with two important differences:
1. It was implicitly assumed that \((\Theta,\mathcal{D},O)\) is universal - i.e., that using the PQC one can realize all functions in \(\mathcal{F}_{(\mathcal{D},O)}\). Recall however from the discussion above that we are precisely interested in the case in which \((\Theta,\mathcal{D},O)\) is not universal - i.e., the case in which \(\mathcal{F}_{(\Theta,\mathcal{D},O)}\subset\mathcal{F}_{(\mathcal{D},O)}\) due to constraints on the circuit architecture. This is for two reasons: Firstly, because this is the case for practically realizable near-term circuit architectures. Secondly, because it is in this regime in which the circuit architecture induces an _inductive bias_ which may lead to better generalization than the output of linear regression over \(\mathcal{F}_{\mathcal{D}}\).
2. Instead of considering true risk, Ref. [17] considered the complexity necessary to achieve \(|h_{v}(x)-f_{\theta}(x)|\leq\epsilon\) for all \(x\). We note that this is _stronger_ than \(|R(h_{v})-R(f_{\theta})|\leq\epsilon\), due to the latter comparing the functions only on average with respect to the data distribution \(P\). Here we are concerned with the latter, which is the typical goal in statistical learning theory.
In light of these considerations, we proceed in Section 4 to provide answers to Question 2.
**Note on recent related work:** As per the discussion above, the two motivations for introducing Algorithm 3 were the poor efficiency and potentially poor generalization associated to Algorithm 2. However, we note that in Ref. [16], which appeared recently, the authors show via a tensor network analysis that to every PQC architecture one can associate a feature map - different from the PQC feature map \(\phi_{D}\) - for which (a) all functions in the PQC model class are linear with respect to the feature map, and (b) the associated kernel _can_ be evaluated efficiently classically. As such, using this feature map, for any number of data-samples \(n\), one _can_ run Algorithm 2 classically efficiently with respect to \(d\) - i.e., there is no need to approximate the kernel via RFF! Indeed, this approach to dequantization is suggested by the authors of Ref. [16]. However as above, and discussed in Ref. [16], this does not immediately yield an efficient dequantization of variational QML, due to the potentially poor generalization of linear regression over the entire function space. In light of this, the generalization of linear regression with respect to the kernel introduced in Ref. [16] certainly deserves attention, and we hope that the methods and tools introduced in this work can be useful in that regard.
## 4 Generalization and efficiency guarantees for RFF-based linear regression
In this section, we attempt to provide rigorous answers to Question 2 - i.e., for which PQC architectures and for which regression problems does RFF-based linear regression yield an efficient dequantization of variational QML?
In particular, this section is structured as follows: We begin in Section 4.1 with a brief digression, providing definitions for some important kernel notions. With these in hand, we continue in Section 4.2 to state Theorem 1 which provides a concrete answer to Question 2. In particular, Theorem 1 provides an upper bound on both the number of data samples \(n\), and the number of frequency samples \(M\), which are sufficient to ensure that, with high probability, the output of RFF-based linear regression with respect to the kernel \(K_{D}\) is a good approximation to the best possible PQC function, with respect to true risk. As we will see, these upper bounds depend crucially on two quantities which are defined in Section 4.1, namely the operator norm of the kernel integral operator associated to \(K_{D}\), and the reproducing kernel Hilbert space norm of the optimal PQC function.
Given the results of Section 4.2, in order to make any concrete statements it is necessary to gain a deeper quantitative understanding of both the operator norm of the kernel integral operator and the RKHS norm of functions in the PQC model class. However, before doing this, we show in Section 4.3 that the PQC feature map \(\phi_{D}\) is in fact a special instance of an entire family of feature maps - which we call "re-weighted PQC feature maps" - and that Theorem 1 holds not only for \(K_{D}\), but for the kernel induced by any such re-weighted feature map. With this in hand, we then proceed in Section 4.4 to show how the re-weighting determines the distribution over frequencies \(\pi\) from which one needs to sample in the RFF procedure, and we discuss in detail for which feature maps/kernels one can and cannot _efficiently_ sample from this distribution. This immediately allows us to rule out efficient RFF dequantization for a large class of re-weighted PQC feature maps, and therefore allows us to focus our attention on only those feature maps (re-weightings) which admit efficient sampling procedures.
With this knowledge, we then proceed in Sections 4.5 and 4.6 to discuss in detail the quantitative behaviour of the kernel integral operator and the RKHS norm of PQC functions, for different re-weighted PQC kernels. This allows us to place further restrictions on the circuit architectures and re-weightings which yield efficient PQC dequantization via Theorem 1. Finally, in Section 4.7 we show that the properties we have identified in Sections 4.5 and 4.6 as _sufficient_ for the application of Theorem 1 are in some sense _necessary._ In particular, we prove _lower bounds_ on the number of frequency samples necessary to achieve a certain _average_ error via RFF, and use this to delineate rigorously when efficient dequantization via RFF is _not_ possible.
Figure 1: Illustration of the relationship between \(\mathcal{F}_{D}\) and \(\mathcal{F}_{(\Theta,D,O)}\), and the output of Algorithms 1 and 2. In particular, we always have that \(\mathcal{F}_{(\Theta,D,O)}\subset\mathcal{F}_{D}\). As a consequence of this, and the fact that Algorithm 2 is a perfect empirical risk minimizer, we always have that \(\hat{R}(f_{v})\leq\hat{R}(f_{\theta})\). However, it might be the case that \(R(f_{v})\geq R(f_{\theta})\).
### Preliminary kernel theory
In order to present Theorem 1 we require a few definitions. To start, we need the notion of a _reproducing kernel Hilbert space_ (RKHS), and the associated RKHS norm.
**Definition 2**:: (RKHS and RKHS norm) Given a kernel \(K:\ \mathcal{X}\times\mathcal{X}\to\mathds{R}\) we define the associated reproducing kernel Hilbert space (RKHS) as the tuple \((\mathcal{H}_{K},\langle\cdot,\cdot\rangle_{K})\) where \(\mathcal{H}_{K}\) is the set of functions defined as the completion (including limits of Cauchy series) of
\[\operatorname{span}\{K_{x}(\cdot)\,:=\,K(x,\cdot)\,|\,x\in\mathcal{X}\}, \tag{37}\]
and \(\langle\cdot,\cdot\rangle_{K}\) is the inner product on \(\mathcal{H}_{K}\) defined via
\[\langle g,h\rangle_{\mathcal{H}_{K}}\,:=\,\sum_{i,j}\alpha_{i}\beta_{j}K(x_{i},x_{j}) \tag{38}\]
for any two functions \(g=\sum_{i}\alpha_{i}K_{x_{i}}\) and \(h=\sum_{j}\beta_{j}K_{x_{j}}\) in \(\mathcal{H}_{K}\). This inner product then induces the RKHS norm \(|\!|\cdot|\!|_{K}\) defined via
\[|\!|g|\!|_{K}\,:=\,\sqrt{\langle g,g\rangle_{\mathcal{H}_{K}}}. \tag{39}\]
It is crucial to note that for two kernels \(K_{1}\) and \(K_{2}\) it may be the case that \(\mathcal{H}_{K_{1}}=\mathcal{H}_{K_{2}}\) but \(|\!|\cdot|\!|_{K_{1}}\neq|\!|\cdot|\!|_{K_{2}}\). We will make heavy use of this fact shortly. In particular, the re-weighted PQC kernels introduced in Section 4.3 have precisely this property. In addition to the definition above, we also need a definition of the kernel integral operator associated to a kernel, which we define below.
**Definition 3**:: (Kernel integral operator) Given a kernel \(K:\ \mathcal{X}\times\mathcal{X}\to\mathds{R}\) and a probability distribution \(P_{\mathcal{X}}\) over \(\mathcal{X}\), we start by defining the space of square-integrable functions with respect to \(P_{\mathcal{X}}\) via
\[L^{2}(\mathcal{X},P_{\mathcal{X}})=\left\{f\in\mathds{R}^{\mathcal{X}}\text{ such that }\int_{\mathcal{X}}|f(x)|^{2}\,\mathrm{d}P_{\mathcal{X}}(x)<\infty\right\}. \tag{40}\]
The kernel integral operator \(T_{K}:\ L^{2}(\mathcal{X},P_{\mathcal{X}})\to L^{2}(\mathcal{X},P_{ \mathcal{X}})\) is then defined via
\[(T_{K}g)(x)=\int_{\mathcal{X}}K(x,x^{\prime})g(x^{\prime})\,\mathrm{d}P_{ \mathcal{X}}(x^{\prime}) \tag{41}\]
for all \(g\in L^{2}(\mathcal{X},P_{\mathcal{X}})\).
In addition, we note that we will mostly be concerned with the _operator norm_ of the kernel integral operator, which we denote with \(|T_{K}|\) - i.e., when no subscript is used to specify the norm, we assume the operator norm.
### Efficiency of RFF for matching variational QML
Given the definitions of Section 4.1, we proceed in this section to state Theorem 1, which provides insight into the number of data samples \(n\) and the number of frequency samples \(M\) which are sufficient to ensure that, with probability at least \(1-\delta\), the true risk of the output hypothesis of RFF-based linear regression is no more than \(\epsilon\) worse than the true risk of the best possible function realizable by the PQC architecture. To this end, we require first a preliminary definition of the "best possible PQC function":
**Definition 4**:: (Optimal PQC function) Given a regression problem \(P\sim\mathcal{X}\times\mathds{R}\), and a PQC architecture \((\Theta,\mathcal{D},O)\), we define \(f^{*}_{(\Theta,\mathcal{D},O)}\), the optimal PQC model for \(P\) (with respect to _true risk_), via
\[f^{*}_{(\Theta,\mathcal{D},O)}=\operatorname*{arg\,min}_{f\in\mathcal{F}_{(\Theta, \mathcal{D},O)}}\left[R(f)\right]. \tag{42}\]
With this in hand we can state Theorem 1. We note however that this follows via a straightforward application of the RFF generalization bounds provided in Ref. [10] to the PQC-kernel, combined with the insight that the set of PQC functions \(\mathcal{F}_{(\Theta,D,O)}\) is contained within \(\mathcal{H}_{K_{D}}\), the RKHS associated with the PQC-kernel. A detailed proof is provided in Appendix B.
**Theorem 1**:: (RFF vs. variational QML) _Let \(R\) be the risk associated with a regression problem \(P\sim\mathcal{X}\times\mathds{R}\). Assume the following:_
1. \(|f^{*}_{(\Theta,D,O)}|_{K_{D}}\leq C\)
2. \(|y|\leq b\) _almost surely when_ \((x,y)\sim P\)_, for some_ \(b>0\)_._
_Additionally, define_
\[n_{0} :=\max\left\{4|T_{K_{D}}|^{2},\left(528\log\frac{1112\sqrt{2}}{ \delta}\right)^{2}\right\}, \tag{43}\] \[c_{0} :=36\left(3+\frac{2}{|T_{K_{D}}|}\right),\] (44) \[c_{1} :=8\sqrt{2}(4b+\frac{5}{\sqrt{2}}C+2\sqrt{2C}). \tag{45}\]
_Then, let \(\delta\in(0,1]\), let \(n\geq n_{0}\), set \(\lambda_{n}=1/\sqrt{n}\), and let \(\hat{f}_{M_{n},\lambda_{n}}\) be the output of \(\lambda_{n}\)-regularized linear regression with respect to the feature map_
\[\phi_{M_{n}}(x)=\frac{1}{\sqrt{M_{n}}}\big{(}\psi(x,v_{1}),...,\psi(x,v_{M_{n}}) \big{)} \tag{46}\]
_constructed from the integral representation of \(K_{D}\) by sampling \(M_{n}\) elements from \(\pi\). Then, with probability at least \(1-\delta\), one achieves either_
\[R(\hat{f}_{M_{n},\lambda_{n}})\leq R(f^{*}_{(\Theta,D,O)}) \tag{47}\]
_or_
\[R(\hat{f}_{M_{n},\lambda_{n}})-R(f^{*}_{(\Theta,D,O)})\leq\epsilon, \tag{48}\]
_by ensuring that_
\[n\geq\max\left\{\frac{c_{1}^{2}\log^{4}\frac{1}{\delta}}{\epsilon^{2}},n_{0}\right\} \tag{49}\]
_and_
\[M\geq c_{0}\sqrt{n}\log\frac{108\sqrt{n}}{\delta}. \tag{50}\]
Let us now try to unpack what insights can be gained from Theorem 1. Firstly, recall that here we consider \(\mathcal{X}\subseteq\mathds{R}^{d}\), in which case \(d\) provides the relevant asymptotic scaling parameter. With this in mind, in order to gain intuition, assume for now that \(n_{0},c_{0}\) and \(c_{1}\) are constants with respect to \(d\). Assume additionally that one can sample efficiently from \(\pi\). In this case we see via Theorem 1 that both the number of data points \(n\) and the number of frequency samples \(M\) which are sufficient to ensure that, with probability \(1-\delta\), there is a gap of at most \(\epsilon\) between the true risk of PQC optimization and the true risk of RFF, are independent of \(d\), polynomial in \(1/\epsilon\), and polylogarithmic in \(1/\delta\). Given that the space and time complexity of RFF-based linear regression is \(\mathcal{O}(nM)\) and \(\mathcal{O}(nM^{2}+M^{3})\), respectively, in this case Theorem 1 guarantees that Algorithm 3 (RFF based linear regression) provides an efficient dequantization of variational QML.
However, in general \(n_{0},c_{0}\) and \(c_{1}\) will _not_ be constants, and one will _not_ be able to sample efficiently from \(\pi\). In particular, \(n_{0},c_{0}\) and \(c_{1}\) depend on both \(|T_{K_{D}}|\), the operator norm of the kernel integral operator, and \(C\), the upper bound on the RKHS norm of the optimal PQC function. Given the form of \(n_{0},c_{0}\) and \(c_{1}\), in order for Theorem 1 to yield a polynomial upper bound on \(n\) and \(M\), it is sufficient that \(|T_{K_{D}}|=\Omega(1/\text{poly}(d))\) and that \(C=\mathcal{O}(\text{poly}(d))\).
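To see how these constants enter in practice, the short sketch below simply transcribes Eqs. (43)-(45) and (49)-(50) and evaluates the sufficient \(n\) and \(M\) for assumed, purely illustrative values of \(\epsilon\), \(\delta\), \(b\), \(C\) and \(|T_{K_{D}}|\).

```python
import numpy as np

def sufficient_samples(eps, delta, b, C, T_norm):
    """Transcription of the sufficient n and M of Theorem 1 (Eqs. 43-45, 49-50)."""
    n0 = max(4 * T_norm**2, (528 * np.log(1112 * np.sqrt(2) / delta)) ** 2)    # Eq. (43)
    c0 = 36 * (3 + 2 / T_norm)                                                 # Eq. (44)
    c1 = 8 * np.sqrt(2) * (4 * b + 5 / np.sqrt(2) * C + 2 * np.sqrt(2 * C))    # Eq. (45)
    n = max(c1**2 * np.log(1 / delta) ** 4 / eps**2, n0)                       # Eq. (49)
    M = c0 * np.sqrt(n) * np.log(108 * np.sqrt(n) / delta)                     # Eq. (50)
    return int(np.ceil(n)), int(np.ceil(M))

# Illustrative values only: eps = 0.1, delta = 0.05, |y| <= b = 1,
# RKHS-norm bound C = 10, and operator norm |T_{K_D}| = 0.1.
print(sufficient_samples(eps=0.1, delta=0.05, b=1.0, C=10.0, T_norm=0.1))
```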
**Summary:** In order to use Theorem 1 to make any concrete statement concerning the efficiency of RFF for dequantizing variational QML, we need to obtain the following.
1. _Lower bounds_ on \(|T_{K_{D}}|\), the operator norm of the kernel integral operator.
2. _Upper bounds_ on \(|f^{*}_{(\Theta,D,O)}|_{K_{D}}\), the RKHS norm of the optimal PQC function.
3. An understanding of the complexity of sampling from \(\pi\), the distribution appearing in the integral representation of \(K_{D}\).
We address the above requirements in the following sections. However, before doing that, we note in Section 4.3 below that Theorem 1 applies not only to \(K_{D}\), but to an entire family of "re-weighted" kernels, of which \(K_{D}\) is a specific instance. We will see that this is important as different re-weightings will lead to substantially different sampling complexities, as well as lower and upper bounds on \(|T_{K_{D}}|\) and \(C\), respectively.
### Re-weighted PQC kernels
As mentioned in Section 4.2, the proof of Theorem 1 is essentially a straightforward application of the RFF generalization bound from Ref. [17] to the classical PQC kernel \(K_{D}\). However, as discussed in the proof of Theorem 1 in Appendix B, the key insight that allows one to leverage the generalization bound from Ref. [17] into a bound on the difference in true risk between the output of variational QML and the output of RFF based linear regression with the PQC kernel \(K_{D}\), is the fact that \(\mathcal{F}_{(\Theta,D,O)}\) is a subset of \(\mathcal{F}_{D}\) - i.e., the fact that \(\mathcal{F}_{(\Theta,D,O)}\subset\mathcal{F}_{D}\). With this in mind, here we make the observation that the set \(\mathcal{F}_{D}\), which was defined from the feature map \(\phi_{D}\) and the kernel \(K_{D}\), is in fact invariant under certain re-weightings of the PQC feature map. As a result, Theorem 1 in fact holds not just for \(K_{D}\), but for _all_ of the appropriately re-weighted PQC kernels.
We can now make the notion of a re-weighted PQC kernel precise. For any "re-weighting vector" \(w\in\mathbb{R}^{|\Omega_{D}|}\), we define the re-weighted PQC feature map via
\[\phi_{(D,w)}(x)\,:=\frac{1}{|w|_{2}}\big{(}w_{0},w_{1}\cos(\langle\omega_{1},x\rangle),w_{ 1}\sin(\langle\omega_{1},x\rangle),...\,,w_{|\Omega_{D}^{+}|}\cos\big{(}\langle\omega_{|\Omega_{ D}^{+}|},x\rangle\big{)},w_{|\Omega_{D}^{+}|}\sin\big{(}\langle\omega_{|\Omega_{D}^{+}|},x\rangle \big{)}\big{)}, \tag{51}\]
along with the associated set of linear functions with respect to \(\phi_{(D,w)}\), defined via
\[\mathcal{F}_{(D,w)}=\{f_{v}(\cdot)=\langle v,\phi_{(D,w)}(\cdot)\rangle\,|\,v\in\mathbb{R}^{|\tilde{\Omega}_{D}|}\}, \tag{52}\]
and the associated re-weighted PQC kernel
\[K_{(D,w)}(x,x^{\prime})\,:=\langle\phi_{(D,w)}(x),\phi_{(D,w)}(x^{\prime})\rangle. \tag{53}\]
Note that we recover the previous definitions when \(w\) is the vector of all \(1\)'s (in which case \(|w|_{2}=\sqrt{|\Omega_{D}|}\)). We can then make the following observation:
**Observation 1:** (Invariance of \(\mathcal{F}_{D}\) under non-zero feature map re-weighting) For a re-weighting vector \(w\in\mathbb{R}^{|\Omega_{D}|}\) satisfying \(w_{i}\neq 0\) for all \(i\in[|\Omega_{D}|]\), we have that
\[\mathcal{F}_{(D,w)}=\mathcal{F}_{D}. \tag{54}\]
Proof.: Define the matrix \(M_{w}\) as the matrix with \(w\) as its diagonal, and the matrix
\[M\,:=\left(\sqrt{|\Omega_{D}|}/|w|_{2}\right)M_{w}. \tag{55}\]
By the assumptions on \(w\), the matrix \(M\) is invertible. Now, let \(f\in\mathcal{F}_{(D,w)}\). We have that
\[f(\cdot)=\langle v,\phi_{(D,w)}(\cdot)\rangle=\langle v,M\phi_{D}(\cdot) \rangle=\langle vM,\phi_{D}(\cdot)\rangle=\langle\tilde{v},\phi_{D}(\cdot)\rangle, \tag{56}\]
i.e., \(f\in\mathcal{F}_{D}\), and, therefore, \(\mathcal{F}_{(D,w)}\subset\mathcal{F}_{D}\). Similarly, for all \(g\in\mathcal{F}_{D}\) we have that
\[g(\cdot)=\langle v,\phi_{D}(\cdot)\rangle=\langle v,M^{-1}M\phi_{D}(\cdot) \rangle=\langle vM^{-1},M\phi_{D}(\cdot)\rangle=\langle\tilde{v},\phi_{(D,w)}( \cdot)\rangle, \tag{57}\]
i.e., \(g\in\mathcal{F}_{(D,w)}\), and hence \(\mathcal{F}_{D}\subset\mathcal{F}_{(D,w)}\).
We note that Observation 1, combined with the fact that \(\mathcal{F}_{(\Theta,D,O)}\subset\mathcal{F}_{D}\), implies that \(\mathcal{F}_{(\Theta,D,O)}\subset\mathcal{F}_{(D,w)}\). As alluded to before, we therefore see that Theorem 1 actually holds for _any_ PQC kernel \(K_{(D,w)}\), re-weighted via a re-weighting vector \(w\) with no zero elements. We note that allowing re-weighting vectors with zero elements has the effect of "shrinking" the set \(\mathcal{F}_{(D,w)}\), which might result in the existence of functions \(f\) satisfying both \(f\in\mathcal{F}_{(\Theta,D,O)}\) and \(f\notin\mathcal{F}_{(D,w)}\). Intuitively, this is problematic because for regression problems in which \(f\) is the optimal solution, we know that the PQC architecture \((\Theta,D,O)\) can realize \(f\), but we cannot hope for the RFF procedure based on \(K_{(D,w)}\) to do the same, as it is limited to hypotheses within \(\mathcal{F}_{(D,w)}\).
In light of the above observations, from this point on we can broaden our discussion of the application of Theorem 1 to include all appropriately re-weighted PQC kernels. This insight is important because:
1. We will see that all appropriately re-weighted PQC kernels give rise to the same function set \(\mathcal{F}_{D}\), but to different RKHS norms. As a result, the RKHS norm of the optimal PQC function - which as we have seen is critical to the complexity of the RFF method - will depend on which re-weighting we choose.
2. Similarly, we will see that the operator norm of the kernel integral operator - and therefore again the complexity of RFF linear regression - depends heavily on the re-weighting chosen.
3. We will see that the re-weighting of the PQC kernel completely determines the probability distribution \(\pi\), from which it is necessary to sample in order to implement RFF-based linear regression. Therefore, once again, the efficiency of RFF will depend on the re-weighting chosen. This perspective will also allow us to see why zero elements are not allowed in the re-weighting vector. Namely, because doing this will cause the probability of sampling the associated frequency to be zero, which is problematic if that frequency is required to represent the regression function.
### RFF implementation
Recall from Section 2.3, and from our presentation of Algorithm 3 in Section 3, that when given a shift-invariant kernel \(K\), in order to implement RFF-based linear regression, one has to sample from the probability measure \(\pi=p\times\mu\), which one obtains from the Fourier transform of \(K\). As such, given a re-weighted PQC kernel \(K_{(D,w)}\), we need to:
1. Understand the structure of the probability distribution \(p\), which should depend on both \(D\) and the re-weighting vector \(w\).
2. Understand when and how - i.e., for which data-encoding strategies and which re-weightings - one can efficiently sample from \(p\).
Let us begin with point 1. To this end we start by noting that the re-weighted PQC kernels \(K_{(D,w)}\) have a particularly simple integral representation, from which one can read off the required distribution. In particular, note that
\[K_{(D,w)}(x,x^{\prime}) =\langle\phi_{(D,w)}(x),\phi_{(D,w)}(x^{\prime})\rangle \tag{58}\] \[=\frac{1}{|w|_{2}^{2}}\left(w_{0}^{2}+\sum_{i=1}^{|\Omega_{D}^{+}|}w_{i}^{2}\left[\cos(\langle\omega_{i},x\rangle)\cos(\langle\omega_{i},x^{\prime}\rangle)+\sin(\langle\omega_{i},x\rangle)\sin(\langle\omega_{i},x^{\prime}\rangle)\right]\right) \tag{59}\] \[=\frac{1}{|w|_{2}^{2}}\sum_{i=0}^{|\Omega_{D}^{+}|}w_{i}^{2}\left[\cos(\langle\omega_{i},x\rangle)\cos(\langle\omega_{i},x^{\prime}\rangle)+\sin(\langle\omega_{i},x\rangle)\sin(\langle\omega_{i},x^{\prime}\rangle)\right] \tag{60}\] \[=\frac{1}{|w|_{2}^{2}}\sum_{i=0}^{|\Omega_{D}^{+}|}w_{i}^{2}\cos(\langle\omega_{i},x-x^{\prime}\rangle) \tag{61}\] \[=\frac{1}{2\pi}\int_{\mathcal{X}}\int_{0}^{2\pi}\sqrt{2}\cos(\langle\omega,x\rangle+\gamma)\sqrt{2}\cos(\langle\omega,x^{\prime}\rangle+\gamma)\,q_{(D,w)}(\omega)\,\mathrm{d}\gamma\,\mathrm{d}\omega, \tag{62}\]
where
\[q_{(D,w)}(\omega)=\sum_{i=0}^{|\Omega_{D}^{+}|}\frac{w_{i}^{2}}{|w|_{2}^{2}} \delta(\omega-\omega_{i}), \tag{63}\]
and \(\delta\) is the Dirac delta function. By comparison with Eq. (14) we therefore see that \(\pi=q_{(D,w)}\times\mu\), where as before, \(\mu\) is the uniform distribution over \([0,2\pi)\). For convenience, we refer to \(q_{(D,w)}\) as the probability distribution associated to \(K_{(D,w)}\).
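As a quick numerical check of the identity derived in Eqs. (58)-(61), the sketch below evaluates the re-weighted PQC kernel both via the feature map of Eq. (51) and via the cosine-sum form, for an assumed small frequency set and weight vector; the two evaluations agree, confirming that \(K_{(D,w)}\) is shift-invariant.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed small frequency set Omega_D (omega_0 first) and a strictly positive weight vector.
Omega_D = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
w = np.array([0.5, 1.0, 2.0, 1.5])

def phi(x):
    # Re-weighted PQC feature map, Eq. (51): (w_0, w_1 cos<w_1,x>, w_1 sin<w_1,x>, ...).
    feats = [w[0]]
    for wi, om in zip(w[1:], Omega_D[1:]):
        feats += [wi * np.cos(om @ x), wi * np.sin(om @ x)]
    return np.array(feats) / np.linalg.norm(w)

x, xp = rng.uniform(0, 2 * np.pi, 2), rng.uniform(0, 2 * np.pi, 2)
via_feature_map = phi(x) @ phi(xp)                                         # Eq. (58)
via_cosine_sum = np.sum(w**2 * np.cos(Omega_D @ (x - xp))) / np.sum(w**2)  # Eq. (61)
print(via_feature_map, via_cosine_sum)   # the two values agree
```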
Let us now move on to point \(2\) - in particular, for which data encoding strategies and re-weighting vectors can we _efficiently_ sample from \(q_{(D,w)}\)? Firstly, note that sampling from the _continuous_ distribution \(q_{(D,w)}\) can be done by sampling from the _discrete_ distribution \(p_{(D,w)}\) over \(\Omega_{D}\) defined via
\[p_{(D,w)}(\omega_{i})=\frac{w_{i}^{2}}{|w|_{2}^{2}} \tag{64}\]
for all \(\omega_{i}\in\Omega_{D}\). As a result, from this point on we focus on the distribution \(p_{(D,w)}\), and when clear from the context we drop the subscript and just use \(p\) to refer to \(p_{(D,w)}\). Also, we note that we can in principle just work directly with the choice of probability distribution, as opposed to the underlying weight vector, as we know for any probability distribution over \(\Omega_{D}\) there exists an appropriate weight vector.
As \(p\) is a distribution over \(\Omega_{D}\), in order to discuss the efficiency of sampling from \(p\), it is necessary to briefly recall some facts about the sets \(\Omega_{D}\) and \(\tilde{\Omega}_{D}\). In particular, as discussed in Appendix A, given a data-encoding strategy \(\mathbf{D}=\big{(}\mathbf{D}^{(1)},...,\mathbf{D}^{(d)}\big{)}\), where \(\mathbf{D}^{(j)}\) contains the Hamiltonians used to encode the \(j\)'th data component \(x_{j}\), we know that
\[\tilde{\Omega}_{D}=\tilde{\Omega}_{D}^{(1)}\times...\times\tilde{\Omega}_{D}^ {(d)}, \tag{65}\]
where \(\tilde{\Omega}_{D}^{(j)}\subset\mathds{R}\) depends only on \(\mathbf{D}^{(j)}\) - i.e., \(\tilde{\Omega}_{D}\) has a Cartesian product structure. Additionally, as discussed in Section 2.4 we know that \(\tilde{\Omega}_{D}\,:=\Omega_{D}\cup(-\Omega_{D})\), where \(\Omega_{D}\cap(-\Omega_{D})=\{\omega_{0}\}\). Taken together, we see that
\[|\tilde{\Omega}_{D}| =\prod_{j=1}^{d}|\tilde{\Omega}_{D}^{(j)}|, \tag{66}\] \[|\Omega_{D}| =\frac{|\tilde{\Omega}_{D}|-1}{2}+1. \tag{67}\]
Now, let us define \(N_{j}\,:=|\tilde{\Omega}_{D}^{(j)}|\), and make the assumption that \(N_{j}\) is independent of \(d\) (see Footnote 1). Furthermore, let us define \(N_{\text{min}}=\min_{j}\{N_{j}\}\). We then have that
Footnote 1: One can see from the discussion in Appendix A that \(N_{j}\) depends directly only on \(L_{j}\), the number of encoding gates in \(\mathbf{D}^{(j)}\), and the spectra of the encoding Hamiltonians in \(\mathbf{D}^{(j)}\). This assumption is therefore justified for all data-encoding strategies in which both \(L_{j}\) and the Hamiltonian spectra are independent of \(d\), which is standard practice. One can see Table 1 in Ref. [5] for a detailed list of asymptotic upper bounds on \(N_{j}\) for different encoding strategies.
\[|\tilde{\Omega}_{D}|\geq N_{\text{min}}^{d} \tag{68}\]
i.e., that the number of frequencies in \(\tilde{\Omega}_{D}\), and therefore \(\Omega_{D}\), scales _exponentially_ with respect to \(d\). From this we can immediately make our first observation:
**Observation 2:** Given that the number of elements in \(\Omega_{D}\) scales exponentially with \(d\), one _cannot_ efficiently store and sample from arbitrary distributions supported on \(\Omega_{D}\).
As such, we have to restrict ourselves to _structured distributions_, whose structure facilitates efficient sampling. One such subset of distributions consists of those which are supported only on a polynomial (in \(d\)) size subset of \(\Omega_{D}\). Another suitable set of distributions is what we call _product-induced_ distributions. Specifically, let \(\tilde{p}^{(j)}\) be an arbitrary distribution over \(\tilde{\Omega}_{D}^{(j)}\), and define the product distribution \(\tilde{p}\) over \(\tilde{\Omega}_{D}\) via
\[\tilde{p}\big{(}\omega=(\omega_{1},...,\omega_{d})\big{)}=\tilde{p}^{(1)}( \omega_{1})\times...\times\tilde{p}^{(d)}(\omega_{d}). \tag{69}\]
Note that, due to the \(d\)-independence of \(|\tilde{\Omega}_{D}^{(j)}|\) we can store and sample from \(\tilde{p}^{(j)}\) efficiently, which then allows us to sample from \(\tilde{p}\) by simply drawing \(\omega_{j}\sim\tilde{p}^{(j)}\) for all \(j\in[d]\) and then outputting \(\omega=(\omega_{1},...,\omega_{d})\). However, it may be the case that \(\omega\notin\Omega_{D}\). As such, the natural thing to do is simply output \(\omega\) if \(\omega\in\Omega_{D}\), and if not, output \(-\omega\). If one does this, then one samples from the distribution \(p\) over \(\Omega_{D}\) defined via
\[p(\omega)\,:=\left\{\begin{aligned} \tilde{p}(\omega)& \text{ if }\omega=\omega_{0},\\ \tilde{p}(\omega)+\tilde{p}(-\omega)&\text{ else,} \end{aligned}\right. \tag{70}\]
which we refer to as a _product-induced_ distribution. We can in fact however go further, and use the Cartesian product structure of \(\tilde{\Omega}_{D}\) to generalize product-induced distributions to _matrix-product-state-induced_ (MPS-induced) distributions. To do this, let us label the elements of \(\tilde{\Omega}_{D}^{(j)}\) via \(\tilde{\Omega}_{D}^{(j)}=\{\tilde{\Omega}_{kj}^{(j)}\}\) for \(k_{j}\in[N_{j}]\). We can then write any \(\omega\in\tilde{\Omega}_{D}\) via \(\omega=(\omega_{k_{1}}^{(1)},...,\omega_{k_{d}}^{(d)})\), for some indexing \((k_{1},...,k_{d})\). From this, we see that _any_ distribution \(\tilde{p}\) over \(\tilde{\Omega}_{D}\) can be naturally represented as a \(d\)-tensor - i.e., as as a tensor with \(d\) legs, where the \(j\)'th leg is \(N_{j}\) dimensional. Graphically, we have that
\[\tilde{p}[(\omega_{k_{1}}^{(1)},...,\omega_{k_{d}}^{(d)})]=\text{(a single tensor with $d$ open legs; tensor-network diagram omitted)} \tag{71}\]
Now we can consider the subset of distributions which can be represented by a _matrix product state_ (Sch11), i.e., those distributions for which
\[\tilde{p}[(\omega_{k_{1}}^{(1)},...,\omega_{k_{d}}^{(d)})]=\text{(a matrix-product-state factorization of the above tensor, with bond dimension $\chi$; diagram omitted)} \tag{72}\]
We refer to distributions which admit such a representation as MPS distributions [10, 11, 12, 13]. One can efficiently store such distributions whenever the bond dimension \(\chi\) is polynomial in \(d\), and as described in Refs. [10, 11, 12], one can sample from such distributions with complexity \(\mathcal{O}(dN_{\max}\chi^{3})\). Note that the product distribution in Eq. (69) is a special case of an MPS distribution, with \(\chi=1\). Now, given an MPS distribution \(\tilde{p}\) over \(\tilde{\Omega}_{D}\), we define the induced distribution over \(\Omega_{D}\) via Eq. (70).
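The following sketch illustrates, for the product case \(\chi=1\), the sampling procedure just described: each component frequency is drawn from its own distribution \(\tilde{p}^{(j)}\), and the resulting vector is mapped into \(\Omega_{D}\) via the rule of Eq. (70). The per-component frequency sets and probabilities, as well as the canonical choice of which mirror partner lies in \(\Omega_{D}\), are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed per-component frequency sets and distributions p~^(j) (illustrative only).
component_freqs = [np.array([-1, 0, 1]), np.array([-2, -1, 0, 1, 2])]
component_probs = [np.array([0.25, 0.5, 0.25]), np.array([0.1, 0.2, 0.4, 0.2, 0.1])]

def in_Omega_D(omega):
    # Assumed canonical splitting of mirror pairs: keep omega if its first
    # nonzero entry is positive (omega_0 = 0 is always kept).
    for c in omega:
        if c != 0:
            return c > 0
    return True

def sample_frequency():
    # Draw omega ~ p~ (product distribution over Omega~_D, Eq. (69)) ...
    omega = np.array([rng.choice(f, p=p) for f, p in zip(component_freqs, component_probs)])
    # ... and map it into Omega_D as in Eq. (70): output omega if omega in Omega_D, else -omega.
    return omega if in_Omega_D(omega) else -omega

print([tuple(sample_frequency()) for _ in range(5)])
```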
**Summary:** In order to _efficiently_ implement the RFF procedure for a kernel \(K_{(D,w)}\), it is necessary that one can sample efficiently from \(p_{(D,w)}\), the discrete probability distribution associated to the kernel. However, the number of frequency vectors \(|\Omega_{D}|\) typically scales exponentially in \(d\), and as such one _cannot_ efficiently store and sample from arbitrary probability distributions over \(\Omega_{D}\). As such, _efficiently_ implementing RFF is only possible for the subset of kernels whose associated distributions have a structure which facilitates efficient sampling. Due to the Cartesian product structure of \(\tilde{\Omega}_{D}\) one such set of distributions (amongst others) are those induced by MPS with polynomial bond dimension.
### Kernel integral operator for PQC kernels
Recall from Theorem 1 that
\[M\geq c_{0}\sqrt{n}\log\frac{108\sqrt{n}}{\delta} \tag{73}\]
frequency samples are sufficient to guarantee, with probability greater than \(1-\delta\), an error of at most \(\epsilon\) between the output of the RFF procedure and the optimal PQC model. As such, in order to fully understand the complexity of RFF-based regression, it is necessary for us to gain a better understanding of \(c_{0}\), which is given by
\[c_{0}=9\left(\frac{29}{4}+\frac{4}{|T_{K_{(D,w)}}|}\right). \tag{74}\]
In particular, in order to find the smallest number of sufficient frequency samples, it is necessary for us to obtain an upper bound on \(c_{0}\), which in turn requires a _lower bound_ on \(|T_{K_{(D,w)}}|\), the operator norm of the kernel integral operator associated with \(K_{(D,w)}\). We achieve this with the following lemma, whose proof can be found in Appendix C.
**Lemma 1:** (Operator norm of \(T_{K_{(D,w)}}\)) _Let \(K_{(D,w)}\) be the re-weighted PQC kernel defined via Eq. (53), and let \(T_{K_{(D,w)}}\) be the associated kernel integral operator (as per Definition 3). Assume that (a) the marginal distribution \(P_{\mathcal{X}}\) appearing in the definition of the kernel integral operator is fixed to the uniform distribution, and (b) the frequency set \(\Omega_{D}\) consists only of integer vectors - i.e., \(\Omega_{D}\subset\mathbb{Z}^{d}\). Then, we have that_
\[|T_{K_{(D,w)}}| =\max_{\omega_{i}\in\Omega_{D}}\left\{\frac{1}{2}\frac{w_{i}^{2}}{|w|_{2}^{2}}\right\} \tag{75}\] \[=\max_{\omega\in\Omega_{D}}\left\{\frac{1}{2}p_{(D,w)}(\omega)\right\}. \tag{76}\]
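As a sanity check on Lemma 1, the sketch below discretizes the kernel integral operator (with \(P_{\mathcal{X}}\) uniform) on a grid, for an assumed one-dimensional example with integer frequencies and a small weight on \(\omega_{0}\), and compares its largest eigenvalue to \(p_{\max}/2\); the grid size, frequencies and weights are illustrative choices.

```python
import numpy as np

# Assumed 1-d example: integer frequencies and a weight vector with small weight on omega_0.
Omega_D = np.array([0, 1, 2, 3])
w = np.array([0.1, 1.0, 2.0, 0.5])
p = w**2 / np.sum(w**2)                      # distribution associated to K_(D,w), Eq. (64)

def kernel(x, xp):
    # Re-weighted PQC kernel in its cosine-sum form, Eq. (61).
    return np.sum(p * np.cos(Omega_D * (x - xp)))

# Discretize the kernel integral operator (uniform P_X on [0, 2pi)) on a grid.
N = 400
grid = np.linspace(0, 2 * np.pi, N, endpoint=False)
T = np.array([[kernel(a, b) for b in grid] for a in grid]) / N

print("top eigenvalue of T_K :", np.max(np.linalg.eigvalsh(T)))
print("p_max / 2             :", np.max(p) / 2)
```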
In light of this, let us again drop the subscript for convenience, and define \(p_{\max}:=\max_{\omega\in\Omega_{D}}\left\{p_{(D,w)}(\omega)\right\}\). With this in hand we can immediately make the following observation:
**Observation 3:** In order to achieve \(c_{0}=\mathcal{O}\left(\text{poly}(d)\right)\), which is necessary for Theorem 1 to imply that \(M=\mathcal{O}(\text{poly}(d))\) frequency samples are sufficient, we require that
\[p_{\max}=\Omega\left(\frac{1}{\text{poly}(d)}\right), \tag{77}\]
i.e., the maximum probability of the probability distribution associated to \(K_{(D,w)}\) must decay at most inversely polynomially in \(d\).
It is important to stress that we have _not_ yet established the _necessity_ that \(p_{\max}\) decays at most inversely polynomially in \(d\) for the RFF procedure to be efficient. In particular, we have only established that this is required for the guarantee of Theorem 1 to be meaningful. However, we will show shortly, in Section 4.7, that Eq. (77) is indeed also necessary, at least in order to obtain a small _average_ error.
In light of Observation 3, we can immediately rule out the meaningful applicability of Theorem 1 for kernels with the following associated distributions:
**The uniform distribution:** As discussed in Section 4.4, we have that \(|\tilde{\Omega}_{D}|\geq N_{\min}^{d}\), and therefore, for the uniform distribution over \(\Omega_{D}\) one has that
\[p_{\max}\leq\frac{2}{N_{\min}^{d}}, \tag{78}\]
i.e., \(p_{\max}\) scales inverse-exponentially with \(d\).
**Product-induced distributions:** Consider a probability distribution \(p\) over \(\Omega_{D}\) induced by the product distribution \(\tilde{p}\) over \(\tilde{\Omega}_{D}\), defined as per Eq. (69). We have that
\[p_{\max} \leq 2\tilde{p}_{\max} \qquad\text{[via Eq.~(70)]} \tag{79}\] \[=2\prod_{j\in[d]}\tilde{p}_{\max}^{(j)} \qquad\text{[via Eq.~(69)]} \tag{80}\] \[\leq 2\left(\max_{j\in[d]}\left\{\tilde{p}_{\max}^{(j)}\right\}\right)^{d}, \tag{81}\]
where \(\tilde{p}_{\max}\) denotes the maximum probability of \(\tilde{p}\), and \(\tilde{p}_{\max}^{(j)}\) that of the component distribution \(\tilde{p}^{(j)}\).
Therefore, whenever \(\max_{j\in[d]}\left\{\tilde{p}_{\max}^{(j)}\right\}<1\), there exists some constant \(c>1\) such that
\[p_{\max}\leq\frac{2}{c^{d}}. \tag{82}\]
**Summary:** In order for Theorem 1 to be meaningfully applicable - i.e., to guarantee the efficiency of RFF for approximating variational QML - one requires that the operator norm of the kernel integral operator decays at most inversely polynomially with respect to \(d\), which via Lemma 1 requires that the maximum probability of the distribution associated with the kernel decays at most inversely polynomially with \(d\). Unfortunately, this rules out any efficiency guarantee, via Theorem 1, for kernels \(K_{(D,w)}\) whose associated distribution \(p_{(D,w)}\) is either the uniform distribution or a product-induced distribution (with all component probability distributions non-trivial).
### RKHS norm for PQC kernels
Recall from Theorem 1 that one requires
\[n\geq\max\left\{\frac{c_{1}^{2}\log^{4}\frac{1}{\delta}}{\epsilon^{2}},n_{0}\right\} \tag{83}\]
data samples, in order for Theorem 1 to guarantee, with probability greater than \(1-\delta\), an error of at most \(\epsilon\) between the output of the RFF procedure and the optimal PQC model. As such, in order to fully understand the complexity of RFF-based regression, it is necessary for us to gain a better understanding of \(c_{1}\), which is given by
\[c_{1}\leq 8(4b+3C+2\sqrt{C}), \tag{84}\]
where \(b\) is set by the regression problem (and we can assume to be constant), and \(C\) is an upper bound on the RKHS norm of the optimal PQC model, with respect to the kernel used for RFF - i.e.,
\[|f^{*}_{(\Theta,D,O)}|_{K_{(D,w)}}\leq C. \tag{85}\]
Therefore, we see that in order to determine the smallest number of sufficient data samples, it is necessary for us to obtain a concrete upper bound on the RKHS norm of the optimal PQC function with respect to the kernel \(K_{(D,w)}\). More specifically, we would like to understand, for which PQC architectures, and for which kernels, one obtains
\[C=\mathcal{O}\left(\mathrm{poly}(d)\right) \tag{86}\]
as for any such kernel and architecture, we can guarantee, via Theorem 1, the sample efficiency of the RFF procedure for approximating the optimal PQC model.
Ideally we would like to obtain results and insights which are _problem independent_ and therefore we focus here not on the optimal PQC function (which requires knowing the solution to the problem) but on the maximum RKHS norm over the entire PQC architecture. More specifically, given an architecture \((\Theta,\mathcal{D},O)\) we would like to place upper bounds on
\[C_{(\Theta,D,O)}\,:=\max\left\{|f|_{K_{(D,w)}}\,|\,f\in\mathcal{P}_{(\Theta,D,O)}\right\}, \tag{87}\]
as this clearly provides an upper bound on \(|f^{*}_{(\Theta,D,O)}|_{K_{(D,w)}}\) for _any_ regression problem.
We start off with an alternative definition of the RKHS norm, which turns out to be much more convenient to work with than the one we have previously encountered.
**Lemma 2:** (Alternative definition of RKHS norm - Adapted from Theorem 4.21 in Ref. [13]) _Given some kernel \(K:\ \mathcal{X}\times\mathcal{X}\to\mathbb{R}\) defined via_
\[K(x,x^{\prime})=\langle\phi(x),\phi(x^{\prime})\rangle, \tag{88}\]
_for some feature map \(\phi:\ \mathcal{X}\to\mathcal{X}^{\prime}\), one has that_
\[|f|_{K}=\inf\left\{|v|_{2}\,|\,v\in\mathcal{X}^{\prime}\text{ such that }f(\cdot)=\langle v,\phi(\cdot)\rangle\right\} \tag{89}\]
_for all \(f\in\mathcal{H}_{K}\)._
In words, Lemma 2 says that the RKHS norm is defined as the _infimum_ over the 2-norms of all hyperplanes in feature space which realize \(f\) - i.e., the infimum over \(|v|_{2}\) for all \(v\) such that \(f(x)=\langle v,\phi(x)\rangle\). We stress that in general functions in the reproducing kernel Hilbert space _do not_ have a unique hyper-plane representation with respect to the feature map. However, as detailed in Observation 4 below, for PQC feature maps and data-encoding strategies giving rise to integer frequency vectors (such as encoding strategies using only Pauli Hamiltonians [20]), the hyperplane representation is indeed _unique_.
**Observation 4:** (Hyperplane uniqueness for integer frequencies) Let \(\mathcal{D}\) be an encoding strategy for which \(\Omega_{\mathcal{D}}^{+}\subset\mathbb{Z}^{d}\). In this case, one has that
\[\Big{\{}\,1,\cos((\omega_{1},x)),\sin((\omega_{1}x)),...,\cos((\omega_{|\Omega _{\mathcal{D}}^{+}|},x)),\sin((\omega_{|\Omega_{\mathcal{D}}^{+}|},x))\,\Big{\}} \tag{90}\]
is a mutually orthogonal set of functions. Therefore for any strictly positive re-weighting \(w\), and any \(u,v\in\mathbb{R}^{|\Omega_{\mathcal{D}}|}\), if \(f(\cdot)=\langle v,\phi_{(D,w)}(\cdot)\rangle\) and \(f(\cdot)=\langle u,\phi_{(D,w)}(\cdot)\rangle\) then \(u=v\). Specifically, there exists only one hyperplane in feature space which realizes \(f\). As a consequence one has, via Lemma 2, that if \(f(\cdot)=\langle v,\phi_{(D,w)}(\cdot)\rangle\) then
\[|f|_{K_{(D,w)}}=|v|_{2}. \tag{91}\]
With this in hand, we can do some examples to gain intuition into the behaviour of the RKHS norm.
Example 1:Given a data-encoding strategy \(\mathcal{D}\), and the uniform weight vector \(w=\frac{1}{\sqrt{|\Omega_{\mathcal{D}}|}}(1,...,1)\), consider the function \(f(x)=\cos(\langle\omega_{1},x\rangle)\). In this case one has that \(f(\cdot)=\langle v,\phi_{(D,w)}(\cdot)\rangle\) with
\[v=\left(0,\sqrt{|\Omega_{\mathcal{D}}|},0,...,0\right), \tag{92}\]
and therefore \(|f|_{K_{(D,w)}}\leq\sqrt{|\Omega_{\mathcal{D}}|}\), with an equality in the case of encoding strategies with integer frequency vectors. Note that we obtain the same result for \(f(x)=\cos(\langle\omega,x\rangle)\) and \(f(x)=\sin(\langle\omega,x\rangle)\) for any \(\omega\in\Omega_{\mathcal{D}}\).
Example 2:Let us consider the same function as in Example 1 - i.e., \(f(x)=\cos((\omega_{1},x))\) - but this time let us consider the weight vector \(w=(0,1,0,...,0)\). In this case one has \(f(\cdot)=\langle v,\phi_{(D,w)}(\cdot)\rangle\) with
\[v=\left(0,1,0\,...,0\right), \tag{93}\]
and therefore \(|f|_{K_{(D,w)}}\leq 1\), with an equality in the case of encoding strategies with integer frequency vectors. We again get the same result for \(f(x)=\cos(\langle\omega,x\rangle)\) and \(f(x)=\sin((\omega,x))\) for any \(\omega\in\Omega_{\mathcal{D}}\), if one uses the weight vector with \(w_{\omega}=1\).
Example 3:Given a data-encoding strategy \(\mathcal{D}\), and the uniform weight vector \(w=\frac{1}{\sqrt{|\Omega_{\mathcal{D}}|}}(1,...,1)\), consider the function
\[f(x)=\frac{1}{|\Omega_{\mathcal{D}}|}\sum_{\omega\in\Omega_{\mathcal{D}}}\cos (\langle\omega,x\rangle). \tag{94}\]
In this case one has \(f(\cdot)=\langle v,\phi_{(D,w)}(\cdot)\rangle\) with
\[v=\frac{1}{\sqrt{|\Omega_{\mathcal{D}}|}}\left(1,1,0,1,0\,...,1,0\right), \tag{95}\]
and therefore \(|f|_{K_{(D,w)}}\leq 1\), with an equality in the case of encoding strategies with integer frequency vectors.
Given these examples, we can extract the following important observations:
1. As per Example 1, there exist functions, and re-weighted PQC kernels, for which the RKHS norm of the function scales with the number of frequencies in \(\Omega_{D}\), and therefore exponentially in \(d\). As such, we _cannot_ hope to place a universally applicable (architecture independent) polynomial (in \(d\)) upper bound on \(C_{(\Theta,D,O)}\). On the contrary, as per Examples 2 and 3 there do exist functions and reweightings for which the RKHS norm is _constant_. Therefore, while we cannot hope to place an architecture independent upper bound on the RKHS norm, it may be the case that there exist specific circuit architectures and kernel re-weightings for which \(C_{(\Theta,D,O)}\) is upper bounded by a polynomial in \(d\). Note that this can be interpreted as an _expressivity constraint_ on \((\Theta,\mathcal{D},O)\), as the more expressive an architecture is, the more likely it contains a function with large RKHS norm (with this likelihood becoming a certainty for the case of universal architectures).
2. By comparing Examples 1 and 2 we see that, as expected, the RKHS norm of a function depends strongly on the reweighting which defines the kernel. In particular, the same function can have a very different RKHS norm with respect to different feature map re-weightings.
3. By looking at all examples together, we see that informally, what seems to determine the RKHS norm of a given function \(f_{\rho}\) is the "alignment" between (a) the frequency distribution of the function \(f_{\rho}\), i.e., the components of the vector \(v\), and (b) the re-weighting vector \(w\) (or alternatively, the probability distribution \(p_{(D,\omega)}\)). In particular, in Example 1, the frequency representation of the function \(f\) is peaked on a single frequency, whereas the probability distribution \(p_{(D,\omega)}\) is uniform over all frequencies. In this example, \(v\) and \(p_{(D,\omega)}\) are _non-aligned_, and we find that the RKHS norm of \(f\) with respect to \(K_{(D,\omega)}\) scales exponentially in \(D\). On the contrary, in both Examples 2 and 3 we have that the frequency representation of \(f\) is well aligned with the probability distribution \(p\), and we find that we can place a _constant_ upper bound on the RKHS norm of the function.
4. At an intuitive level, one should expect the "alignment" between the frequency representation of a target function and the probability distribution associated with the kernel to play a role in the complexity of RFF. Informally, in order to learn an approximation of the function \(f_{\upsilon}\) via RFF, when constructing the approximate kernel via frequency sampling we need to sample frequencies present in \(v\). Therefore, if the distribution is supported mainly on frequencies _not_ present in \(v\), we cannot hope to achieve a good approximation via RFF. On the contrary, if the distribution is supported on frequencies present in \(v\), with the correct weighting, then we can hope to approximate \(f_{\upsilon}\) using our approximate kernel. In this sense, our informal observation that the RKHS norm depends on the alignment of target function with kernel probability distribution squares well with our intuitive understanding of RFF-based linear regression. We will make this intuition much more precise in Section 4.7.
**Summary:** In order for the statement of Theorem 1 to imply that a polynomial number of data samples is sufficient, we require that \(|f_{(\Theta,D,O)}^{*}|k_{(D,\omega)}\), the RKHS norm of the optimal PQC function, scales polynomially with respect to \(d\). Unfortunately, in the worst case, \(|f_{(\Theta,D,O)}^{*}|k_{(D,\omega)}\) can scale exponentially with respect to \(d\), and therefore we cannot hope for efficient dequantization of variational QML via RFF for _all_ possible circuit architectures. However, given a specific circuit architecture, and a re-weighting which leads to a distribution \(p_{(D,w)}\) with an efficient sampling algorithm, it may be the case that \(C_{(\Theta,D,O)}\) scales polynomially in \(d\), in which case Theorem 1 yields a meaningful sample complexity for the RFF dequantization of optimization of \((\Theta,D,O)\) for _any_ regression problem \(P\). Unfortunately however, it seems unlikely that an expressive circuit architecture will not contain _any_ functions with large RKHS norm. Ultimately though, all that is required by Theorem 1 is that the RKHS norm of the _optimal_ PQC function scales polynomially in \(d\), and this may be the case even when \(C_{(\Theta,D,O)}\) scales superpolynomially. Unfortunately, given a regression problem \(P\) it is not clear how to assess the RKHS norm of the optimal PQC function without knowing this function in advance, which seems to require running the PQC optimization.
### Lower bounds for RFF efficiency
By this point we have seen that the following is _sufficient_ for Theorem 1 to imply the efficient dequantization of variational QML via RFF-based linear regression:
1. We need to be able to efficiently sample from \(p_{(D,w)}\).
2. The distribution \(p_{(D,w)}\) needs to be sufficiently concentrated. In particular, we need \(p_{\max}=\Omega(1/\mathrm{poly}(d))\) in order to place a sufficiently strong lower bound on the operator norm of the kernel integral operator.
3. The frequency representation of the optimal PQC function needs to be "well aligned" with the probability distribution \(p_{(D,w)}\). This is required to ensure a sufficiently strong upper bound on the RKHS norm of the optimal PQC function.
It is clear that point 1 above is a _necessary_ criterion for efficient dequantization via RFF. However, it is less clear to which extent points 2 and 3 are _necessary_. In particular, it could be the case that the bounds provided by Theorem 1 are not tight, and that efficient PQC dequantization via RFF is possible even when not guaranteed by Theorem 1. In this section we address this issue, by proving _lower bounds_ on the complexity of RFF, which show that both points 2 and 3 are also necessary conditions, at least in order to obtain an _average-case_ guarantee on the output of RFF with respect to the \(L^{2}\)-norm.
We consider a data encoding strategy, giving rise to a frequency set \(\tilde{\Omega}_{D}\), as well as a weight vector \(w\), giving rise to the sampling distribution \(p_{(D,w)}\), which we abbreviate as \(p\). Now, given a regression problem, we define the following notions:
1. Let \(f^{*}_{(\Theta,D,O)}\) represent the optimal PQC model. In this section, for convenience we abbreviate this as \(f^{*}\).
2. As per Section 2.3 and the description of Algorithm 3 in Section 3, we consider running RFF regression by sampling \(M\) frequencies from the distribution \(\pi=p\times\mu\). Let \(\vec{\omega}=(\omega_{1},...,\omega_{M})\in\Omega_{D}^{M}\) be the random variable of \(M\) frequencies sampled from \(p\), and let \(g_{\vec{\omega}}\) be the output of linear regression using these frequencies to approximate the kernel \(K_{(D,w)}\).
In this section, we are concerned with lower bounding the _expected_\(L^{2}\)-norm of the difference between the optimal PQC function and the output of the RFF procedure, with respect to multiple runs of RFF-based linear regression. In particular, we want to place lower bounds on the quantity
\[\hat{e} :=\mathop{\mathrm{E}}_{\vec{\omega}\sim p^{M}}\lvert f^{*}-g_{ \vec{\omega}}\rvert_{2}^{2} \tag{96}\] \[=\sum_{\vec{\omega}\in\Omega_{D}^{M}}\lvert f^{*}-g_{\vec{\omega} }\rvert_{2}^{2}\vec{\zeta}(\vec{\omega}) \tag{97}\]
where \(\vec{\epsilon}=p^{M}\), i.e., \(\vec{\zeta}(\vec{\omega})=p(\omega_{1})\times...\times p(\omega_{M})\). In order to lower bound \(\hat{e}\), recall from Section 2.4 that \(f^{*}\) can be written as
\[f^{*}(x)=\sum_{\omega\in\tilde{\Omega}_{D}}\hat{f}^{*}(\omega)e^{i(\omega,x)}. \tag{98}\]
We abuse notation slightly and use the notation \(\hat{f}^{*}\) to denote the vector with entries \(\hat{f}^{*}(\omega)\). Finally, we denote by \(p_{\max}\) the maximum probability in \(p\). With this in hand, we have the following lemma (whose proof can be found in Appendix D):
**Lemma 3**:: (Lower bound on average relative error) _The expected \(L^{2}\)-norm of the difference between the optimal PQC function and the output of RFF-based linear regression can be lower bounded as_
\[\hat{e} \geq(2\pi)^{d}\lvert\hat{f}^{*}\rvert_{2}^{2}-(2\pi)^{d}2M\sum_{ \omega\in\Omega_{D}}\lvert\hat{f}^{*}(\omega)\rvert^{2}p(\omega) \tag{99}\] \[\geq(2\pi)^{d}\lvert\hat{f}^{*}\rvert_{2}^{2}-(2\pi)^{d}2M\sum_{ \omega\in\Omega_{D}}\lvert\hat{f}^{*}(\omega)\rvert^{2}p_{\max}\] (100) \[=\lvert f^{*}\rvert_{2}^{2}\left(1-2Mp_{\max}\right). \tag{101}\]
Using this Lemma, we can now see that indeed both concentration of the probability distribution \(p\), and "alignment" of the frequency representation of the optimal function with \(p\), are _necessary_ conditions to achieve a small _expected_ relative error \(\hat{e}\).
**Concentration of \(p\)**: To this end, note that we can rewrite Eq. (101) as
\[M\geq\frac{1}{2p_{\max}}\left(1-\frac{\hat{e}}{|f^{*}|_{2}^{2}}\right). \tag{102}\]
Therefore, provided \(\hat{e}/|f^{*}|_{2}^{2}\) is asymptotically upper bounded by some constant less than \(1\) (which, for example, will be the case for constant \(\hat{e}\) and growing \(|f^{*}|_{2}^{2}\)), then one requires \(M=\Omega(1/p_{\max})\) frequency samples to achieve expected relative error \(\hat{e}\). Given this, we see that the RFF procedure _cannot_ be efficient whenever \(p_{\max}\) is a negligible function - i.e., decays faster than any inverse polynomial. Recall from Section 4.5 that for all product-induced distributions, including the uniform distribution, \(p_{\max}\) is a negligible function of \(d\) - and therefore we _cannot_ achieve efficient RFF dequantization of variational QML via any re-weighting giving rise to such a distribution. However, on the contrary, as also discussed in Section 4.5, whenever \(p_{\max}=\Omega(1/\text{poly}(d))\) then one _can_ apply Theorem 1 to place a polynomial upper bound on \(M\).
**Alignment of \(\hat{f}^{*}\) and \(p\)**: Note that we can rewrite Eq. (99) as
\[M\geq\frac{1}{2\sum_{\omega\in\Omega_{D}}|\hat{f}^{*}(\omega)|^{2}p(\omega)} \left(|\hat{f}^{*}|_{2}^{2}-\frac{\hat{e}}{(2\pi)^{d}}\right). \tag{103}\]
Therefore, whenever \(|\hat{f}^{*}|_{2}^{2}-\hat{e}/(2\pi)^{d}=\Omega(1)\) then one has that
\[M=\Omega\left(\frac{1}{\sum_{\omega\in\Omega_{D}}|\hat{f}^{*}(\omega)|^{2}p( \omega)}\right) \tag{104}\]
where
\[\sum_{\omega\in\Omega_{D}}|\hat{f}^{*}(\omega)|^{2}p(\omega) \tag{105}\]
is the inner product between the frequency vector \(\hat{f}^{*}\) and the probability distribution \(p\), which we interpret as the "alignment" between the frequency representation and the sampling distribution. We therefore see that the smaller this overlap, the larger number of frequencies are required to achieve a given relative error. Again, we therefore see that "large alignment" between \(\hat{f}^{*}\) and the probability distribution \(p\) is a necessary condition to achieve a smaller expected relative error.
## 5 Discussion, conclusions and future directions
In this work, we have provided a detailed analysis of classical linear regression with random Fourier features, using re-weighted PQC kernels, as a method for the dequantization of PQC based regression. Intuitively, as discussed in Section 3, this method is motivated by the fact that it optimizes over a natural extension of the same function space used by PQC models - i.e., the method has to some extent an _inductive bias_ which is comparable to that of PQC regression. At a very high level, given a PQC architecture \((\Theta,\mathcal{D},O)\), and a regression problem \(P\sim\mathcal{X}\times\mathbb{R}\), the method consists of:
1. Choosing a re-weighting \(w\) of the PQC feature map, or equivalently, choosing a distribution \(p\) over frequencies appearing in \(\Omega_{D}\).
2. Sampling \(M\) frequencies from \(p\), and using them to construct an approximation of the PQC feature map.
3. Sampling \(n\) training data points from \(P\) and running regularized linear regression with the approximate feature map.
We know that Step 3 above has time and space complexity \(\mathcal{O}(nM)\) and \(\mathcal{O}(nM^{2}+M^{3})\), respectively. Given this, we have been interested in obtaining necessary and sufficient conditions on \(n\) and \(M\), in order to guarantee that, with high probability, the output of classical RFF-based linear regression achieves a true risk which is no more than \(\epsilon\) worse than the output of PQC based optimization. To this end, we have seen, via Theorem 1, that if the following conditions are satisfied, then RFF-based linear regression yields a fully efficient classical dequantization of PQC regression:
1. There exists an efficient algorithm to sample from \(p\), given as input only the data encoding strategy \(\mathcal{D}\).
2. The distribution \(p\) is sufficiently concentrated. In particular, \(p_{\max}\) should decay at most inversely polynomially in \(d\).
3. The RKHS norm of the optimal PQC function should scale polynomially in \(d\). Informally, the frequency distribution of the optimal PQC function should be sufficiently "well-aligned" with the probability distribution \(p\).
On the other hand, we have seen, via Lemma 3, that the above conditions are _necessary_ in a certain sense - i.e., that if they are not satisfied then RFF-based linear regression will _not_ provide an efficient dequantization of PQC regression (on average). With this in mind, we can make the following observations:
**Problem independent dequantization and PQC architecture design:** If there exists an efficiently sampleable distribution \(p\) over \(\Omega_{\mathcal{D}}\), which is also sufficiently concentrated, such that with respect to this distribution _all_ PQC functions \(f\in\mathcal{F}_{(\Theta,\mathcal{D},O)}\) have an RKHS norm which is polynomial in \(d\), then RFF-based linear regression using \(p\) provides an efficient classical dequantization method of variational QML using \((\Theta,\mathcal{D},O)\) for _any_ regression problem \(P\). Intuitively, we expect this to be the case if the frequency representations of _all_ functions \(f\in\mathcal{F}_{(\Theta,\mathcal{D},O)}\) are well aligned with some suitable distribution \(p\). We can use this observation to guide PQC architecture design - in particular, in order to ensure that generic dequantization via RFF is not immediate, one should ensure that \(\mathcal{F}_{(\Theta,\mathcal{D},O)}\) does _not_ have the property specified above. However, we note that for any sufficiently expressive architecture \((\Theta,\mathcal{D},O)\), and any distribution \(p\), we would expect there to exist functions \(f\in\mathcal{F}_{(\Theta,\mathcal{D},O)}\) whose frequency representation is misaligned with \(p\), and therefore have super-polynomial RKHS norm. Indeed, we have seen in Section 4.6 that there exist functions with exponentially scaling RKHS norm, and therefore in the limit of _universal_ circuit architectures \((\Theta,\mathcal{D},O)\), it is certainly the case that \(\mathcal{F}_{(\Theta,\mathcal{D},O)}\) contains such functions.
**Problem dependent dequantization and potential quantum advantage:** As above, we expect that for sufficiently expressive circuit architectures, for any distribution \(p\) there will exist functions \(f\in\mathcal{F}_{(\Theta,\mathcal{D},O)}\) with super-polynomial RKHS norm - i.e., that for such architectures problem independent dequantization is not possible. However, in order for RFF-based linear regression to provide an efficient dequantization method for a _specific regression problem \(P\)_, all that is required is that the _optimal_ PQC function for \(P\) has polynomial RKHS norm. Intuitively, we expect this to be the case when the regression function for \(P\) has a frequency representation which is aligned with some efficiently sampleable and sufficiently concentrated distribution \(p\). As such, we can use this to gain insight into which type of regression problems might admit a _quantum advantage_ via PQC based optimization. Specifically, we know that for any regression problem \(P\) whose regression function has a frequency representation aligned with a sufficiently "anti-concentrated" distribution over \(\Omega_{\mathcal{D}}\) (one for which \(p_{\max}\) is a negligible function), RFF-based linear regression will _not_ provide an efficient dequantization - and therefore variational QML might offer a quantum advantage. On the contrary, for any regression problem \(P\) whose regression function has a frequency representation well aligned with a distribution \(p\) that is both sufficiently concentrated and efficiently sampleable, then RFF-based linear regression _will_ provide an efficient dequantization, _provided one can identify_\(p\). We note that in practice, even if such a distribution \(p\) exists, it is unclear how to identify \(p\) from the training data, without first solving the problem.
Given the above, the following are natural avenues for future research:
**Identification of problems admitting potential quantum advantage:** Can we identify a class of scientifically, industrially or socially relevant regression problems, whose regression functions are well aligned with anti-concentrated distributions over exponentially large frequency sets, and are therefore good candidates for quantum advantage via variational QML?
**Extension to classification problems:** The analysis we have performed here has focused on _regression_ problems. Extending this analysis to classification problems would be both natural and interesting.
**Design of suitable sampling distributions:** We have seen that a _necessary_ condition for efficient dequantization via RFF is that the distribution \(p\) is both efficiently sampleable and sufficiently concentrated. This immediately _rules out_ a large class of natural distributions - namely the uniform distribution and product-induced distributions with non-trivial components. Given this, in order for RFF-based dequantization to be useful, it is important to identify and
motivate suitable sampling distributions. We note that ideally one would choose the distribution based on knowledge of the regression function of the problem \(P\), however in practice it is more likely that one would first choose a distribution \(p\), which will then determine the class of problems for which RFF will be an efficient dequantization method - namely those problems whose regression function is well aligned with \(p\).
**Understanding the RKHS norm for different architectures:** As we have discussed, for any circuit architecture for which _all_ expressible functions are well aligned with a suitable distribution \(p\), one _cannot_ obtain a quantum advantage, as RFF-based linear regression will provide an efficient dequantization method. Given this, it is of interest to investigate circuit architectures from this perspective, to understand which architectures might facilitate a quantum advantage, and which architectures are prone to dequantization. Unfortunately, we note that gaining _analytic_ insight into which hyperplanes (i.e frequency representations) are expressible by a given PQC architecture is a hard problem, which has so far resisted progress.
**Effect of noise on RKHS norm of PQC architectures:** As we have pointed out, for any sufficiently expressive circuit architecture we expect the worst case RKHS norm to scale superpolynomially - i.e., that there exist PQC functions whose frequency representation is aligned with an anti-concentrated distribution over frequencies. It would, however, be of interest to understand the effect of noise on architectures realizing such functions. In particular, it could be the case that realistic circuit noise causes a concentration of the frequencies which are expressible by a PQC architecture, and therefore facilitates dequantization via RFF-based linear regression.
**Improved RFF methods for sparse data:** In this work, we have provided an analysis of "standard" RFF-based linear regression, for regression problems with no promised structure. However, when one is promised that the distribution \(P\) has some particular structure, then one can devise variants of RFF with improved efficiency guarantees. One such example we have already seen - namely, if one can guarantee that the regression function has a frequency representation supported on a subset of possible frequencies, then on can design the sampling distribution appropriately, which leads to improved RFF efficiencies [14]. In a similar vein, it is known that when one has a promise on the sparsity of the vectors in the support of \(P\), then one can devise variants of RFF with improved efficiency [13]. As this is a natural promise for application-relevant distributions, understanding the potential and limitations of "Sparse-RFF" as a dequantization method is an interesting research direction.
**PQC dequantization without RFF:** Here we have discussed only _one_ potential method for the dequantization of variational QML. As we have noted in Section 3, recently Ref. [12] has noted that to each PQC one can associate a feature map for which the associated kernel can be evaluated efficiently classically _without_ requiring any approximations. As such, understanding the extent to which one can place relative error guarantees on linear regression using such a kernel is a natural avenue for investigation. Additionally, as mentioned in the introduction, a variety of recent works have shown that PQCs can be efficiently simulated, _in an average case sense_, in the presence of certain types of circuit noise [14, 15, 16]. Again, understanding the extent to which this allows one to classically emulate noisy variational QML is another natural approach to dequantization of _realistic_ variational QML. Finally, quite recently Ref. [17] has shown that one can sometimes efficiently extract from a trained PQC a "shadow model" which can be used for efficient classical inference. Given this, it would be interesting to understand the extent to which one can train classical shadow models directly from data.
## Acknowledgments
RS is grateful for helpful conversations with Alex Nietner, Christa Zoufal and Yanting Teng. ERA is funded by the Vicente Lopez grant given by Eurecat and also funded by ICFO. ERA would also like to give special thanks to Jens Eisert for inviting him to participate in his group and also thank Adan Garriga for helping him all the way with this stay. SJ thanks the BMBK (EniQma) and the Einstein Foundation (Einstein Research Unit on Quantum Devices) for their support. EGF is funded by the Einstein Foundation (Einstein Research Unit on Quantum Devices). JJM is funded by QuantERA (HQCC). JE is funded by the QuantERA (HQCC), the Einstein Foundation (Einstein Research Unit on Quantum Devices), the Munich Quantum Valley (K-8), the MATH+ cluster of excellence, the BMWK (EniQma), and the BMBF (Hybrid).
Construction of the frequency set \(\tilde{\Omega}_{D}\)
We describe here the way in which the frequency set \(\tilde{\Omega}_{D}\) of a PQC model is constructed from the data-encoding strategy \(\mathbf{D}\). We follow closely the presentation in Ref. [10], and start by noting that
\[\tilde{\Omega}_{D}=\tilde{\Omega}_{D}^{(1)}\times...\times\tilde{\Omega}_{D}^{ (d)} \tag{106}\]
where \(\tilde{\Omega}_{D}^{(j)}\subseteq\mathbb{R}\) depends only on \(\mathcal{D}^{(j)}\). We can therefore focus on the construction of \(\tilde{\Omega}_{D}^{(j)}\) for a single co-ordinate. In light of this, let us drop some coordinate-indicating superscripts for ease of presentation. In particular, let us write \(\mathbf{D}^{(j)}=\{H_{k}\,|\,k\in[L_{j}]\}\), where we have dropped the coordinate-indicating superscripts from the Hamiltonians. We then use \(\lambda_{k}^{i}\) to denote the \(i\)'th eigenvalue of \(H_{k}\), and \(N_{k}\) to denote the number of eigenvalues of \(H_{k}\). We also introduce the multi-index \(\widetilde{i}=(i_{1},...,i_{L_{j}})\), with \(i_{k}\in[N_{k}]\), which allows us to define the sum of the eigenvalues indexed by \(\widetilde{i}\), one from each Hamiltonian, as
\[\Lambda_{\widetilde{i}}=\lambda_{1}^{i_{1}}+...+\lambda_{L_{j}}^{i_{L_{j}}}. \tag{107}\]
With this setup, we then have that the frequency set \(\tilde{\Omega}_{D}^{(j)}\) is given by the set of all differences of all possible sums of eigenvalues, i.e.,
\[\tilde{\Omega}_{D}^{(j)}=\left\{\Lambda_{\widetilde{i}}-\Lambda_{\widetilde{ j}}|\,\widetilde{i},\widetilde{j}\right\}, \tag{108}\]
and as mentioned before, the total frequency set is given by Eq. (106). There is a convenient graphical way to understand this construction, which is illustrated in Figure. 2. Essentially, one notes that, in order to construct \(\tilde{\Omega}_{D}^{(j)}\) one can consider a tree, with depth equal to the number of data-encoding gates, whose leaves contain the eigenvalue sums \(\Lambda_{\widetilde{i}}\). The frequency set is then given by all possible pairwise differences between leaves.
## Appendix B Proof of Theorem 1
We start by noting that Theorem 1 in the main text follows as an immediate corollary of the following Theorem:
**Theorem 2**:: (RFF vs. variational QML - alternative form) _Let \(R\) be the risk associated with a regression problem \(P\sim\mathcal{X}\times\mathbb{R}\). Assume the following:_
1. \(\|f^{*}_{(\Theta,D,O)}\|_{K_{D}}\leq C\)_,_
2. \(|y|\leq b\) _almost surely when_ \((x,y)\sim P\)_, for some_ \(b>0\)
_Additionally, define_
\[n_{0} :=\max\left\{4|T_{K_{D}}|^{2},\left(528\log\frac{1112\sqrt{2}}{ \delta}\right)^{2}\right\}, \tag{109}\] \[c_{0} :=36\left(3+\frac{2}{|T_{K_{D}}|}\right),\] (110) \[c_{1} :=8\sqrt{2}(4b+\frac{5}{\sqrt{2}}C+2\sqrt{2C}). \tag{111}\]
_Then, let \(\delta\in(0,1]\), let \(n\geq n_{0}\), set \(\lambda_{n}=1/\sqrt{n}\), and let \(\hat{f}_{M_{n},\lambda_{n}}\) be the output of \(\lambda_{n}\)-regularized linear regression with respect to the feature map_
\[\phi_{M_{n}}(x)=\frac{1}{\sqrt{M_{n}}}\big{(}\psi(x,v_{1}),...,\psi(x,v_{M}) \big{)} \tag{112}\]
_constructed from the integral representation of \(K_{D}\) by sampling \(M_{n}\) elements from \(\pi\). Then,_
\[M_{n}\geq c_{0}\sqrt{n}\log\frac{108\sqrt{n}}{\delta} \tag{113}\]
_is enough to guarantee, with probability at least \(1-\delta\), that if \(R(\hat{f}_{M_{n},\lambda_{n}})\geq R(f_{(\Theta,D,O)}^{*})\), then_
\[R(\hat{f}_{M_{n},\lambda_{n}})-R(f_{(\Theta,D,O)}^{*})\leq\frac{c_{1}\log^{2} \frac{1}{\delta}}{\sqrt{n}}. \tag{114}\]
Next we note that Theorem 2 above - from which we derive Theorem 1 in the main text as an immediate corollary - is essentially a straightforward application of the generalization bound given as Theorem 1 in Ref. [14]. As such, we start our proof of Theorem 2 with a presentation of this result. To this end, we require first a few definitions. Firstly, given a kernel \(K\), with associated RKHS \((\mathcal{H}_{K},\langle\cdot,\cdot\rangle_{K})\), we define \(\mathcal{H}_{K}^{C}=\{f\in\mathcal{H}_{K}\,|\,|f|_{K}\leq C\}\) as the subset of functions in \(\mathcal{H}_{K}\) with RKHS norm bounded by \(C\). We define \(\mathcal{P}_{D}^{C}\) and \(\mathcal{F}_{(\Theta,D,O)}^{C}\) analogously. Additionally, given a regression problem \(P\) with associated risk \(R\), we then define
\[f_{\mathcal{H}_{K}^{C}}^{*}=\operatorname*{arg\,min}_{f\in\mathcal{H}_{K}^{C} }\left[R(f)\right], \tag{115}\]
as the optimal function for \(P\) in \(\mathcal{H}_{K}^{C}\). Finally, recall that we denote by \(T_{K}\) the kernel integral operator associated with the kernel \(K\) (see Definition 3). With this in hand, we can state a slightly reformulated version of the RFF generalization bound proven in Ref. [14] (which in turn build on the earlier results of Ref. [13]).
**Theorem 3**:: (Theorem 1 from [14]) _Assume a regression problem \(P\sim\mathcal{X}\times\mathbb{R}\). Let \(K:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) be a kernel, and let \(\mathcal{H}_{K}^{C}\) be the subset of the RKHS \(\mathcal{H}_{K}\) consisting of functions with RKHS-norm upper bounded by some constant \(C\). Assume the following:_
1. \(K\) _has an integral representation_ \[K(x,x^{\prime})=\int_{\Phi}\psi(x,v)\psi(x^{\prime},v)\,\mathrm{d}\pi(v).\] (116)
2. _The function_ \(\psi\) _is continuous in both variables and satisfies_ \(|\psi(x,v)|\leq\kappa\) _almost surely, for some_ \(\kappa\in[1,\infty)\)_._
3. \(|y|\leq b\) _almost surely when_ \((x,y)\sim P\)_, for some_ \(b>0\)_._
_Additionally, define_
\[\overline{B} :=2b+2\kappa\max\{1,|f_{\mathcal{H}_{K}^{C}}^{*}|k\}, \tag{117}\] \[\overline{\sigma} :=2b+2\kappa\sqrt{\max\{1,|f_{\mathcal{H}_{K}^{C}}^{*}|k\}}, \tag{118}\]
\[n_{0} :=\max\left\{4|T_{K}|^{2},\left(264\kappa^{2}\log\frac{556\kappa^{3}}{ \delta}\right)^{2}\right\}, \tag{119}\] \[c_{0} :=9\left(3+4\kappa^{2}+\frac{4\kappa^{2}}{|T_{K}|}+\frac{\kappa^{4 }}{4}\right),\] (120) \[c_{1} :=8\big{(}\overline{B}_{K}+\overline{\sigma}\kappa+\max\{1,|f^{*} _{M_{K}^{c}}|k\}\big{)}. \tag{121}\]
_Then, let \(\delta\in(0,1]\), \(n\geq n_{0}\), assume \(\lambda_{n}=1/\sqrt{n}\), and let \(\hat{f}_{M_{n},\lambda_{n}}\) be the output of \(\lambda_{n}\)-regularized linear regression with respect to the feature map_
\[\phi_{M_{n}}(x)=\frac{1}{\sqrt{M_{n}}}\big{(}\psi(x,v_{1}),...\,\psi(x,v_{M}) \big{)} \tag{122}\]
_constructed from the integral representation of \(K\) by sampling \(M_{n}\) elements from \(\pi\). Then,_
\[M_{n}\geq c_{0}\sqrt{n}\log\frac{108\kappa^{2}\sqrt{n}}{\delta} \tag{123}\]
_is enough to guarantee, with probability at least \(1-\delta\), that_
\[R(\hat{f}_{M_{n},\lambda_{n}})-R(f^{*}_{M_{K}^{c}})\leq\frac{c_{1}\log^{2} \frac{1}{\delta}}{\sqrt{n}}. \tag{124}\]
This statement demonstrates, that under reasonable assumptions, the estimator that is obtained with a number of random features proportional to \(\mathcal{O}(\sqrt{n}\log n)\) achieves a \(\mathcal{O}(1/\sqrt{n})\) learning error. We would now like to prove Theorem 2 by applying Theorem 3 to the classical PQC-kernel \(K_{D}\). To do this, we require the following Lemma:
**Lemma 4**: (Function set inclusions) _For any constant \(C\) one has that_
\[\begin{split}\mathcal{F}^{C}_{(\Theta,D,O)}\subseteq& \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\
i.e. \(f_{v}\) is indeed in the RKHS \(\mathcal{H}_{K_{D}}\). Finally, the inclusion \(\mathcal{F}_{D}^{C}\subseteq\mathcal{H}_{K_{D}}^{C}\) now follows easily, as \(\mathcal{F}_{D}^{C}\subseteq\mathcal{F}_{D}\subseteq\mathcal{H}_{K_{D}}\) yields
\[f\in\mathcal{F}_{D}^{C}\implies f\in\mathcal{H}_{K_{D}}, \tag{132}\]
and by definition
\[f\in\mathcal{F}_{D}^{C}\implies|f|_{K_{D}}\leq C \tag{133}\]
which together means that
\[f\in\mathcal{F}_{D}^{C}\implies f\in\mathcal{H}_{K_{D}}^{C}. \tag{134}\]
With this in hand, we can now prove Theorem 2.
Proof of Theorem 2.: We start by recalling that, as shown in Section 4.4, for any reweighting vector \(w\) the reweighted PQC-kernel \(K_{(D,w)}\) has the integral representation
\[K_{(D,w)}(x,x^{\prime})=\frac{1}{2\pi}\int_{\mathcal{X}}\int_{0}^{2\pi}\sqrt{2 }\cos((\omega,x)+\gamma)\sqrt{2}\cos((\omega,x^{\prime})+\gamma)q_{(D,w)}( \omega)\ \mathrm{d}y\mathrm{d}v, \tag{135}\]
where
\[q_{(D,w)}(\omega)=\sum_{i=0}^{|\Omega_{D}^{*}|}\frac{w_{i}^{2}}{|w|_{2}^{2}} \delta(\omega-\omega_{i}). \tag{136}\]
As a result, for any reweighting, including \(w=(1,...,1)\), the kernel \(K_{(D,w)}\) satisfies assumption (1) of Theorem 3 with
\[\psi(x,v)=\sqrt{2}\cos((\omega,x)+\gamma). \tag{137}\]
Given this, we note that \(\psi\) is continuous in both variables and that \(|\psi(x,v)|\leq\sqrt{2}\) for all \(x,v\) - i.e., for any kernel \(K_{(D,w)}\), assumption (2) of Theorem 3 is satisfied with \(\kappa=\sqrt{2}\).
Next, set the \(C\) appearing in Theorem 3 to the constant \(C\) appearing in assumption (1) of Theorem 2. More specifically, we apply Theorem 3 to the subset \(\mathcal{H}_{K_{D}}^{C}\), where \(C\) is an upper bound on the RKHS norm of the optimal function for \(P\) in \(\mathcal{F}_{(\Theta,D,O)}\) - i.e. \(|f_{(\Theta,D,O)}^{*}|_{K_{D}}\leq C\). Doing this we obtain, via Theorem 3 and the fact that \(\kappa=\sqrt{2}\), that provided all the conditions of Theorem 2 are satisfied, then
\[R(\hat{f}_{M_{\Theta},\lambda_{n}})-R(f_{\mathcal{H}_{K_{D}}^{C}}^{*})\leq \frac{c_{1}\log^{2}\frac{1}{\delta}}{\sqrt{n}}. \tag{138}\]
To achieve the statement of Theorem 1 we then use the assumption that \(|f_{(\Theta,D,O)}^{*}|_{K_{D}}\leq C\). More specifically, via Lemma 4 this assumption implies that \(f_{(\Theta,D,O)}^{*}\in\mathcal{H}_{K_{D}}^{C}\), which together with the definition of \(f_{\mathcal{H}_{K_{D}}^{C}}^{*}\) as the _optimal_ function in \(\mathcal{H}_{K_{D}}^{C}\), allows us to conclude that
\[R(f_{(\Theta,D,O)}^{*})\geq R(f_{\mathcal{H}_{K_{D}}^{C}}^{*}). \tag{139}\]
This then implies
\[R(\hat{f}_{M_{\alpha},\lambda_{n}})-R(f_{(\Theta,D,O)}^{*}) \leq R(\hat{f}_{M_{\alpha},\lambda_{n}})-R(f_{\mathcal{H}_{K_{D} }^{C}}^{*}) \tag{140}\] \[\leq\frac{c_{1}\log^{2}\frac{1}{\delta}}{\sqrt{n}} \text{[via Eq.\eqref{eq:RHS}]} \tag{141}\]
as per the statement of Theorem 2.
As already mentioned, Theorem 1 in the main text then follows as an immediate corollary of Theorem 2.
Proof of Lemma 1
Proof of Lemma 1.: As discussed in Ref. [10], the kernel integral operator is self-adjoint. In light of this, we know that \(|T_{K_{(D,w)}}|=\rho(T_{K_{(D,w)}})\), where \(\rho(T_{K_{(D,w)}})\) denotes the _spectral radius_ of \(\rho(T_{K_{(D,w)}})\). As such, we focus on determining the spectrum of \(\rho(T_{K_{(D,w)}})\). To this end, note that under assumption (a) of the lemma statement we have that
\[(T_{K_{(D,w)}}g)(x) =\int_{\mathcal{X}}K_{(D,w)}(x,x^{\prime})g(x^{\prime})dP_{ \mathcal{X}}(x^{\prime}) \tag{142}\] \[=\frac{1}{(2\pi)^{d}}\int_{\mathcal{X}}K_{(D,w)}(x,x^{\prime})g( x^{\prime})\,\mathrm{d}x^{\prime}\qquad\qquad\text{[via assumption (a)]} \tag{143}\]
with
\[K_{(D,w)}(x,x^{\prime})=\frac{1}{|w|_{2}^{2}}\left(w_{0}^{2}+\sum_{i=1}^{| \Omega_{D}^{+}|}w_{i}^{2}\cos\bigl{(}\left\langle\omega_{i},(x-x^{\prime}) \right\rangle\bigr{)}\right) \tag{144}\]
where \(\omega_{0}=(0,...,0)\). We now use assumption (b) - i.e. that \(\Omega_{D^{+}}\subset\mathbb{Z}^{d}\) - to show that for any \(\omega\in\mathbb{Z}^{d}\), the function \(g(x^{\prime})=\cos\bigl{(}\left\langle\omega,x^{\prime}\right\rangle\bigr{)}\) is an eigenfunction of \(T_{K_{(D,w)}}\). Specifically, using the following notation
\[\delta(\omega\pm\nu)\,:=\begin{cases}1&\text{if }(\omega=\nu)\vee(\omega=-\nu),\\ 0&\text{else},\end{cases} \tag{146}\]
and defining \(w_{\omega}\) to be the weight associated with \(\omega\in\Omega_{D}\), we have that
\[(T_{K_{(D,w)}}g)(x) =\frac{1}{(2\pi)^{d}}\int_{\mathcal{X}}\left[\frac{1}{|w|_{2}^{2 }}\left(w_{0}^{2}+\sum_{i=1}^{|\Omega_{D}^{+}|}w_{i}^{2}\cos\bigl{(}\left\langle \omega_{i_{1}}(x-x^{\prime})\right\rangle\bigr{)}\right)\right]\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
As such, we have that all functions in the set \(\{\sin(\omega,x)\}\,|\,\omega\in\mathbb{Z}^{d}\}\cup\{\cos(\omega,x)\}\,|\,\omega \in\mathbb{Z}^{d}\}\) are eigenfunctions of \(T_{K_{D}}\). However, as this set is a basis for \(L^{2}(\mathcal{X},P_{\mathcal{X}})\) - in the relevant case where \(P_{\mathcal{X}}\) is the uniform distribution - we can conclude that
\[|T_{K_{(D,\omega)}}| =\rho\left(T_{K_{(D,\omega)}}\right) \tag{154}\] \[=\max_{\omega\in\Omega_{D}}\left\{\frac{1}{2}\frac{w_{\omega}^{2} }{|w|_{2}^{2}}\right\}\] (155) \[=\max_{\omega\in\Omega_{D}}\left\{\frac{1}{2}\dot{p}_{(D,\omega)} (\omega)\right\}. \tag{156}\]
As an aside, it is interesting to note that the minimization of the norm \(|T_{K_{(D,\omega)}}|\) subject to the constraint \(|w|_{2}^{2}\leq c_{0}\) for some \(c_{0}>0\) can be captured in terms of a convex semi-definite problem. This problem can be written as
\[\text{minimize} c \tag{157}\] \[\text{subject to }\frac{1}{2}\frac{w_{\omega}^{2}}{|w|_{2}^{2}} \leq c,\] (158) \[|w|_{2}^{2} \leq c_{0}, \tag{159}\]
which is easily seen to be equivalent with
\[\text{minimize} c \tag{160}\] \[\text{subject to }\left[\begin{array}{cc}2c&|w_{\omega}|\\ |w_{\omega}|&d\end{array}\right] \geq 0\,\text{for all }\omega,\] (161) \[d \leq c_{0},\] (162) \[\left[\begin{array}{cc}d&w\\ w^{T}&I\end{array}\right] \geq 0, \tag{163}\]
by making use of Schur complements.
## Appendix D Proof of Lemma 3
Proof of Lemma 3.: As \(g_{\vec{\omega}}(x)\) is the output of the RFF procedure in which frequencies \(\vec{\omega}=(\omega_{1},...,\omega_{M})\) were drawn, we know that \(g_{\vec{\omega}}\) can be written as
\[g_{\vec{\omega}}(x)=\sum_{\omega\in\tilde{\Omega}_{D}}\hat{g}_{\vec{\omega}}( \omega)\mathrm{e}^{i(\omega,x)}, \tag{164}\]
where \(\hat{g}_{\vec{\omega}}(\omega)=0\) for all \(\omega\notin\{\omega_{i}\}_{i=1}^{M}\). Again we abuse notation and use \(\hat{g}_{\vec{\omega}}\) to denote the vector with entries \(\hat{g}_{\vec{\omega}}(\omega)\). Now, given some vector \(\vec{\omega}=(\omega_{1},...,\omega_{M})\in\Omega_{D}^{M}\), we define the sets \(\Omega_{\vec{\omega}}:=\{\omega_{1},...,\omega_{M}\}\subseteq\Omega_{D}\) and \(\tilde{\Omega}_{\vec{\omega}}:=\Omega_{\vec{\omega}}\cup(-\Omega_{\vec{\omega}})\). Given some \(\hat{f}^{*}\), we then define the vectors
\[\hat{f}_{\vec{\omega}}^{*}(\omega) =\begin{cases}\hat{f}^{*}(\omega)\text{ if }\omega\in\tilde{ \Omega}_{\vec{\omega}},\\ 0\text{ else,}\end{cases} \tag{165}\] \[\hat{f}_{/\vec{\omega}}^{*}(\omega) =\begin{cases}0\text{ if }\omega\in\tilde{\Omega}_{\vec{\omega}},\\ \hat{f}^{*}(\omega)\text{ else.}\end{cases} \tag{166}\]
Note that with these definitions, \(\hat{f}^{*}=\hat{f}^{*}_{\vec{\omega}}+\hat{f}^{*}_{\vec{\omega}}\). Using this, we have that
\[|\hat{f}^{*}-\hat{g}_{\vec{\omega}}|^{2}_{2} =|\hat{f}^{*}_{\vec{\omega}}+\hat{f}^{*}_{\vec{\omega}}-\hat{g}_{ \vec{\omega}}|^{2}_{2} \tag{167}\] \[=|\hat{f}^{*}_{\vec{\omega}}|^{2}_{2}+|\hat{f}^{*}_{\vec{\omega}} -\hat{g}_{\vec{\omega}}|^{2}_{2}\] (168) \[\geq|\hat{f}^{*}_{\vec{\omega}}|^{2}_{2}\] (169) \[=|\hat{f}^{*}|^{2}_{2}-|\hat{f}^{*}_{\vec{\omega}}|^{2}_{2}. \tag{170}\]
Using this expression, we can then lower-bound \(\hat{\epsilon}\), the expected \(L^{2}\)-norm of the difference between the optimal PQC function and the output of the RFF procedure, recalling that \(\xi(\vec{\omega})\) is the probability of sampling the vector of frequencies \(\vec{\omega}\):
\[\hat{\epsilon} =\sum_{\vec{\omega}\in\Omega_{D}^{M}}|f^{*}-g_{\vec{\omega}}|^{2} _{2}\,\xi(\vec{\omega}) \tag{171}\] \[=(2\pi)^{d}\sum_{\vec{\omega}\in\Omega_{D}^{M}}|\hat{f}^{*}-\hat {g}_{\vec{\omega}}|^{2}_{2}\xi(\vec{\omega}) \text{[via Parseval's identity]}\] (172) \[\geq(2\pi)^{d}\sum_{\vec{\omega}\in\Omega_{D}^{M}}\left[|\hat{f}^ {*}|^{2}_{2}-|\hat{f}^{*}_{\vec{\omega}}|^{2}_{2}\right]\xi(\vec{\omega}) \text{[via Eq.~{}\eqref{eq:PQC_eq_eq_1}]}\] (173) \[=(2\pi)^{d}|\hat{f}^{*}|^{2}_{2}-(2\pi)^{d}\sum_{\vec{\omega}\in \Omega_{D}^{M}}|\hat{f}^{*}_{\vec{\omega}}|^{2}_{2}\,\xi(\vec{\omega}). \tag{174}\]
Using the short-hand notation \(0\) to denote the frequency vector \((0,...,0)\), we can now analyze the final term as
\[\sum_{\vec{\omega}\in\Omega_{D}^{M}}|\hat{f}^{*}_{\vec{\omega}}| ^{2}_{2}\,\xi(\vec{\omega}) =\sum_{\begin{subarray}{c}\vec{\omega}\in\Omega_{D}^{M}\\ 0\in\Omega_{D}^{M}\end{subarray}}\sum_{i=1}^{M}\left(|\hat{f}^{*}(-\omega_{i} )|^{2}+|\hat{f}^{*}(\omega_{i})|^{2}\right)\xi(\vec{\omega}) \tag{175}\] \[\qquad\qquad+\sum_{\begin{subarray}{c}\vec{\omega}\in\Omega_{D}^ {M}\\ 0\in\Omega_{D}^{M}\end{subarray}}\left[\sum_{i=1}^{M-1}\left(|\hat{f}^{*}(- \omega_{i})|^{2}+|\hat{f}^{*}(\omega_{i})|^{2}\right)+|\hat{f}^{*}(0)|^{2} \right]\xi(\vec{\omega})\] (176) \[\leq\sum_{\begin{subarray}{c}\vec{\omega}\in\Omega_{D}^{M}\\ 0\in\Omega_{D}^{M}\end{subarray}}\sum_{i=1}^{M}2|\hat{f}^{*}(\omega_{i})|^{2} \xi(\vec{\omega})\] (177) \[\qquad\qquad+\sum_{\begin{subarray}{c}\vec{\omega}\in\Omega_{D}^ {M}\\ 0\in\Omega_{D}^{M}\end{subarray}}\left[\sum_{i=1}^{M-1}2|\hat{f}^{*}(\omega_{i} )|^{2}+2|\hat{f}^{*}(0)|^{2}\right]\xi(\vec{\omega})\] (178) \[=2\sum_{i=1}^{M}\sum_{\omega_{i}\in\Omega_{D}}\cdots\sum_{\omega_ {M}\in\Omega_{D}}|\hat{f}^{*}(\omega_{i})|^{2}p(\omega_{i})\cdots p(\omega_{M})\] (179) \[=2\sum_{i=1}^{M}\sum_{\omega_{i}\in\Omega_{D}}|\hat{f}^{*}(\omega _{i})|^{2}p(\omega_{i})\] (180) \[=2M\sum_{v\in\Omega_{D}}|\hat{f}^{*}(v)|^{2}p(v). \tag{181}\]
Substituting Eq. (181) into Eq. (174) then gives the statement of the Lemma. |
2309.16896 | Algorithmic Recourse for Anomaly Detection in Multivariate Time Series | Anomaly detection in multivariate time series has received extensive study
due to the wide spectrum of applications. An anomaly in multivariate time
series usually indicates a critical event, such as a system fault or an
external attack. Therefore, besides being effective in anomaly detection,
recommending anomaly mitigation actions is also important in practice yet
under-investigated. In this work, we focus on algorithmic recourse in time
series anomaly detection, which is to recommend fixing actions on abnormal time
series with a minimum cost so that domain experts can understand how to fix the
abnormal behavior. To this end, we propose an algorithmic recourse framework,
called RecAD, which can recommend recourse actions to flip the abnormal time
steps. Experiments on two synthetic and one real-world datasets show the
effectiveness of our framework. | Xiao Han, Lu Zhang, Yongkai Wu, Shuhan Yuan | 2023-09-28T23:50:11Z | http://arxiv.org/abs/2309.16896v1 | # Algorithmic Recourse for Anomaly Detection in Multivariate Time Series
###### Abstract.
Anomaly detection in multivariate time series has received extensive study due to the wide spectrum of applications. An anomaly in multivariate time series usually indicates a critical event, such as a system fault or an external attack. Therefore, besides being effective in anomaly detection, recommending anomaly mitigation actions is also important in practice yet under-investigated. In this work, we focus on algorithmic recourse in time series anomaly detection, which is to recommend fixing actions on abnormal time series with a minimum cost so that domain experts can understand how to fix the abnormal behavior. To this end, we propose an algorithmic recourse framework, called RecAD, which can recommend recourse actions to flip the abnormal time steps. Experiments on two synthetic and one real-world datasets show the effectiveness of our framework.
algorithmic recourse, anomaly detection, time series +
Footnote †: journal: Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Acoustics of the Ac
The contribution of this paper can be summarized as follows. 1) We propose a novel framework for algorithmic recourse in time series anomaly detection, called RecAD. To the best of our knowledge, this is the first work on this topic. 2) RecAD considers the downstream impact of the intervention on the abnormal time step by deriving the counterfactual time series after the intervention. The goal is to ensure the following time series after intervention should also be normal. 3) The empirical studies on two synthetic and one real-world datasets show the effectiveness of RecAD for recommending recourse in time series anomaly detection.
## 2. Related Work
### Time Series Anomaly Detection
A time series anomaly is defined as a sequence of data points that deviates from frequent patterns in the time series (Kang et al., 2017). Recently, a large number of deep learning-based approaches have been developed for time series anomaly detection (Kang et al., 2017; Kang et al., 2017). Most of the approaches are trained in the semi-supervised setting, which assumes the availability of normal time series. Then, in the test phase, the anomaly detection model can mark anomalies that are different from normal behavior measured by an anomaly score. In this work, given a detected abnormal time series, we would further like to recommend recourse actions to flip the abnormal outcome.
### Algorithmic Recourse
Algorithmic recourse is to provide explanations and recommendations to flip unfavorable outcomes by an automated decision-making system (Kang et al., 2017). Specifically, given a predictive model and a sample having an unfavorable prediction from the model, algorithmic recourse is to identify the minimal consequential recommendation that leads to a favorable prediction from the model. The key challenge of identifying the minimal consequential recommendation is to consider the causal relationships governing the data. Any recommended actions on a sample should be carried out via structural interventions leading to a counterfactual instance. Multiple algorithmic recourse algorithms on binary classification models have developed (Kang et al., 2017; Kang et al., 2017; Kang et al., 2017; Kang et al., 2017). Recently, algorithmic recourse for anomaly detection on tabular data is also discussed (Kang et al., 2017). However, the existing study also does not consider causal relationships when generating counterfactuals. In this work, we focus on addressing the algorithmic recourse for anomaly detection in multivariate time series with the consideration of causal relationships.
## 3. Preliminary
### Granger Causality
Granger causality (Granger, 1958; Granger, 1958) is commonly used for modeling causal relationships in multivariate time series. The key assumption is that if the prediction of the future value \(Y\) can be improved by knowing past elements of \(X\), then \(X\) "Granger causes" \(Y\). Let a stationary time-series as \(\mathcal{X}=(\mathbf{x}_{1},\dots,\mathbf{x}_{t},\dots,\mathbf{x}_{T})\), where \(\mathbf{x}_{t}\in\mathbb{R}^{d}\) is a d-dimensional vector (e.g., d-dimensional time series data from \(d\) sensors) at a specific time \(t\). Suppose that the true data generation mechanism is defined in the form of
\[\mathbf{x}_{t}^{(j)}\coloneqq f^{(j)}(\mathbf{x}_{<t-1}^{(1)},\cdots,\mathbf{x }_{<t-1}^{(d)})+u_{t}^{(j)},\text{ for }1\leq j\leq d, \tag{1}\]
where \(\mathbf{x}_{\leq t-1}^{(j)}=[\cdots,x_{t-2}^{(j)}.x_{t-1}^{(j)}]\) denotes the present and past of series \(j\); \(u_{t}^{(j)}\) indicates exogenous variable of time series \(j\) at time step \(t\); \(\mathcal{F}=\{f^{(1)},...,f^{(d)}\}\) is a set of nonlinear functions, and \(f^{(j)}(\cdot)\in\mathcal{F}\) is a nonlinear function for time series \(j\) that captures how the past values impact the future values of \(\mathbf{x}^{(j)}\). Then, the time series \(i\) Granger causes \(j\), if \(f^{(j)}\) depends on \(\mathbf{x}_{\leq t-1}^{(i)}\), i.e., \(\exists\mathbf{x}_{\leq t-1}^{(i)}\neq\mathbf{x}_{\leq t-1}^{(i)}:f^{(j)}( \mathbf{x}_{\leq t-1}^{(1)},\cdots,\mathbf{x}_{\leq t-1}^{(i)},\cdots,\mathbf{ x}_{\leq t-1}^{(d)})\neq f^{(j)}(\mathbf{x}_{\leq t-1}^{(1)},\cdots,\mathbf{x}_{\leq t-1}^{(j)}, \cdots,\mathbf{x}_{\leq t-1}^{(d)})\).
### Generalised Vector Autoregression (GVAR)
Granger causal inference has been extensively studied (Kang et al., 2017; Kang et al., 2017; Kang et al., 2017). Recently, a generalized vector autoregression (GVAR) is developed to model nonlinear Granger causality in time series by leveraging neural networks (Kang et al., 2017). GVAR models the Granger causality of the \(t\)-th time step given the past \(K\) lags by
\[\mathbf{x}_{t}=\sum_{k=1}^{K}g_{k}(\mathbf{x}_{t-k})\mathbf{x}_{t-k}+\mathbf{u }_{t}, \tag{2}\]
where \(g_{k}(\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}^{d\times d}\) is a feedforward neural network predicting a coefficient matrix at time step \(t-k\); \(\mathbf{u}_{t}\) is the exogenous variable for time step \(t\). The element \((i,j)\) of the coefficient matrix from \(g_{k}(\mathbf{x}_{t-k})\) indicates the influence of \(x_{t-k}^{(j)}\) on \(\mathbf{x}_{t}^{(i)}\). Meanwhile, \(K\) neural networks are used to predict \(\mathbf{x}_{t}\). Therefore, relationships between \(d\) variables over \(K\) time lags can be explored by inspecting \(K\) coefficient matrices. The \(K\) neural networks are trained by the objective function: \(\mathcal{L}=\frac{1}{T-K}\sum_{t=K+1}^{T}\|\mathbf{x}_{t}-\hat{\mathbf{x}}_{t} \|_{2}+\frac{1}{T-K}\sum_{t=K+1}^{T}R(\mathcal{M}_{t})+\frac{\gamma}{T-K-1}\sum_ {t=K+1}^{T-1}\|\mathcal{M}_{t+1}-\mathcal{M}_{t}\|_{2}\), where \(\hat{\mathbf{x}}_{t}=\sum_{k=1}^{K}g_{k}(\mathbf{x}_{t-k})\mathbf{x}_{t-k}\) indicates the predicted value GVAR; \(\mathcal{M}_{t}\coloneqq[g_{K}(\mathbf{x}_{t-K}):g_{K-1}(\mathbf{x}_{t-K+1}) :\cdots:g_{1}(\mathbf{x}_{t-1})]\) indicates the concatenation of generalized coefficient matrices over the past the \(K\) time steps; \(R(\cdot)\) is the penalty term for sparsity, such as L1 or L2 norm; the third term is a smoothness penalty; \(\lambda\) and \(\gamma\) are hyper-parameters. After training, the generalized coefficient predicted by \(g_{k}(\mathbf{x}_{t-k})\) indicates the causal relationships between time series at the time lag \(k\).
## 4. Framework
In this work, we aim to achieve algorithmic recourse for anomaly detection in multivariate time series. To this end, UnSupervised Anomaly Detection for multivariate time series (USAD) (Kang et al., 2017) is adopted as a base anomaly detection model. After detecting the abnormal time steps, we propose to recommend recourse actions to flip the abnormal outcome, where the action values can fix the abnormal behavior with the minimum cost. Because the variables in a time series have causal connections through time, when recommending actions, we should consider the downstream impact on other variables. Therefore, we develop a framework for algorithmic **R**ecourse in time series **A**nomaly **D**etection (RecAD), which is able to predict the recourse actions that fix the abnormal time series.
**Anomaly in multivariate time series**. Based on the structural equation of multivariate time series, we propose to describe the anomaly from the perspective of causal relationships in multivariate
time series:
\[\mathbf{x}_{t}^{(j)}\coloneqq f^{(j)}(\mathbf{x}_{\leq t-1}^{(1)},\cdots,\mathbf{ x}_{\leq t-1}^{(d)})+u_{t}^{(j)}+\epsilon_{t}^{(j)},\text{ for }1\leq j\leq d. \tag{3}\]
The anomaly term \(\epsilon_{t}^{(j)}\) can be due to either an external intervention or a structural intervention. The external intervention (i.e., **non-causal anomaly**) indicates a significantly deviating value in its exogenous variable \(\tilde{u}_{t}^{(j)}\) and can be defined as: \(\mathbf{x}_{t}^{(j)}=f^{(j)}(\mathbf{x}_{\leq t-1}^{(1)},\cdots,\mathbf{x}_{ \leq t-1}^{(d)})\), \(\tilde{u}_{t}^{(j)}=f^{(j)}(\mathbf{x}_{\leq t-1}^{(1)},\cdots,\mathbf{x}_{ \leq t-1}^{(d)})+u_{t}^{(j)}+\epsilon_{t}^{(j)}\), for \(1\leq j\leq d\), where \(\tilde{u}_{t}^{(j)}=u_{t}^{(j)}+\epsilon_{t}^{(j)}\). The structural intervention (i.e., **causal anomaly**) indicates the replacement of the structural functions \(\mathcal{F}\) with abnormal functions \(\mathcal{F}\) and can be defined as: \(\mathbf{x}_{t}^{(j)}=\tilde{f}^{(j)}(\mathbf{x}_{\leq t-1}^{(1)},\cdots, \mathbf{x}_{\leq t-1}^{(d)})\), \(\mathbf{u}_{t}^{(j)}=f^{(j)}(\mathbf{x}_{\leq t-1}^{(1)},\cdots,\mathbf{x}_{ \leq t-1}^{(d)})+u_{t}^{(j)}+\epsilon_{t}^{(j)}\), for \(1\leq j\leq d\), where \(\tilde{f}^{(j)}(\mathbf{x}_{\leq t-1}^{(1)},\cdots,\mathbf{x}_{\leq t-1}^{( d)})\) is an abnormal function for the time series \(j\) at time \(t\). The anomaly term caused by the change of causal relationships is given by \(\epsilon_{t}^{(j)}=\tilde{f}^{(j)}(\mathbf{x}_{\leq t-1}^{(1)},\cdots,\mathbf{ x}_{\leq t-1}^{(d)})-f^{(j)}(\mathbf{x}_{\leq t-1}^{(1)},\)\(\cdots,\mathbf{x}_{\leq t-1}^{(d)})\) which is time-dependent.
Equation (3) also follows the intuitive definition of an anomaly as an observation that deviates from some concepts of normality (Bang et al., 2017; Chen et al., 2017). Here, normality indicates the structural equation without the anomaly term \(\epsilon_{t}^{(j)}\).
### Problem Formulation
Denote a multivariate time series as \(\mathcal{X}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{t},\ldots,\mathbf{x}_{T})\), where each time step \(\mathbf{x}_{t}\in\mathbb{R}^{d}\) indicates an observation measured at time step \(t\). Following the common setting for time series anomaly detection (Bang et al., 2017; Chen et al., 2017), given a time series \(\mathcal{X}\), to model the local dependence between a time step \(\mathbf{x}_{t}\) and past lags, we first define a local window with length \(K\) as \(\mathbf{W}_{t}=(\mathbf{x}_{t-K+1},...,\mathbf{x}_{t})\) and convert a time series \(\mathcal{X}\) to a sequence of sliding windows \(\mathcal{W}=(\mathbf{W}_{K},\mathbf{W}_{K+1},...,\mathbf{W}_{T})\). The multivariate time series anomaly detection approaches aim to label whether a time step \(\mathbf{x}_{t}\) is abnormal based on a score function \(s(\cdot)\) given the time window \(\mathbf{W}_{t}\). If \(s(\mathbf{W}_{t})>\tau\), then the last time step \(\mathbf{x}_{t}\) will be labeled as abnormal.
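For concreteness, the windowing and labeling procedure can be sketched as follows (NumPy-based; `score_fn` is a placeholder for the anomaly score \(s(\cdot)\) and all names are illustrative):

```python
import numpy as np

def sliding_windows(X, K):
    """X: (T, d) multivariate series -> list of windows W_t = (x_{t-K+1}, ..., x_t)."""
    return [X[t - K + 1 : t + 1] for t in range(K - 1, len(X))]

def label_anomalies(windows, score_fn, tau):
    """The last time step of W_t is labeled abnormal whenever s(W_t) > tau."""
    return [score_fn(W) > tau for W in windows]
```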
When a sliding window \(\mathbf{W}_{t}\) is detected as abnormal, we would like to recommend recourse actions \(\theta_{t}\) on the actionable variables at the time step \(\mathbf{x}_{t}\) to reverse the abnormal outcome. Meanwhile, as the time steps keep coming in, if the following sliding window is still abnormal, we would keep recommending recourse actions until the time series is normal.
As shown in Figure 2, in the training phase, once we intervene in a time step \(\mathbf{x}_{t}\), the following time series is in the counterfactual world, as the intervention has a downstream impact that is unobservable. To properly train the model for action recommendations, the key is to derive the counterfactual time series, denoted as \(\mathbf{W}_{t+1}^{CF}=(\mathbf{x}_{t-K+2},...,\mathbf{x}_{t}+\theta_{t},\mathbf{x}_{t+1}(\theta_{t}))\), where \(\mathbf{x}_{t+1}(\theta_{t})\) indicates the counterfactual time step \(t+1\) after conducting the intervention on \(\mathbf{x}_{t}\). If \(\mathbf{W}_{t+1}^{CF}\) is still detected as abnormal, further recourse actions will be recommended on the counterfactual data \(\mathbf{x}_{t+1}(\theta_{t})\).
### Anomaly Detection for Time Series
In this work, we adopt UnSupervised Anomaly Detection for multivariate time series (USAD) (Bang et al., 2017) which is a state-of-the-art autoencoder-based anomaly detection model. USAD consists of two autoencoders, i.e., \(AE_{1}\) and \(AE_{2}\), with a shared encoder and two independent decoders, and derives the anomaly score function \(s(\cdot)\) based on the reconstruction errors of two autoencoders. In the training phase, given a set of normal sliding windows, USAD combines traditional reconstruction-based training with adversarial training to capture the normal patterns of time series. Specifically, reconstruction-based training will let both autoencoders learn how to reproduce the input window \(\mathbf{W}\), i.e., \(\mathcal{L}_{AE_{i}}=\|\mathbf{W}-AE_{i}(\mathbf{W})\|_{2}\), where \(AE_{i}\) indicates either \(AE_{1}\) or \(AE_{2}\). The goal of adversarial training is for \(AE_{1}\) to deceive \(AE_{2}\), while \(AE_{2}\) learns to distinguish between real data and reconstructed data from \(AE_{1}\), i.e., \(\mathcal{L}_{AD}=\min\limits_{AE_{1}}\max\limits_{AE_{2}}\|\mathbf{W}-AE_{2}(AE _{1}(\mathbf{W}))\|_{2}\).
After training, the anomaly score of a new window \(\mathbf{W}^{*}\) is then calculated using the combination of reconstruction errors of two autoencoders,
\[s(\mathbf{W}^{*})=\alpha\|\mathbf{W}^{*}-AE_{1}(\mathbf{W}^{*})\|_{2}+\beta\|\mathbf{W}^{*}-AE_{2}(AE_{1}(\mathbf{W}^{*}))\|_{2}, \tag{4}\]
where \(\alpha\) and \(\beta\) are hyperparameters.
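Assuming two trained autoencoders with a shared encoder, the score in Equation (4) can be sketched as follows (illustrative, not the reference USAD implementation):

```python
import torch

def usad_score(W, ae1, ae2, alpha=0.5, beta=0.5):
    """s(W) = alpha * ||W - AE1(W)||_2 + beta * ||W - AE2(AE1(W))||_2  (Eq. 4)."""
    w = W.reshape(-1)                 # flatten the window before feeding the autoencoders
    rec1 = ae1(w)                     # reconstruction of AE1
    rec2 = ae2(rec1)                  # AE2 applied to AE1's reconstruction
    return alpha * torch.norm(w - rec1, p=2) + beta * torch.norm(w - rec2, p=2)
```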
### Algorithmic Recourse
When a sliding window \(\mathbf{W}_{t}\) is detected as abnormal, we then recommend recourse actions on \(\mathbf{x}_{t}\) with the consideration of downstream impacts.
#### 4.3.1. Recourse on the abnormal time step
Given an abnormal point \(\mathbf{x}_{t}\) and the previous \(K-1\) time lags in \(\mathbf{W}_{t}=(\mathbf{x}_{t-K+1},\ldots,\mathbf{x}_{t-1},\mathbf{x}_{t})\), we formulate the recourse on \(\mathbf{x}_{t}\) as soft intervention,
\[\mathbf{x}_{t}(\theta_{t})=\mathbf{x}_{t}+\theta_{t}, \tag{5}\]
where \(\theta_{t}\) denotes the action values on \(\mathbf{x}_{t}\), derived by a function \(\theta_{t}=h_{\phi}(\cdot)\) parameterized by \(\phi\).
In order to successfully flip the abnormal outcomes, we consider two types of information for recourse prediction via \(h_{\phi}(\cdot)\): the time lag exclusion term \(\Delta_{t}\) and the past window \(\mathbf{W}_{t-1}\). As shown in Equation (3), the anomaly in multivariate time series is due to an additional anomaly term. Therefore, we derive the time lag exclusion term \(\Delta_{t}\) at time \(t\) as \(\Delta_{t}=\mathbf{x}_{t}-\hat{\mathbf{x}}_{t}\), where \(\hat{\mathbf{x}}_{t}\) is the expected value given \(\mathbf{W}_{t-1}\) derived by GVAR. As GVAR can simulate the nonlinear multivariate GC functions \(\mathcal{F}\), \(\Delta_{t}\) contains only the independent noise term and the anomaly term at time \(t\).
Let \(\theta_{t}=h_{\phi}(\mathbf{W}_{t-1},\Delta_{t})\) be the function for predicting the recourse action given the previous \(K\) time lags \(\mathbf{W}_{t-1}\) and \(\Delta_{t}\) at time step \(t\) parameterized by \(\phi\), which is defined below:
Figure 2. Algorithmic Recourse on Multivariate Time Series
\[\begin{split}\mathbf{z}_{t-1}&=LSTM(\mathbf{W}_{t-1}) \quad\mathbf{z}_{\Delta}=FFNN(\Delta_{t})\\ \boldsymbol{\theta}_{t}&=FFNN(\mathbf{z}_{t-1}\oplus \mathbf{z}_{\Delta}),\end{split} \tag{6}\]
where \(LSTM(\cdot)\) is the long short-term memory (LSTM) neural network; \(FFNN(\cdot)\) is a feedforward neural network; and \(\oplus\) indicates the vector concatenation operation. In a nutshell, to predict the recourse action, first, we adopt LSTM that takes the past \(K\) time lags \(\mathbf{W}_{t-1}\) as input and derives a hidden representation \(\mathbf{z}_{t-1}\) of the last time step to represent \(\mathbf{W}_{t-1}\). Similarly, we adopt a feedforward neural network that takes \(\Delta_{t}\) as input to derive the hidden representation \(\mathbf{z}_{\Delta}\). Finally, we use another feedforward neural network for recourse prediction by concatenating \(\mathbf{z}_{t-1}\) and \(\mathbf{z}_{\Delta}\) as input.
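A possible PyTorch sketch of \(h_{\phi}(\cdot)\) in Equation (6) is shown below; the layer sizes follow Section 5.1.4, but the module itself is illustrative:

```python
import torch
import torch.nn as nn

class RecoursePredictor(nn.Module):
    """h_phi: predicts the action theta_t from the past window W_{t-1} and Delta_t (Eq. 6)."""
    def __init__(self, d, hidden=100):
        super().__init__()
        self.lstm = nn.LSTM(input_size=d, hidden_size=hidden, batch_first=True)
        self.delta_net = nn.Sequential(nn.Linear(d, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, d)

    def forward(self, W_prev, delta):
        # W_prev: (batch, K, d) past window; delta: (batch, d) time lag exclusion term
        _, (h_last, _) = self.lstm(W_prev)                     # h_last: (1, batch, hidden)
        z_t = h_last.squeeze(0)                                # z_{t-1}
        z_delta = self.delta_net(delta)                        # z_Delta
        return self.head(torch.cat([z_t, z_delta], dim=-1))   # theta_t: (batch, d)
```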
By applying the action \(\boldsymbol{\theta}_{t}\) on \(\mathbf{x}_{t}\), the counterfactual time step can be computed as \(\mathbf{x}_{t}(\boldsymbol{\theta}_{t})=\mathbf{x}_{t}+\boldsymbol{\theta}_ {t}\). The counterfactual window \(\mathbf{W}_{t}(\boldsymbol{\theta}_{t})\) is derived by replacing \(\mathbf{x}_{t}\) with \(\mathbf{x}_{t}(\boldsymbol{\theta}_{t})\) in \(\mathbf{W}_{t}\). To train the recourse prediction functions, the objective function is defined as:
\[\mathcal{L}_{t}(\phi)=\max\left\{s(\mathbf{W}_{t}(\boldsymbol{\theta}_{t}))-\alpha\tau,0\right\}+\lambda\|\mathbf{c}\cdot\boldsymbol{\theta}_{t}\|_{2}, \tag{7}\]
where \(s(\mathbf{W}_{t}(\boldsymbol{\theta}_{t}))\) indicates the anomaly score defined in Equation (4); \(\lambda\) is a hyperparameter balancing the action values on the anomaly and the flipping of abnormal outcome, \(\alpha\) is another hyperparameter controlling how close the anomaly score of the counterfactual sample should be to the threshold \(\tau\), \(\mathbf{c}\in\mathbb{R}^{d}\) is a hyperparameter, describing the costs of revising time series (cost vector). Because in USAD, the anomaly is labeled due to a large reconstruction error on the input sample, the first term in the objective function is to ensure the counterfactual variant has a small reconstruction error. The second term, as a regularization term, ensures the minimum action cost on the original values.
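The objective in Equation (7) then corresponds to a short hinge-style loss, sketched below (`score_fn` stands for the USAD score \(s(\cdot)\); all names and default values are illustrative):

```python
import torch

def recourse_loss(W_cf, theta, score_fn, tau, alpha=1.0, lam=0.1, cost=None):
    """L_t(phi) = max(s(W_t(theta)) - alpha * tau, 0) + lambda * ||c * theta||_2  (Eq. 7)."""
    cost = torch.ones_like(theta) if cost is None else cost
    flip_term = torch.clamp(score_fn(W_cf) - alpha * tau, min=0.0)
    return flip_term + lam * torch.norm(cost * theta, p=2)
```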
**Inferring the downstream impact.** Based on the assumption of Granger causality, the recourse on \(\mathbf{x}_{t}\) leads to the counterfactual variants of the following time steps \(\mathbf{x}_{t^{\prime}}(\boldsymbol{\theta}_{t})\), where \(t^{\prime}\geq t\).
To evaluate the impact of the intervention, assuming that \(\mathbf{x}_{t}(\boldsymbol{\theta}_{t}),\mathbf{x}_{t+1}(\boldsymbol{\theta}_ {t})\), \(\ldots,\mathbf{x}_{t^{\prime}}(\boldsymbol{\theta}_{t})\) is known, where \(t^{\prime}\geq t\), we further derive the counterfactual quantity of the next step \(\mathbf{x}_{t^{\prime}+1}\) by the Abduction-Action-Prediction (AAP) process [16]: 1) Abduction: update the probability \(P(u_{t^{\prime}+1}^{(i)})\) to obtain \(P(u_{t^{\prime}+1}^{(i)}|\boldsymbol{e})\), where \(u\) indicates the exogenous variables and \(\boldsymbol{e}\) indicates propositional evidence; 2) Action: variables are intervened to reflect the counterfactual assumption; 3) Prediction: counterfactual reasoning occurs over the new model using updated knowledge.
Formally, based on the causal relationships learned by GVAR, the Abduction-Action-Prediction process to compute the counterfactual value in the \(t^{\prime}+1\)-th time step can be described below.
**Step 1 (abduction):**
\[\mathbf{u}_{t^{\prime}+1}=\mathbf{x}_{t^{\prime}+1}-\sum_{k=1}^{K}g_{k}( \mathbf{x}_{t^{\prime}+1-k})\mathbf{x}_{t^{\prime}+1-k} \tag{8}\]
**Step 2 (action):**
\[\mathbf{a}_{t^{\prime}}=\mathbf{x}_{t^{\prime}}+\boldsymbol{\theta}_{t}\text { if }t^{\prime}=t\quad\mathbf{a}_{t^{\prime}}=\mathbf{x}_{t^{\prime}}(\boldsymbol{ \theta}_{t})\text{ if }t^{\prime}>t \tag{9}\]
**Step 3 (prediction):**
\[\mathbf{x}_{t^{\prime}+1}(\boldsymbol{\theta}_{t})=\sum_{k=2}^{K}g_{k}(\mathbf{x}_{t^{\prime}+1-k}(\boldsymbol{\theta}_{t}))\mathbf{x}_{t^{\prime}+1-k}(\boldsymbol{\theta}_{t})+g_{1}(\mathbf{a}_{t^{\prime}})\mathbf{a}_{t^{\prime}}+\mathbf{u}_{t^{\prime}+1}. \tag{10}\]
Equations (8)-(10) provide the recursive equations for computing the counterfactual time series for \(L\) (\(L<K\)) steps based on the AAP process. The closed-form formula for computing the counterfactual value \(\mathbf{x}_{t+L}(\boldsymbol{\theta}_{t})\) can be derived as follows:
\[\begin{split}\mathbf{x}_{t+L}(\boldsymbol{\theta}_{t})& =\sum_{l=0}^{L-1}g_{L-l}(\mathbf{x}_{t+l}(\boldsymbol{\theta}_{t})) \mathbf{x}_{t+l}(\boldsymbol{\theta}_{t})\\ &+\sum_{n=1+L}^{K}g_{n}(\mathbf{x}_{t+L-n})\mathbf{x}_{t+L-n}+ \mathbf{u}_{t+L},\end{split} \tag{11}\]
where \(\mathbf{u}_{t+L}\) can be derived similar to Equation (8).
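Under the learned GVAR coefficients, the abduction-action-prediction steps in Equations (8)-(10) can be propagated as in the following sketch, where each \(g_{k}\) is assumed to be a callable mapping a lag vector to its \(d\times d\) coefficient matrix (names and interfaces are illustrative):

```python
import torch

def counterfactual_rollout(factual, t, theta, g_nets, L):
    """Propagate the intervention x_t + theta for L steps via abduction-action-prediction.

    factual: (T, d) tensor of observed values; g_nets[k-1] maps a (d,) lag vector x_{s-k}
    to its (d, d) GVAR coefficient matrix. Assumes K lags before t and L observed steps after t.
    """
    K = len(g_nets)

    def structural(lags):                                   # sum_k g_k(x_{s-k}) x_{s-k}
        return sum(g_nets[k - 1](lags[k - 1]) @ lags[k - 1] for k in range(1, K + 1))

    cf = {s: factual[s] for s in range(t - K, t)}           # pre-intervention steps unchanged
    cf[t] = factual[t] + theta                              # action (Eq. 9)
    for s in range(t, t + L):
        # abduction (Eq. 8): recover the exogenous term from the factual trajectory
        u = factual[s + 1] - structural([factual[s + 1 - k] for k in range(1, K + 1)])
        # prediction (Eq. 10): apply the structural functions to the counterfactual lags
        cf[s + 1] = structural([cf.get(s + 1 - k, factual[s + 1 - k]) for k in range(1, K + 1)]) + u
    return torch.stack([cf[s] for s in range(t, t + L + 1)])
```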
Because the intervention on the \(t\)-th step has a downstream impact, besides ensuring that the counterfactual window \(\mathbf{W}_{t}(\boldsymbol{\theta}_{t})\) is normal, we would like to make sure that the following \(L\) steps are also normal. Therefore, we update the objective function in Equation (7) by considering the normality of the following \(L\) steps,
\[\mathcal{L}(\phi)=\sum_{t^{\prime}=t}^{t+L}\max\left\{s(\mathbf{W}_{t^{\prime}}(\boldsymbol{\theta}_{t}))-\alpha\tau,0\right\}+\lambda\|\mathbf{c}\cdot\boldsymbol{\theta}_{t}\|_{2}. \tag{12}\]
The recourse prediction at time step \(t\) needs to consider the time lag exclusion term \(\Delta_{t}\) (Line 20). The \(\mathcal{X}^{CF}\) and \(\mathbf{W}^{CF}_{t}\) values need to be updated at each time step after a recourse action is conducted (Line 21).
If the updated sliding window \(\mathbf{W}^{CF}_{t}\) is still detected as an anomaly (Line 5), we need to keep predicting the recourse action (Line 6). Specifically, we first compute the action values with \(h_{\phi}(\cdot)\) (Line 27). The counterfactual values of the current time step \(t\) can be calculated by Equation (9) (Line 28). Then we further update the values at time step \(t\) of the counterfactual time series \(\mathbf{x}^{CF}_{t}\) with the values \(\mathbf{x}_{t}(\theta_{t})\), meaning that another recourse action is conducted (Line 8).
During the training phase, we expect \(h_{\phi}(\cdot)\) can recommend the recourse action with the consideration of its downstream impact on future time steps. Therefore, we compute the counterfactual values for the following \(L\) steps according to Equation (11) (Line 9). Then, we update the parameters in \(h_{\phi}(\cdot)\) based on the objective function defined in Equation (12).
#### 4.3.2. Recourse prediction in the test phase
After completing the training process, the recourse prediction function \(h_{\phi}(\cdot)\) is capable of predicting appropriate recourse recommendations for each detected anomaly. In the test phase, the multivariate time series is treated as streaming data that is continuously fed into our framework. USAD starts by analyzing the first \(K\) time steps to detect any anomalies. If an anomaly is detected at the \(K\)-th time step, RecAD will utilize the information gathered up to that point, \(\mathbf{W}_{K-1}\), to make a recourse recommendation for the last time step \(\mathbf{x}_{K}\). Then, unlike the training phase, where we can only derive the counterfactual time series after an intervention via the AAP process, here the time series arrives as a stream, so we can directly observe the following time series after the intervention. We will continue monitoring the incoming data to detect any anomalies and further recommend recourse actions once an abnormal time step is detected. This process continues as long as the system is receiving input data.
## 5. Experiments
### Experimental Setups
#### 5.1.1. Datasets
We conduct experiments on two semi-synthetic datasets and one real-world dataset. The purposes of using semi-synthetic datasets are as follows. 1) We can derive the ground truth downstream time series after the intervention on the abnormal time step based on the data generation equations in the test phase. 2) We can evaluate the fine-grained performance of RecAD by injecting different types of anomalies.
**Linear Dataset**(Kipf and Welling, 2017) is a **synthetic** time series dataset with linear interaction dynamics. We adopt the structural equations defined in (Kipf and Welling, 2017) that are
defined as:
\[\begin{split} x^{(1)}_{t}&=a_{1}x^{(1)}_{t-1}+u^{(1)}_{t}+\epsilon^{(1)}_{t},\\ x^{(2)}_{t}&=a_{2}x^{(2)}_{t-1}+a_{3}x^{(1)}_{t-1}+u^{(2)}_{t}+\epsilon^{(2)}_{t},\\ x^{(3)}_{t}&=a_{4}x^{(3)}_{t-1}+a_{5}x^{(2)}_{t-1}+u^{(3)}_{t}+\epsilon^{(3)}_{t},\\ x^{(4)}_{t}&=a_{6}x^{(4)}_{t-1}+a_{7}x^{(2)}_{t-1}+a_{8}x^{(3)}_{t-1}+u^{(4)}_{t}+\epsilon^{(4)}_{t},\end{split} \tag{13}\]
where coefficients \(a_{i}\sim\mathcal{U}([-0.8,-0.2]\cup[0.2,0.8])\), additive innovation terms \(u^{(\cdot)}_{t}\sim\mathcal{N}(0,0.16)\), and anomaly term \(\epsilon^{(\cdot)}_{t}\).
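A small NumPy sketch of this generator is given below; the coefficient ranges and noise scale follow the text, while the function names are illustrative and the anomaly term is left at zero:

```python
import numpy as np

rng = np.random.default_rng(0)
# a_i drawn from U([-0.8, -0.2] U [0.2, 0.8])
a = rng.uniform(0.2, 0.8, size=8) * rng.choice([-1, 1], size=8)

def linear_step(x_prev, eps=None):
    """One step of Equation (13); eps is the (optional) anomaly term."""
    eps = np.zeros(4) if eps is None else eps
    u = rng.normal(0.0, 0.4, size=4)          # N(0, 0.16): standard deviation 0.4
    x = np.empty(4)
    x[0] = a[0] * x_prev[0] + u[0] + eps[0]
    x[1] = a[1] * x_prev[1] + a[2] * x_prev[0] + u[1] + eps[1]
    x[2] = a[3] * x_prev[2] + a[4] * x_prev[1] + u[2] + eps[2]
    x[3] = a[5] * x_prev[3] + a[6] * x_prev[1] + a[7] * x_prev[2] + u[3] + eps[3]
    return x
```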
_Abnormal behavior injection_. For non-causal point anomalies, the anomaly term consists of single or multiple extreme values for randomly selected time series variables at a specific time step \(t\). For example, a point anomaly at time step \(t\) can be generated with an anomaly term \(\epsilon_{t}=[0,2,4,0]\), which means the second and third time series have extreme values.
For non-causal sequence anomalies, the anomaly terms are function-generated values in a given time range. For instance, setting \(\epsilon^{(1)}_{t+i}=0.1\times i,\ \text{ for }0\leq i\leq n\), will cause a trend anomaly for time series variable \(x^{(1)}\); setting \(\epsilon^{(1)}_{t+i}\sim\mathcal{N}(0,0.16),\ \text{for }0\leq i\leq n\), will cause a shapelet anomaly; and setting \(\epsilon^{(1)}_{t+i}=(a_{1}x^{(1)}_{t+2i-1}+u^{(1)}_{t+2i})+(a_{1}x^{(1)}_{t+2i-2}+u^{(1)}_{t+2i-1})-(a_{1}x^{(1)}_{t+i-1}+u^{(1)}_{t+i}),\ \text{for }0\leq i\leq n\), will cause a seasonal anomaly.
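For instance, a trend anomaly on \(x^{(1)}\) could be injected as in the following sketch (parameters and names are illustrative):

```python
import numpy as np

def inject_trend(series, start, n, var=0, slope=0.1):
    """Add epsilon_{t+i} = slope * i to variable `var` for i = 0..n (a trend anomaly)."""
    out = np.array(series, copy=True)
    for i in range(n + 1):
        out[start + i, var] += slope * i
    return out
```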
For causal sequence anomalies, we consider two scenarios: 1) changing the coefficients \(\mathcal{A}=\{a_{1},a_{2},\cdots,a_{8}\}\) from a normal one to a different one in a time range \(t\) to \(t+n\); 2) changing generative functions from the original equation to the following equation:
\[\begin{split} x_{t}^{(1)}&=a_{1}x_{t-1}^{(1)}+a_{2}x_{t-1}^{(3)}+a_{3}x_{t-1}^{(4)}+u_{t}^{(1)}+\epsilon_{t}^{(1)},\\ x_{t}^{(2)}&=a_{4}x_{t-1}^{(2)}+a_{3}x_{t-1}^{(1)}+u_{t}^{(2)}+\epsilon_{t}^{(2)},\\ x_{t}^{(3)}&=a_{6}x_{t-1}^{(3)}+u_{t}^{(3)}+\epsilon_{t}^{(3)},\\ x_{t}^{(4)}&=a_{7}x_{t-1}^{(4)}+u_{t}^{(4)}+\epsilon_{t}^{(4)}.\end{split} \tag{14}\]
**Lotka-Volterra**(Han et al., 2017) is another **synthetic** time series model that simulates a prairie ecosystem with multiple species. We follow the structure from (Kang et al., 2017), which is defined as:
\[\begin{split}\frac{d\mathbf{x}^{(i)}}{dt}&=\alpha\mathbf{x}^{(i)}-\beta\sum_{j\in Pa(\mathbf{x}^{(i)})}\mathbf{y}^{(j)}-\eta(\mathbf{x}^{(i)})^{2},\text{ for }1\leq i\leq p,\\ \frac{d\mathbf{y}^{(j)}}{dt}&=\delta\mathbf{y}^{(j)}\sum_{k\in Pa(\mathbf{y}^{(j)})}\mathbf{x}^{(k)}-\rho\mathbf{y}^{(j)},\text{ for }1\leq j\leq p,\\ x_{t}^{(i)}&=x_{t}^{(i)}+\epsilon_{t}^{(i)},\text{ for }1\leq i\leq p,\\ y_{t}^{(j)}&=y_{t}^{(j)}+\epsilon_{t}^{(j)},\text{ for }1\leq j\leq p,\end{split} \tag{15}\]
where \(\mathbf{x}^{(i)}\) and \(\mathbf{y}^{(j)}\) denote the population sizes of prey and predator species, respectively; \(\alpha,\beta,\eta,\delta,\rho\) are parameters that determine the strengths of the interactions; \(Pa(\mathbf{x}^{(i)})\) and \(Pa(\mathbf{y}^{(j)})\) denote the Granger-causal parents (the interacting predators and prey) of \(\mathbf{x}^{(i)}\) and \(\mathbf{y}^{(j)}\), respectively; and \(\epsilon_{t}^{(\cdot)}\) is the anomaly term. We adopt 10 prey species and 10 predator species.
_Abnormal behavior injection._ We adopt similar strategies as used in the Linear Dataset to inject abnormal behavior.
For point anomalies and non-causal sequence anomalies, we perform a similar procedure as the linear dataset, i.e., randomly select time series variables at a specific time step \(t\) and assign single or multiple extreme values as point anomalies, and assign function-generated abnormal terms for a time range from \(t\) to \(t+n\) as sequence anomalies.
For causal sequence anomalies, we still consider two scenarios: 1) changing the coefficients \(\alpha,\beta,\eta,\delta,\rho\) to different values than the normal ones; 2) changing \(Pa(\mathbf{x}^{(i)})\) and \(Pa(\mathbf{y}^{(j)})\) to different ones from the original generative functions Equation (15).
**Multi-Source Distributed System (MSDS)**(Kang et al., 2017) is a **real-world** dataset that contains distributed traces, application logs, and metrics from an OpenStack testbed. MSDS consists of a 10-dimensional time series. The fault injections are treated as anomalies. The first half of MSDS, without fault injections, is used as the training set, while the second half, which includes 5.37% of time steps as fault injections, is used as the test set. As this is a real-world dataset, we cannot observe the downstream time series after an intervention. Therefore, in the test phase, we use GVAR and AAP to generate the counterfactual time series for evaluation.
Table 1 shows the statistics of three datasets. Training datasets only consist of normal time series. Note that the test sets listed in Table 1 are used for evaluating the performance of anomaly detection. After detecting the abnormal time series in the test set, for the synthetic datasets, we use 50% of abnormal time series for training RecAD and another 50% for evaluating the performance of RecAD on recourse prediction, while for the MSDS dataset, we use 80% of abnormal time series for training RecAD and the rest 20% for evaluation.
#### 5.1.2. Baselines
To the best of our knowledge, there is no existing causal algorithmic recourse approach for time series anomaly detection. We compare RecAD with the following baselines: 1) Multilayer perceptron (MLP), which is trained on the normal flattened sliding windows to predict the normal values for the next step; 2) LSTM, which can capture information over long periods of time and learn complex temporal dependencies to make predictions for the next step; 3) Vector Autoregression (VAR), a statistical model used to analyze GC within multivariate time series data and predict future values; 4) Generalised Vector Autoregression (GVAR) (Kang et al., 2017), an extension of self-explaining neural networks that can infer nonlinear multivariate GC and predict the values of the next step.
For all the baselines, in the training phase, we train them to predict the last value in a time window on the normal time series so that they can capture the normal patterns. In the testing phase, when a time window is detected as abnormal by USAD for the first time, indicating the last time step \(\mathbf{x}_{t}\) is abnormal, we use baselines to predict the expected normal value in the last time step \(\tilde{\mathbf{x}}_{t}\). Then, the recourse action values can be derived as \(\theta_{t}=\tilde{\mathbf{x}}_{t}-\mathbf{x}_{t}\). For the sequence anomalies, we keep using the baselines to predict the expected normal values and derive the action values by comparing them with the observed values.
#### 5.1.3. Evaluation Metrics
We evaluate the performance of algorithmic recourse based on the following three metrics.
1) **Flipping ratio**, which is to show the effectiveness of algorithms for algorithmic recourse.
\[\text{Flipping Ratio}=\frac{\text{Number of flipped time steps}}{\text{All detected abnormal time steps}}\]
2) **Action cost** per multivariate time series, which is to check the efficiency of predicted actions.
\[\text{Action Cost}=\frac{\text{Total action cost}}{\text{\# of abnormal multivariate time series}},\]
where the "Total action cost" indicates the action cost (\(\|\mathbf{c}\cdot\mathbf{\theta}_{t}\|_{2}\)) to flip all the abnormal data in the test set.
3) **Action step** per multivariate time series, which is to show how many action steps are needed to flip the abnormal time series.
\[\text{Action Step}=\frac{\text{Total number of action time steps}}{\text{\# of abnormal multivariate time series}}\]
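In implementation terms, the three metrics reduce to simple averages over the detected abnormal time series, e.g. (a sketch; the record fields are illustrative):

```python
def evaluate_recourse(records):
    """records: one dict per detected abnormal time series with keys
    'flipped' (bool), 'cost' (total action cost), 'steps' (number of action steps)."""
    n = len(records)
    return {
        "flipping_ratio": sum(r["flipped"] for r in records) / n,
        "action_cost": sum(r["cost"] for r in records) / n,
        "action_step": sum(r["steps"] for r in records) / n,
    }
```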
#### 5.1.4. Implementation Details
Similar to (Han et al., 2017), we adopt a sliding window with sizes 5, 5, and 10 for the Linear, Lotka-Volterra, and MSDS datasets, respectively. We set the hyperparameters for GVAR by following (Kang et al., 2017). When training \(h_{\phi}(\cdot)\), we set \(L\) in the objective function as \(L=1\), which is to ensure that the following one time step
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Dim.} & \multirow{2}{*}{Train} & \multicolumn{3}{c|}{Test (Anomalies \%)} \\ \cline{4-6} & & & Point & Non-causal Seq. & Causal Seq. \\ \hline Linear & 4 & 50,000 & 250,000 (2\%) & 250,000 (6\%) & 250,000 (6\%) \\ \hline Lotka-Volterra & 20 & 100,000 & 500,000 (1\%) & 500,000 (3\%) & 500,000 (5\%) \\ \hline MSDS & 10 & 146,340 & \multicolumn{3}{c|}{146,340 (5,377)} \\ \hline \end{tabular}
\end{table}
Table 1. Statistics of the three datasets for anomaly detection.
should be normal. The cost vector \(\mathbf{c}\) can be changed according to the requirements or prior knowledge. Because the baseline models are prediction-based models that cannot take the cost into account, for fairness, we use \(\mathbf{1}\) as the cost vector. For the baselines, MLP is a feed-forward neural network with a structure of \(((K-1)*d)\)-100-100-100-\(d\), where the input is the flattened vector of \(K-1\) time steps with \(d\) dimensions and the output is the predicted value of the next time step. The LSTM model consists of one hidden layer with 100 dimensions and is connected with a fully connected layer with a structure of 100-\(d\). We use statsmodels1 to implement the VAR model. The baseline GVAR model is the same as the GVAR within our framework. To implement \(h_{\phi}(\cdot)\) in RecAD, we utilize an LSTM that consists of one hidden layer with 100 dimensions and a feed-forward network with structure \(d\)-100. Then we use another feed-forward network with a structure of 200-\(d\) to predict the intervention values. Our code is available online2.
Footnote 1: [https://www.statsmodels.org/](https://www.statsmodels.org/)
Footnote 2: [https://www.linyurl.com/RevAD2023](https://www.linyurl.com/RevAD2023)
### Experimental Results
#### 5.2.1. Evaluation Results on Synthetic Datasets
We first report the experimental results with standard deviations over 10 runs on synthetic datasets.
**The performance of anomaly detection.** We evaluate the performance of USAD for anomaly detection in terms of the F1 score, the area under the precision-recall curve (AUC-PR), and the area under the receiver operating characteristic (AUC-ROC) on two synthetic datasets. Table 2 shows the evaluation results. Overall, USAD can achieve promising performance on different types of anomalies, which lays a solid foundation for recourse prediction.
**The performance of recourse prediction on non-causal anomaly.** The non-causal anomaly encompasses both point and sequential anomalies. Table 3 shows the performance of RecAD for recourse prediction on non-causal anomaly. First, in all settings, RecAD can achieve the highest flipping ratios, which shows the effectiveness of RecAD on flipping abnormal behavior. Meanwhile, RecAD can achieve low or comparable action costs and action steps compared with other baselines. Although some baselines can achieve lower action costs in some settings, this could be due to the low flipping ratios. More importantly, all baselines do not consider the downstream impact of recourse actions.
**The performance of recourse prediction on causal anomaly.** We examine the performance of RecAD on causal anomaly. The results are shown in Table 4. First, RecAD can achieve the highest flipping ratio compared with baselines on both Linear and Lotka-Volterra datasets. High flipping ratios on both datasets indicate that the majority of causal anomalies can be successfully flipped. Meanwhile, RecAD can also achieve low action costs and action steps with high flipping ratios. Although MLP can achieve the lowest action cost on the Lotka-Volterra dataset, the flipping ratio achieved by MLP is much lower than RecAD. Overall, RecAD meets the requirement of algorithmic recourse, i.e., flipping the abnormal outcome with minimum costs, on causal anomalies.
Therefore, based on Tables 3 and 4, we can demonstrate that RecAD can provide recourse prediction on different types of anomalies in multivariate time series.
#### 5.2.2. Evaluation Results on Real Dataset
We further report the experimental results with standard deviations over 10 runs on MSDS. **The performance of anomaly detection.** We first evaluate the performance of USAD for anomaly detection. USAD can achieve \(0.888_{\pm 0.097}\), \(0.996_{\pm 0.001}\), and \(0.985_{\pm 0.003}\) in terms of F1 score, AUC-PR, and AUC-ROC, respectively. It means USAD can find most of the anomalies in the MSDS dataset.
**The performance of recourse prediction.** Because for the real-world dataset we do not know the types of anomalies, we report the performance of recourse prediction on any detected anomalies. As shown in Table 5, RecAD achieves a flipping ratio of 0.841, meaning that RecAD can flip 84.1% of detected abnormal time steps, much higher than all baselines. Regarding the average action cost per time series and the average action step, RecAD also outperforms the baselines by registering the lowest values. This suggests that, by incorporating Granger causality, RecAD is capable of identifying recourse actions that minimize both cost and the number of action steps.
#### 5.2.3. Ablation Study
We evaluate the performance of using different parts of RecAD (i.e., FFNN and LSTM) for recourse prediction. As RecAD contains an LSTM to capture the previous \(K-1\) time lags and a feedforward neural network (FFNN) to include the time lag exclusion term \(\Delta_{t}\), we test the performance of these two parts separately. Table 6 shows the average flipping ratio, action cost, and action step over the three types of anomalies for the synthetic datasets, together with the results for the real-world dataset MSDS. We can notice that RecAD achieves higher flipping ratios and lower action steps than the variants using only one part of RecAD. This shows the importance of considering both sources of information for reasonable action value prediction.
#### 5.2.4. Sensitivity Analysis
The objective function (Equation (12)) for training RecAD employs the hyperparameter \(\lambda\) to balance the
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Anomaly Types & Metrics & Linear & Lotka-Volterra \\ \hline \multirow{3}{*}{Non-causal Point} & F1 & \(0.749_{\pm 0.022}\) & \(0.787_{\pm 0.106}\) \\ \cline{2-4} & AUC-PR & \(0.619_{\pm 0.024}\) & \(0.840_{\pm 0.116}\) \\ \cline{2-4} & AUC-ROC & \(0.816_{\pm 0.016}\) & \(0.851_{\pm 0.085}\) \\ \hline \multirow{3}{*}{Non-causal Seq.} & F1 & \(0.878_{\pm 0.011}\) & \(0.677_{\pm 0.061}\) \\ \cline{2-4} & AUC-PR & \(0.798_{\pm 0.015}\) & \(0.519_{\pm 0.026}\) \\ \cline{2-4} & AUC-ROC & \(0.914_{\pm 0.010}\) & \(0.794_{\pm 0.089}\) \\ \hline \multirow{3}{*}{Causal Seq.} & F1 & \(0.756_{\pm 0.003}\) & \(0.714_{\pm 0.020}\) \\ \cline{2-4} & AUC-PR & \(0.604_{\pm 0.004}\) & \(0.559_{\pm 0.016}\) \\ \cline{1-1} \cline{2-4} & AUC-ROC & \(0.877_{\pm 0.002}\) & \(0.824_{\pm 0.078}\) \\ \hline \end{tabular}
\end{table}
Table 2. Anomaly detection on synthetic datasets.
Figure 3. Effects of the hyperparameter \(\lambda\) in Eq. (12).
flipping ratio and action value. As shown in Figure 3, on both synthetic datasets we have similar observations: as \(\lambda\) increases, both the action value and the flipping ratio decrease. A large \(\lambda\) indicates a large penalty for high action values, which could potentially hurt the performance of flipping abnormal time steps, as small action values may not be sufficient to flip the anomalies.
#### 5.2.5. Case Study
We further conduct case studies to show how to use the recourse action predicted by RecAD as an explanation for anomaly detection in multivariate time series.
**Case study on the Lotka-Volterra dataset.** Figure 4 shows a simulation of a prairie ecosystem that contains antelope, hare, fox, and gray wolf based on the Lotka-Volterra model [2], where each time series indicates the population of a species. As shown in the top figure, in most of the time steps, the numbers of carnivores (fox and gray wolf) and herbivores (antelope and hare) keep stable in a balanced ecosystem, say 0.1k-1k antelopes, 1k-10k hares, 0.1k-1k foxes, and 0.1k-1k gray wolves. After detecting abnormal behavior at a specific time step (red area in the top figure), the algorithmic recourse aims to provide recourse actions to flip the abnormal outcome. In this case, the algorithmic recourse model recommends the intervention of reducing the populations of hares, foxes, and gray wolves by 100.1k, 9.3k, and 7.5k, respectively. After applying the recourse actions (green area in the bottom figure), we can notice the populations of four species become stable again (the dashed line in the bottom figure). Therefore, the recourse actions can provide recommendations to restore the balance of the prairie ecosystem.
**Case study on the MSDS dataset.** Figure 5 depicts a case study on MSDS with control nodes 117 and 124. USAD detects a subsequence of anomaly consisting of two abnormal time steps from two time series (CPU and RAM usages on node 117), highlighted in the red area of the top figure.
When the first abnormal time step is detected, RecAD suggests releasing the CPU usage by 6.7 on node 117 (the green area in the middle figure). In other words, it also means the anomaly here is due to the higher CPU usage than normal with a value of 6.7. After taking this action, the following time steps are affected by
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Model & Flipping Ratio \(\uparrow\) & Action Cost \(\downarrow\) & Action Step \(\uparrow\) \\ \hline MLP & 0.687,0.282 & 6.848,2.506 & 1.443,6.080 \\ \hline LSTM & 0.830,0.211 & 6.798,2.604 & 1.279,4.033 \\ \hline VAR & 0.704,0.273 & 6.759,2.821 & 1.432,0.596 \\ \hline GVAR & 0.712,0.211 & 8.923,525 & 1.425,0.666 \\ \hline \hline RecAD & **0.841,0.080** & **6.747**\({}_{\pm 1.543}\) & **1.249**\({}_{\pm 0.088}\) \\ \hline \end{tabular}
\end{table}
Table 6. The performance of recourse prediction using different components of RecAD.
Figure 4. Recourse recommendations for intervening in an imbalanced ecosystem to restore balance.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{Linear} & \multicolumn{4}{c|}{Seq} & \multicolumn{4}{c|}{Total} & \multicolumn{4}{c|}{Seq} \\ \cline{2-11} & Flipping Ratio \(\uparrow\) & Action Cost \(\downarrow\) & Action Step \(\downarrow\) & Mapping Ratio \(\uparrow\) & Action Cost \(\downarrow\) & Action Step \(\uparrow\) & Proposed Range \(\uparrow\) & Action Cost \(\downarrow\) & Action Step \(\downarrow\) & Action Step \(\uparrow\) \\ \hline MLP & 0.778,0.456 & 8.340,4.327 & 1.186,4.051 & 0.936,4.496 & 2.294,4.456 & 2.284,4.457 & 0.741,4.056 & 2.277,390,41.05 & 1.237,4.018 & 0.688,4.20 & 761,350,64.422 & 2.199,4.337 \\ \hline LSTM & 0.870,0.456 & 8.335,4.513 & 1.176,0.578 & 0.758,4.249 & 2.479,4.516 & 2.284,4.456 & 0.593,4.513,4.516 & 1.096,4.584 & 0.538,4.513 & 1.396,4.513 & 1.396,4.513 \\ \hline VAR & 0.674,0.580 & 8.444,0.582 & 1.131,0.457 & 0.753,0.584 & 2.479,4.516 & 2.464,0.584 & 0.554,0.584 & 336,597,41.01 & 1.357,4.519 & 0.554,0.584 & 1.456,458,4.594 & 2.570,6.584 \\ \hline GVAR & 0.775,0.501 & 8.446,0.582 & 1.207,0.451 & 0.348,0.451 & 2.241,0.582 & 2.287,0.582 & 0.409,0.510 & 2.270,335,0.112 & 2.202,021,0.411 & 0.506,0.582 & 1.479,792,0.582 & 2.547,0.684 \\ \hline RecAD & 0.891,0.455 & **2.291,0.453** & 1.104,0.459 & 9.444,4.456 & 2.124,0.456 & 2.129,0.458 & 0.915,4.458 & 2.275,0.581 & 1.198,0.451 & 0.679,744,0.582 & 1.292,0.458 \\ \hline \end{tabular}
\end{table}
Table 3. The performance of recourse prediction on non-causal anomaly.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Dataset & Model & Flipping Ratio \(\uparrow\) & Action Cost \(\downarrow\) & Action Step \(\uparrow\) \\ \hline MLP & 0.884,0.025 & 41.387,2.151 & 2.359,0.035 \\ \hline LSTM & 0.903,0.021 & 42.412,0.022 & 2.399,0.045 \\ \hline VAR & 0.782,0.025 & 95.946,3.456 & 2.885,0.022 \\ \hline GVAR & 0.874,0.024 & 39.474,42.116 & 2.415,0.041 \\ \hline RecAD & **0.919,0.037** & **38.917,6.427** & **2.165,0.206** \\ \hline LSTM & 0.655,0.277 & **157.859**\({}_{\pm 0.23}\) & 2.247,0.244 \\ \hline VAR & 0.859,0.099 & 3159,353,17.343 & 2.310,0.400 \\ \hline GVAR & 0.712,0.211 & 8.923,525 & 1.425,0.466 \\ \hline \hline RecAD & **0.841,0.080** & **6.747**\({}_{\pm 1.543}\) & **1.249**\({}_{\pm 0.088}\) \\ \hline \end{tabular}
\end{table}
Table 4. The performance of recourse prediction on causal anomaly.
Figure 5. Recourse recommendations for restoring the abnormal CPU and RAM usages in MSDS.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Model & Flipping Ratio \(\uparrow\) & Action Cost \(\downarrow\) & Action Step \(\uparrow\) \\ \hline MLP & 0.687,0.282 & 6.848,2.506 & 1.443,6.080 \\ \hline LSTM & 0.830,0.211 & 6.798,2.604 & 1.279,4.033 \\ \hline VAR & 0.704,0.273 & 6.759,2.821 & 1.432,0.596 \\ \hline GVAR & 0.712,0.211 & 8.923,525 & 1.425,0.466 \\ \hline \hline RecAD & **0.841,0.080** & **6.747**\({}_{\pm 1.543}\) & **1.249**\({}_{\pm 0.088}\) \\ \hline \end{tabular}
\end{table}
Table 5. The performance of recourse prediction in MSDS.
this action. A counterfactual time series is then generated using the AAP process, which is shown as the dashed lines in the middle of Figure 5. RecAD continues to monitor subsequent time steps for any abnormalities.
The following time step is still detected as abnormal in the time series of memory usage of node 117. RecAD recommends releasing the RAM usage by 13.39 on node 117 (the green area in the bottom figure), meaning that the abnormal time step here is due to memory usage that is higher than normal by a margin of 13.39. After taking the recourse action, the counterfactual time series is then generated (the dashed lines in the bottom figure). We can then observe that the entire time series returns to normal.
In summary, recourse actions recommended by RecAD can effectively flip the outcome and lead to a normal counterfactual time series. Meanwhile, based on the recourse actions, the domain expert can understand why a time step is abnormal.
## 6. Conclusions
In this work, we have developed a novel framework for algorithmic recourse in time series anomaly detection, called RecAD, which can recommend recourse actions to fix anomalies with the minimum cost. To recommend proper actions with the consideration of the downstream impact of the intervention on the current time step, we leverage Granger causality to model the interdependence in multivariate time series and derive the counterfactual time series based on the Abduction-Action-Prediction process. The empirical studies have demonstrated the effectiveness of RecAD for recommending recourse actions in time series anomaly detection.
|
2309.03525 | A New Model for Testing IPv6 Fragment Handling | Since the origins of the Internet, various vulnerabilities exploiting the IP
fragmentation process have plagued IPv4 protocol, many leading to a wide range
of attacks. IPv6 modified the handling of fragmentations and introduced a
specific extension header, not solving the related problems, as proved by
extensive literature. One of the primary sources of problems has been the
overlapping fragments, which result in unexpected or malicious packets when
reassembled. To overcome the problem related to fragmentation, the authors of
RFC 5722 decided that IPv6 hosts MUST silently drop overlapping fragments.
Since then, several studies have proposed methodologies to check if IPv6
hosts accept overlapping fragments and are still vulnerable to related attacks.
However, some of the above methodologies have not been proven complete or need
to be more accurate. In this paper we propose a novel model to check IPv6
fragmentation handling specifically suited for the reassembling strategies of
modern operating systems. Previous models, indeed, considered OS reassembly
policy as byte-based. However, nowadays, reassembly policies are
fragment-based, making previous models inadequate. Our model leverages the
commutative property of the checksum, simplifying the whole assessing process.
Starting with this new model, we were able to better evaluate the RFC-5722 and
RFC-9099 compliance of modern operating systems against fragmentation handling.
Our results suggest that IPv6 fragmentation can still be considered a threat
and that more effort is needed to solve related security issues. | Edoardo Di Paolo, Enrico Bassetti, Angelo Spognardi | 2023-09-07T07:15:34Z | http://arxiv.org/abs/2309.03525v2 | # A New Model for Testing IPv6 Fragment Handling
###### Abstract
Since the origins of the Internet, various vulnerabilities exploiting the IP fragmentation process have plagued IPv4 protocol, many leading to a wide range of attacks. IPv6 modified the handling of fragmentations and introduced a specific extension header, not solving the related problems, as proved by extensive literature. One of the primary sources of problems has been the overlapping fragments, which result in unexpected or malicious packets when reassembled. To overcome the problem related to fragmentation, the authors of RFC 5722 decided that IPv6 hosts MUST silently drop overlapping fragments.
Since then, several studies have proposed methodologies to check if IPv6 hosts accept overlapping fragments and are still vulnerable to related attacks. However, some of the above methodologies have not been proven complete or need to be more accurate. In this paper we propose a novel model to check IPv6 fragmentation handling specifically suited for the reassembling strategies of modern operating systems. Previous models, indeed, considered OS reassembly policy as byte-based. However, nowadays, reassembly policies are fragment-based, making previous models inadequate. Our model leverages the commutative property of the checksum, simplifying the whole assessing process. Starting with this new model, we were able to better evaluate the RFC-5722 and RFC-9099 compliance of modern operating systems against fragmentation handling. Our results suggest that IPv6 fragmentation can still be considered a threat and that more effort is needed to solve related security issues.
## 1 Introduction
Internet standards allow the use of fragmentation when a router has to transmit an IP packet larger than the next link's _Maximum Transmission Unit_ (MTU), i.e., the maximum number of bytes that the link can transmit in a single IP packet. The fragmentation process consists of dividing the packet into smaller units, called fragments, so that the resulting pieces can pass through a link with a smaller MTU than the original packet size.
The initial IPv4 specification, RFC 791 [19], describes a reassembly algorithm that allows new fragments to overwrite any overlapping portions of previously received fragments [22]. Unfortunately, this algorithm enabled bypassing filtering solutions and resulted in operating systems adopting different policies
to reassemble fragments [18]. Over the years, various vulnerabilities that exploit the fragmentation process have been discovered, mainly using overlapping fragments, exposing the Internet to several types of attacks: _Denial of Service_ (DoS), _Traffic Modification_, _Traffic Interception_, _Intrusion Detection Systems_ (IDS)/_Intrusion Prevention Systems_ (IPS) _evasion_, _Firewall evasion_[7, 24].
IPv6 brought about significant changes to handling fragmentation compared to its predecessor, IPv4. It introduced a specific extension header and aimed to address the shortcomings of IPv4 fragmentation. RFC 5722 [15] tackles the fragmentation problem by explicitly forbidding overlapping fragments. However, despite these efforts, extensive literature and previous studies have demonstrated that IPv6 fragmentation still poses security risks. In particular, it has been shown that many operating systems are not entirely RFC 5722 compliant, accepting some sequences of IPv6 overlapping fragments, being exposed to several forms of detection evasion [2] and traffic hijacking [7].
Numerous studies have been conducted to assess the vulnerability of IPv6 hosts to overlapping fragments and related attacks. However, some of the existing methodologies have not been proven to be complete or accurate enough. Some others, like the Shankar and Paxson model, were proposed in the past, but they are obsolete due to recent changes in the reassembly strategies, as we will demonstrate in this work. Therefore, we propose a novel model specifically designed to evaluate the handling of IPv6 fragmentation, taking into account the reassembling strategies employed by modern operating systems.
To prove the usefulness of our model, we thoughtfully tested it over widely used operating systems. We also compared the results achieved using the Shankar and Paxson model over the same targets. As shown later, our model was able to capture the non-compliance of all operating systems that we tested, whereas the Shankar and Paxson model indicates full compliance on IPv6 fragmentation.
Additionally, to demonstrate that IPv6 fragmentation must still be considered a real threat, we implemented a _Traffic Modification_ attack. The attack requires the ability to predict the IP identification number (IP-id): IP-id predictability has long been recognized as a serious issue, and several successful prediction attempts are reported in the literature [29, 28, 27, 10, 23]. In the attack, we take advantage of the partial or non-existing compliance with RFC 5722 to alter the legitimate traffic between two hosts, again exploiting the use of overlapping fragments [2].
Thus, we show that vulnerabilities inherent to IPv6 fragmentation persist. Despite numerous recommendations, attacks on IPv6 fragmentation remain feasible, necessitating more effort to eliminate all flaws in implementations.
The paper is structured as follows: the next section provides a brief background on IP fragmentation and past work on the topic. Section 3 introduces two well-known models for testing IPv6 fragmentation issues, and discusses their limitations. Section 4 reports our experiments performed to evaluate the RFC 5722 compliance of modern operating systems. Section 5 reports our findings on RFC 9099 compliance. Section 6 report our experiment results for the Traffic Modification attacks. Finally, Section 7 summarizes the contributions of our work and provides some further comments to help fix the IPv6 fragmentation flaws.
## 2 Background and Related Works
In this section, we briefly introduce some details about IP fragmentation that provide the background for the experimental section. Then, we report a quick survey about the main contribution related to IP fragmentation vulnerabilities, focusing on IPv6.
### IP Fragmentation in Internet
An essential property of an Internet link is the number of bytes it can transmit in a single IP packet, namely the _Maximum Transmission Unit_ (MTU). The MTU may differ between different networking technologies. IPv4 requires every link to support a minimum MTU of 576 bytes, as recommended by RFC 791 [19], while IPv6 requires every link to support an MTU of 1280 bytes or greater (RFC 2460 [5]). When an endpoint has to transmit an IP packet greater than the next link MTU, IP calls for fragmentation, which is the process of separating a packet into units (fragments) smaller than the link MTU. The receiving host performs fragment reassembly to pass the complete (re-assembled) IP packet up to its protocol stack.
The fragmentation process is handled differently in IPv4 and IPv6. In IPv4, an IP packet can be fragmented by the source node or intermediate routers along the path between the source and the destination. However, intermediate routers may avoid fragmenting IPv4 packets by dropping the packet and forcing the Path MTU by the source host. In IPv6, only _end-to-end fragmentation_ is supported; intermediate routers cannot create fragments. To discover the best MTU size, both IPv4 and IPv6 leverage on the _Path MTU Discovery_, provided by _Internet Control Message Protocol_ version 4 (ICMP) or 6 (ICMPv6).
In order to reassemble all the fragments related to the same packet, the IP protocol uses some information present in the header, namely: the _identification_ field (shared among all the fragments of the same packet), the _fragment offset_ (which specifies the starting position of the fragment in the original packet), and the _More Fragments_ flag, set to 1 for all fragments except the last one. IPv4 and IPv6 differ mainly in the length of the identification field (16-bit in the former, 32-bit in the latter) and in the fact that IPv6 uses a specific extension header to hold the above information.
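As an illustration, these fields can be set explicitly when crafting IPv6 fragments with Scapy; the sketch below assumes Scapy is available, uses a placeholder destination address, and omits the checksum handling needed for the reassembled ICMPv6 message:

```python
from scapy.all import IPv6, IPv6ExtHdrFragment, ICMPv6EchoRequest, send

DST = "2001:db8::1"      # placeholder destination address
FRAG_ID = 0xdeadbeef     # shared Identification value for both fragments

# First fragment: offset 0, More Fragments (m) set, carries the ICMPv6 header + payload start
frag1 = (IPv6(dst=DST)
         / IPv6ExtHdrFragment(id=FRAG_ID, offset=0, m=1, nh=58)   # nh=58 -> ICMPv6
         / ICMPv6EchoRequest(data=b"A" * 24))

# Last fragment: the offset is expressed in 8-byte units (4 * 8 = byte 32), m cleared
frag2 = (IPv6(dst=DST)
         / IPv6ExtHdrFragment(id=FRAG_ID, offset=4, m=0, nh=58)
         / (b"B" * 24))

send([frag1, frag2])     # requires raw-socket privileges
```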
While not predominant, statistics show that IP fragmentation is still used in the Internet and, notably, for security protocols like IKE [13] or DNSSEC [1], that typically rely on large UDP packets for cryptographic material exchange [7].
### Related Works
IP Fragmentation has been exploited to make many different types of attacks, as anticipated by Mogul and Kantarjiev back in 1987 [12]. Most of them allow realizing denial of services [14] or IDS/firewall evasion [24] or operating system fingerprinting [2]. Besides those based on IP fragmentation, many other attacks rely on the possibility of predicting the IP identification field of the victim[29]. For this reason, there has been a flurry of works focusing on the feasibility of predicting IP-id field [16, 20, 21, 4]. While most studies about IP fragmentation are related to IPv4, only a few specifically focus on IPv6. As it will be further explored in Section 3, IPv6, among the other vulnerabilities [25] mitigations,
has been revised to specifically fix the IP fragmentation issues, firstly with RFCs 3128 [17], 5722 [15], 6946 [8] and, later, with an updated version of the IPv6 specification, namely RFC 8200 [6]. Moreover, RFC 9099 [26] discusses extension headers with a special focus on fragments stating that, if not handled properly, they could lead to bypassing stateless filtering.
In [14], the authors exploit the IP fragmentation to prevent legitimate IKE handshakes from completing. The main idea is to flood one of the endpoints with fragments to consume all the memory resources dedicated to the fragment buffers, realizing a fragment cache overflow. The overflow prevents legitimate hosts from completing the IKE handshake because the reassembling becomes impossible. Another type of attack has been described in RFC 4963 [11], consisting of a fragment misassociation. The idea is that the attacker can poison the fragment cache of a host, sending some spoofed fragments so that when the fragments of the victim reach the poisoned host, they will be misassociated and, consequently, maliciously altered.
The most influential work for our research has been done by Gilad and Herzberg [7]. They present a DoS attack inspired by a traffic injection technique based on IP fragmentation, proposed by Zalewski1 and appeared in the seclist mailing list in 2003. The idea is to inject the second fragment of TCP connections since the IP-id was highly predictable. Following this intuition, the authors in [7] propose performing a DoS attack against a communicating host, targeting the NAT-ing host behind which the destination endpoint resides. This type of attack allows the authors to realize a very effective DoS attack, causing more than 94% of packet loss without leveraging any fragment cache overflow.
Footnote 1: Michal Zalewski, _A new TCP/IP blind data injection technique?_, [https://seclists.org/bugtraq/2003/Dec/161](https://seclists.org/bugtraq/2003/Dec/161)
Another pivotal work that inspired this paper has been done by Atlasis [2]. In his paper, the author performs an exhaustive battery of tests to verify the effective behavior of several operating systems when overlapping IP fragments are present. In particular, the experiments verified the different reassembly strategies and how those can be exploited to perform various evasion attacks. The methodology used in Atlasis' paper for understanding the different reassembly policies is the driving factor for realizing our Traffic Modification attack, as detailed in Section 6. Moreover, Atlasis in [3] shows how high-end commercial IDPS devices could be evaded by the use of IPv6 extension headers.
## 3 IP Fragmentation Handling in the Wild
In this section, we evaluate the RFC 5722 compliance of different operating systems. We first introduce two established methodologies adopted in the literature to evaluate IP fragmentation reassembly strategies, which we discovered to be obsolete. Then, we propose a new methodology based on the presented ones, and we discuss the results we obtained by testing widely used operating systems.
### Shankar and Paxson Model
The first methodology we consider is the one we call _Shankar and Paxson model_[24]. In their paper, the authors introduce a model consisting of six fragments of different lengths and offsets, as shown in Figure 1, creating a diversified
combination of fragment overlap and overwrite. In the figure, each fragment is represented by a block labeled with a character (e.g., 'A', 'B', 'C'), and the payload of each fragment is a sequence of bytes encoding the corresponding character. The vertical axis marked as "time" represents the temporal succession of the transmitted fragments. For example, the first fragment in Figure 1 has offset 0 in the final (reassembled) payload, a length of 32 bytes, and contains the ICMPv6 header plus 24 'A's.
For each two adjacent fragments, X and Y, the Shankar and Paxson model guarantees that there is [24]:
* At least one fragment (X) wholly overlapped by a subsequent fragment (Y) with identical offset and length;
* At least one fragment (X) partially overlapped by a subsequent fragment (Y) with an offset greater than fragment X;
* At least one fragment (X) partially overlapped by a subsequent fragment (Y) with an offset smaller than fragment X.
By using six fragments, five different reassembly policies were identified [18]: _BSD_, which favors an original fragment with an offset smaller than or equal to a subsequent fragment; _BSD-right_, which favors a subsequent fragment when the original fragment has an offset smaller than or equal to the subsequent one; _Linux_, which favors an original fragment with an offset that is smaller than a subsequent fragment; _First_, which favors the original fragment with a given offset; _Last_, which favors the subsequent fragment with a given offset.
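A sketch of how such an overlapping sequence can be generated programmatically is shown below (Scapy-based; the offsets and lengths are illustrative choices that reproduce the overlap pattern, not the exact values of Figure 1, and the first fragment would normally carry the ICMPv6 Echo Request header):

```python
from scapy.all import IPv6, IPv6ExtHdrFragment

DST = "2001:db8::1"      # placeholder target
FRAG_ID = 0x42

# (label, offset in 8-byte units, payload length in bytes) -- illustrative values,
# chosen so that D and E partially overlap earlier fragments, in the spirit of the model.
PATTERN = [("A", 0, 32), ("B", 4, 24), ("C", 7, 24),
           ("D", 2, 32), ("E", 5, 24), ("F", 10, 24)]

def build_fragments(pattern, last_label="F"):
    frags = []
    for label, off, length in pattern:
        frags.append(IPv6(dst=DST)
                     / IPv6ExtHdrFragment(id=FRAG_ID, offset=off,
                                          m=0 if label == last_label else 1, nh=58)
                     / (label.encode() * length))
    return frags

fragments = build_fragments(PATTERN)   # send() them, or reorder to test arrival-order policies
```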
In our experiments, we discovered that modern operating systems do not use parts of a fragment: they assemble fragments by using or discarding them entirely, as discussed in Section 4.1. Due to this new behavior, the Shankar and Paxson model is no longer suitable for IPv6 fragment overlapping tests, as the reassembly phase may never finish in some cases (more on this in Section 4.1).
### Three Fragments Model
Besides the Shankar and Paxson model, another methodology named _"3-fragments model"_ was proposed by Atlasis [2]. They used this methodology to evaluate a host behavior with the fragmentation overlapping. Their model is based on several tests in which only three fragments are exchanged, and only the header and payload of the second fragment change, as shown in Figure 2.
The model is defined by these three fragments:
* The first fragment has always offset 0, and _More Fragments_ flag (_M-flag_ from now on) set to 1. It consists of an ICMPv6 Echo Request (8 bytes) with 16 bytes of payload for a total length of 24 bytes;
* The second fragment has variable length and offset, and within the different tests, it also varies in the value of the _M-flag_;
* The third fragment has always offset 24, _M-flag_ always set to 0, and a length of 24 bytes, carrying part of the payload.
The model comprises 11 different combinations of length and offsets for the second fragment while varying the value of the _M-flag_ for the second fragment and reversing the sending order of the three fragments (from 1 to 3 and from 3 to 1). This shuffling leads to a total of 44 tests.
While this test was successfully used to investigate the IP fragmentation reassembly [2], the model is now obsolete because it assumes that IPv6 endpoints may reassemble the packet using a fragment partially. As discussed in Section 4.1, modern operating systems assemble fragments by using or discarding them entirely.
## 4 A New Model for Testing IPv6 Fragment Handling
This section proposes a new model to check RFC 5722 compliance on IPv6 fragmentation. We discuss how operating systems handle IPv6 fragments nowadays, and then we present our proposal for a model for testing overlapping fragments based on the Shankar and Paxson model. Finally, we discuss the results obtained from the different experiments performed.
### Overlapping Fragments Today
Previously proposed models for testing overlapping fragments proved to be obsolete in our experiments. We noted that all operating systems in Table 1 discard entire overlapping fragments. Which fragment is discarded depends on the operating system policy, which may include the arrival time or offset position. Figure 3 shows an example of this problem with the Shankar and Paxson model when the operating system drops overlapping fragments that arrived late. Since fragments "D" and "E" overlap with "A", "B" and "C", they are discarded by the operating systems, thus producing a "hole" between fragments "A" and "B".
The gap between "A" and "B" (caused by dropping "D") does not allow the machine to reassemble the packet correctly and reply to the "ICMPv6 Echo Request", since it is missing information between offsets 32 and 40. The machine waits for a predetermined time (ip6frag_time, which in some systems is set to 30 seconds by default) and then deletes the fragments received up to that time from memory. Thus, these machines seem compliant with the RFC 5722, as there is no way to know externally whether the packet has been discarded because of the gap or because they dropped the packet and all fragments (which is the action required by the RFC).
### A New Model for Testing IPv6 Fragment Handling
Our approach is based on the well-known Shankar and Paxson model, which provides a comprehensive framework for analyzing the reassembly process of fragmented packets. However, we modify the original model by reducing all fragment offsets by one unit, i.e., by 8 bytes. Also, although we display the model using the same time sequence as the Shankar and Paxson model, our model is meant to be tested by shuffling the sequence: multiple tests should be run, and the fragments should have different arrival times in each test but the same offsets and content. The model is shown in Figure 4.
The primary motivation behind the offset mutation is to have some combinations where the packet reassembly is done by using or discarding entire fragments, as modern operating systems do. Previous models could not expose issues in the fragmentation handling due to the problem discussed in Section 4.1.
It is important to note that the ICMPv6 header is excluded from this overlap. We excluded the "next header" in the payload as no operating systems allow overwriting it in any combination of fragments.
A significant contribution of our modified model lies in identifying two combinations of fragments, AAABBCCCFFF and AAABBEEEFFF, that do not create any holes during reassembly. These fragment combinations are particularly interesting, as they present scenarios where fragments can be reassembled even when partial overlapping fragments are dropped.
To avoid issues with different checksums created by different ways of reassembling fragments, we also re-defined the payload of the fragments as shown in Table 3 and presented in Section 4.4. By doing so, any combination of fragments in our model has the same checksum.
Figure 3: Overlapping fragments discarded by operating systems may create a hole and prevent correct reassembly. In this example, the operating system decided to discard the “D” and “E” fragments due to its reassembly policy, leaving a hole between “A” and “B”.
### Model Validation and Results
We performed a series of experiments to assess the usefulness of our model and to catch anomalies in the reassembly procedure in operating systems. For some tests, we also compared our model with Shankar and Paxson, demonstrating that our model is able to capture the non-compliance where the Shankar and Paxson model suggests that the operating system is RFC 5722-compliant. We also tested all possible permutations of the arrival time of the fragments (720 permutations). Although some may be superfluous due to previous tests, we checked the complete set of permutations to create a dataset for further analysis. Overall, the total number of tests in our dataset is \(2226\)2 for each operating system listed in Table 1.
Footnote 2: The complete list of tests can be found in the GitHub repository.
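For reference, the arrival-order part of the test set is easy to enumerate programmatically; the short sketch below (plain Python, fragment labels as in Figure 4, not the actual test-generation code) simply lists the 720 orderings mentioned above, each of which is then sent with identical offsets and contents.

```python
# Enumerate all arrival orders of the six fragments (labels as in Figure 4).
from itertools import permutations

orders = list(permutations("ABCDEF"))
print(len(orders))   # 720 permutations; offsets and payloads stay fixed
```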
All tests were performed in a small laboratory created with Vagrant by HashiCorp, an open-source tool that simplifies the management of reproducible virtual machine environments. The lab network contains one attacker, a Debian 11 virtual machine running our fuzzer, and one victim, which rotated among all operating systems listed in Table 1. The victim and the attacker are

| **Operating System** | **Kernel Version** | **Vagrant box version** |
| --- | --- | --- |
| GNU/Linux Arch | 6.3.2-arch1-1 | 3.23.0522 |
| GNU/Linux Debian 11 | 5.10.0-22-amd64 | 11.20230501.1 |
| GNU/Linux Ubuntu 23.04 | 6.2.0-20-generic | 20230524.0.0 |
| OpenBSD | 6.9 GENERIC.MP#473 | 4.2.16 |
| FreeBSD | 13.1-RELEASE | 4.2.16 |
| Microsoft Windows 10 | 10.0.19041.2965 | 2202.0.2305 |
| Microsoft Windows 11 | 10.0.22621.1702 | 2202.0.2305 |

Table 1: Operating systems, versions tested and Vagrant box version.
Figure 4: The new model we are proposing. It is based on the Shankar and Paxson model but it has different offsets to better fit current reassembly policies.
directly connected, as in a LAN, to avoid side effects by other network elements. We performed tests multiple times to reduce the number of errors due to some transient situation in the victim.
We can summarize the tests in:
1. Single ICMPv6 packet fragmented using multiple permutations of fragments;
2. Single ICMPv6 packet fragmented using multiple permutations, but fragments are sent multiple times;
3. Multiple ICMPv6 packets fragmented using multiple permutations.
In the first test, a single packet is fragmented and sent to the victim for each different permutation in our model. In the second test, the same occurs, but all frames are sent again (in the same order) 4 more times, to simulate a network retransmission or a malicious act by the attacker. In the third test, for each different permutation, five different ICMPv6 packets are created and sent. Note that, while both the second and the third tests are sending five packets, in the second test all fragments have the same fragment ID (as they belong to the same packet), while in the third test fragments are grouped by the fragment ID, which is different for each packet.
Table 2 shows a summary of the results we obtained by running the proposed model in the virtual environment. These results not only demonstrate that all listed operating systems violate RFC 5722 by replying to overlapping fragments, but also that the way they handle overlapping fragments (the "reassembly policy") may pose some risk, as when an IDS/IPS/firewall and the victim use different reassembly policies. Different reassembly policies may result in a different reassembled packet: an attacker can exploit this difference to present the IDS/IPS/firewall with a different packet than the one reassembled by the victim, bypassing the IDS/IPS/firewall protection.
Figure 5: In one of the permutations, the first fragment is the last one to arrive. This particular arrangement might result in a different reassembled packet.
We performed the same set of tests using the Shankar and Paxson model. As shown in Table 2, the Shankar and Paxson model ended with no replies for all tests, which may indicate RFC 5722 compliance. However, we know that the ICMPv6 Echo Replies are missing not because overlapping fragments are discarded (the RFC requirement), but because pieces of the payload are missing from the reassembly under the new fragment-reassembly policy (as explained in Section 4.1).
### On IPv6 Checksum and Overlapping Fragments
The IPv6 header does not contain a checksum [6], shifting the duty of checking the integrity of the transmission to the upper layers. This choice was made to speed up packet forwarding: IPv6 intermediate devices, like routers, do not check the integrity of the datagram (except for security systems like IDSes). Upper-layer protocols, like ICMPv6, UDP, and TCP, may require a checksum, which is computed and verified by the transmission endpoints.
When the checksum is required (e.g., ICMPv6), the first fragment contains the checksum of the entire packet inside the upper layer header. Overlapping fragments might create a situation where the final checksum of the packet is incorrect since hosts can have different reassembly policies [2]. This, in turn, might cause the victim to discard the packet and not reply to our ICMPv6 ping tests, invalidating the results presented until now.
To rule out that packets are dropped because of a checksum mismatch, we re-defined the payload of the fragments as shown in Table 3. This new definition exploits the commutative property of the checksum: by definition, the checksum is computed as a sum over the 2-byte words of the entire IPv6 packet [9]. Since the sum is commutative, any combination of fragments in our model has the same checksum.
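The sketch below (plain Python, a standard RFC 1071-style ones'-complement checksum rather than the exact code used by our fuzzer; the payload bytes are illustrative) makes this concrete: permuting the 2-byte words of a payload leaves the upper-layer checksum unchanged.

```python
# Ones'-complement Internet checksum and a word-permutation invariance check.
import random

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                     # pad to a whole number of words
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:                      # fold carries back in
        total = (total & 0xffff) + (total >> 16)
    return ~total & 0xffff

payload = b"112233441133224422113344"       # payload style of Table 3
words = [payload[i:i + 2] for i in range(0, len(payload), 2)]
random.shuffle(words)                       # any permutation of 2-byte words
assert internet_checksum(b"".join(words)) == internet_checksum(payload)
```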
### Denial of Service due to RFC 5722 Compliance
The RFC 5722 requires dropping packets with overlapping fragments. This rule creates a vulnerable surface for a Denial-of-Service attack by a malicious third host, the "attacker". For the attacker to exploit the vulnerability, the

| **OS / Test** | **Shankar and Paxson** | **Proposed model (Test 1)** | **(Test 2)** | **(Test 3)** |
| --- | --- | --- | --- | --- |
| GNU/Linux Arch | 0 | 35 | 37 | 1634 |
| GNU/Linux Debian | 0 | 35 | 37 | 1634 |
| GNU/Linux Ubuntu | 0 | 35 | 37 | 1634 |
| OpenBSD | 0 | 0 | 24 | 373 |
| FreeBSD | 0 | 0 | 20 | 1609 |
| Windows 10 | 0 | 0 | 20 | 1800 |
| Windows 11 | 0 | 0 | 20 | 1800 |

Table 2: ICMPv6 Echo replies received while testing the Shankar and Paxson model and our model. An RFC-compliant host should never reply to these tests.
transmission between two parties should use fragments, and the attacker should be able to predict the IP-id field and spoof the source IP address.
The attack strategy involves the attacker sending a spoofed fragment to the target host (the transmission receiver) before it can reassemble all the fragments received from the victim (the transmission source). By using the same source address as the victim and the same IP-id value, the attacker creates an overlap during the fragment reassembly phase, causing the entire packet to be discarded and preventing the victim from receiving a reply.
This Denial-of-Service vulnerability exposed by the RFC is exploitable only under certain conditions. Moreover, discarding the packet is the intended effect, since lax fragment handling exposes more serious security issues. In our opinion, it therefore does not represent a real threat, given the practical difficulties of satisfying the prerequisites and the low gain of such attacks.
## 5 RFC 9099 Compliance
This section presents the study on RFC 9099 [26] compliance, which mainly focuses on IPv6 Extension Headers. We briefly introduce the requirements stated in RFC 9099 [26]. Then, we describe the experiments we performed to verify the compliance of operating systems.
### IPv6 Fragment Headers and RFC 9099
IPv6 extension headers are additional data structures that can be included in the IPv6 packet header to provide extra functionality or options beyond what is defined in the IPv6 protocol. These headers are placed between the IPv6 header and the upper-layer protocol data, allowing for features such as fragmentation, authentication, encryption, routing, and mobility.
IPv6 uses a specific extension header named "Fragment Header" to handle fragmented packets, where the packet is too big for the Maximum Transfer Unit of the underlying links [6]. While not strictly required, RFC 9099 [26] suggests that security devices in the middle of the transmission (such as firewalls, IDSes, IPSes) and the destination endpoint should drop first fragments that do not contain the entire IPv6 header chain (which includes the transport-layer header). The reason

| **Shankar and Paxson** | **Odd/single packet** | **Even** |
| --- | --- | --- |
| A | 11223344 | 44113322 |
| B | 11332244 | 44331122 |
| C | 22113344 | 44332211 |
| D | 22331144 | 11224433 |
| E | 33112244 | 11334422 |
| F | 33221144 | 22114433 |

Table 3: New payload definition. This payload exploits the checksum’s commutative property to avoid re-assembly errors. The “odd” version is also used in tests with one packet, whereas both “odd” and “even” are used in tests with multiple packets.
for this requirement is to avoid issues when dealing with stateless filtering [6].
Some IPv6 extension headers can appear multiple times in a single datagram; others cannot. These headers are linked together in a chain, and RFC 8200 [6] suggests an order for the headers to optimize processing and avoid issues. RFC 9099 strongly recommends that the correct order and the maximum number of repetitions of extension headers be enforced at the endpoints and in any security device on the path. Non-conforming packets should be dropped [26].
In the context of fragmented IPv6 packets, a malicious actor may try to send additional headers in different fragments, or they might try to overwrite the upper layer header in the payload from the first fragment using a subsequent overlapping fragment. In both cases, the packet must be discarded.
### Fragment Headers Experiments and Results
We designed our experiments around the two requirements from RFC 9099: the first experiment checks the requirement of having all extension headers in the first fragment, and the second experiment concerns overwriting the upper-layer header using an overlapping fragment.
In the first experiment (Table 4), we send a packet divided into three fragments: the first contains the ICMPv6 Header and no payload, the second contains the payload, and the last contains a Destination Options header. These fragments are not overlapping, but no response is expected as the first fragment does not contain the complete IPv6 headers chain [26].
In the second experiment (Table 5), we send a packet divided into three fragments: the first contains no headers and no payload, the second contains the ICMPv6 Echo Request header (offset for this fragment is zero, as the first fragment), and the last contains a payload. The first and the second fragments do not overlap because the first fragment is empty. Nevertheless, no response is expected as the first fragment does not contain the complete IPv6 headers chain [26].
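For illustration, the fragment layout of the second experiment can be sketched with Scapy as below. The addresses and fragment ID are placeholders, and details such as the Next Header values and the upper-layer checksum are left to Scapy's defaults, so this is a simplified sketch of the layout in Table 5 rather than our actual test code.

```python
# Minimal Scapy sketch of the second RFC 9099 experiment: an empty first
# fragment, an ICMPv6 Echo Request header in a second fragment also at
# offset 0, and a payload-only last fragment (hypothetical addresses/id).
from scapy.all import IPv6, IPv6ExtHdrFragment, ICMPv6EchoRequest, Raw, send

SRC, DST, FRAG_ID = "fd00::1", "fd00::2", 0x42

f1 = IPv6(src=SRC, dst=DST) / IPv6ExtHdrFragment(id=FRAG_ID, offset=0, m=1)
f2 = (IPv6(src=SRC, dst=DST) / IPv6ExtHdrFragment(id=FRAG_ID, offset=0, m=1)
      / ICMPv6EchoRequest())
f3 = (IPv6(src=SRC, dst=DST) / IPv6ExtHdrFragment(id=FRAG_ID, offset=1, m=0)
      / Raw(b"BBBBBBBB"))

send([f1, f2, f3])   # a compliant host should drop these without replying
```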
We tested a subset of the operating systems in Table 1, namely Debian 11, FreeBSD, and OpenBSD.
In the first experiment, we received an ICMPv6 Echo Reply from all the operating systems, even if the first fragment did not have the destination options header. The correct response was to discard the first fragment or the entire packet.
In the second experiment, all operating systems recognized the malformed packet; however, only FreeBSD and OpenBSD silently dropped the received

| **#** | **Type** | **Offset** | **More Fragment** | **Payload** |
| --- | --- | --- | --- | --- |
| 1 | ICMPv6 Echo Request | 0 | 1 | / |
| 2 | Fragment | 1 | 1 | AAAAAAAAA |
| 3 | IPv6 Fragment & Dest. Options | 2 | 0 | BBBBBBBBB |

Table 4: First experiment for RFC 9099 compliance. No response is expected: the first fragment should be dropped because it does not contain all the extension headers.
packet. Debian, instead, responds to the attacker with an ICMPv6 packet with code 3, which stands for "IPv6 First Fragment has incomplete IPv6 Header Chain".
The results of the first experiment show that the systems tested, at present, are not compliant with RFC 9099 since they are not dropping the first fragments or packets when extension headers are spread through fragments. Also, since test 1 contains the "Destination Options" extension header after the Fragment Header (while the RFC 8200 recommends the opposite), those systems are not dropping non-conforming packets, while RFC 9099 suggests discarding them.
Moreover, it is possible to recognize an operating system using the fragmented packet as described in Table 5 (second experiment): the different behavior between operating systems could lead an attacker to perform fingerprinting against victims. While RFCs 9099 and 5722 do not specify whether a host should silently discard these packets, we believe a silent discard is the safest option, and an RFC should mandate it.
## 6 Modification Attacks with Overlapping Fragments
This type of attack was first performed, in a different context, by Gilad et al. [7]: the attacker aims to modify the content of the communication between the victim and a legitimate host, namely host X. We leverage the RFC 5722 non-compliance: since the victim accepts overlapping fragments, the attacker can change, with another fragment, the bytes sent by host X.
In the Modification Attack, the attacker must send one or more fragments and ensure that when these fragments are reassembled with the legitimate ones, the result is a correct packet (not malformed) and that the final packet payload is the desired one. For this reason, to perform a successful modification attack, the attacker has to calculate the correct checksum to avoid the discard of a packet because of a checksum mismatch (Section 4.4).
We demonstrate that the modification attack is possible by altering the content of a syslog UDP transmission from host X to the victim.
### Implementation of Modification Attack with Scapy
We present the implementation of the Modification Attack in a specific scenario where both host X and the victim run rsyslog, an open-source

| **#** | **Type** | **Offset** | **More Fragment** | **Payload** |
| --- | --- | --- | --- | --- |
| 1 | IPv6 Fragment | 0 | 1 | / |
| 2 | ICMPv6 Echo Request | 0 | 1 | / |
| 3 | IPv6 Fragment | 1 | 0 | BBBBBBBB |

Table 5: Second experiment for RFC 9099 compliance. No response is expected: the first fragment should be dropped because it does not contain the full IPv6 headers chain.
software for managing logs and forwarding log messages between machines. The host X is configured to send syslog messages via UDP to the victim.
This attack requires the malicious actor to know (or guess) the IP identification field value and the payload of the IPv6 packet (in this case, the log message) that host X sends to the victim, and it requires spoofing host X's address. The IP ID requirement can be easily satisfied: guessing the IP identification value has already been demonstrated in the literature by Salturi et al. [23]. The packet payload, instead, should be predictable (for example, a TLS Client-Hello) or known in advance. Spoofing is still a problem, especially in local area networks [23].
We will alter the log line regarding a successful SSH authentication in this case. In particular, we will try to alter the fragment containing the attacker's IP address. The original line is `Jun 1 20:47:08 git sshd[88459]: Accepted publickey for git from 10.10.10.100 port 49240 ssh2: ED25519 SHA256:vNTXCU7b6C6mqvcaH7j1/uRCSunllTpG5kCtd01xxoc`
The host X sends the log line in three fragments:
1. the first fragment with the UDP header, the syslog severity and facility code, and the first 51 bytes of the message: `<43> Jun 1 20:47:08 git sshd[88459]: Accepted publickey `;
2. the second fragment with 56 bytes of the message: `for git from 10.10.10.100 port 49240 ssh2: ED25519 SHA25`;
3. the third fragment with the rest of the log line: `6:vNTXCU7b6C6mqvcaH7j1/uRCSunllTpG5kCtd01xxoc`;
To successfully perform the attack, the malicious actor should send a spoofed second fragment with a different payload before or after the second fragment from the host X to the victim (depending on the reassembly policy used by the victim). When the victim re-assembles the final packet, the new payload for the second fragment will be used.
However, a different payload will likely result in a different checksum: the victim will drop the packet as corrupted. To work around this problem, the attacker exploits the commutative property of the sum in the checksum by shuffling the payload. The shuffle should swap groups of two bytes as the checksum is calculated by a series of 16-bit sums.
Another technique for keeping the same checksum is to calculate the difference between the correct checksum and the checksum of the new final packet and then add this difference to the spoofed fragment payload. Thanks to the commutative property of the sum, the result will be the original checksum.
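This second technique can be sketched in a few lines of Python. In the sketch below, the original log text mirrors the line shown above, while the replacement address (from the documentation range 203.0.113.0/24) and the decision to sacrifice the last two payload bytes for the compensation word are our own illustrative choices, not part of the attack as we ran it.

```python
# Keep the reassembled packet's checksum unchanged while altering the payload
# by appending a 2-byte compensation word (ones'-complement arithmetic).
def ones_sum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"
    s = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
    while s >> 16:
        s = (s & 0xffff) + (s >> 16)
    return s

orig = b"Accepted publickey for git from 10.10.10.100 port 49240"
if len(orig) % 2:                    # keep 16-bit word alignment simple
    orig += b" "
fake = orig.replace(b"10.10.10.100", b"203.0.113.66")[:-2]   # reserve 2 bytes

# Compensation word: the ones'-complement difference of the two partial sums.
comp = ones_sum(orig) + (0xffff - ones_sum(fake))
comp = (comp & 0xffff) + (comp >> 16)
fake += comp.to_bytes(2, "big")

assert len(fake) == len(orig)
assert ones_sum(fake) == ones_sum(orig)   # total packet checksum is preserved
```

Because only the payload words change, keeping their contribution to the ones'-complement sum equal keeps the checksum of the whole reassembled datagram (pseudo-header included) unchanged.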
### Modification Attacks Results Discussion
In our test environment, we used two GNU/Linux Debian machines (the same version listed in Table 1). We tested this attack by sending the following payload in a spoofed second fragment to the victim just before the original second fragment: `0 0. Efromr 9225HAgi.12: 110fo55po10D240r19t 0. 4sh S s`. This string is a permutation of the payload of the original second fragment, so the final packet (after the reassembly) has the same checksum as the original one.
Since the RFC 5722 states that datagrams with overlapping fragments must be silently discarded, we should not expect any log in the victim machine. However, as the victim is not RFC 5722 compliant, we found the modified payload in the log file, as shown in Figure 6.
These attacks can be prevented mainly by implementing the RFC 5722 recommendation of dropping the entire packet in the presence of overlapping fragments. IPsec may provide additional protection.
## 7 Conclusions
In conclusion, this work addresses the ongoing issues associated with packet fragmentation in IPv6, explicitly focusing on the issue of overlapping fragments. Despite the requirement listed by different RFCs for hosts to drop overlapping fragments silently, our work indicates that the problem persists. Also, changes in the fragment reassembly policies by operating systems from byte-based to fragment-based made current models for testing IPv6 fragmentation issues (such as the Shankar and Paxson model) obsolete.
To address these issues, we propose a novel model that exploits the fragment-based strategy adopted by modern operating systems when handling IPv6 fragmentation. By leveraging the commutative property of the checksum, we simplify the assessment process and propose a more accurate evaluation methodology.
Using this new model, we evaluate the compliance of modern operating systems with RFC 5722 and RFC 9099, which pertain to fragmentation handling in IPv6. The evaluation was performed both with ICMPv6 Echo Request/Reply tests and with a real attack, the "Modification Attack", in which a fragmented transmission is altered.
The results of the evaluation reveal that IPv6 fragmentation remains a significant threat, and further efforts are required to address the related security issues. These findings underscore the need for ongoing research and development to enhance the security measures and mechanisms associated with IPv6 fragmentation.
Taking the necessary countermeasures to deal with fragmentation attacks and secure IPv6 would still be appropriate since adopting IPv6 is an irreversible and ever-growing process, especially with new technologies based on the Internet of Things.
We released the dataset and all scripts developed to run our experiments in a public GitHub repository at
[https://github.com/netsecuritylab/ipv6-fragmentation](https://github.com/netsecuritylab/ipv6-fragmentation).
Figure 6: The string in the log file. The log line contains the attacker’s modified payload.
## Acknowledgments
This work was partially supported by the project 'Prebunking: predicting and mitigating coordinated inauthentic behaviors in social media', funded by Sapienza University of Rome; by the Italian Ministry of Defense PNRM project "UNAVOX"; and by project SERICS (PE00000014) under the MUR National Recovery and Resilience Plan funded by the European Union - NextGenerationEU.
|
2303.00077 | Beyond the limitations of any imaginable mechanism: large language
models and psycholinguistics | Large language models are not detailed models of human linguistic processing.
They are, however, extremely successful at their primary task: providing a
model for language. For this reason and because there are no animal models for
language, large language models are important in psycholinguistics: they are
useful as a practical tool, as an illustrative comparative, and
philosophically, as a basis for recasting the relationship between language and
thought. | Conor Houghton, Nina Kazanina, Priyanka Sukumaran | 2023-02-28T20:49:38Z | http://arxiv.org/abs/2303.00077v1 | # Beyond the limitations of any imaginable mechanism: large language models and psycholinguistics
###### Abstract
Large language models are not detailed models of human linguistic processing. They are, however, extremely successful at their primary task: providing a model for language. For this reason and because there are no animal models for language, large language models are important in psycholinguistics: they are useful as a practical tool, as an illustrative comparative, and philosophically, as a basis for recasting the relationship between language and thought.
_This is a commentary on Bowers et al. (2022)._
Neural-network models of language are optimized to solve practical problems such as machine translation. Currently, when these large language models (LLMs) are interpreted as models of human linguistic processing, they have similar shortcomings to those that deep neural networks have as models of human vision. Two examples can illustrate this. First, LLMs do not faithfully replicate human behaviour on language tasks (Marvin and Linzen, 2018; Kuncoro et al., 2018; Linzen and Leonard, 2018; Mitchell et al., 2019). For example, an LLM trained on a word-prediction task shows similar error rates to humans overall on long-range subject-verb number agreement but errs in different circumstances: unlike humans, it makes more mistakes when sentences have relative clauses (Linzen and Leonard, 2018), indicating differences in how grammatical structure is represented. Second, the LLMs with better performance on language tasks do not necessarily have more in common with human linguistic processing or more obvious similarities to the brain. For example, Transformers learn efficiently on vast corpora and avoid human-like memory constraints but are currently more successful as language models than recurrent neural networks such as the Long-Short-Term-Memory LLMs (Devlin et al., 2018; Brown et al., 2020), which employ sequential processing, as humans do, and can be more easily compared to the brain.
Furthermore, the target article suggests that, more broadly, the brain and neural networks are unlikely to resemble each other because evolution differs in trajectory and outcome from the optimization used to train a neural network. Generally, there is an unanswered question about which aspects of learning in LLMs are to be compared to the evolution of our linguistic ability and which to language learning in infants, but in either case the comparison seems weak. LLMs are typically trained using a next-word prediction task; it is unlikely that our linguistic ability evolved to optimize this, and next-word prediction can only partly describe language learning: for example, infants generalize word meanings based on shape (Landau et al., 1988) while LLMs lack any broad conceptual encounter with the world language describes.
In fact, it would be peculiar to suggest that LLMs are models of the neural dynamics that support linguistic processing in humans; we simply know too little about those dynamics. The challenge presented by language is different to that presented by vision: language lacks animal models and debate in psycholinguistics is occupied with broad issues of mechanisms and principles, whereas visual neuroscience often has more detailed concerns. We believe that LLMs have a valuable role in psycholinguistics and this does not depend on any precise mapping from machine to human. Here we describe three uses of LLMs: **(1)** the **practical**, as a tool in experimentation; **(2)** the **comparative**, as an alternate example of linguistic processing and **(3)** the **philosophical**, recasting the relationship between language and thought.
**(1)**: An LLM models language and this is often of **practical** quantitative utility in experiment. One straight-forward example is the evaluation of _surprisal_: how well a word is predicted by what has preceded it. It has been established that reaction times, (Fischler and Bloom, 1979; Kleiman, 1980), gaze duration, (Rayner and Well, 1996), and EEG responses, (Dambacher et al., 2006; Frank et al., 2015), are modulated by surprisal, giving an insight into prediction in neural processing. In the past, surprisal was evaluated using \(n\)-grams, but \(n\)-grams become impossible to estimate as \(n\) grows and as such they cannot quantify long-range dependencies. LLMs are typically trained on a task akin to quantifying surprisal and are superior to \(n\)-grams in estimating word probabilities. Differences between LLM-derived estimates and neural perception of surprisal may quantify which linguistic structures, perhaps poorly represented in the statistical evidence, the brain privileges during processing.
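As a concrete illustration of this practical use, per-word surprisal can be read directly off the conditional probabilities of any autoregressive LLM. The sketch below uses the HuggingFace `transformers` library and GPT-2 purely as an example; none of the studies cited above used this exact model or code, and the sentence is an arbitrary placeholder.

```python
# Per-token surprisal, in bits, from an autoregressive language model.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def surprisal(sentence: str):
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids).logits, dim=-1)
    out = []
    for t in range(1, ids.shape[1]):                 # first token has no context
        lp = logprobs[0, t - 1, ids[0, t]].item()    # log P(token_t | tokens_<t)
        out.append((tokenizer.decode(ids[0, t : t + 1]), -lp / math.log(2)))
    return out

for token, bits in surprisal("The keys to the cabinet are on the table."):
    print(f"{token!r}: {bits:.2f} bits")
```

The same machinery supports the comparative use discussed below, for instance by contrasting the surprisal assigned to grammatical and ungrammatical verb forms.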
**(2)**: LLMs are also useful as a point of **comparison**. LLMs combine different computational strategies, mixing representations of word properties with a computational engine based on memory or attention. Despite the clear differences between LLMs and the brain, it is instructive to compare the performance of different LLMs on language tasks to our own language ability. For example, although LLMs are capable of long range number and gender agreement, (Linzen et al., 2016; Gulordava et al., 2018; Bernardy and Lappin, 2017; Sukumaran et al., 2022), they are not successful in implementing another long-range rule: Principle C, (Mitchell et al., 2019), a near-universal property of languages which depends in its most straight-forward description on hierarchical parsing. Thus, LLMs allow us to recognize those aspects of language which require special consideration while revealing others to be within easy reach of statistical learning.
**(3)**: In the past, **philosophical** significance was granted to language as evidence of thought or personhood. Turing (1950), for example, proposes conversation as a proxy for thought and Chomsky (1966) describes Descartes as attributing the possession of mind to other humans because the human capacity for innovation and for the creative use of language, is 'beyond the limitations of any imaginable mechanism'. It is significant that machines are now capable of imitating the use of language. While machine-generated text still has attributes of awkwardness and repetition that make it recognizable on careful reading, it would seem foolhardy to predict these final quirks are unresolvable or are characteristic of the division between human and machine. Nonetheless, most of us appear to feel intuitively that LLMs enact an imitation rather than a recreation of our linguistic ability: LLMs seem empty things whose pantomime of language is not underpinned by thought, understanding or creativity. Indeed, even if an LLM were capable of imitating us perfectly, we would still distinguish between a loved one and their simulation.
This is a challenge to our understanding of the relationship between language and thought: either we must claim that, despite recent progress, machine-generated language will remain unlike human language in vital respects, or we must defy our intuition and consider machines to be as capable of thought as we are, or we must codify our intuition to specify why a machine able to produce language should, nonetheless, be considered lacking in thought. |
2309.15347 | Stars Bisected by Relativistic Blades | We consider the dynamics of an equatorial explosion powered by a millisecond
magnetar formed from the core collapse of a massive star. We study whether
these outflows -- generated by a priori magneto-centrifugally-driven,
relativistic magnetar winds -- might be powerful enough to produce an
ultra-relativistic blade ("lamina") that successfully carves its way through
the dense stellar interior. We present high-resolution numerical
special-relativistic hydrodynamic simulations of axisymmetric
centrifugally-driven explosions inside a star and follow the blast wave
propagation just after breakout. We estimate the engine requirements to produce
ultra-relativistic lamina jets and comment on the physicality of the parameters
considered. We find that sufficiently collimated -- half-opening angle
$\theta_r \leq 0.2^\circ$ -- laminas successfully break out of a compact
progenitor at ultra-relativistic velocities ($\Gamma_{\rm core} \gtrsim 30$)
and extreme isotropic energies ($E_{k,\rm iso} \sim 5 \times
10^{52}\text{erg}$) within a few percent of the typical spin-down period for a
millisecond magnetar. The various phases of these ultra-thin outflows such as
collimation shocks, Kelvin-Helmholtz instabilities, and lifetime are discussed
and we speculate on the observational signatures echoed by this outflow
geometry. | Marcus DuPont, Andrew MacFadyen | 2023-09-27T01:33:15Z | http://arxiv.org/abs/2309.15347v1 | # Stars Bisected by Relativistic Blades
###### Abstract
We consider the dynamics of an equatorial explosion powered by a millisecond magnetar formed from the core collapse of a massive star. We study whether these outflows -- generated by a priori magneto-centrifugally-driven, relativistic magnetar winds -- might be powerful enough to produce an ultra-relativistic blade ("lamina") that successfully carves its way through the dense stellar interior. We present high-resolution numerical special-relativistic hydrodynamic simulations of axisymmetric centrifugally-driven explosions inside a star and follow the blast wave propagation just after breakout. We estimate the engine requirements to produce ultra-relativistic lamina jets and comment on the physicality of the parameters considered. We find that sufficiently collimated -- half-opening angle \(\theta_{r}\leq 0.2^{\circ}\) -- laminas successfully break out of a compact progenitor at ultra-relativistic velocities (\(\Gamma_{\rm core}\gtrsim 30\)) and extreme isotropic energies (\(E_{k,{\rm iso}}\sim 5\times 10^{52}\,{\rm erg}\)) within a few percent of the typical spin-down period for a millisecond magnetar. The various phases of these ultra-thin outflows such as collimation shocks, Kelvin-Helmholtz instabilities, and lifetime are discussed and we speculate on the observational signatures echoed by this outflow geometry.
Relativistic fluid mechanics (1389) -- Magnetars (992) -- Relativistic jets (1390)

Marcus DuPont (0000-0002-4880-2880)

Andrew MacFadyen (0000-0002-4888-7885)
## 1 Introduction
Highly magnetized neutron stars as sources of classical gamma-ray bursts (GRBs) has been a topic spanning many decades (e.g., Usov, 1992; Cheng & Ding, 1993; Thompson, 1994; Komissarov & Lyubarsky, 2004; Thompson et al., 2004; Thompson, 2005; Bucciantini et al., 2012; Metzger et al., 2015; Bugli et al., 2020). It is thought that with large enough surface magnetic fields (\(B\geq 10^{15}\,{\rm G}\)) and short spin-down times, magnetars can theoretically meet the extreme energy requirements necessary to power GRBs. This is in slight contrast with another proposed GRB engine, wherein a black hole (BH) accretes several solar masses worth of stellar material during a catastrophic collapse before eventually powering a fireball that blasts its way through the stellar envelope (e.g., Paczynski, 1986, 1998; Goodman, 1986; Eichler et al., 1989; Mochkovitch et al., 1993; Woosley, 1993; MacFadyen & Woosley, 1999).
While the exact dynamics preceding core collapse remains poorly understood, there is common ground on the asymptotic outflow geometry. The most common assumption promulgated about these types of explosions is that the fire ball is collimated into a relativistic, conical jet ("classical" jet hereafter) that burrows through the collapsing star along the symmetry axis of the compact central engine. These classical jets are well supported by GRB afterglow observations (see, e.g., Kann et al., 2010, 2011; Fong et al., 2015, for comprehensive compilations of classical GRB observations), but in most cases these data are only indicative of the asymmetry of the relativistic outflow, and the exact geometry is not entirely constrained (e.g., Granot, 2005; DuPont et al., 2023). Therefore, we may rightly ask: can an equatorial jet slice its way through the dense core of a dying star like its classical counterpart? To investigate, we invoke an a priori axisymmetric millisecond magnetar (MSM) central engine that outputs a highly collimated outflow near the stellar equator. This is admissible since it has been shown that Poynting flux-dominated flows -- as is the case for MSMs -- can be efficiently collimated (Vlahakis & Konigl, 2003; Komissarov et al., 2007; Bucciantini, 2011), and highly relativistic, energetic, equatorial winds can exist for pulsars like Crab (Komissarov & Lyubarsky, 2004; Spitkovsky & Arons, 2004). We dub these types of jets "lamina" jets (or "blades" colloquially) because of their ultra-thin resultant outflow. Better understanding of these blast wave geometries lends itself to more stringent interpretations of transients seen by ongoing and upcoming surveys
(Barthelmy et al., 2005; Shappee et al., 2014; Chambers et al., 2016; Ivezic et al., 2019; Bellm et al., 2019). In this Letter, we present a 2D axisymmetric special relativistic simulation of a lamina jet slicing its way through an 18 \(M_{\odot}\) pre-supernova helium star and track the jet's evolution until just after breakout.
This Letter is organized as follows: Section 2 discusses the numerical setup and initial conditions, in Section 3 we present our results, Section 4 discusses the relevance of our work, and Section 5 provides a summary.
## 2 Numerical setup
### Governing Equations
The governing equations in this setup are the standard special-relativistic hydrodynamic equations:
\[(\rho u^{\mu})_{,\mu} = \Psi \tag{1}\] \[(T^{\mu\nu})_{,\nu} = \Theta^{\mu}, \tag{2}\]
where \(\rho\) is proper fluid density, \([u^{\mu}]=\Gamma(1,\vec{\beta})\) is four-velocity, \(\Gamma=(1-\beta^{2})^{-1/2}\) is Lorentz factor, \(\beta\) is velocity in units of the speed of light, \(c\), which is unity in our setup, \(\Theta\) and \(\Psi\) are source terms, and \(T^{\mu\nu}\) is the stress-energy tensor for a perfect fluid,
\[T^{\mu\nu}=\rho hu^{\mu}u^{\nu}+p\eta^{\mu\nu}, \tag{3}\]
where \(h=1+\varepsilon+p/\rho\) is total specific enthalpy, \(\varepsilon\) is internal energy, \(p\) is pressure, and \(\eta^{\mu\nu}\) is the Minkowski metric with signature \((-,+,+,+)\). The set of Equations 1 - 3 become closed when choosing an ideal gas equation of state \(p(\varepsilon)=(\hat{\gamma}-1)\rho\varepsilon\) where \(\hat{\gamma}=4/3\) is the ratio of specific heats at constant pressure and volume.
### Engine Model
The source terms are modeled as Dirac delta distributions such that the power density of the engine has the form
\[\Theta^{0}(\vec{r},t)=\frac{L_{\rm eng}}{2\pi r^{2}}\delta(r-r_{n})\delta(\mu -\mu_{n})g(t), \tag{4}\]
where \(L_{\rm eng}\) is the engine power integrated over the entire sphere, the Dirac deltas are written as Gaussian approximations:
\[\delta(r-r_{n})\approx\frac{r}{r_{n}^{2}}e^{-r^{2}/2r_{n}^{2}}, \tag{5}\]
\[\delta(\mu-\mu_{n})\approx\frac{f}{\sqrt{\pi}}e^{-f^{2}(\mu-\mu_{n})^{2}}, \tag{6}\]
where \(r_{n}\) is the effective radius of the engine nozzle, \(\mu=\cos\theta\), \(\mu_{n}\) is the direction of the beam, and \(f\) is the geometric factor of the jet -- i.e., \(f=\theta_{0}^{-1}\) for a lamina jet while it is \(\theta_{0}^{-2}\) for a classical jet where \(\theta_{0}\) is the injection angle. The function \(g(t)\) is taken to be a sigmoid decay,
\[g(t)=\frac{1}{1+e^{\xi(t-\tau)}}, \tag{7}\]
where \(\tau\) is the engine duration and \(\xi\) is the sharpness of the drop-off. The remaining source terms are constructed from Equation 4, where the radial momentum density source term is
\[\Theta^{1}=\Theta^{0}\beta_{0}=\Theta^{0}\sqrt{1-1/\Gamma_{0}^{2}}, \tag{8}\]
and the baryon loading term is
\[\Psi=\Theta^{0}/\eta_{0}, \tag{9}\]
where we define \(\Gamma_{0}\) as the injected Lorentz factor and \(\eta_{0}\equiv\dot{E}/\dot{M}_{0}\) as the radiation to baryon ratio where \(\dot{E}\) is the energy outflow rate near the engine and \(\dot{M}_{0}\) is the initial mass outflow rate 1. The engine duration can be set by requiring \(\tau_{\rm bo}<\tau_{*}\) where \(\tau_{*}\) is the spin-down time of the magnetar,
Footnote 1: Some texts call \(\eta\) the dimensionless entropy or the initial random Lorentz factor.
\[\tau_{*}=-\frac{\omega}{\dot{\omega}}=\frac{2E_{\rm rot}}{L_{*}}\sim 200\,{\rm s}\left(\frac{M_{\rm PNS}}{1.4M_{\odot}}\right)\left(\frac{R_{\rm PNS}}{12\,{\rm km}}\right)^{-4}\left(\frac{B}{10^{15}\,{\rm G}}\right)^{-2}\left(\frac{T}{1\,{\rm ms}}\right)^{2}, \tag{10}\]
\(\tau_{\rm bo}\) is the breakout time, \(\omega\) is the rotational frequency, \(E_{\rm rot}\) is the rotational energy, \(L_{*}\) is the spin-down luminosity, \(M_{\rm PNS}\) is the proto-neutron star mass, \(R_{\rm PNS}\) is the proto-neutron star radius, \(B\) is the surface equatorial magnetic field, and \(T\) is the rotation period. In reality, it is not as simple as setting \(\tau<\tau_{*}\) because the engine must do considerable work to displace enough stellar material to launch a successful jet. The breakout time of the beam is \(\tau_{\rm bo}=\Gamma_{\rm ej}R/u_{\rm ej}(r)\), where \(R\) is the stellar radius. We can compute \(u_{\rm ej}(r)\) by noting the isotropic luminosity of the jet as in (Meszaros & Waxman, 2001),
\[L_{\rm iso}=4\pi r^{2}u_{\rm j}^{2}h_{j}\rho_{\rm j}, \tag{11}\]
and balancing the pressure of the jet head with that of sub-relativistic ejecta ahead of it, i.e., \(u_{\rm j}^{2}h_{j}\rho_{\rm j}=u_{\rm ej}^{2}\rho_{\rm ej}(r)\), giving
\[u_{\rm ej}^{2}=\frac{L_{\rm iso}}{4\pi r^{2}\rho_{\rm ej}(r)}=\frac{q(\theta) rL_{*}}{3M_{\rm ej}}, \tag{12}\]
where \(q\equiv 4\pi/\Omega\), \(\Omega\) is the solid angle, and we've made use of the fact that \(L_{\rm iso}=qL_{*}\). The breakout time
is therefore, \(\tau_{\rm bo}\sim 3\Gamma_{\rm ej}u_{\rm ej}RM_{\rm ej}/qrL_{*}\). Now, we revisit \(\tau_{\rm bo}<\tau_{*}\):
\[\frac{3\Gamma_{\rm ej}u_{\rm ej}M_{\rm ej}R}{qrL_{*}}<\frac{2E_{\rm rot}}{L_{*}} =\frac{2u_{\rm ej}^{2}M_{\rm ej}}{L_{*}}, \tag{13}\]
where the last equality stems from assuming the rotational energy is extracted with perfect efficiency. Equation 13 requires \(q^{-1}<\frac{2r}{3R}\beta_{\rm ej}\) for a successful breakout. Quataert and Kasen (2012) compute the half-opening angle constraint for a classical jet (i.e., \(q=2/\theta_{j}^{2}\)) to be \(\theta_{j}<\theta_{j,\rm max}\equiv(\beta_{\rm ej}/2)^{1/2}\). This fixes \(r/R=3/8\) in our framework. The constraint on the half-opening angle for the lamina is then
\[\theta_{r}<\theta_{r,\rm max}\equiv\beta_{\rm ej}/4=\theta_{j,\rm max}^{2}/2, \tag{14}\]
which implies that the lamina must be much more collimated than a classical jet with the same power in order to break out of the star.
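For completeness, the geometric step behind Equation 14 can be written out explicitly; it uses only definitions already introduced above (the solid angle of an equatorial belt of half-opening angle \(\theta_{r}\)) together with the small-angle approximation:

\[\Omega_{\rm lamina}=\int_{0}^{2\pi}\!\int_{\pi/2-\theta_{r}}^{\pi/2+\theta_{r}}\sin\theta\,d\theta\,d\phi=4\pi\sin\theta_{r}\simeq 4\pi\theta_{r}\quad\Rightarrow\quad q=\frac{4\pi}{\Omega}\simeq\frac{1}{\theta_{r}},\]

so the breakout condition \(q^{-1}<\frac{2r}{3R}\beta_{\rm ej}\) becomes \(\theta_{r}<\frac{2r}{3R}\beta_{\rm ej}=\beta_{\rm ej}/4\) once \(r/R=3/8\) is fixed, in agreement with Equation 14.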
### Initial Conditions
To set the engine power, we assume a "split monopole" (Weber and Davis, 1967) field geometry and scale the MSM luminosity calculated by Thompson et al. (2004), \(L_{\rm eng}\approx L_{*}|_{t=0}=2.4\times 10^{51}\,{\rm erg\ s^{-1}}(R_{\rm PNS }/12\,{\rm km})^{8/3}\). We also assume a relativistic, hot fireball component with \(\eta_{0}=1000\), consistent with a radiatively driven engine 2. In a scenario in which the magnetar is accreting, the spin-down is counteracted by angular momentum transport from the fallback material (e.g., Parfrey et al., 2016; Metzger et al., 2018), so we assume rough constancy within the timescale considered in this work by setting \(\xi=100\) in Equation 7. We do not explicitly invoke magnetic fields since we assume all of the magnetic energy outside of the magnetar light cylinder is converted into kinetic energy, e.g., by magnetic dissipation, (see Spruit et al., 2001; Vlahakis and Konigl, 2003; Komissarov et al., 2009). Our engine-progenitor model is reminiscent of Duffell and MacFadyen (2015, hereafter DM15) wherein they launch a successful collapsar jet through a star with \(\theta_{\rm 0,DM15}=0.1\). With this classical jet injection angle as a baseline, we use Equation 14 to set \(\theta_{0}=0.005\) to match the power density of the classical jet. For an equatorial outflow, we set \(\mu_{n}=0\) in Equation 6. The stellar model is an 18 \(M_{\odot}\) Wolf-Rayet star that was originally a 30 \(M_{\odot}\) Zero Age Main Sequence (ZAMS) star rotating at 99% breakup. The progenitor was evolved using the Modules for Experiments in Stellar Astrophysics (MESA; Paxton et al., 2011, 2013, 2015, 2018, 2019) code, and we invoke the density profile for this star fitted by [20],
Footnote 2: In reality \(\eta\) and other parameters are time variable, but since \(L_{*}\propto(1+t/\tau_{*})^{-2}\) we can set the engine duration to \(\tau=e\tau_{*}\) where \(\epsilon\ll 1\). This would ensure \(|\Delta L_{*}/L_{*}|\sim 2\epsilon\) is small enough to assume rough constancy in our simulations.
\[\rho(r)=\frac{\rho_{c}\times\max\left(1-r/R_{3},0\right)^{n}}{1+(r/R_{1})^{k_{1 }}/\left[1+(r/R_{2})^{k_{2}}\right]}+\frac{A}{r^{2}}, \tag{15}\]
where \(A=A_{*}\dot{M}/4\pi v_{\rm wind}=A_{*}\times 5\times 10^{11}\,{\rm g\ cm^{-1}}\) is the ambient medium mass-loading parameter and the remaining parameters are defined in Table 1. We use an axisymmetric spherical-polar grid with logarithmic radial zones and uniform angular zones. We enforce 1024 radial zones per decade and \(\delta\theta=\theta_{0}/N_{\rm beam}\), where \(N_{\rm beam}\) is the number of zones within the half-opening angle of the beam. We fix \(N_{\rm beam}=10\) in our simulations. The domain range is \(r\in[0.001,10]R_{\odot}\) and \(\theta\in[0,\pi/2]\), which corresponds to 4096 radial zones by 3142 angular zones. The initial pressure and velocity everywhere are negligible. All variables are made dimensionless through combinations of fundamental constants: \(R_{\odot}\), \(c\), and \(M_{\odot}\). This concludes the initial conditions required to launch the relativistic lamina jet into the stellar progenitor. The problem is simulated using an open source GPU-accelerated second-order Godunov code entitled SIMBI(Du Pont, 2023), written by this Letter's first author. It uses a piecewise linear reconstruction algorithm to achieve second-order accuracy in space and second-order Runge-Kutta is employed for the time integration. \(\theta_{\rm PLM}\), a numerical diffusivity parameter for second order schemes, is fixed to 1.5 in our simulations for more aggressive treatment of contact waves.
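For readers who wish to reproduce the setup, the sketch below (NumPy, in the code units of Table 1) evaluates the fitted density profile of Equation 15 and the engine source term of Equations 4-7, and checks numerically that the source integrates to roughly \(L_{\rm eng}\,g(t)\) over the sphere. It is an illustration of the initial conditions rather than an excerpt of the SIMBI source, and the wind normalization \(A\) in code units is our own assumed conversion from \(\rho_{\rm wind}\) and the stellar radius.

```python
# Progenitor density (Eq. 15) and engine power density (Eqs. 4-7) in code
# units (M_sun = R_sun = c = 1); parameter values are those of Table 1.
import numpy as np

rho_c, R1, R2, R3 = 3e7, 0.0017, 0.0125, 0.65
k1, k2, n = 3.25, 2.57, 16.7
A = 1e-9 * 0.5**2            # assumed: rho_wind = 1e-9 at the stellar radius 0.5

def rho_star(r):
    core = rho_c * np.maximum(1.0 - r / R3, 0.0) ** n
    core /= 1.0 + (r / R1) ** k1 / (1.0 + (r / R2) ** k2)
    return core + A / r**2

L_eng, r_n, theta0, tau, xi = 3.2e-3, 0.01, 0.005, 2.0, 100.0
f = 1.0 / theta0             # lamina geometric factor

def power_density(r, mu, t, mu_n=0.0):
    delta_r = (r / r_n**2) * np.exp(-r**2 / (2.0 * r_n**2))           # Eq. 5
    delta_mu = (f / np.sqrt(np.pi)) * np.exp(-(f * (mu - mu_n))**2)   # Eq. 6
    g = 1.0 / (1.0 + np.exp(xi * (t - tau)))                          # Eq. 7
    return L_eng / (2.0 * np.pi * r**2) * delta_r * delta_mu * g      # Eq. 4

# Consistency check: integrating over the sphere recovers ~L_eng * g(t).
r = np.linspace(1e-4, 0.08, 1000)                # window containing delta(r)
mu = np.linspace(-0.05, 0.05, 2001)              # window containing delta(mu)
R_, MU = np.meshgrid(r, mu, indexing="ij")
dV = 2.0 * np.pi * R_**2                         # dV = 2*pi*r^2 dr dmu
total = np.trapz(np.trapz(power_density(R_, MU, 0.0) * dV, mu, axis=1), r)
print(total / (L_eng / (1.0 + np.exp(-xi * tau))))   # should be close to 1
```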
## 3 Results
### A bisected Wolf-Rayet star
Our fiducial runs show promising relativistic breakout for highly collimated lamina jets. Figure 1 shows the time-evolved snapshots of the explosion from the initially stationary conditions to the relativistic breakout of the beam. Since the lamina is ultra-thin and radiative, it can easily push aside matter as the beam tunnels through the dense stellar interior, allowing it to get out within four light crossing times of the progenitor. Kelvin-Helmholtz instabilities develop deep in the interior as the beam propagates through the very thin cocoon, and the jet core is naked once it accelerates down the steep density gradient ahead of it. The maximum Lorentz factor increases monotonically and exceeds the injected value, \(\Gamma_{0}\). This hints at the fact that GRBs might not be very sensitive to the intricacies of their complicated central engines outside of a terminal bulk Lorentz factor \(\Gamma_{\infty}\sim\eta\) (Zhang et al., 2003; Meszaros, 2006). In just a few percent of a typical MSM
spin-down time, the beam breaks out at ultra-relativistic velocities before the cocoon has traversed \(\sim 40\%\) of the star, effecting a clean slice through the progenitor.
### Collimation shocks
Throughout its evolution, the lamina jet appears to experience collimation shocks, as evidenced by Figure 2. In Figure 2, we also include another lamina jet with double the opening angle, \(\theta_{0,\rm{wide}}=0.01\), to gauge key differences in the cocoon evolution, which depicts a more advanced blast wave for the thinner beam. We believe this is due to the relatively negligible thickness of the blade-like structure of the outflow. The nature of this effect is two-pronged. That is to say, the \(\theta_{0}=0.005\) beam's working surface is half the size of that of the \(\theta_{0,\rm{wide}}=0.01\) beam, which means it is impeded by half of the mass and has double the pressure. As the thinner lamina pierces through the star, it interacts with less stellar material mixed into the Kelvin-Helmholtz layers of the cocoon, and therefore the pressurized cocoon has a lesser impact on the collimation of the flow. Because of this, the beam encounters fewer interactions from rarefaction waves and shocks as the jet-cocoon interface is propagated throughout the star. Another way of putting it is that with the effective working surface of the engine reduced, the wave more easily travels through the star, analogous to how a sharper knife more easily cuts through material for a fixed applied force. We suspect that this is also the case for skinny classical jets. However, the knots from the collimation for classical jets are more prominent than for lamina jets, as evidenced by previous numerical studies on classical jets (e.g., MacFadyen & Woosley, 1999; Aloy et al., 2000; MacFadyen et al., 2001; Zhang et al., 2003; Tchekhovskoy et al., 2008; Mosta et al., 2014; Mandal et al., 2022). Of course, this would have to be further analyzed in 3D to encapsulate a

| Variable | Definition | Value |
| --- | --- | --- |
| \(\rho_{\odot}\) | Characteristic density scale | \(M_{\odot}/R_{\odot}^{3}\) |
| \(t_{\odot}\) | Characteristic time scale | \(R_{\odot}/c\) |
| \(L_{0}\) | Characteristic power scale | \(M_{\odot}c^{2}/t_{\odot}\) |
| \(\rho_{c}\) | Central density | \(3\times 10^{7}\rho_{\odot}\) |
| \(\rho_{\rm wind}\) | Wind density at surface | \(10^{-9}\rho_{\odot}\) |
| \(R_{1}\) | First break radius | 0.0017 \(R_{\odot}\) |
| \(R_{2}\) | Second break radius | 0.0125 \(R_{\odot}\) |
| \(R_{3}\) | Outer radius | 0.65 \(R_{\odot}\) |
| \(k_{1}\) | First break slope | 3.25 |
| \(k_{2}\) | Second break slope | 2.57 |
| \(n\) | Atmospheric cutoff slope | 16.7 |
| \(\Gamma_{0}\) | Injected Lorentz factor | 10 |
| \(\eta_{0}\) | Initial radiation-to-baryon ratio | 1000 |
| \(L_{\rm eng}\) | Engine power | \(3.2\times 10^{-3}L_{0}\) |
| \(\tau\) | Engine duration | \(2\ t_{\odot}\) |
| \(\theta_{0}\) | Engine half-opening angle | 0.005 |
| \(r_{n}\) | Nozzle radius | 0.01 \(R_{\odot}\) |
| \(A_{*}\) | Dimensionless wind parameter | 1 |

Table 1: Stellar Model & Engine Parameters
Figure 1: Snapshots of the lamina jet with injection angle \(\theta_{0}=0.005\). The left half shows the four-velocity and the right shows the lab frame density \(D=\Gamma\rho\). Panel (a) shows the interior beam zoomed in to \(0.1R_{\odot}\) at the boundary. We see the development of Kelvin-Helmholtz instabilities as the beam tunnels through the dense core with the region nearest the jet head experiencing a few collimation shocks. In panel (b), the lamina breaks out of the star successfully with a \(\Gamma\geq 30\). The white-dashed line marks the radius of the progenitor at \(0.5R_{\odot}\).
fuller picture of beam deflection and various instabilities that might arise.
### Outflow lifetime
Although the injection angle was \(\theta_{0}=0.3^{\circ}\), the lamina at or above \(\Gamma_{0}=10\) breaks out with a half-opening angle \(\theta_{r}\sim 0.2^{\circ}\)3 and carries a maximum Lorentz factor \(\Gamma_{\rm core}\sim 30\) at the time of the simulation end. An observer within the stellar equatorial plane whose line of sight passes through the lamina centroid would see \(1/\Gamma=3\%\) of the total structure. If extreme beaming took place -- i.e., \(\Gamma\gg\theta_{r}^{-1}\) -- the lamina would travel through the interstellar medium (ISM) with fluid parcels causally disconnected from their neighbors, which would help maintain a non-spreading outflow until it sweeps up a mass \(M/\Gamma\) and slows down (Porth & Komissarov, 2015).
Footnote 3: Measured using \(\sin\theta_{r}=E/E_{\rm iso}\)
\[E_{k,{\rm iso}}(>\Gamma_{c}\beta_{\rm c};\theta)=4\pi\frac{dE}{d\Omega}(> \Gamma_{c}\beta_{\rm c};\theta), \tag{16}\]
where \(\Gamma_{\rm c}\beta_{\rm c}\) is the four-velocity cutoff. Moreover, we estimate the motion of the bulk flow by noting the mean energy-weighted four-velocity,
\[\langle\Gamma\beta\rangle_{E}=\frac{\int_{V}\Gamma\beta(\Gamma^{2}\rho h-p- \Gamma\rho)dV}{\int_{V}(\Gamma^{2}\rho h-p-\Gamma\rho)dV}, \tag{17}\]
moving above some fixed value. We are interested in the material which is moving at or above the injected Lorentz factor, \(\Gamma\geq\Gamma_{0}=10\). Therefore, the ultra-relativistic component gives \(\langle\Gamma\beta\rangle_{E}\sim 15\) as the velocity of the bulk flow. From Figure 3 we find that \(E_{k,{\rm iso}}(>15)=3\times 10^{52}\,{\rm erg}\) is focused purely in the equator. This beam has mass \(M(>15)|_{\theta=90^{\circ}}=E_{k,{\rm iso}}/\langle\Gamma\rangle_{E}\approx 1 0^{-3}M_{\odot}\). Assuming the MSM engine spontaneously shuts off, the lamina will decelerate after sweeping up \(7\times 10^{-5}M_{\odot}\) which, for \(A_{*}=1\), occurs at a radius \(r_{\rm dec}=2\times 10^{16}\,{\rm cm}\) or \(6\times 10^{5}R\).
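A compact way to evaluate these diagnostics from gridded output is sketched below (NumPy; the array names and the assumption that cell volumes are available as an array are ours, since the simulation output format is not specified here).

```python
# Kinetic energy above a four-velocity cut (Eq. 16 style) and the
# energy-weighted mean four-velocity (Eq. 17), with gamma_hat = 4/3.
import numpy as np

def fast_material_diagnostics(rho, p, lorentz, dV, cut, gamma_hat=4.0 / 3.0):
    """rho, p: proper density and pressure; lorentz: Lorentz factor Gamma;
    dV: cell volumes; cut: threshold on the four-velocity Gamma*beta."""
    h = 1.0 + gamma_hat / (gamma_hat - 1.0) * p / rho      # specific enthalpy
    e_dens = lorentz**2 * rho * h - p - lorentz * rho      # energy minus rest mass
    u = np.sqrt(np.maximum(lorentz**2 - 1.0, 0.0))         # Gamma * beta
    sel = u > cut
    E = np.sum(e_dens[sel] * dV[sel])                      # total E(> cut)
    u_mean = np.sum(u[sel] * e_dens[sel] * dV[sel]) / E    # <Gamma beta>_E
    return E, u_mean
```

Binning the same sums in polar angle and multiplying by \(4\pi/\Delta\Omega\) then gives the \(E_{k,{\rm iso}}(>\Gamma\beta;\theta)\) curves of Figure 3.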
### Supernova energy budget
Although the lamina breaks out of the star inefficiently (i.e., \(\tau_{b}\sim 4R\)), the cocoon has little time to traverse the remaining undisturbed star before the jet head outruns the explosion. To get a sense of the remaining energy that might be attributed to a supernova, we estimate it by summing all of the _total_ energy available in the slowest material: \(E_{\rm T}(<0.1)\sim 3\times 10^{51}\,{\rm erg}\). Only about 30% of this total energy is kinetic at the time of the simulation end. Thus, once the cocoon finishes its journey throughout the remaining stationary star and full conversion of the thermal energy into kinetic energy is complete, the energy liberated in the explosion is of order the canonical supernova explosion energy. About 10% of the deposited energy from the engine is shared with the supernova component if the engine were to shut off spontaneously after four seconds.
Figure 2: Snapshots at 3 s depicting the lab frame density in the northern hemisphere and pressure in the southern hemisphere where the radial boundary is at \(0.2R_{\odot}\). The density color stretch is identical to panel (a) in Figure 1. The pressure colormap is chosen such that direction of increasing pressure gets brighter. _Upper_: The beam propagation for \(\theta_{0}=0.005\). In the equatorial plane, the beam path is evacuated and the pressure is slightly laterally stratified. _Lower_: The same variables as in the upper wedge, but with injection angle \(\theta=0.01\). The cocoon tail is fatter in this instance and the jet head lags behind the thinner beam. In both cases, there exist no clear knots or pinching as usually seen for collimation shocks in classical jets.
## 4 Discussion
A centrifugally-focused millisecond magnetar central engine at the core of a compact Wolf-Rayet star gives rise to both the formation and the propagation of a relativistic lamina through the mantle and envelope of the star and to a supernova explosion. This is assuming an axisymmetric "split-monopole" magnetic field structure deep in the stellar interior which might fully dissipate magnetic energy due to the equatorial current sheet (Spruit et al., 2001; Drenkhahn & Spruit, 2002; Lyutikov & Blandford, 2003). This allows for energy deposition at a rate \(L_{*}=\mathrm{few}\times 10^{51}\,\mathrm{erg}\,\,\mathrm{s}^{-1}\)(Thompson et al., 2004; Thompson, 2005), which is a bit larger than the rate for a typical dipole field MSM. In just 4 s or four light crossing times of the progenitor, a clean relativistic blade-like structure bisects the helium core and breaks out into the dense circumstellar medium intact.
### Causality and Stability
The ring will come into causal contact with its edges when the angle of influence, i.e., the proper Mach angle (Konigl, 1980), is comparable to the angular size of the beam, viz., \(\theta_{\mathrm{M}}/\theta_{r}\geq 1\). The relativistic Mach angle evolves as \(\tan\theta_{\mathrm{M}}=u_{s}/u\propto 1/\Gamma\) and angular size of the ring is \(\theta_{r}=r_{\perp}/r\) where \(u_{s}\) is the proper sound speed and \(r_{\perp}\) is the size of the ring perpendicular to the flow. Together, these functions imply the causality constraint,
\[\frac{\theta_{\mathrm{M}}}{\theta_{r}}\propto\frac{r}{r_{\perp}\Gamma}. \tag{18}\]
Assuming the thermal energy dominates the rest mass energy of the particles, i.e., \(\rho h\propto p\), constancy of energy implies \(\Gamma^{2}r^{2}\Delta r\theta_{r}p=\mathrm{constant}\), where \(\Delta r\) is the width of the annular blast wave. Energy conservation together with mass conservation, \(\Gamma r^{2}\Delta r\theta_{r}\rho=\mathrm{constant}\), leads to the scalings \(\Gamma\propto\rho^{1-\hat{\gamma}}\) and \(r_{\perp}\propto r^{-2}\rho^{-\hat{\gamma}}\) after utilizing \(\Delta r\sim r/\Gamma^{2}\), \(r\theta_{r}=r_{\perp}\), and \(p\propto\rho^{\hat{\gamma}}\). Equation 18 thus becomes
\[\frac{\theta_{\mathrm{M}}}{\theta_{r}}\propto r^{3-k(2\hat{\gamma}-1)}. \tag{19}\]
Equation 19 implies that for an ultra-relativistic ring with \(\hat{\gamma}=4/3\), the critical ambient medium density slope is \(k=9/5\) where \(k<9/5\) implies full causality is reached while \(k>9/5\) implies causality is lost. This critical ambient medium slope is different from the \(k=2\) classical jet value calculated by Porth & Komissarov (2015). Note that \(k=2\) is the value for stellar winds, so Equation 19 implies that small perturbations in the \(\phi\)-direction might excite interesting instability modes if the ring evolves in a wind-like environment, and we plan to address this in another paper. Moreover, in the scenario in which the ambient medium is not uniform in \(\phi\), the ring-like blast wave will become corrugated in the \(\phi\) direction like pizza slices of uneven length, which might drive quasi-periodic radiation signatures in the light curves. While interior to the star and in full 3D, the blade is likely to bend and wobble which might excite further azimuthal instabilities, which we also plan to investigate in a follow-up work.
### Visibility
The overall blade carrying a Lorentz factor \(\Gamma_{\mathrm{beam}}\geq\Gamma_{0}=10\) has half-opening angle \(\theta_{\mathrm{beam}}=0.2^{\circ}\), and it contains an ultra-relativistic core with \(\Gamma_{\mathrm{core}}\sim 30\) and half-opening angle \(\theta_{\mathrm{core}}=0.03^{\circ}\) at simulation end. If the engine were instantaneously shut off at the moment of breakout, the bulk flow, which moves with \(\langle\Gamma\beta\rangle_{E}=15\), would propagate more than 5 orders of magnitude beyond the stellar radius before slowing down. To gauge when these outflows will become observable on Earth, we compute the photosphere radius assuming a grey atmosphere optical depth of 2/3,
\[\tau=\frac{2}{3}=f_{\mathrm{KN}}\kappa_{\mathrm{T}}\int_{R_{p}}^{\infty}\rho_ {\mathrm{wind}}(r)dr \tag{20}\]
Figure 3: The cumulative isotropic-equivalent kinetic energy as a function of polar angle \(\theta\) at 4 s. The solid, dashed, dotted, and dash-dotted lines mark the \(\Gamma\beta>\{1,10,15,30\}\) cutoffs, respectively. The inset is a zoom in around some small solid angle near the stellar equatorial plane. Some of the relativistic material is forced to spread laterally as shown by the area under the \(\Gamma\beta>1\) curve, but as we discriminate towards higher velocities, the bulk lamina structure (\(\Gamma\beta>10\)) is focused in the stellar equator into a thin layer with \(E_{k,\mathrm{iso}}\sim 5\times 10^{52}\,\mathrm{erg}\). At simulation end, only a small amount of material is accelerated near the maximum Lorentz factor, with \(E_{k,\mathrm{iso}}(>30)\sim 2\times 10^{51}\,\mathrm{erg}\).
\[\Rightarrow R_{P}=A_{*}f_{\rm KN}\times 1.5\times 10^{11}\,{\rm cm},\]
where \(f_{\rm KN}\) are the Klein-Nishina corrections, \(\kappa_{\rm T}=0.4/\mu_{e}\,{\rm cm}^{2}\) g\({}^{-1}\), \(\mu_{e}=2/(1+X_{H})\simeq 2\) is the mean molecular weight per electron, and \(A=A_{*}\times 5\times 10^{11}\,{\rm g}\) cm\({}^{-1}\). This implies that the environment becomes optically thin within just a few stellar radii from the source, making the blade visible almost immediately post breakout.
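As a quick consistency check of Eq. 20 and the quoted photosphere radius, the snippet below evaluates the optical depth integral numerically. It assumes the standard stellar-wind profile \(\rho_{\rm wind}(r)=A/r^{2}\) (implied by the normalization \(A=A_{*}\times 5\times 10^{11}\,{\rm g\,cm^{-1}}\) but not written out above) and takes fiducial values \(A_{*}=f_{\rm KN}=1\) and \(X_{H}\simeq 0\).

```python
import numpy as np
from scipy.integrate import quad

A_star, f_KN, X_H = 1.0, 1.0, 0.0     # fiducial values; X_H ~ 0 for a Wolf-Rayet star
mu_e = 2.0 / (1.0 + X_H)              # mean molecular weight per electron
kappa_T = 0.4 / mu_e                  # Thomson opacity, cm^2 g^-1
A = A_star * 5.0e11                   # wind normalization, g cm^-1

def tau(R_p):
    """Optical depth of Eq. 20 for the assumed wind profile rho_wind = A / r^2."""
    integral, _ = quad(lambda r: A / r**2, R_p, np.inf)
    return f_KN * kappa_T * integral

# tau = 2/3 solved analytically for this profile: R_P = 1.5 * f_KN * kappa_T * A
R_P = 1.5 * f_KN * kappa_T * A
print(f"R_P = {R_P:.2e} cm")          # ~1.5e11 cm, matching the text
print(f"tau(R_P) = {tau(R_P):.3f}")   # ~0.667
```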
### Observations
In the scenario where lamina jets are sources of long gamma-ray bursts (lGRBs), their observational implications are immediately realized due to simple geometric arguments (Thompson, 2005; Granot, 2005; DuPont et al., 2023). Once the jet slows down, it will spread in a single dimension, leading to a shallower break in the afterglow light curves that makes it clearly distinguishable from a classical GRB jet configuration. The characteristic afterglows in the low and high frequency bands for such rings were computed by DuPont et al. (2023) and show that, over both frequency ranges, the ring-like blast waves are intermediate between jet-like and spherical outflows. With the afterglow decay distributions being so varied (e.g., Panaitescu, 2005), we believe ring-like explosions are also candidate outflows in this arena. Note that in this work we took the extreme case of a split-monopole axisymmetric engine to achieve such relativistic outflows. A more modest dipolar engine might refocus the blade into bipolar bubbles (e.g., Bucciantini et al., 2007) if the magnetic hoop stress dominates once the magneto-centrifugal wind reaches the termination shock. Another outcome could be that if the engine power still dominates over the magnetic hoop stress, the equatorial wind might produce slower material at breakout due to having a smaller energy dissipation rate, but this likely broadens the types of transients created by the blast wave. In principle, Equations 1 - 7 are scale free, so rescaling opens avenues for providing explanations for transients like super-luminous supernovae, X-ray bursts, fast blue optical transients (FBOTs), and low-luminosity GRBs, to name a few.
## 5 Summary
We have demonstrated that an equatorial engine can produce an ultra-relativistic breakout focused into a very thin lamina structure. The fastest core is focused into an even thinner working surface due to the path ahead of the engine being evacuated so efficiently. Kelvin-Helmholtz instabilities develop deep in the stellar mantle, and the cocoon-jet interface experiences rarefaction waves and/or shocks that lead to collimation shocks as usually seen for classical jets. However, the "knots" caused by the collimation shocks along the beam do not form as they do for classical jets. Because of its geometry, the lamina outflow sweeps up more mass than classical jets by a factor \(\sim\theta^{-1}\), so to achieve similar ultra-relativistic breakouts, their engines must be highly focused at the outset. In the future, we plan to address this same problem, but in three dimensions to capture any instabilities such as corrugated (accordion-like) waves that might arise in the \(\phi\) direction due to causality effects or non-uniform circumstellar environments. Furthermore, the lamina must also be evolved over many decades in distance to understand the late-time geometry of the explosion once the relativistic beam slows down fully, so as to capture distinctive features in the observational signatures of these types of outflows. We also suggest that a detailed resolution study of relativistic jet breakout is needed to further pin down the limitations imposed by these compact GRB progenitors and the seeding of turbulence.
M.D. thanks Eli Waxman, Andrei Gruzinov, and Brian Metzger for helpful discussions, a James Arthur Fellowship from NYU's Center for Cosmology and Particle Physics, and the LSSTC Data Science Fellowship Program, which is funded by LSSTC, NSF Cyber-training Grant #1829740, the Brinson Foundation, and the Moore Foundation. We acknowledge support from NASA ATP grant 80NSSC22K0822.
|
2309.05154 | The functional generalization of the Boltzmann-Vlasov equation and its
Gauge-like symmetry | We argue that one can model deviations from the ensemble average in
non-equilibrium statistical mechanics by promoting the Boltzmann equation to an
equation in terms of {\em functionals} , representing possible candidates for
phase space distributions inferred from a finite observed number of degrees of
freedom.
We find that, provided the collision term and the Vlasov drift term are both
included, a gauge-like redundancy arises which does not go away even if the
functional is narrow.
We argue that this effect is linked to the gauge-like symmetry found in
relativistic hydrodynamics \cite{bdnk} and that it could be part of the
explanation for the apparent fluid-like behavior in small systems in hadronic
collisions and other strongly-coupled small systems\cite{zajc}.
When causality is omitted this problem can be look at via random matrix
theory show, and we show that in such a case thermalization happens much more
quickly than the Boltzmann equation would infer. We also sketch an algorithm to
study this problem numerically | Giorgio Torrieri | 2023-09-10T22:07:47Z | http://arxiv.org/abs/2309.05154v2 | # The functional generalization of the Boltzmann-Vlasov equation and its Gauge-like symmetry
###### Abstract
We argue that one can model deviations from the ensemble average in non-equilibrium statistical mechanics by promoting the Boltzmann equation to an equation in terms of _functionals_, representing possible candidates for phase space distributions inferred from a finite observed number of degrees of freedom.
We find that, provided the collision term and the Vlasov drift term are both included, a gauge-like redundancy arises which does not go away even if the functional is narrow. We argue that this effect is linked to the gauge-like symmetry found in relativistic hydrodynamics [1] and that it could be part of the explanation for the apparent fluid-like behavior in small systems in hadronic collisions and other strongly-coupled small systems[2].
When causality is omitted, this problem can be looked at via random matrix theory, and we show that in such a case thermalization happens much more quickly than the Boltzmann equation would infer. We also sketch an algorithm to study this problem numerically.
## I Introduction
The problem of apparent hydrodynamic behavior of small systems [2] is one of the most, if not the most important conceptual problem thrown at us by heavy ion collisions. Experimental data [3; 4] seems to suggest that "collectivity" (precisely defined as the number of particles present in correlations relevant for anisotropic flow) is remarkably insensitive to the size of the system produced in hadronic collisions, down to proton-proton and \(\gamma-nucleus\) collisions with 20 final state particles.
Most of the theoretical response to this has been centered around the concept of "hydrodynamic attractors"/hydrodynamization [5; 6], based on the idea of taking a "microscopic" theory (usually the Boltzmann equation, a theory with a gravity dual or classical Yang-Mills) in a highly symmetric (lower dimensional) setup and showing that hydrodynamic behavior occurs for gradients much higher than those "naively expected". The basic issue is that the main puzzle of the onset of hydrodynamics in such small systems is not the size of the gradients, but rather the small number of degrees of freedom [7; 8], which generate fluctuations in every cell even if the mean free path were zero [9]. Yet no indication exists that if we somehow tightly selected for initial geometry, we would not have extra uncertainties due to dynamics, the sign of "perfect" hydrodynamics [10]. In this regime most microscopic theories based on large \(N\) approximations (the Boltzmann equation with its molecular chaos, AdS/CFT in the planar limit, classical Yang-Mills theories with large occupation numbers, even Kubo formulae and Schwinger-Keldysh approaches requiring asymptotic limits for soft modes [11; 12; 13; 14]) become suspect.
Here one must remember that a universal hydrodynamic-like behavior in small systems has been noted in a much larger set of circumstances than the debate around heavy ion collisions usually includes: Cold atoms seem to have achieved the onset of hydrodynamics with comparatively few particles [7; 8]. It has long been known that galaxies behave "as a fluid", even though the assumptions related to transport are highly suspect [15]. Even in everyday physics, phenomena such as the "Brazil nut effect" [16] point to a universality of the hydrodynamic description even in systems with few particles, provided they are strongly correlated. This apparent universality is cited by mathematicians such as [17] to study the multi-particle problem in depth.
Recently it was argued [1; 18] that a way to approach this conundrum is to think in a
Gibbsian rather than a Boltzmannian way (in this regime the two are not equivalent [19; 20]): Since only the energy-momentum tensor and conserved current components are measurable, fluctuations bring with them a redundancy of _hydrodynamic descriptions_, each with its flow vector and a Bayesian probability that this is the "true flow vector" and that any anisotropy is due to a fluctuation. If the system is strongly coupled enough for the fluctuation-dissipation theorem to apply locally, each of these descriptions is as good as the others as long as the total energy-momentum is the same. It is not surprising therefore that as fluctuations become larger the probability of a good description near the ideal hydrodynamic limit could actually grow, or at least it does not go down [1]. In a sense this picture is the inverse of that of an attractor.
For now such a picture is still abstract and qualitative. In this work, we would like to make a link to microscopic theory, via a generalization of the Boltzmann-Vlasov equation which goes in the "Gibbsian sense" outlined earlier. The basic idea is that when the number of degrees of freedom is small, the phase space distribution \(f(x,p)\) will not be known but must be inferred by some kind of Bayesian reasoning. This is admittedly a very heuristic approach, and some further arguments motivating it and placing it within the more conventional transport theory have been left to the appendix.
The mathematical consequences of this are however clear: The fact that one does not know \(f(x,p)\) beyond a few data points can be represented by considering a _functional_ representing the probability of \(f(x,p)\) being what it is. In this case, the integrals corresponding to the Boltzmann collision operator and the Vlasov potential operator will not be two copies of \(f(x,p)\) but two different functions \(f(x,p)\) and \(f^{\prime}(x,p)\), the latter integrated over.
Let us start from the Boltzmann-Vlasov equation Eq. 1
\[\frac{p^{\mu}}{\Lambda}\frac{\partial}{\partial x^{\mu}}f(x,p)=C[f]-gF^{\mu} \frac{\partial}{\partial p^{\mu}}f(x,p) \tag{1}\]
where \(\Lambda\) is a generic IR momentum scale, which can be the particle mass, the virtuality, the Debye screening length and so on. We note that Eq. 1 is written in an unusual way: besides introducing the generic virtuality scale \(\Lambda\), we have grouped the Vlasov term with the collision term on the right-hand side. \(F^{\mu}\) is the four-force.
This way of writing, however, is physically justified, as we will show. The Vlasov term is of the same order of magnitude in the coupling constant as the Boltzmann term, and is thought to dominate for long-range correlations where, due to Bose-enhancement, the
occupation numbers of bosons are high requiring a semi-classical field. It is thought that instabilities due to thermal fluctuations and Debye screening make this term obsolete but, for "small" but highly correlated systems, there is no justification for this.
More formally, the Boltzmann equation is known to be a good approximation of a quantum field theory evolution in the "ensemble average". The wave functional of the quantum field \(\Psi\) converges to a well-defined distribution
\[\mathcal{F}(f(x,p))=|\Psi|^{2}(f^{\prime})\simeq\delta\left(f^{\prime}-f(x,p)\right) \tag{2}\]
One can relax this assumption using Wigner functionals [21], defined over field configurations \(f_{1,2}(x)\) in configuration space
\[W\left(f_{1}(x),f_{2}(x)\right)=\int\mathcal{D}\phi(x)\exp\left[-if_{2}(x) \phi(x)\right]\left\langle f_{1}(x)+\frac{1}{2}\phi(x)\,\right|\,\,\hat{\rho} \,\,\left|\,f_{1}(x)-\frac{1}{2}\phi(x)\right\rangle \tag{3}\]
where \(\rho\) is the density matrix, defined via the partition function (see [22]). This expression is exact at the quantum level, and hence its momentum equivalent is a straightforward infinite-dimensional Fourier transform with \(\tilde{f}_{1,2}(p)\). It also contains every possible correlation of the BBGKY hierarchy, encoded, in configuration space, in "bunchings" between \(f_{1}(x)\) and \(f_{2}(x)\) as explained in the appendix (\(f(x_{1},x_{2},...,x_{n})\) would be related to the \(n\)-th cumulant of the functional).
Analogously to how \(f\) is the \(\mathcal{O}\left(h^{0}\right)\) limit of \(W\)[23], one can imagine that the Boltzmann functional is the corresponding limit of the Wigner functional in [21], a decohered system with an undefined probability density. More formally, the regime where the ansatze presented in this work are valid is discussed in the appendix.
In this regime, when Eq. 2 is relaxed, Eq. 1 would become something like
\[\frac{p^{\mu}}{\Lambda}\frac{\partial}{\partial x^{\mu}}f_{1}(x,p)=\left\langle \mathcal{C}[f_{1},f_{2}]\right\rangle_{f_{2}}-g\left\langle\mathcal{V}^{\mu}[ f_{1},f_{2}]\right\rangle_{f_{2}}\frac{\partial}{\partial p^{\mu}}f_{1} \tag{4}\]
where
\[\left\langle O\right\rangle\equiv\int\mathcal{D}f_{2}O(f_{1},f_{2})W(f_{1},f_{ 2}) \tag{5}\]
with \(\mathcal{C},\mathcal{V}^{\mu}\) being the generalizations to functional averages of the Boltzmann collision and Vlasov operators, respectively. Note that, following [17], one can consider the former as the UV completion of the latter: scattering is the continuation "within the coarse-grained cell" of the Vlasov evolution, which becomes increasingly unstable at smaller scales1. As "infinitely unstable infinitely local" interactions degenerate into random scattering, they are taken care of by the Boltzmann collision term, while long-range correlations are taken care of by the Vlasov drift term. Thus one expects, when one coarse grains, each term to be scale dependent but not the difference.
Footnote 1: [17] contrasts this instability with KAM's theorem, which on the contrary implies the existence of a Hamiltonian \(H_{0}+\epsilon H_{I}\), where \(H_{0},H_{I}\) are respectively integrable and non-integrable Hamiltonians, for which integrability is not broken. However this \(\epsilon\) generally scales as \(\mathcal{O}\left(e^{-N}\right)\), and making the transition to a probability density function, \(x_{i},p_{i}\to f(x,p)\), requires \(i\to\infty\), which nullifies the lower KAM limit. This is a heuristic explanation as to why Vlasov-type equations are always unstable at all scales.
To go further, we have to make some ansatze. We will assume any phases between degrees of freedom oscillate "fast" w.r.t. any time-scale, so position and momentum decouple into a classical probability in both position and momentum space. We will also assume, taking inspiration from field theory [25], a Gaussian ansatz for this probability. So
\[W(f_{1}(x),f_{2}(x))\simeq\rho[f(x,p),f^{\prime}(x,p)]=\frac{1}{\mathcal{Z}} \exp\left[-\frac{D[f,f^{\prime}]}{2\sigma_{f}^{2}}\right] \tag{6}\]
where the obvious choice of a distance measure is
\[D[f,f^{\prime}]=\int d^{3}xd^{3}p\left(f(x,p)-f^{\prime}(x,p)\right)^{2} \tag{7}\]
with the Boltzmann-Vlasov equation recovered for \(\sigma_{f}\to 0\). The law of large numbers makes it clear that \(\sigma_{f}\sim\sqrt{N_{DoF}}\), the square root of the number of degrees of freedom, so one is certainly away from the ensemble average limit for the "small" fluids seen in hadronic collisions and ultra-cold atoms. The terms on the right hand side of Eq. 4 converge to
\[\mathcal{C}[f_{1},f_{2}]=\int d^{3}\left[k_{1,2,3}\right]\sigma_{scattering} \left(\quad f_{1}(x,p)f_{2}(x,k_{1})-f_{2}(x,k_{2})f_{1}(x,k_{3})\right) \tag{8}\]
while the Vlasov operator \(\mathcal{V}^{\mu}\) is given by
\[\mathcal{V}^{\mu}[f_{1},f_{2}]=\int dx_{1,2}F^{\mu}(x_{1}-x_{2})\Theta((x_{1} -x_{2})^{2})f_{2}f_{1} \tag{9}\]
where \(|M|^{2}\) is the scattering matrix element and \(F^{\mu}\) the force field, augmented by a \(\Theta\) function enforcing causality. Note that
* The integral in \(\mathcal{C}\) is in momentum space while in \(\mathcal{V}^{\mu}\) it is in position space. \(f_{1,2}(x_{1,2},p_{1,2})\) are of course defined in both, but the gradients expected in any expansion will be, respectively, in position and momentum.
* For a consistent coarse-graining the scattering cross-section matrix elements and the long distance semi-classical potential are strictly related, as forces are related to scattering via potentials. For scalar particles \[F^{\mu}(x)=\partial^{\mu}V(x)\ \ \ \,\ \ \ \ \ \sigma_{scattering}\sim|M(k)|^{2}d\Omega(k)\ \ \ \,\ \ \ \ \ M(k)=\int d^{3}xe^{ikx}V(x)\] with the appropriate extension for vector potentials. Thus in general both terms are present.
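To make the functional average of Eq. 5 concrete, the sketch below evaluates it by Monte Carlo on a coarse \((x,p)\) grid: samples \(f^{\prime}\) are drawn around a mean \(f\) with the Gaussian weight of Eq. 6, and a toy bilinear form (a stand-in for a single gain-minus-loss contribution to \(\mathcal{C}\), not the full operator of Eq. 8) is averaged over them. The grid sizes, \(\sigma_{f}\), and the reference \(f\) are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
NX, NP, NSAMPLES = 16, 16, 2000
sigma_f = 0.1                                    # functional width of Eq. 6 (illustrative)

x = np.linspace(-1.0, 1.0, NX)
p = np.linspace(-3.0, 3.0, NP)
f_mean = np.exp(-x**2)[:, None] * np.exp(-0.5 * p**2)[None, :]   # smooth reference f(x,p)

def toy_bilinear(f, f_prime, i=NX // 2, j=NP // 2):
    # gain-minus-loss stand-in at one cell: sum_k [ f(x_i,p_j) f'(x_i,k) - f(x_i,k) f'(x_i,p_j) ]
    return np.sum(f[i, j] * f_prime[i, :] - f[i, :] * f_prime[i, j])

# Monte Carlo realization of <O>_{f'} (Eq. 5) with independent Gaussian cell fluctuations (Eq. 6):
samples = f_mean[None, :, :] + sigma_f * rng.normal(size=(NSAMPLES, NX, NP))
values = np.array([toy_bilinear(f_mean, fp) for fp in samples])
print("Monte Carlo functional average:", values.mean(),
      "+/-", values.std() / np.sqrt(NSAMPLES))
```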
The point here is that for finite functional width \(\sigma_{f}\) in equation 6, even away from the Gaussian parametrization, there arises a hidden "gauge" symmetry within the RHS of Eq. 1. Consider all possible transformations such that
\[f(x,p)\to f^{\prime}(x,p)\ \ \ \,\ \ \ \ \ \underbrace{\hat{C}\left(f(x,p),f^{ \prime}(x,p)\right)}_{lim_{f\to f^{\prime}}\sim\partial f/\partial x}= \underbrace{\hat{\mathcal{V}}^{\mu}\left(f(x,p),f^{\prime}(x,p)\right)\frac{ \partial f}{\partial p_{\mu}}}_{lim_{f\to f^{\prime}}\sim\partial f/ \partial p} \tag{11}\]
In the ensemble average, Eq. 11 has no physical meaning because \(f(x,p)\to f(x^{\prime},p^{\prime})\) can only be a shift in phase space, not a shift _in functions_. Away from a full ensemble average, in the Gibbsian picture, \(f(x,p)\) is itself _unknown_, and only estimated via a coarse-graining. Hence the RHS of Eq. 4 can be dominated by redundancies so as to become a qualitatively _very_ different equation from Eq. 1, even for small \(\sigma_{f}\), i.e. narrow distributions in functional space. In other words, if only the Boltzmann term or the Vlasov term is present, one assumes that as the Boltzmann functional converges to a \(\delta\)-functional, i.e. a function, the equation of motion for it converges to a Boltzmann or Vlasov equation. But if both terms are included, the redundancy in the difference spoils this convergence.
Physically, the manifestation of this is that the RHS of Eq. 1 vanishes in just two cases, free streaming and ideal hydrodynamics. However, for Eq. 4 there is a wealth of situations, parametrized by Eq. 11, where the system flow looks isentropic. In other words, there will be many configurations where the system will look like an ideal fluid, along the lines of [1]. This is shown schematically in Fig. 1. What happens is that close to the local equilibrium limit we do not know if the volume cell is being moved by _microscopic pressure_ (described by a Boltzmann type equation) or by a _macroscopic force_ (described by a Vlasov term). The set of configurations where a pressure gradient is exchanged for a force corresponds exactly to the set of configurations where \(\mathcal{V}[f]\) does not change. In a Gibbsian picture, therefore,
all such \(f\) need to be counted in the entropy which generally results in differences w.r.t. the Boltzmannian entropy [20].
At the classical level, it has long been known [17] that the Boltzmann term can function as a "counter-term" cutting off the effect of short-range instabilities of the Vlasov term. At the quantum level this picture is indeed confirmed by the fact that the Boltzmann term describes "microscopic" and the Vlasov term "macroscopic" DoFs. The set of transformations leaving the RHS of eq. 4 invariant can be thought of as defining a "Wilsonian" flow across the space of \(f(...)\) which are part of the same Gibbsian ensemble.
Quantitatively, these redundancies should ensure that the system is indistinguishable from local equilibrium in a much wider array of circumstances than a purely Boltzmann description would suggest. As there is no small parameter in the functional expansion around the average \(f(x,p)\), an analytical quantification of this statement is non-trivial. However, the universality of random matrix theory could provide a quantitative validation of this point.
Figure 1: A representation of how, when both the Boltzmann and Vlasov terms are considered, the limit of the Boltzmann functional converging to a function admits a continuum of minima where Eq. 4 is indistinguishable from an ideal fluid dynamic equation.
## II A non-relativistic insight from random matrices
Let us discretize the system (using \(i\) for position and \(j\) for momentum variables) and use random matrix theory, \(f(x,p)\to f_{ij}(x_{i},p_{j})\), \(\mathcal{C}\rightarrow\mathcal{C}_{j_{1},j_{2}}\), \(\mathcal{V}\rightarrow\mathcal{V}_{i_{1},i_{2}}\). Of course we have neglected causality (the \(\Theta\)-term in Eq. 9), but this is a rough qualitative estimate, perhaps relevant for cold atom measurements such as [7; 8].
Equation 4 becomes of the form
\[\dot{f_{ij}}-\left[\frac{\vec{p}}{m}.\Delta_{i}^{\mu}\right]f_{ij}=\int d\left[ f_{i_{1}j_{1}}^{\prime}\right]\left[\mathcal{W}_{i_{1}j_{1}ij}\left(\mathcal{C}_{ jj_{1}}\left(f_{ij}f_{i_{1}j_{1}}^{\prime}-f_{ij_{1}}f_{i_{1}j}^{\prime} \right)-\mathcal{V}_{ii_{1}}f_{ij}f_{i_{1}j_{1}}^{\prime}\frac{\partial f}{ \partial p}\right)\right] \tag{12}\]
\(\mathcal{W}_{i_{1}j_{1}ij}\) is a discretized version of Eq. 6 (in agreement with the definition in Eq. 5). Double summation is used in the \((...)\) bracket but \(\mathcal{W}_{...}\) is multiplied separately. As we describe in detail below, the RHS can be thought of as a Gaussian random matrix ensemble
* \(\mathcal{V},\mathcal{C}\) are deterministic matrices of \(i,j\). Hence, one can do a change of variables \[d\left[f_{i_{1}j_{1}}^{\prime}\right]\rightarrow\left\{\begin{array}{l} \mathcal{C}^{-1}\\ \mathcal{V}^{-1}\end{array}\right\}d\left[f_{i_{1},j_{1}}^{\prime}\right]\] (13) This results in a series of Gaussian ensembles, with a transformed \(\mathcal{W}\) as the weight, equivalent to a previous one up to a normalization factor
* \(\left\{f_{i_{1}j_{2}}f_{i_{2}j_{2}}^{\prime}\mathcal{V}_{i_{1}i_{2}}\right\} \propto\langle x-x^{\prime}\rangle^{-2}\) (contracted with \(\Delta f/\Delta p^{\mu}\)), is an antisymmetric ensemble in \(i_{1,2}\). \(j_{1,2}\) is traced over in a normalization factor
* \(\mathcal{C}_{j_{1}j_{2}}\left(f_{i_{1}j_{1}}f_{i_{2}j_{2}}^{\prime}-f_{i_{1}j_{2}}f_{i_{2}j_{1}}^{\prime}\right)\) is also antisymmetric in \(j_{1}j_{2}\); \(i_{1,2}\) are traced over in a normalization factor.
* This is however a deformed ensemble, since the average \(\langle f_{ij}\rangle\) is non-zero.
* In the \(\sigma_{f}\to 0\) limit of Eq. 6 one expects \(\langle f_{ij}\rangle\) to reflect the general Boltzmann-Vlasov estimate. More generally, we note that \(\mathcal{C}_{j_{1}j_{2}}\) conserves momentum and \(\mathcal{V}_{i_{1}i_{2}}\) respects Lorentz invariance, so momentum conservation on average can be implemented via Lagrange multipliers. Thus one expects that \(\langle f_{ij}\rangle\) away from \(\sigma_{f}\to 0\) will be of the form \[\langle f_{ij}\rangle\propto\exp\left[-d\Sigma_{\mu}(x_{i})\beta_{\nu}(x_{i}) \left(\langle p_{j}^{\mu}p_{j}^{\nu}\rangle-\langle T^{\mu\nu}\rangle\left(x_ {i}\right)\right)\right]\] (14) in line with the gauge-like expectations from [1].
This problem is the combination of ensembles studied for many decades [30; 31; 32], but an elegant solution was given in [33], where it was shown that the distribution is that of the Wigner semi-circle plus outliers.
\[\rho(\lambda)=\rho_{0}(\lambda)+\frac{1}{N}\sum_{k,\lambda_{k}^{*}>J}\left( \delta(\lambda-\mu_{k})+\delta(\lambda+\mu_{k})\right)\ \ \ \ \,\ \ \ \ \ \rho_{0}(\lambda)=\frac{1}{\pi J}\sqrt{1-\frac{ \lambda^{2}}{4J^{2}}} \tag{15}\]
The RHS of Eq. 4 will then be the difference between two such "shifted" Gaussian ensembles,
\[\dot{f}(x,p,t)-\frac{\vec{p}}{m}.\nabla f(x,p,t)=N_{p}(t)F(J_{p}[f(x,p,t)])-N_{x}(t)\frac{\partial f}{\partial p}F(J_{x}[f(x,p,t)]) \tag{16}\]
where \(N_{p,x}\) are extra normalizations (from Eq. 13 ) and
\[J_{x,p}[f]\sim\left\{\begin{array}{c}\langle x\rangle\\ \langle p\rangle\end{array}\right.\ \ \,\ \ \ \ F(J)=J\int\rho_{0}(x)e^{-x^{2}}dx+ \sum\exp\left[-\mu_{k}^{2}[J]\right]\]
Thus, neglecting the "sparse" exponential terms, the evolution will be driven by a difference between two Bessel-function type terms, where \(N_{x,p}\) and \(J_{x,p}\) will bring the system to relaxation where the RHS is negligible and the system flows as an ideal fluid (see appendix section A.1). This is much faster than the relaxation time of the Boltzmann equation, where the corresponding equation to Eq. 16 is
\[\dot{f}(x,p,t)-\frac{\vec{p}}{m}.\nabla f(x,p,t)=\frac{f_{0}(x,p,t)-f(x,p,t)}{ \tau_{0}}\ \ \ \ \,\ \ \ \ \dot{f}_{0}(x,p,t)=\frac{\vec{p}}{m}.\nabla f_{0}(x,p,t)\]
Of course, the model presented here is highly acausal. Including causality in the Vlasov potential would add a non-trivial correlation to the random matrix which we do not at the moment know how to perform analytically.
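The statement of Eq. 15 is easy to verify numerically with a generic deformed ensemble: a GOE matrix whose semicircle edge sits at \(2J\), plus a rank-one shift of strength \(\theta\) playing the role of the non-zero mean \(\langle f_{ij}\rangle\). The sketch below is a textbook illustration of the semicircle-plus-outlier structure of [33], not the specific matrices of Eq. 12; the values of \(N\), \(J\) and \(\theta\) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N, J, theta = 2000, 1.0, 2.5                 # theta > J, so a single outlier is expected

A = rng.normal(scale=J / np.sqrt(N), size=(N, N))
H = (A + A.T) / np.sqrt(2.0)                 # GOE bulk, semicircle support [-2J, 2J]

v = np.ones(N) / np.sqrt(N)                  # rank-one deformation ("shifted" ensemble)
eig = np.linalg.eigvalsh(H + theta * np.outer(v, v))

print("semicircle edge 2J :", 2 * J)
print("largest eigenvalue :", eig[-1])       # outlier, ~ theta + J**2/theta = 2.9
print("second largest     :", eig[-2])       # remains near the bulk edge
```

For \(\theta\leq J\) the largest eigenvalue instead sticks to the bulk edge, which is the sense in which the deformation only produces outliers once the mean is large enough.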
## III A numerical algorithm
The discretized Eigenvalue analysis in [27] (but also [28]) allows us to estimate Eq. 4 by initializing it close to a fluctuating equilibrium and checking at every step whether something like "an ideal hydrodynamic evolution" is maintained on average, even at gradients of \(\beta_{\mu}\) where a typical Boltzmann configuration would be far away from the hydrodynamic regime. The algorithm is summarized as follows (a schematic sketch in code is given after the list)
**(i):** Start with an average \(\langle T_{\mu\nu}\rangle\). Create an ensemble \(\{f\}\) of configurations in every cell \(x_{i},p_{i}\)
\[P[f(x_{i},p_{i})]=\exp\left[-d\Sigma^{\mu}\beta^{\nu}(x_{i})\left(p_{\mu}p_{ \nu}-\langle T_{\mu\nu}\rangle\left(x_{i}\right)\right)\right] \tag{17}\]
An immediate issue is the choice of \(d\Sigma_{\mu}\), the foliation. Since we are simulating using a square grid around a hydrostatic limit [27], consistency requires \(d\Sigma_{\mu}=dV\left(1,\vec{0}\right)\). One might need to check that the gauge-like symmetries with respect to a reparametrization of \(\Sigma_{\mu}\) of [1; 18] will emerge. \(\beta_{\mu}\) is given by the Landau condition \(\beta_{\mu}T^{\mu\nu}\propto\beta^{\nu}\)
**(ii):** Expand \(\left\{f\right\}\) in Eigenvalues and Eigenvectors, according to [27]. Note that \(p_{\mu}p_{\nu}\) is a symmetric real \(4\times 4\) matrix, so the Eigenvalues and Eigenvectors might have something to do with a Wigner-Dyson distribution [29]; perhaps something analytical can be done.
**(iii):** Evolve each \(\left\{f\right\}\rightarrow\left\{f\right\}^{\prime}\) according to the Boltzmann equation, using the Eigenvalue analysis of [27]. This time a Vlasov operator respecting causality can be constructed via Eq. 9.
**(iv):** Construct \(\left\langle T_{\mu\nu}\right\rangle\left(x_{i}\right)=\left\langle p_{\mu}p_{\nu}\right\rangle\left(x_{i}\right)\) and \(\beta_{\mu}(x_{i})\) from \(\left\{f^{\prime}\right\}\) and return to step (i)
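The following is the schematic sketch referred to above: a drastically simplified, non-relativistic rendering of the loop (i)-(iv), one-dimensional in both \(x\) and \(p\). The Boltzmann-Vlasov update and the Landau matching are replaced by toy stand-ins (crude advection plus relaxation, and an effective \(\beta(x)\) read off the ensemble-averaged energy density); the grid sizes, time step, relaxation time and \(\sigma_{f}\) are arbitrary choices, and none of the eigenvalue machinery of [27] is implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
NX, NP, NSAMPLES, STEPS = 32, 32, 64, 10
p = np.linspace(-3.0, 3.0, NP)
DT, TAU, SIGMA_F = 0.05, 0.5, 0.05

def thermal_f(beta):
    # step (i): mean phase-space density ~ exp(-beta(x) p^2/2) in every cell (toy, non-relativistic)
    return np.exp(-np.outer(beta, 0.5 * p**2))

def sample_ensemble(f_mean):
    # discretized Gaussian functional of Eq. 6: independent fluctuations per cell
    return f_mean[None, :, :] + SIGMA_F * rng.normal(size=(NSAMPLES, NX, NP))

def evolve(ensemble):
    # step (iii) stand-in: crude advection plus relaxation toward a local thermal shape
    out = []
    for f in ensemble:
        stream = np.roll(f, 1, axis=0)
        local_eq = f.mean(axis=1, keepdims=True) * np.exp(-0.5 * p**2)[None, :]
        out.append(f + DT * ((stream - f) + (local_eq - f) / TAU))
    return np.array(out)

def coarse_grain(ensemble):
    # step (iv) stand-in: ensemble-averaged "energy density" -> effective beta(x)
    e = np.einsum('sxp,p->x', ensemble, 0.5 * p**2) / NSAMPLES
    return 1.0 / np.maximum(e, 1e-6)

beta = np.ones(NX)
for _ in range(STEPS):
    ens = sample_ensemble(thermal_f(beta))   # steps (i)-(ii)
    ens = evolve(ens)                        # step (iii)
    beta = coarse_grain(ens)                 # step (iv), then loop back to (i)
print("beta(x) range after", STEPS, "steps:", float(beta.min()), float(beta.max()))
```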
If the picture argued for in this work is correct then, while perhaps the typical f in the ensemble at each step is far from the equilibrium value, fluctuations within the \(\left\{f\right\}\) will smear out non-hydrodynamic effects and the evolution of \(\beta_{\mu}(x_{i})\) will follow the hydrodynamic description on average. Causality means this model would be different from the random matrix ansatz discussed in the previous section (the matrices would be "locally random" within a causal diamond) so a comparison would be interesting.
## IV Discussion
This has been a very speculative exercise. At the moment, we do not have a way to check quantitatively if a functional Boltzmann equation approach will give the desired result, an approach to local equilibrium which
* Is significantly faster than that of the Boltzmann equation
* Does not increase as the number of degrees of freedom goes down
At best, a "Galilean" model (instant signal propagation) can be looked at from a random matrix perspective, and universality seems to show that indeed this scenario is plausible. Nevertheless, heuristically when the number of degrees of freedom is small ensemble average notions such as "phase space distribution function" are obviously inadequate and must
be generalized, and a functional approach, with discretization, might be the best way to achieve this. Meanwhile, experimental tests of collectivity in smaller and smaller systems, both cold atoms [7; 8] and heavy ion collisions [3; 4], will tell us if a theoretical justification of hydrodynamics with small systems is worth pursuing.
GT thanks CNPQ bolsa de produtividade 306152/2020-7, bolsa FAPESP 2021/01700-2 and participation in tematico FAPESP, 2017/05685-2. We thank Igor Gorniy and Leonid Pastur for providing references and answering my newbie questions about random matrices, Peter Arnold for explaining some subtleties of thermalization in quantum field theory and Leonardo Tinti and Stanislaw Mrowczynski for helpful discussions. The initial part of this work was done when I was in Kielce under NAWA grant BPN/ULM/2021/1/00039
## Appendix A Viability of the Boltzmann-Gaussian functional ansatz: some further comments
### Free streaming, perfect hydrodynamics and ensemble averaging
Let us look at a well-known transport theory "paradox". Let us take eq 1 in the free streaming collisionless case, adding a mass for the result to have a good classical limit
\[\frac{p^{\mu}}{m}\partial_{\mu}f(x,p)=0 \tag{10}\]
Physically, an obvious solution corresponds to the Galilean motion of particles at constant velocity that have been released
\[f(x,p)=f\left(x_{0}+\frac{p}{m}t,p\right) \tag{11}\]
However, what is counter-intuitive is that this is not the only solution. Consider the case where the particles are in a thermal distribution according to some field \(\beta_{\mu}(x)\). In this case, it is trivial to check that
\[f(x,p)\sim\exp\left[-\beta_{\mu}p^{\mu}\right]\ \ \ \,\ \ \ \ \partial_{\mu}\beta_{\nu}+ \partial_{\nu}\beta_{\mu}=0 \tag{12}\]
also solves Eq. 10. In particular, an irrotational vortex \(\vec{v}\sim\frac{\Gamma}{2\pi r}\hat{\theta}\) will correspond to a gas rotating forever.
Mathematically, this is understandable: The right hand side of the transport equation vanishes for both free streaming and ideal hydrodynamic limits, and in the latter the flow
vector is the Killing vector. But physically, on the surface this makes no sense! How can a gas of non-interacting particles just rotate? There is no force keeping them rotating. More generally, the condition in Eq. 12 is that of a Killing vector, in line with the idea that flow is a Killing vector of ideal hydrodynamics [24; 26]. But once again, these are non-interacting particles: Pressure gradients do not correspond to any force on neighboring volume elements since particles just propagate freely. Very clearly no system of non-interacting particles, when freely released, will start "flowing".
This paradox is resolved by remembering that \(f(x,p)\) is defined in an ensemble average limit where the number of particles is not just "large" but _uncountable_. Just like the limit of many straight segments is curved, once an infinite number of trajectories are summed over, the _maximum density_ of trajectories can be a curve even if the trajectories are straight. This means that if we divide \(f(x,p)\) in any number of "physical" sub-events each with a finite number of particles, _none_ of the sub-events will look like Eq. 12, but each will look like some version of Eq. 11. However, the number of copies of each Eq. 11 _close to its neighborhood in phase space_ will have a curvature, so when this is summed over a smooth "Killing vector" emerges. This is illustrated in Fig. 2 [34]. In contrast, in the ideal hydrodynamic limit, even away from the pure ensemble averaging each microscopic particle will "flow" under the action of pressure gradients, and
Figure 2: A representation of how an uncountable number of physically sensible free-streaming configurations of finite trajectories can become a smooth but curved “fluid” when summed together as an ensemble average. In the ideal fluid dynamics limit, on the other hand, particle trajectories in each sub-event would follow the ensemble average
the probability that it flows differently goes to zero in the ensemble limit. Some put this as the real definition of hydrodynamics [10]: Initial conditions and conservation laws fix the final state _for individual particles_. Note that this is what seems to emerge from multi-particle cumulant analysis of experimental data [3].
What this suggests is that, analogously to the volume in phase transitions, the ensemble average limit is _non-analytic_. Being arbitrarily close to it does not necessarily give a qualitatively similar description w.r.t. it. In other words, the transport properties of a system of finite degrees of freedom need not be close to their Boltzmann equation results even if the number of degrees of freedom is large. It also suggests that away from the non-analytic limit stochasticity due to a limited number of degrees of freedom interplays with the Knudsen length scale in a highly non-trivial way: One can regard each sub-ensemble as a frequentist "world", random scattering as the interaction "between worlds" and Vlasov evolution as a semi-classical interaction "within a world". A functional picture, where "every world" corresponds to a probabilistic ensemble of phase space functions, might be the ideal way of dealing with this picture in a consistent manner.
### Transport in quantum mechanics and field theory
Let us review a little bit the current consensus of how transport theory fits in with quantum field theory [11; 12].
The current consensus is that in quark-gluon plasma physics the Vlasov terms are taken care of by resummation and screening. The idea is that such terms would be relevant at a "soft" scale \(k\sim gT\) (where \(g\) is the quantum field coupling constant), where fields are classical. Thus, in a manner somewhat analogous to the argument in [17], the soft modes are taken care of by the Vlasov equation while the hard modes are put in the collision kernel. For this to work within a quantum field theory perturbative expansion, one needs an intermediate scale, which can be the temperature or more generically the occupancy of soft states. We can then resum the soft modes
\[Vlasov\sim g^{2}\left\langle A^{2}\right\rangle\sim g^{2}\int\frac{d^{3}p}{E _{p}}f(E_{p},T)\ \ \ \,\ \ \ \ \ Boltzmann\sim g^{2}\left\langle A^{4}\right\rangle\sim f\times f \tag{10}\]
with all interactions between them counted as a correction to the propagator. The hard mode (\(k\sim T\)) interactions \(\sim g^{2}\left\langle A^{4}\right\rangle\) are then accommodated as a Boltzmann equation with
distributions and collision kernels calculated via such propagators [12]. In this regime, the main effect of the fields is Debye screening and if the mean free path is well above the Debye screening length the Vlasov terms become irrelevant.
There are two issues with this description when a finite number of degrees of freedom are excited and one is well away from the thermodynamic limit: The first is that the _ultra_-soft modes \(k\sim g^{2}T\) couple to the soft modes via plasma instabilities and cannot generally be treated perturbatively. If the boundary is fixed, for example by an asymptotic expansion around a hydrostatic state (as is done via Kubo/Schwinger-Keldysh formulae), this provides boundary conditions that render these ultra-soft modes irrelevant, but for small systems this is suspect.
The second is that while "to leading order" one can obtain a "resummed Boltzmann equation", it is not clear what the next-to-leading order is and how good the convergence of this series is [35]. The fact that to zeroth order in the collision term an infinite quantum thermal loop summation leads to a classical Vlasov equation [36] illustrates how careful we must be with any such expansion: Basically the expansions in correlations, in \(\hbar\) and in temperature do not commute. Moreover, there is an ambiguity in where the "field description", relevant for low momenta and high occupancy numbers, gives way to a "particle" description [37].
The functionals discussed in this paper are supposed to be an ansatz to solve the difficulties above, assuming these correlations are classical-probabilistic rather than quantum, and functionals parametrize our ignorance of the phase space distribution rather than quantum correlations. So the question is where would this ansatz lie in corrections of the coupling constant \(g\) and Planck scale \(\hbar\). The different regimes of many body theory are illustrated in
Figure 3: The domain of validity of the ansatz proposed here, in terms of fluid cell coarse graining and microscopic variables \(x_{i},p_{i}\)
Fig. 3. In practice, we hope to describe a system which is
**Strongly correlated:** so that the BBGKY hierarchy can not be used as an "expansion" but must be handled "non-perturbatively". This is the meaning of \(\alpha=\frac{\langle f^{n}(x,p)\rangle}{\langle f(x,p)\rangle^{n}}\gg 1\). Otherwise, one can try to truncate the BBGKY hierarchy and the limit of this is the Boltzmann equation. Vlasov terms together with functionals of \(f\) (including arbitrary \(n\)-point functions) will keep track of long-term correlations, while collision type terms will keep track of short-term ones
**Classical-probabilistic:** in the sense that statistical independence has to hold. In other words the \(CHSH\) inequality [38]
\[\langle x_{i},q_{j}\rangle-\langle x_{i},p_{j}\rangle+\langle p_{i},q_{j}\rangle+\langle p_{i},p_{j}\rangle\leq 2 \tag{10}\]
must hold for any pair of conjugate observables (position, momentum, spin for fluids with polarization) from any cells \(i,j\). This is required because the probabilities of any field configuration must be classical functionals rather than quantum operator averages. In this sense, \(\hbar\ll 1\) (or equivalently state occupancy \(\gg 1\)). Otherwise, phase space functions and functionals stop being classical objects. Note that the saturation of Eq. 10 might be achieved by decoherence with unseen degrees of freedom or by "Eigenstate thermalization" of a strongly coupled quantum evolution [39], something described in some detail in [18], and within effective field theory in [40].
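To illustrate what Eq. 10 demands, the toy simulation below assigns deterministic \(\pm 1\) responses to the two observables in cells \(i\) and \(j\), all driven by one shared classical "hidden" variable; any such classical-probabilistic assignment keeps the CHSH combination within 2. The particular response functions and angles are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = rng.uniform(0.0, 2.0 * np.pi, size=200_000)   # shared classical hidden variable

# deterministic +/-1 responses of the two observables in cell i and in cell j
x_i, p_i = np.sign(np.cos(lam)), np.sign(np.cos(lam - 1.0))
q_j, p_j = np.sign(np.cos(lam - 0.2)), np.sign(np.cos(lam - 0.9))

def E(a, b):
    return float(np.mean(a * b))

S = E(x_i, q_j) - E(x_i, p_j) + E(p_i, q_j) + E(p_i, p_j)
print("CHSH combination S =", round(S, 3), "(classical statistics give |S| <= 2)")
```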
|
2310.20274 | Extracting Entities of Interest from Comparative Product Reviews | This paper presents a deep learning based approach to extract product
comparison information out of user reviews on various e-commerce websites. Any
comparative product review has three major entities of information: the names
of the products being compared, the user opinion (predicate) and the feature or
aspect under comparison. All these informing entities are dependent on each
other and bound by the rules of the language, in the review. We observe that
their inter-dependencies can be captured well using LSTMs. We evaluate our
system on existing manually labeled datasets and observe out-performance over
the existing Semantic Role Labeling (SRL) framework popular for this task. | Jatin Arora, Sumit Agrawal, Pawan Goyal, Sayan Pathak | 2023-10-31T08:43:11Z | http://arxiv.org/abs/2310.20274v1 | # Extracting Entities of Interest from Comparative Product Reviews
###### Abstract.
This paper presents a deep learning based approach to extract product comparison information out of user reviews on various e-commerce websites. Any comparative product review has three major entities of information: the names of the products being compared, the user opinion (predicate) and the feature or aspect under comparison. All these informing entities are dependent on each other and bound by the rules of the language, in the review. We observe that their inter-dependencies can be captured well using LSTMs. We evaluate our system on existing manually labeled datasets and observe out-performance over the existing Semantic Role Labeling (SRL) framework popular for this task.
## 2. Dataset
Full-scale annotation of informing entities in a comparison sentence is a difficult task due to the diversity of writing styles of the users. So, most existing annotated datasets in this domain are small and manually labeled. Since a deep learning framework generally requires a large number of training samples, we combine the annotated data obtained from various existing sources and split it (60:40) for training and testing. In addition to this, we artificially annotate review sentences using a pattern matching technique (explained in the next section) and add these to the training set. We then filter and use only review sentences which have at least one comparative predicate and whose length is at most 30. The manually labeled datasets used are explained below.
* **Jindal and Liu Corpus:** The corpus3 contains review sentences mostly of products in electronics domain, annotated and segregated into 4 comparison categories. This was used by Jindal and Liu (Jindal and Liu, 2016; Chen et al., 2017). We use all comparison sentences from the corpus except type 4 (non-gradable comparisons). Each comparison sentence is annotated with names of the products (Entity 1 and 2), the aspect (Entity 3) and the predicate is mentioned as a bracketed comparison phrase. Footnote 3: Can be downloaded here, [https://www.cs.uic.edu/~liub/FBS/data.tar.gz](https://www.cs.uic.edu/~liub/FBS/data.tar.gz)
* **Corpus by Kessler and Kuhn:** This corpus (Kessler and Kuhn, 2016) contains around 2200 manually annotated camera reviews. We use all the annotated sentences from here. The annotation scheme is the same as the one we use. Entities 1 and 2 are called products 1 and 2 in our nomenclature.
* **JDPA Corpus:** This corpus (Kessler and Kuhn, 2016) contains annotated blog posts containing user opinions about automobiles and digital cameras. We use only the sentences from the digital cameras domain which have the comparison class label in their annotation. The words marked by this class label bring out the user opinion and are marked as predicates. In addition, this class has 4 annotation slots, 'More', 'Less', 'Dimension' and 'Same'. We map the 'More' slot to Product 1, 'Less' slot to Product 2, 'Dimension' slot to Aspect and ignore the 'Same' slot which indicates if the two products are ranked as equal.
* **Self Manual Labeling:** To include the latest review trends, we crawled digital camera reviews from Amazon4 for the year 2016. Then, we manually annotated 350 review sentences with the three entities of information, wherever available.
Footnote 4: [https://www.amazon.com](https://www.amazon.com)
Overall contribution of different corpora in our training and test data is summarized in Table 1.
## 3. Proposed Approach
### Generation of Labeled Data
We observe that there are some distinct styles for expressing comparison in product reviews, generally used by people. Based on this observation, we made 5 simple patterns using regular expressions. If an unlabeled review sentence matches a pattern, we narrow down the exact regions to look for different entities of information, based on the pattern. The predicate is then identified by a comparative POS tag (JJR, JJS, RBR, RBS - as per the Penn Treebank Tagging scheme). The aspect and product names are identified by dictionary matching. The aspects dictionary has 83 features for products in the electronics domain. The products dictionary is 11,126 entries long. Both these dictionaries are made semi-automatically, i.e., first using some heuristics to get a list with good accuracy and then manually correcting it. As an example, consider the pattern, [Aspect] [Preposition (\(of|in\))] [Product Name] [Opinion]. This pattern fits sentences like, "The zoom in Nikon S8100 is far better." and labels _zoom_ (Aspect), _Nikon S8100_ (Product1) and _better_ (Predicate). These patterns certainly do not exhaustively capture all possible
\begin{table}
\begin{tabular}{|c||c|c|c|} \hline Dataset & Train-Set & Test-Set & Total \\ \hline J\&L & 313 & 208 & 521 \\ \hline Kessler & 982 & 655 & 1637 \\ \hline JDPA & 133 & 90 & 223 \\ \hline Manual & 210 & 140 & 350 \\ \hline Pattern-Based & 24164 & 0 & 24164 \\ \hline
**Total** & **25802** & **1093** & **26895** \\ \hline \end{tabular}
\end{table}
Table 1. Datasets used in this study along with train-test split details
Figure 1. Proposed model for extracting information from an example review
comparisons, which is the final goal of this research work, but still give an annotated dataset with good precision, which can be used for training. We use this pattern fitting approach on electronic gadget reviews (Beng et al., 2015) from Amazon5. The labeled data hence generated is used in training only, as shown in Table 1.
Footnote 5: [http://snap.stanford.edu/data/web-Amazon-links.html](http://snap.stanford.edu/data/web-Amazon-links.html)
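A minimal sketch of the pattern-matching step is given below for the example pattern [Aspect] [Preposition] [Product Name] [Opinion]. The tiny dictionaries, the example sentence and its POS tags are made up for illustration; in the actual pipeline the tags come from an NLTK-style tagger and the dictionaries are the semi-automatically built ones described above.

```python
# Toy dictionaries; the real ones contain 83 aspects and 11,126 product names.
ASPECTS = {"zoom", "battery life", "picture quality"}
PRODUCTS = {"nikon s8100", "canon sx30"}
COMPARATIVE_TAGS = {"JJR", "JJS", "RBR", "RBS"}          # Penn Treebank comparative tags

def label_pattern(tagged_sentence):
    """Apply the pattern [Aspect] (of|in) [Product Name] ... [comparative Opinion]."""
    words = [w.lower() for w, _ in tagged_sentence]
    labels = {}
    for i, w in enumerate(words):
        if w in ASPECTS and i + 2 < len(words) and words[i + 1] in ("of", "in"):
            for length in (1, 2):                        # product names of one or two tokens
                candidate = " ".join(words[i + 2:i + 2 + length])
                if candidate in PRODUCTS:
                    labels["Aspect"], labels["Product1"] = w, candidate
        if tagged_sentence[i][1] in COMPARATIVE_TAGS:
            labels["Predicate"] = tagged_sentence[i][0]
    return labels

# "The zoom in Nikon S8100 is far better."  (POS tags roughly as NLTK would produce them)
tagged = [("The", "DT"), ("zoom", "NN"), ("in", "IN"), ("Nikon", "NNP"),
          ("S8100", "NNP"), ("is", "VBZ"), ("far", "RB"), ("better", "JJR"), (".", ".")]
print(label_pattern(tagged))
# {'Aspect': 'zoom', 'Product1': 'nikon s8100', 'Predicate': 'better'}
```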
### Overall Framework
Our model consists of three layers. An input review sentence is first tokenized and then its words are embedded by passing through the embedding layer. The embedded sentence is then passed through a LSTM (Long Short Term Memory) layer, where corresponding to each word, we have one LSTM unit. For each word of the sentence, the output from the corresponding LSTM cell is converted to a 5-dimensional attribute vector by passing through a fully connected layer. The attribute vector has one dimension for each entity of information (Product1, Product2, Aspect, Predicate, None) and is converted to a probability distribution by passing through a softmax layer. Finally, we take the label for the word/token as the attribute having the maximum probability. An example review being processed by our model is shown pictorially in Figure 1.
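A compact Keras/TensorFlow rendering of this three-layer architecture is sketched below. The hidden size, optimizer and the way the frozen GloVe matrix would be loaded into the embedding layer are assumptions not fixed by the text; the 12-tag universal POS one-hot and the 5 output labels follow the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers

MAX_LEN, VOCAB, EMB_DIM = 30, 20000, 300   # sentence length cap, vocab size, GloVe dimension
N_POS, N_CLASSES, HIDDEN = 12, 5, 128      # NLTK universal tagset; hidden size is an assumption

word_ids = layers.Input(shape=(MAX_LEN,), dtype="int32", name="word_ids")
pos_onehot = layers.Input(shape=(MAX_LEN, N_POS), name="pos_onehot")

# Frozen word embeddings; in practice the weights would be set from the GloVe matrix.
word_emb = layers.Embedding(VOCAB, EMB_DIM, trainable=False)(word_ids)
features = layers.Concatenate()([word_emb, pos_onehot])

hidden = layers.Bidirectional(layers.LSTM(HIDDEN, return_sequences=True))(features)
probs = layers.TimeDistributed(layers.Dense(N_CLASSES, activation="softmax"))(hidden)

model = tf.keras.Model([word_ids, pos_onehot], probs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```

At prediction time the per-token label is simply the argmax over the 5 softmax outputs, matching the description above.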
### Embeddings
For a word/token in a sentence, the embedding layer computes two embeddings, the word embedding and the one-hot POS (Part of Speech) embedding, and concatenates the two to be fed to the LSTM layer. We use the universal POS tags for POS embedding. For word embeddings, we try out 100-dimensional, and the standard 300-dimensional GloVe (Devlin et al., 2014) word embeddings trained on a general English corpus (Text8 Corpus6) and those trained specifically on electronics reviews from Amazon. We do not go for higher dimensional embeddings since that would increase the number of training parameters in our model and we may not be able to effectively train it using the current size of training data we have.
Footnote 6: Can be downloaded from here, [http://mattmahoney.net/dc/textdata](http://mattmahoney.net/dc/textdata)
### Training and Model Variants
We train our system to minimize the cross-entropy loss between the output probability distribution and the one-hot gold labels for tokens in sentences from the training set. There are several model variants that we test. We try out both unidirectional and bidirectional LSTMs. We work with both single and multiple LSTM layers. The specifications of all variants are shown in Table 2 and the results obtained by these variations are all reported in the next section. The model giving the best results is shown in bold (Model2).
## 4. Experimental Results
### Experimental Setup
For sentence and word tokenization as well as POS Tagging, we use Natural Language Toolkit (NLTK). The deep learning model implementation is done using Tensorflow. The embeddings are prepared using Glove and are kept frozen, not trained with the main model. All the parameters of the model are randomly initialized. For baseline approaches, using Semantic Role Labeling (SRL), we use the same settings as used by Kessler and Kuhn (Kessler and Kuhn, 2019). The SRL system takes as input, data in CoNLL format for which we use the MATE7 Dependency Parser (Beng et al., 2015).
Footnote 7: [https://code.google.com/archive/p/mate-tools/](https://code.google.com/archive/p/mate-tools/)
Footnote 8: This corresponds to Model2, shown in Bold.
### Evaluation Framework
We test our system as well as the baselines, using the manually labeled test data described in Table 1. We evaluate the systems on two tasks and in both cases, we calculate the Precision, Recall and F1-Scores. The first task is argument identification, i.e., identifying if a word/token has _some_ entity of information. The second task is argument classification, where for a given word, the system has to classify it with one of the 5 labels (Predicate, Product 1, Product 2, Aspect, None).
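A token-level scorer for the two tasks can be as simple as the sketch below; the exact averaging used is not spelled out in the text, so the micro-averaged counts here (identification: any non-None label; classification: exact label match) are one reasonable reading, and the example sequences are invented.

```python
def prf(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r, (2 * p * r / (p + r) if p + r else 0.0)

def score(gold, pred):
    pairs = list(zip(gold, pred))
    # Task 1: argument identification -- does the token carry *some* entity label?
    ident = prf(sum(g != "None" and p != "None" for g, p in pairs),
                sum(g == "None" and p != "None" for g, p in pairs),
                sum(g != "None" and p == "None" for g, p in pairs))
    # Task 2: argument classification -- exact label among the 5 classes.
    clas = prf(sum(g == p != "None" for g, p in pairs),
               sum(p != "None" and g != p for g, p in pairs),
               sum(g != "None" and g != p for g, p in pairs))
    return ident, clas

gold = ["Aspect", "None", "Product1", "Product1", "None", "Predicate"]
pred = ["Aspect", "None", "Product1", "None", "None", "Product2"]
ident, clas = score(gold, pred)
print("identification P/R/F1:", ident)
print("classification P/R/F1:", clas)
```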
### Baseline Approaches
We compare our system with the approach presented in the paper by Kessler and Kuhn (Kessler and Kuhn, 2019). The SRL is a feature engineered machine learning based system. Their system uses standard SRL features for extracting all informing entities in a review using a 2-stage pipeline. It first identifies only the predicate using SRL. Then, in the second stage, uses predicate information (either gold labeled predicates, or those identified in the first stage) for identifying and classifying the other arguments. In their paper, the authors present their results using gold predicates and report a 10% decrease in the results if system identified predicates are used instead. We replicate their system and for a fair comparison with the proposed approach which is a single-stage model, we create two baselines. **Baseline1:** We use their method with the gold predicates information, and as mentioned in their paper, the results obtained from their system are decreased by 10% to compare with the proposed model. **Baseline2:** Instead of gold predicates, we feed in the system identified predicates from stage 1 of the pipeline to stage 2 of the SRL system and compare with our model's performance. The results for predicate identification, argument identification and classification are shown in Tables 3, 4 and 5 respectively. Note that we do not show Baseline1 results in Table 3 as the gold standard predicates were used.
We observe that a single layer of Bidirectional LSTMs, using 300 dimensional GloVe word embeddings prepared from general English (Text8) Corpus gives the best results overall and outperforms both baselines in all the tasks in terms of recall as well as F1-score9.
Footnote 9: [https://code.google.com/archive/p/mate-tools/](https://code.google.com/archive/p/mate-tools/)
Footnote 9: This corresponds to Model2, shown in Bold.
## 5. Discussions
* Since a large amount of training data is generated using patterns, we observe a relatively low recall from the models trained using the data, as expected. But Baseline2 reports a very low recall. This
\begin{table}
\begin{tabular}{|c||l|l|l|l|} \hline Model & LSTM Type & LSTM & Embedding & Embedding \\ & & Layers & Dimension & Source \\ \hline Model1 & Unidirectional & 1 & 300 & Text8 \\ \hline
**Model2** & **Bidirectional** & **1** & **300** & **Text8** \\ \hline Model3 & Unidirectional & 2 & 300 & Text8 \\ \hline Model4 & Unidirectional & 1 & 100 & Text8 \\ \hline Model5 & Unidirectional & 1 & 100 & Electronics \\ \hline \end{tabular}
\end{table}
Table 2. Specifications of the model variants, used in this study
is because, in the pipelined SRL approach, correct identification of the predicate (the event) is key to further identification of arguments (roles). Since Baseline2 gives a high precision but very low recall for predicate identification itself on the test data, the same trend holds for argument identification and classification as well. Our system, on the other hand, overcomes the limitation of a pipelined approach by combined modeling of the informing entities.
* Increasing the number of hidden LSTM layers does not improve the results, thus confirming that a single layer LSTM rightly captures the dependencies among the informing entities in a comparison based review sentence.
* Using 100 dimensional word embeddings leads to a lower recall. But, since the embedding dimensions are proportional to trainable model parameters, smaller dimensional embeddings can give a good enough model even when the training set is small.
* Embeddings specifically prepared from the electronics corpus give a slightly better precision but compromise with the recall. Hence, general English text embeddings and electronics embeddings both give almost similar F1-score on both tasks.
## 6. Conclusions and Future Work
In this paper, we presented a simple framework which uses deep learning to annotate and hence, extract all important entities of information from comparative product reviews. This system saves the trouble of feature engineering and gives better results than the previously presented SRL based system.
We also developed simple patterns which capture some common styles of presenting comparisons in reviews. This pattern fitting technique proved beneficial in expanding our training data, making it possible for the deep learning model to effectively learn the sentential structure and inter-dependencies among the informing entities in comparative reviews.
There is still a lot of scope for improvement. In reviews, users often tend to use pronouns or refer implicitly to a product mentioned in the previous sentences. In such cases, a wrapper system needs to be developed which can capture the sentence-to-sentence dependencies and map the pronoun in the current sentence, to the corresponding noun mentioned in the previous sentences. This is an active area of research which we would like to explore. Peng et al. (2019) show the effectiveness of Graph LSTMs for such cross-sentence relation extraction.
###### Acknowledgements.
The authors would like to acknowledge the contributions of Yash Agrawal and Kushagra Aggarwal, CSE, IIT Kharagpur, in parsing the JDPA Corpus and manual annotation of datasets.
|
2301.13699 | Nanoscopic jets and filaments of superfluid He-4 at zero temperature: a
DFT study | Helium droplets produced by the instability of a cryogenic helium jet exiting
a source chamber leads to the formation of He drops which are considered as
ideal matrices for spectroscopic studies of embedded atoms and molecules. Here,
we present a He-DFT description of droplet formation resulting from jet
breaking and contraction of superfluid He-4 filaments. Whereas the
fragmentation of long jets closely follows the predictions of linear theory for
inviscid fluids, leading to droplet trains interspersed with smaller satellite
droplets, the contraction of filaments with an aspect ratio larger than a
threshold value leads to the nucleation of vortex rings which hinder their
breakup into droplets. | Francesco Ancilotto, Manuel Barranco, Marti Pi | 2023-01-31T15:17:44Z | http://arxiv.org/abs/2301.13699v2 | # Nanoscopic jets and filaments of superfluid \({}^{4}\)He at zero temperature: a DFT study
###### Abstract
Helium droplets produced by the instability of a cryogenic helium jet exiting a source chamber leads to the formation of He drops which are considered as ideal matrices for spectroscopic studies of embedded atoms and molecules. Here, we present a He-DFT description of droplet formation resulting from jet breaking and contraction of superfluid \({}^{4}\)He filaments. Whereas the fragmentation of long jets closely follows the predictions of linear theory for inviscid fluids, leading to droplet trains interspersed with smaller satellite droplets, the contraction of filaments with an aspect ratio larger than a threshold value leads to the nucleation of vortex rings which hinder their breakup into droplets.
## I Introduction
Liquid helium droplets at low temperature offer a unique environment for molecular spectroscopy [1; 2; 3] and the study of superfluidity on the atomic scale, [4; 5; 6] including the study of quantum vortices. [7; 8; 9; 10] Usually, helium droplets are produced by expansion of cooled helium gas or by instability of a cryogenic helium jet exiting a source chamber into vacuum throughout a nozzle, whose temperature and pressure determine the appearance of the liquid jet and the droplet size and velocity distributions. [11; 12] Eventually, helium drops undergo evaporative cooling and become superfluid at a temperature of 0.4 K [11] on a \(\mu\)s time scale. [13]
Understanding the dynamical properties of liquid \({}^{4}\)He jets and the instabilities leading to their fragmentation is a relevant issue in the production and characterization of droplets made of \({}^{4}\)He. This unique fluid allows for a large variation of non-dimensional parameters related to the fluid viscosity and the velocity at which it exits the nozzle, which characterize its dynamical properties. [14] This understanding has also a primary application, namely to make available helium drops with the size and velocity required by the experiments, together with size and velocity distributions as narrow as possible. This has led to recent experimental studies on the disintegration of liquid helium jets. [15; 16] Besides, a liquid thread with finite length ("filament" in the following) with no external constraint is expected to contract trying to minimize its surface energy and eventually reach a spherical liquid drop. However, the outcome of the process is not always that simple, as an ample body of experiments and theoretical work on classical fluids has shown in the years. We notice that liquid \({}^{4}\)He filaments are regularly found in the experiments. [14; 15; 16]
Liquid jets and filaments and their dynamical instabilities are well established subjects of study in classical fluids dynamics because of practical questions and applications on the one hand, and because jet dynamics probes many physical properties and theoretical approaches on the other hand, see e.g. Refs. [17; 18; 19] and references therein. Most studies concentrate on viscous fluids because of practical implications. The underlying theoretical and numerical challenge is to solve the Navier-Stokes (NS) equation subject to appropriate boundary conditions.
The effect of viscosity and surface tension is embodied in the Ohnesorge number \(Oh\) defined as \(Oh=\mu/\sqrt{m\rho_{0}\gamma R_{0}}\), where \(m\) is the atom mass, \(\rho_{0}\) the atom density of the fluid, \(\gamma\) the surface tension, \(\mu\) the viscosity coefficient, and \(R_{0}\) the radius of the jet or filament. Inviscid filaments have been addressed in passing by Schulkes, [20] but owing to computational challenges, he could not simulate extreme interfacial deformations arising in crucial moments of the dynamics, as during filament breaking and pinch-off, i.e. the formation of two isolated drops from the opposite tips of the filaments. While it is naturally assumed that, when solving the NS equation for small enough viscosities, the results should be nearly indistinguishable from the inviscid limit, see e.g., Refs. [19] and [21], a description of superfluid (i.e. inviscid _and_ irrotational) \({}^{4}\)He jets and filaments is still missing.
Within the He-DFT approach employed here, the emission of density excitations (ripplons, phonons and rotons) is naturally incorporated into the simulations. It also considers the possibility of atom evaporation from the He sample during the real-time dynamics,[29] which however has been found to be a negligible effect in the present study.
The He-DFT approach adds to the classical viscous fluids or molecular dynamics descriptions the possibility of disclosing purely superfluid effects in the dynamics, in particular quantized vortex nucleation. It has been recognized that a retracting viscous liquid filament may escape from pinch-off through the creation of vortex rings for Ohnesorge numbers in the \(0.002<Oh<0.1\) range.[21] Here we show that the same happens in the zero viscosity, irrotational superfluid case.
Due to the computational burden associated with fully three-dimensional He-DFT simulations as the ones discussed here, we address jets and filaments of nanoscopic size. Studies on the breakup of liquid nanojets are available in the literature; atomistic molecular dynamics simulations on the formation, stability and breakup of viscous fluids have been carried out.[30] To our knowledge, no simulations of breakup of superfluid nanojets and filaments have been published so far.
This work is organized as follows. In Sect. II we briefly present the He-DFT approach used in this work. In Sect III.A we discuss the results for the dynamics of \({}^{4}\)He jets, focusing on the conditions leading to fragmentation, and in Sect. III.B we study the contraction and possible break-up of \({}^{4}\)He filaments with finite length. A summary is presented in Sect. IV. In addition to the main text, we provide in the supplementary material the real-time dynamics of the \({}^{4}\)He jets and filaments addressed in this paper. This multimedia material constitutes an important part of this work, since it helps capture physical details which would otherwise escape the written account.
## II Theoretical approach
Density functional theory for liquid helium is a phenomenological approach which constitutes a good compromise between accuracy and feasibility. The parameters of the functional have been adjusted to reproduce various properties of the bulk superfluid such as equilibrium density, energy per atom and compressibility, as well as the main features of the dispersion relation of the elementary excitations of superfluid \({}^{4}\)He.[22] A detailed description of the method can be found in Refs. [23; 24; 25].
Within He-DFT, the energy of a \(N\)-atom sample is written as a functional of the \({}^{4}\)He atom density \(\rho(\mathbf{r})\) as
\[E[\rho]=T[\rho]+E_{c}[\rho]=\frac{\hbar^{2}}{2m}\int d\mathbf{r}|\nabla\Psi( \mathbf{r})|^{2}+\int d\mathbf{r}\,\mathcal{E}_{c}[\rho] \tag{1}\]
where the first term is the kinetic energy, \(m\) is the mass of the \({}^{4}\)He atom and \(\Psi(\mathbf{r})\) is the effective wave function (or order parameter) of the superfluid such that \(\rho(\mathbf{r})=|\Psi(\mathbf{r})|^{2}\) with \(\int d\mathbf{r}|\Psi(\mathbf{r})|^{2}=N\). The functional \(\mathcal{E}_{c}(\rho)\) we have used contains the He-He interaction term within the Hartree approximation and additional terms describing non-local correlation effects.[31]
The equilibrium configuration of the system is obtained by solving, using an imaginary-time relaxation method,[25] the Euler-Lagrange equation
\[\left\{-\frac{\hbar^{2}}{2m}\nabla^{2}+\frac{\delta\mathcal{E}_{c}}{\delta \rho}\right\}\Psi\equiv\mathcal{H}[\rho]\,\Psi=\zeta\,\Psi \tag{2}\]
where \(\zeta\) is the \({}^{4}\)He chemical potential corresponding to the number of He atoms in the sample.
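To make the procedure concrete, the following is a minimal sketch of such an imaginary-time relaxation; it is not the actual implementation used in this work (described below), the correlation functional is a placeholder, and the step size is purely illustrative.

```python
import numpy as np

def relax_imaginary_time(psi, kinetic, dEc_drho, n_atoms, dv, dtau=1e-4, steps=20000):
    """Minimal imaginary-time relaxation toward the ground state of Eq. (2).

    psi       : complex 3D array, effective wave function on the grid
    kinetic   : callable psi -> (-hbar^2/2m) * Laplacian(psi), e.g. evaluated via FFT under PBC
    dEc_drho  : callable rho -> functional derivative dEc/drho on the grid (placeholder)
    n_atoms   : number of 4He atoms fixing the normalization of |psi|^2
    dv        : volume of one grid cell
    """
    for _ in range(steps):
        rho = np.abs(psi) ** 2
        h_psi = kinetic(psi) + dEc_drho(rho) * psi      # H[rho] psi
        psi = psi - dtau * h_psi                        # steepest-descent step in imaginary time
        norm = np.sum(np.abs(psi) ** 2) * dv
        psi = psi * np.sqrt(n_atoms / norm)             # restore the atom number after each step
    return psi
```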
Minimizing the action associated to Eq. (1) leads to the He-TDDFT equation
\[i\hbar\frac{\partial\Psi}{\partial t}=\left\{-\frac{\hbar^{2}}{2m}\nabla^{2} +\frac{\delta\mathcal{E}_{c}}{\delta\rho}\right\}\Psi\equiv\mathcal{H}[\rho]\,\Psi \tag{3}\]
from which one can simulate the real-time evolution of the system.
The above equations have been solved using the \({}^{4}\)He-DFT-BCN-TLS computing package,[32] see Refs. [24] and [25] and references therein for additional details. Briefly, we work in Cartesian coordinates, with the effective wave function \(\Psi(\mathbf{r},t)\) defined at the nodes of a 3D grid inside a calculation box. Periodic boundary conditions (PBC) are imposed which allow the use of the Fast Fourier Transform[33] to efficiently compute the convolutions needed to obtain the DFT mean field \(\mathcal{H}[\rho]\). The differential operators in \(\mathcal{H}[\rho]\) are approximated by 13-point formulas. Eqs. (2-3) have been solved using a space-step of 1.2 Å, and the time-dependent Eq. (3) has been numerically integrated using a Hamming predictor-modifier-corrector initiated by a fourth-order Runge-Kutta-Gill algorithm[34] with a time-step of 2 fs. This time-step has been found to keep the energy of the jet and filaments properly conserved during the dynamics, as it corresponds to non-dissipative processes. We have also checked that the jet configurations obtained in the course of the dynamics are robust against reasonable changes of the chosen space-step.
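The role of the FFT mentioned above is to evaluate the convolution integrals entering the mean field under periodic boundary conditions; a minimal sketch (the interaction kernel is a placeholder, not the actual He-He functional) is:

```python
import numpy as np

def convolve_pbc(rho, kernel, dv):
    """Periodic convolution  (V * rho)(r) = sum_{r'} V(r - r') rho(r') dv  evaluated via FFT.

    rho, kernel : real 3D arrays sampled on the same periodic grid (kernel centered at index 0)
    dv          : volume of one grid cell, so that the discrete sum approximates the integral
    """
    return np.real(np.fft.ifftn(np.fft.fftn(kernel) * np.fft.fftn(rho))) * dv
```

With \(N\) grid points this costs \(O(N\log N)\) instead of the \(O(N^{2})\) of a direct sum, which is what makes fully three-dimensional simulations affordable.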
Figure 1: Density profile in the radial direction of a cylinder of radius \(R_{0}=21.5\) Å representing a \({}^{4}\)He nanojet.
## III Results
We have considered jets and filaments of sharp radius \(R_{0}=21.5\) Å, defined as the radius at which the density equals \(\rho_{0}/2\), \(\rho_{0}\) being the liquid \({}^{4}\)He atom density at zero temperature and pressure.
### \({}^{4}\)He nanoscopic jet
The physics of liquid jets has been reviewed by Eggers and Villermaux.[17] The thinning and breakup of a liquid jet is mainly determined by surface tension effects. The stability of an infinite fluid cylinder of radius \(R_{0}\) was studied by Plateau,[35] showing that it exists in an unstable equilibrium, and any perturbation with wavelength \(\lambda\) greater than \(2\pi R_{0}\) is unstable and allows the surface tension to break up the cylinder into droplets, thus decreasing the surface energy of the system. Lord Rayleigh later showed[36] that for an inviscid liquid the fastest growing mode occurs when the wavelength of the axial undulation that ultimately leads to the fragmentation of the jet into droplets is equal to \(\lambda_{\rm c}=9.01\,R_{0}\) (Rayleigh-Plateau instability). When the jet breaks up, one or more small satellite drops -resulting from the necks breaking- may form between the larger droplets.
The characteristic times for jet instability and breakup are set by the capillary time \(\tau_{\rm c}\) defined as
\[\tau_{\rm c}=\sqrt{\frac{m\rho_{0}R_{0}^{3}}{\gamma}} \tag{4}\]
with \(\gamma\) being the surface tension of the liquid. In the case of \({}^{4}\)He we have \(mc^{2}=4.325\times 10^{13}\) K, \(\rho_{0}=0.021836\) Å\({}^{-3}\) and \(\gamma=0.274\) K Å\({}^{-2}\). Hence, \(\tau_{\rm c}(R_{0}=21.5\,\mathrm{\AA})=61.7\) ps.
It is customary to define the aspect ratio as \(\Gamma=\tilde{L}/R_{0}\) where \(\tilde{L}=L/2\) is the half-length of the jet. Here \(L\) coincides with the length of the simulation cell in the jet direction. From the linearized fluid dynamics equations for an inviscid and incompressible fluid, a critical value \(\Gamma_{c}\) is predicted to trigger jet fragmentation.[36] It corresponds to the mode with wavelength \(\lambda_{\rm c}=2\pi/k\), where \(k\) is such that \(\omega(k)\) is maximum. Here[17]
\[\omega^{2}(k)=\left[\xi\,\frac{I_{1}(\xi)}{I_{0}(\xi)}(1-\xi^{2})\right]\, \frac{1}{\tau_{c}^{2}} \tag{5}\]
where \(I_{0}(x)\) and \(I_{1}(x)=dI_{0}(x)/dx\) are modified Bessel functions of the first kind and \(\xi\equiv kR_{0}\). From the maximum of \(\omega\) one finds \(kR_{0}=0.697\) and thus
\[\Gamma_{c}=\pi/0.697=4.505 \tag{6}\]
Figure 2: Breaking dynamics of a cylinder subject to an axial perturbation of wavelength \(\lambda=2\pi R_{0}/0.697\). The color bar shows the atom density in units of Å\({}^{-3}\).
In correspondence with it one has
\[\omega_{max}=0.343\sqrt{\gamma/(m\rho_{0}R_{0}^{3})}=\frac{0.343}{\tau_{c}} \tag{7}\]
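The characteristic numbers quoted here can be checked directly from Eqs. (4)-(7). The sketch below (units: K, Å, ps, with the \({}^{4}\)He mass obtained from \(mc^{2}\)) reproduces \(\tau_{\rm c}\approx 61.7\) ps, \(\xi_{\rm max}\approx 0.697\), \(\Gamma_{c}\approx 4.5\) and \(\omega_{\rm max}\tau_{\rm c}\approx 0.343\):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import iv   # modified Bessel functions of the first kind I_0, I_1

# parameters quoted in the text (temperatures in K, lengths in Angstrom, times in ps)
c_light = 2.99792458e6                      # speed of light in A/ps
m = 4.325e13 / c_light**2                   # 4He mass from m c^2, in K ps^2/A^2 (~4.81)
rho0, gamma, R0 = 0.021836, 0.274, 21.5     # A^-3, K A^-2, A

tau_c = np.sqrt(m * rho0 * R0**3 / gamma)   # Eq. (4): ~61.7 ps

def omega2_tau2(xi):                        # Eq. (5) multiplied by tau_c^2
    return xi * iv(1, xi) / iv(0, xi) * (1.0 - xi**2)

res = minimize_scalar(lambda xi: -omega2_tau2(xi), bounds=(0.01, 0.99), method="bounded")
xi_max = res.x                              # ~0.697
Gamma_c = np.pi / xi_max                    # Eq. (6): ~4.5
omega_max = np.sqrt(omega2_tau2(xi_max)) / tau_c   # Eq. (7): ~0.343 / tau_c
print(f"tau_c = {tau_c:.1f} ps, xi_max = {xi_max:.3f}, "
      f"Gamma_c = {Gamma_c:.2f}, omega_max*tau_c = {omega_max * tau_c:.3f}")
```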
Similarly to what occurs in classical liquid jets, we have shown that the He-TDDFT approach yields the Rayleigh-Plateau instability for the superfluid nanojet when it is subject to a perturbation with the right wavelength. Since the evolution takes place in vacuum, i.e. in the absence of ambient gas embedding the jet, the velocity of the jet itself is not playing any role and therefore we perform our simulations in a reference frame where the jet is at rest (comoving frame). To this end, we simulate the jet by a cylindrical filament in a simulation box subject to PBC along the cylinder axis (the \(x\)-axis in the following). Its equilibrium structure has been obtained by solving Eq. (2); a plot of the jet density profile in the transverse direction is shown in Fig. 1. The jet displays a bulk region of fairly constant density (slightly higher than the bulk \({}^{4}\)He density \(\rho_{0}\) due to the compressive effect exerted by the surface tension on the lateral surface of the cylinder), delimited by a surface with a finite width. As mentioned above, the radius of the cylinder \(R_{0}\) is defined as the distance from the symmetry axis of the point where \(\rho(r)=\rho_{0}/2\).
We have verified by imaginary-time dynamics that the cylinder is indeed unstable against a small initial axial perturbation with the proper wavelength. To do so, we consider a (periodically repeated) cylinder made of \(N=12076\)\({}^{4}\)He atoms and length \(L=387.2\)\(\mathrm{\AA}\). We have first found the equilibrium geometry with a resulting radius \(R_{0}=21.5\)\(\mathrm{\AA}\). Therefore the aspect ratio of the cylinder is \(\Gamma=\bar{L}/R_{0}=9.01\), i.e., twice the critical aspect ratio Eq. (6); in this way the axial undulation caused by the mode with the Rayleigh-Plateau wavelength \(\lambda_{c}\) will produce two necks along the jet inducing the fragmentation into two droplets, as shown in the following.
Next, we performed an imaginary-time dynamics starting from a configuration corresponding to a slightly perturbed axially symmetric cylinder of radius \(R_{0}\), where the initial density profile is given by:
\[\rho(\mathbf{r})=\frac{\rho_{0}}{\exp\{[\sqrt{y^{2}+z^{2}}-R(x)]/0.5\}+1} \tag{8}\]
with
\[R(x)=R_{0}[1-\varepsilon\cos(4\pi x/L)] \tag{9}\]
and \(\varepsilon\ll 1\). With our choice of the length \(L\) and radius \(R_{0}\), the wavelength of the resulting density modulation is precisely equal to \(\lambda_{c}=9.01\,R_{0}\). The form in Eq. (9) ensures that the perturbed density is normalized so as to have the same number of atoms as the unperturbed cylinder. If \(\delta_{0}=\varepsilon\,R_{0}\), the maximum excursion of the radius along the cylinder axis is thus \(R=R_{0}\pm\delta_{0}\).
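As an illustration, the perturbed initial density of Eqs. (8)-(9) can be laid down on a grid as follows; the transverse grid extent and the renormalization step are illustrative assumptions, not the actual setup of the package used here.

```python
import numpy as np

R0, eps, L, N_atoms = 21.5, 0.021, 387.2, 12076   # radius (A), amplitude, box length (A), atoms
rho0, dx = 0.021836, 1.2                          # bulk density (A^-3), grid step (A)

x = np.arange(0.0, L, dx)
y = z = np.arange(-40.0, 40.0, dx)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")

R_x = R0 * (1.0 - eps * np.cos(4.0 * np.pi * X / L))                    # Eq. (9)
rho = rho0 / (np.exp((np.sqrt(Y**2 + Z**2) - R_x) / 0.5) + 1.0)         # Eq. (8)

# rescale so that the perturbed profile contains the same number of atoms as the
# unperturbed cylinder, as required in the text
rho *= N_atoms / (rho.sum() * dx**3)
```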
The total energy of the axially perturbed cylinder turns out to be lower than that of the unperturbed cylinder, i.e. the system is energetically unstable toward a deformation leading to fragmentation. Starting from this configuration, we have performed an imaginary-time relaxation during which two necks develop eventually leading, as they shrink to zero, to two identical spherical droplets as the lowest energy state.
Next, we have studied, by solving the He-TDDFT Eq. (3), the actual real-time dynamics of the fragmentation process, starting from the axially perturbed cylindrical jet. Following Ref. [37], the perturbation is applied both to the density, as described above, and to the axial velocity of the jet as well, using the linearized solution of the Rayleigh-Plateau instability
\[v=v_{0}\sin(4\pi x/L) \tag{10}\]
where \(v_{0}=2\,\delta_{0}\,v_{max}/R_{0}\). Notice that the radial perturbation is symmetric in the origin, whereas the velocity fluctuation is anti-symmetric. Here \(v_{max}=\omega_{max}R_{0}/\xi_{max}\) is calculated from Eq.(5) using \(\xi=\xi_{max}=0.697\), giving \(v_{max}\sim 17\)\(\mathrm{\,m/s}\). Our starting value for the perturbation amplitude is \(\delta_{0}=0.452\)\(\mathrm{\AA}\), corresponding to the choice \(\varepsilon=0.021\) in Eq. (9).
In order to apply this velocity field to the superfluid jet, we multiply the initial axially perturbed cylinder wave function \(\Psi(\mathbf{r})=\rho^{1/2}(\mathbf{r})\) by the phase \(e^{i\phi}\) with
\[\phi=-2\frac{\delta_{0}}{R_{0}}v_{max}\left(\frac{L/2}{2\pi}\right)\cos(4\pi x /L) \tag{11}\]
and proceed with the real-time evolution.
Figure 2 shows snapshots of the jet density during the real-time dynamics. It can be seen that, starting from the perturbed cylinder, undulations whose amplitude increases with time appear along the jet. The instability is caused by the fact that the Laplace pressure increases in constricted regions, driving out the fluid and hence reducing further the neck radius. The jet evolves into density bulges connected by thin threads. Threads eventually break up and isolated drops appear instead. Figure 2 also shows that the threads between drops contract developing small end droplets displacing against each other whose collision yields a peak density. Not surprisingly,
Figure 3: Neck shrinking as a function of time, shown on a log-scale (see the text for the definitions of \(\delta\) and \(\delta_{0}\)). The points are the numerical values obtained from the simulation, whereas the dashed line shows the prediction of linear theory.
Figure 4: Breaking dynamics of a cylinder subject to multiple wavelength axial perturbations as explained in the text. The color bar shows the atom density in units of Å\({}^{-3}\).
Figure 5: Contraction of a filament with aspect ratio \(\Gamma=4\). The color bar shows the atom density in units of Å\({}^{-3}\).
Figure 6: Contraction of a filament with aspect ratio \(\Gamma=5\). The color bar shows the atom density in units of Å\({}^{-3}\).
threads behave as the filaments described in Sect. III.B. A similar pattern of alternating droplets and threads was observed in the study of the breakup of inviscid and irrotational capillary jets discussed in Ref.[38].
The lack of dissipation makes droplets and threads oscillate during the time elapsed by the simulation. Once formed, they execute a series of vibrations, being alternately compressed and elongated in the jet direction with an expected frequency of the order of \(\omega=\sqrt{8\gamma/m\rho_{0}R_{0}^{3}}\).[39] It has been pointed out that no obvious effects due to superfluidity have been observed on the breakup behavior of a liquid He jet.[14] Yet, Kolatzki et al.[16] have found that He droplets undergo shape oscillations that persist for much longer times than in the case of viscous drops, a signature of the superfluid character of these droplets.
We would like to mention that if only the cylinder density is perturbed and no axial velocity field is applied to it, we find that jet breaking proceeds as in Fig. 2, the only difference being that it takes more time for the instability to fully develop and eventually lead to jet fragmentation.
The actual time taken for the jet to break into droplets depends upon the amplitude of the initial density perturbation. It is defined as the time \(\tau_{b}\) it takes for the wave amplitude with the largest frequency to grow up to \(R_{0}\),[21; 40]
\[R_{0}=\delta_{0}e^{\omega_{max}\tau_{b}} \tag{12}\]
where \(\omega_{max}\) is given in Eq. (7). With our choice for the initial perturbation amplitude \(\delta_{0}\) we have \(\tau_{b}=[ln(R_{0}/\delta_{0})]/\omega_{max}=695\,\)ps.
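A one-line check of Eq. (12) with the values used here (a sketch, lengths in Å and times in ps):

```python
import numpy as np

R0, delta0, tau_c = 21.5, 0.452, 61.7      # A, A, ps
omega_max = 0.343 / tau_c                  # Eq. (7)
tau_b = np.log(R0 / delta0) / omega_max    # Eq. (12): ~695 ps
print(f"tau_b = {tau_b:.0f} ps")
```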
We have computed the dynamics of neck shrinking by monitoring during the real-time evolution the quantity \(\delta(t)=(R_{max}-R_{min})/2\), where the radii \(R_{max}\) and \(R_{min}\) are measured at the two positions \(x=L/4\) and \(x=L/2\) (see Fig. 2). The calculated values for \(\delta(t)/\delta_{0}\) are shown in Fig. 3 on a logarithmic scale as a function of time, and are compared with the quantity \(e^{\omega_{max}t}\) predicted from linear theory. The agreement between the two over the whole duration of the breaking process is remarkable.
Finally, we have also investigated another scenario when the jet is subject to a more general perturbation on the equilibrium density, i.e. we have started the real-time dynamics with the cylinder simultaneously perturbed by several axisymmetric perturbations of different wavelengths. In order to accommodate a reasonable number of modes with different wavelengths compatible with the PBC used here, we perform the simulation in a cell longer than the one shown in Fig. 2, with \(L\) equal to three times the critical wavelength associated with the fastest mode, \(\lambda_{c}=2\pi R_{0}/0.697\). We therefore consider an axially symmetric density perturbation given by a linear combination of six modes with small random amplitudes \(\epsilon=\delta_{0}/R_{0}\) in the \((-0.03,0.03)\) range, and wavelengths \(\lambda_{c},3\lambda_{c},3\lambda_{c}/2,\lambda_{c}/4\), and \(3\lambda_{c}/4\), and perform a real-time
Figure 7: Contraction of a filament with aspect ratio \(\Gamma=6\). The color bar shows the atom density in units of Å\({}^{-3}\).
simulation starting from such initial state (\(t=0\) panel in Fig. 4).
We show in Fig. 4 some snapshots taken during the real-time evolution of this system, where it appears that among the various modes, the one eventually dominating in the course of time is indeed the critical one, dictated by \(\lambda_{\rm c}\), which leads to the formation of three necks, eventually resulting in the fragmentation into three droplets. However, at variance with the case where the critical mode is the only one present (as shown in Fig. 2) the jet does not break up into equal-size droplets. For much longer filaments than the one investigated here, one might expect a distribution of slightly different drop sizes, some drops coming from the crests of the primary waves and others from the ligaments linking them. Determining the disturbance frequencies for jet breaking leading to the production of uniformly sized equidistant He drops has been one of the main concerns of a recent work [16] in view of their experimental use in e.g. coherent diffraction imaging at x-ray free electron lasers.
We have thus seen that the He-DFT approach is able to address jet breaking yielding results in agreement with linear theory. This is a needed first step before carrying out the study of contracting He filaments which we address in the following.
Figure 8: Contraction of a filament with aspect ratio \(\Gamma=8\). The color bar shows the atom density in units of \(\AA^{-3}\).
Figure 9: Superfluid streamlines corresponding to the configurations \(\Gamma=8\) at \(t=290\) ps (top), and \(\Gamma=10.5\) at \(t=462\) ps (bottom). The color bar shows the atom density in units of \(\AA^{-3}\).
Figure 11: Contraction of a filament with aspect ratio \(\Gamma=15\). The color bar shows the atom density in units of Å\({}^{-3}\).
Figure 10: Contraction of a filament with aspect ratio \(\Gamma=10.5\). The color bar shows the atom density in units of Å\({}^{-3}\).
### Contraction and fragmentation of free-standing filaments
As with classical fluids, He jet breaking may lead not only to droplets but also to filaments, as observed in experiments.[14; 15; 16] We model here these filaments as cylinders of radius \(R_{0}\) delimited by two hemispherical caps[19] and study, using the He-TDDFT approach, their contraction due to the effect of the surface tension for different values of the aspect ratio \(\Gamma=L/R_{0}\),[18] where \(L\) is half the end-to-end length of the filament. The configuration from which the real-time dynamics is initiated is a free-standing ideal filament (i.e. no density perturbation is applied), as usually done in numerical simulations of the contraction of viscous fluid filaments.[19; 20; 21]
We have investigated filaments with different values of the aspect ratio, namely \(\Gamma=4,5,6,8,10.5\), and \(15\). Some of these values coincide with those studied in Ref. [19] at \(Oh=0.001\), which is considered to correspond to the inviscid regime. Our goal is to study how the initial aspect ratio \(\Gamma\) determines the fate of the filament, i.e. either contraction into a single liquid body (stable state) or breaking into two or more droplets. Experimentally,[18] it has been found for classical fluids that there is a critical initial aspect ratio \(\Gamma=6\pm 1\) below which a liquid filament is stable irrespective of the \(Oh\) value, and above which the filaments tend to break into separate droplets.
In the following we describe the most salient features found during the real-time evolution of superfluid He filaments. All simulations discussed below are displayed as movies in the supplementary material accompanying this work. These movies last for longer times than those reported in the following figures. We do not discuss the filament appearance for such long times because undamped excitations and especially the annihilation of vortex rings, as discussed in the following, tend to produce turbulence[41; 42] whose description is beyond the scope of this paper.
#### iii.2.1 Filament with \(\Gamma=4\)
This is the shortest filament that we have investigated. Its time evolution is similar to that predicted for short filaments by classical calculations and experiments,[18; 19] i.e. the filament contracts and oscillates back and forth without breaking. In the presence of some viscosity, the final configuration would be a single spherical droplet.
As shown by the temporal sequences in Fig. 5, a blood-cell shape develops in the transverse direction (y-z plane) at around \(t=240\) ps, and an almost empty hole opens in its center at \(t=254\) ps (toroidal shape) before the filament becomes compact again (frame at \(t=300\) ps). After recovering the peanut-like shape along the filament axis (frame at \(t=400\) ps), the filament extends transversally at later times (\(t=756\) ps, not shown) and is drawn again into a compact droplet, producing a high density spot at the touching region. The high density spot relaxes and launches a series of density waves propagating inside the filament. This effect has been observed in previous simulations of the merging of two \({}^{4}\)He nanodroplets.[41; 42]
#### iii.2.2 Filament with \(\Gamma=5\)
According to classical calculations and experiments, a filament with this value of \(\Gamma\) is also expected to display a stable dynamics, oscillating back and forth without breaking.[18; 19] Interestingly, simulation of this filament has disclosed the nucleation of quantized vortex rings, which appear in Fig. 6 as dark spots in the snapshots at \(t=315\) ps and \(t=380\) ps. For symmetry reasons, only pairs of quantized vortex-antivortex rings (vortex ring pairs with opposite circulation) can be nucleated. No such rings have been found in classical simulations carried out in the range of Ohnesorge numbers corresponding to the inviscid regime (below \(\sim 2\times 10^{-3}\)). Yet, Hoepffner and Pare have found classical vortex rings for Ohnesorge numbers in the \(0.002<Oh<0.1\) range but surprisingly enough not in the inviscid regime.[21] We will highlight the role of these vortices for the longer filaments discussed in the following, where they become effective in preventing the filament breaking.
In the case displayed in Fig. 6, vortices are nucleated during the contraction dynamics at surface indentations appearing between the end droplets or blobs and the rest of the cylindric filament (see the panel at \(t=315\) ps); this requires some inertia which can only be acquired when the filament is larger
Figure 12: Contraction of filaments with different aspect ratios as a function of time. The slope of the solid line is the theoretical contraction velocity \(R_{0}/\tau_{c}\).
than a critical value. Since this is the calculated filament with smallest \(\Gamma\) value for which we see vortex rings nucleation, one should expect the appearance of vortex rings for filaments with \(\Gamma\geq 5\). Once nucleated, vortices move to the bulk of the filament.
The filament end caps collapse (panel at \(t=315\) ps) and launch additional vortex rings. One may see multiple vortex-antivortex ring pairs in a small volume which eventually annihilate, yielding an intense burst of density waves at later times (panel at \(t=380\) ps). Eventually, the filament oscillates between the longitudinal and transverse directions, filled with density waves propagating inside the formed droplet (panel at \(t=450\) ps). We find similarities with the \(L_{0}=5\) case in Ref. [19], but at variance with that reference, where breakup appears by complex oscillations at \(t=5.2\,\tau_{\rm c}\), in our simulation the end caps are reabsorbed in the bulk of the resulting stable droplet.
#### iii.2.3 Filament with \(\Gamma=6\)
As shown in Fig. 7, the filament retracts and two end drops appear at the tips, clearly visible at around \(t=150\) ps. Drops grow in size and the filament between them contracts and shrinks into a thread, see e.g. the configuration at \(t=218\) ps. This thread collapses at \(t=236\) ps, and the two droplets are temporarily apart, as shown in the panel for \(t=260\) ps. However, due to the kinetic energy gained during the previous contraction stage, the two highly deformed fragments collide immediately after and merge again at \(t\sim 280\) ps to produce a single deformed droplet.
The collision of the fragments produces a high density spot at the contact region (between \(t=262\) ps and \(t=272\) ps, see movie in the supplementary material) which expands yielding density waves propagating inside the filament, see the \(t=300\) ps frame in Fig. 7. The merged drop presents surface indentations as those appearing e.g. at \(t=300\) ps. These indentations act as nucleation sites for quantized vortex rings, which remain close to the droplet surface. The cores of some of these vortices are clearly visible in the frame at \(t=376\) ps. Notice that these vortices do not contribute to the escape from pinch-off since the thread connecting the end drops has collapsed before. The density is no longer smooth; rather, it is strongly perturbed by the presence of density waves produced by the merging of the two fragments.
The evolution of this filament is similar to the \(L_{0}=6.0\) filament shown in Fig. 5 of Ref. [19]. For superfluid \({}^{4}\)He we have found that the filament temporarily breaks into two deformed drops at \(t=3.8\,\tau_{\rm c}\), similar to the value one can read in Fig. 5 of that reference. However, in our case drops collide and merge again, whereas in Ref. [19] they seem to remain separated. Another difference between classical and superfluid filaments is the appearance of quantized vortex rings and their subsequent annihilation.
#### iii.2.4 Filament with \(\Gamma=8\)
The dynamical evolution of this filament is shown in Fig. 8. As for the previous case, end drops develop, clearly visible already after \(t\sim 100\) ps. The main filament connecting the end drops shrinks and a thin neck develops at the drop-filament contact region, which start pinching off the filament with two necks that reach their smallest radius at \(t=254\) ps. Before they completely shrink, vortex rings nucleate close to the necks at about \(t=258\) ps, being clearly formed at \(t=270\) ps. The streamlines of the superflow are drawn in the top panel of Fig. 9 for the configuration at \(t=290\) ps, clearly showing the characteristic pattern of lines wrapping the vortex core positions.
These vortex rings prevent necks from pinching, as they reopen immediately after their appearance (see e.g. the frame at \(t=290\) ps), similarly to the mechanism discussed by Hoepffner and Pare.[21] A flow through the neck develops because of the retraction and, according to these authors, this flow may detach into the jet downstream of the neck when fluid viscosity exceeds a threshold (\(Oh\gtrsim 2\times 10^{-3}\));[21] this sudden detachment creates a vortex ring which strongly modifies the flow pressure: fluid is transported back into the neck which in turn reopens. It is remarkable that the same happens in the case of superfluid \({}^{4}\)He in spite of the lack of viscosity. At \(t=330\) ps, another pair of vortex rings is nucleated at the droplet-filament indentation preventing pinching again. Finally, vortex-antivortex rings annihilate and disappear from the system producing as a result a burst of density waves.
The movie in the supplementary information shows the appearance of surface protrusions at \(t=452\) ps which act as vortex nucleation sites, and their collapse yields a high density spot. Eventually, the contracted filament is permeated by a large number of vortex rings at \(t=488\) ps. This is at variance with the classical, inviscid fluid description.
The evolution of this filament can be compared to that corresponding to \(L_{0}=8.0\) shown in Fig. 5 of Ref. [19]. Besides the vortex rings phenomenology, which is absent in the simulations of that reference, in our case end-pinching strictly never happens. The closest the \({}^{4}\)He filament gets to it is at \(t=4.1\,\tau_{\rm c}\), whereas the time for the filament breakup by end-pinching read from Fig. 5 of Ref. [19] is \(t\sim 4.6\,\tau_{\rm c}\).
#### iii.2.5 Filament with \(\Gamma=10.5\)
Similarly to the previous cases, end drops develop as shown in Fig. 10. A more violent approach is expected because the filament is longer and end drops have more time to accelerate under the traction exerted by surface tension. The filament connecting the end drops contracts and necks appear at the drop-filament contact region, as shown at \(t=250\) ps, which start pinching. The neck shrinks to a minimum at \(t=256\) ps, escaping from pinch-off again because vortex rings are nucleated at \(t\sim 260\) ps.
Vortex rings detach from the neck and move towards the bulk of the end drops (frame at \(t=300\) ps). The remaining filament develops bulges, which evolve to a more complex
structure (frame at \(t=400\) ps).
The snapshot at \(t=440\) ps shows an almost complete fragmentation. However, due to the opposite velocities acquired during the early stages of the contraction, the three fragments merge again. Other vortex rings are created in the process, nucleated at the necks during the re-merging, as shown in the frame at \(t=480\) ps. The streamlines of the superflow are drawn in the bottom panel of Fig. 9 for the configuration at \(t=462\) ps.
Vortex ring annihilation at later times (see movie in the supplementary material) produces density waves arising from the collapse of their cores. This is a phenomenon that we have not observed in the merging of He droplets,[41; 42] nor have we observed there the shrinking of a vortex ring until it collapses. It is interesting to see that these small vortex rings travel towards the tips of the filament, evaporating from them. Eventually, vortex rings disappear and the contracted filament enters a complex dynamic regime, hosting plenty of density waves until the end of the simulation.
The evolution of this filament should be similar to that corresponding to \(L_{0}=10.0\) shown in Fig. 5 of Ref. [19]. Besides the vortex rings phenomenology and wave dynamics, in our case end-pinching strictly never occurs. The filament gets close to it at \(t=4.1\)\(\tau_{\rm c}\) (254 ps) and especially at \(t=7.0\)\(\tau_{\rm c}\) (432 ps), whereas the breakup time by end pinching read from Fig. 5 of Ref. [19] is \(t\sim 4.8\)\(\tau_{\rm c}\).
#### iii.2.6 Filament with \(\Gamma=15\)
This is the largest filament we have investigated. In classical simulations of sufficiently long filaments (like the one shown in Fig. 11) and small \(Oh\) numbers, as the filament contracts it will succumb to end pinching[20; 43; 18] even in cases where the Rayleigh-Plateau instability is expected to develop, subsequently causing the filament to break up into several drops. However, this instability does not occur, suggesting that the timescale for the Rayleigh-Plateau instability to grow is much larger than the timescale for the filament to fully contract even for long filaments.
In the case of superfluid \({}^{4}\)He, the sequence is similar to the \(\Gamma=8\) and \(\Gamma=10.5\) cases, except that the number of necks has increased. Well developed end drops appear at \(t=100\) ps, with well developed necks at \(t=160\) ps. Figure 11 shows that end drops nearly pinch-off at \(t=248\) ps, but at \(t=264\) ps one may see vortex rings appearing at the necks, hindering pinch-off. The vortex rings detach from the neck and move towards the bulk of the end drops and bulges appear in the filament close to the end drops (panel at \(t=290\) ps). Bulges evolve to bulbs and, similarly to the \(\Gamma=8\) and \(\Gamma=10.5\) cases, intermediate drops develop during the time evolution whose number increases with the length of the filament, as also observed in the simulations of classical low viscosity (\(0.003\leq Oh\leq 0.02\)) filaments.[44]
The evolution of this filament should be compared to that corresponding to \(L_{0}=15.0\) shown in Fig. 5 of Ref. [19]. Besides the phenomenology of vortex rings proliferation, also in this case end-pinching never occurs. End drops are close to detach at \(t=4.02\)\(\tau_{\rm c}\) (247 ps) but escape pinch off because of vortex ring nucleation, whereas the filament breakup time read from Fig. 5 of that reference is \(t\sim 4.8\)\(\tau_{\rm c}\).
Finally, we have computed the contraction velocity for all the investigated filaments. We have defined the position of the tip of the filament as the location of its sharp surface (that at which the density equals \(\rho_{0}/2\)) on the \(x\)-axis.
Figure 12 shows the displacement of the tip position as a function of time for the studied filaments. It appears that all curves collapse onto the same curve up to \(t\sim 170\) ps (2.76 \(\tau_{\rm c}\)). Consequently, within this range of time the retracting velocity is independent of the aspect ratio \(\Gamma\). For times in the \(50\) ps \(\leq t\leq 170\) ps range, all filaments accurately follow the line with the slope equal to the Taylor-Culick velocity \(v=R_{0}/\tau_{\rm c}=0.348\) Å/ps, which is the relevant velocity scale expected for the retraction process, originally proposed[45; 46] as the steady-state velocity of a capillary-driven retracting inviscid planar liquid sheet where inertia effects balance the capillary forces acting on the system. For longer times the behavior changes because there are either filament oscillations, changes in the tip shape, or both. The shorter the filament, the earlier these deviations start to show up. The retracting velocity of liquid filaments has been studied for Ohnesorge numbers \(Oh\geq 0.1\),[47] finding that the tip dynamics is characterized by an oscillating velocity whose mean value is close to the Taylor-Culick prediction. These oscillations have also been found for \(Oh=0.05\) in the \(\Gamma=20\) case.[47] In superfluid helium, though, we do not observe any oscillation with time of the tip retraction velocity.
## IV Summary
We have studied the instability and breakup of nanoscopic superfluid \({}^{4}\)He jets and filaments within He-DFT at zero temperature. We find that the fragmentation of long cylindrical jets closely follows the predictions of linear theory for inviscid fluids, resulting in the formation of larger droplets intercalated with smaller satellite droplets.
While some of our results for the contraction of free-standing filaments are consistent with those obtained in the inviscid regime which corresponds to Ohnesorge numbers smaller than \(2\times 10^{-3}\),[19] the novelty with respect to previous calculations for classical inviscid filaments is the appearance of quantized vortex rings in filaments with aspect ratio \(\Gamma>5\).
Non-quantized vortex ring nucleation in the region connecting the end drops with the rest of the filament plays a central role in escaping filament breakup in the low-to-intermediate viscosity regime characterized by Ohnesorge numbers in the \(0.002<Oh<0.1\) range.[21] Our simulations show that a similar mechanism, associated with quantized vortex rings, is active in the superfluid regime at zero temperature, mostly preventing the droplet formation through end-pinching. Vortices are also nucleated at surface protrusions appearing in the course of filament oscillations, similar to those found in the merging of He droplets. As a result, filaments are permeated by vortex-antivortex ring pairs whose annihilation yields phonon/roton bursts which may leave the filament in a turbulent state.[41; 42]
A key question is why vortex rings, which have appeared in the solution of the Navier-Stokes equation in the \(0.002<Oh<0.1\) regime, cease to appear in the inviscid regime [19; 21] whereas we have found them in the superfluid regime within the He-DFT approach. It is known that the Gross-Pitaevskii and He-TDDFT equations, appropriate for superfluids, do not reduce to the zero-viscosity limit of the Navier-Stokes equation (Euler equation) for a barotropic fluid in irrotational flow. [27] In the superfluid case, an extra term appears involving the gradient of the expression
\[Q=\frac{\hbar^{2}}{2m}\frac{\nabla^{2}\rho^{1/2}}{\rho^{1/2}} \tag{13}\]
the so-called quantum pressure term. This term, which is missing in any classical approach, plays an important role when the density is highly inhomogeneous, as it happens near the core of a quantized vortex. At variance, it is an ingredient naturally included in the Schrodinger He-TDDFT Eq. (3).
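As a simple illustration, the quantum pressure term of Eq. (13) can be evaluated on a grid directly from the density; the sketch below uses a spectral Laplacian under PBC and the known prefactor \(\hbar^{2}/2m\approx 6.06\) K Å\({}^{2}\) for a \({}^{4}\)He atom.

```python
import numpy as np

HBAR2_OVER_2M = 6.06   # K * A^2 for a 4He atom

def quantum_pressure_term(rho, box_lengths):
    """Q(r) = (hbar^2/2m) * Laplacian(sqrt(rho)) / sqrt(rho), Eq. (13), evaluated with a
    spectral Laplacian under periodic boundary conditions (regularize where rho -> 0)."""
    sqrt_rho = np.sqrt(rho)
    ks = [2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
          for n, L in zip(rho.shape, box_lengths)]
    KX, KY, KZ = np.meshgrid(*ks, indexing="ij")
    lap = np.real(np.fft.ifftn(-(KX**2 + KY**2 + KZ**2) * np.fft.fftn(sqrt_rho)))
    return HBAR2_OVER_2M * lap / sqrt_rho
```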
We have thus seen that the He-DFT approach, which is a suitable method to describe pure and doped superfluid He nanodroplets, can also address superfluid \({}^{4}\)He jet breaking and the contraction of superfluid \({}^{4}\)He filaments. Yet, we have found that upon filament breaking, the resulting fragments have a tendency to merge again. Two effects combine to favor this behavior. On the one hand, fragments, which are nanoscopic, have a non-zero surface width that helps recombination due to the overlap of the density tails. On the other hand, the contraction velocity acquired by the filament in the early stages of the contraction tends to push together the two highly deformed drops even if they are temporarily apart. One should also consider the role of long-range van der Waals attractive interaction between separated fragments, which may also contribute to their merging. For the much larger sizes in the experiments, however, the vdW forces are expected to be negligible. In fact, the force between two spherical particles of diameter \(D\) made of \(q\) atoms per unit volume interacting via the two-body vdW interaction \(\lambda/r^{6}\) is [48]\(F\propto-\tilde{F}(x)/D\), where \(x=d/D\), \(d\) being the distance of closest approach between the spheres' surfaces and \(\tilde{F}(x)\sim-1/(24x^{2})\) (\(x\ll 1\)). Therefore, for the sizes encountered in experiments the vdW attraction between fragments will be much reduced if not negligible, meaning that once a filament breaks into two fragments, recombination into a single droplet due to the vdW attraction is unlikely.
## Supplementary Material
See supplementary material for the video files showing the real time evolution of the processes discussed in the present work.
###### Acknowledgements.
We thank Rico Tanyag for useful exchanges. This work has been performed under Grant No. PID2020-114626GB-I00 from the MICIN/AEI/10.13039/501100011033.
## Author Declarations
### Conflict of Interest
The authors have no conflicts to disclose.
### Author Contributions
All authors contributed equally to this work.
## Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request
|
2309.11874 | Cuprate universal electronic spin response and the pseudogap from NMR | High-temperature superconductivity, in particular in the cuprates, is central
to condensed matter physics, and telltale experimental laws for guiding theory
are desirable. Here we report on such a universal property from the linear
response of the electronic matter to a homogeneous static magnetic field. From
it, two different types of carriers are identified. The universal behavior
concerns the carriers from hybridized copper and oxygen orbitals that span the
defining element, the CuO$_2$ plane, of the superconducting cuprates. Their
spin response is similar to that of a material independent metallic density of
states which carries a temperature independent, but doping dependent pseudogap
that closes beyond optimal doping. The second electronic spin component has a
strong family and doping dependent density of states, and it involves only Cu
(isotropic orbitals, except for La$_{2-x}$Sr$_x$CuO$_4$). The condensation of both types of
carriers is interconnected and sets the critical temperature of
superconductivity ($T_\mathrm{c}$). The inter-planar component can condense at
the same or lower temperatures compared to that of the planar component, and a
certain match in density of states seems to be required for the highest
$T_\mathrm{c}$. The second component reminds one of the proposed involvement of
another Cu axial orbital that relates to the distance or presence of the apical
oxygen \cite{Ohta1991,Pavarini2001,Mahony2022} and the charge distribution in
the CuO$_2$ plane \cite{Kowalski2021,Jurkutat2023}, which correlates with the
maximum $T_\mathrm{c}$, as well. | Daniel Bandur, Jakob Nachtigal, Abigail Lee, Stefan Tsankov, Juergen Haase | 2023-09-21T08:20:32Z | http://arxiv.org/abs/2309.11874v3 | # Cuprate universal electronic spin response and the pseudogap from NMR
###### Abstract
It is shown that three independently measured NMR shifts in the cuprates in their whole temperature dependence are linearly related to each other with a doping and family dependent, but temperature independent constant. It is the Cu shift anisotropy that changes in proportion to the planar O shift for all materials found in the literature, independent of sample origin or details of the measurements. Such a relation involving three shifts rules out a single spin component description of the cuprates. It is argued that the relation is so robust since it depends for Cu on \((A_{\perp}-A_{\parallel})\), the hyperfine coefficient \(A_{\alpha}\) for the Cu \(3d(x^{2}-y^{2})\) hole, and not on the isotropic Cu term \(B\) from transferred spin. The Cu \(3d(x^{2}-y^{2})\) spin together with a second spin that determines the planar O shift can explain the data. For overdoped metallic samples, both become temperature dependent only at the critical temperature of superconductivity, \(T_{\rm c}\), where both begin to decrease. However, the Cu spin component turns increasingly negative until the second spin has disappeared. In the presence of a small pseudogap the onset temperature of this process coincides with the onset of the temperature dependence of the shifts, which is now above \(T_{\rm c}\). As the pseudogap increases further, the behavior does not change even as \(T_{\rm c}\) begins to decrease again. The temperature independent constant in the linear relationship describing the negative spin is related to the size of the uncoupled spin components and depends on the planar oxygen hole content that is known to correlate with the maximum \(T_{\rm c}\). The Cu spin component does not appear to carry significant entropy as the nuclear relaxation ceases in the condensed state.
Nuclear spin states with their typical radio frequency splittings (\(\mu\)eV) and up to seconds life times are extremely powerful observers of material properties, from local chemistry to extended electronic structures [1]. Therefore, from the beginning of cuprate high-temperature superconductivity [2] one expected fundamental insight into these materials from nuclear magnetic resonance (NMR). The focus was, in particular, on the electron spin susceptibility that can be probed by NMR. For metals it results in the Knight shift, \(K_{\alpha}=A_{\alpha}\cdot\chi_{\rm P}\), where \(\chi_{P}\) is the temperature independent Pauli susceptibility that is proportional to the density of electronic states (DOS) near the Fermi surface. A possible Knight shift anisotropy (\(\alpha\)) follows from that of the magnetic hyperfine coefficient (\(A_{\alpha}\)) for an isotropic spin response \(\chi_{\rm P}\). Thus, no matter which nucleus is observed, and of course independent of the direction of the magnetic field, one finds the same (temperature independent) shift apart from a proportionality constant. When the material becomes superconducting below a critical temperature \(T_{\rm c}\), the spin shift begins to fall and vanishes at the lowest temperatures for spin singlet pairing [5] and only the chemical shift remains. The latter is notoriously difficult to discern from the normal state Knight shift as both are temperature independent.
With that mindset, NMR data were inspected for clues about the cuprate properties, in particular in the conducting and superconducting region of the phase diagram. Indeed, after the resonances from different nuclear sites in the rather large unit cells of the cuprates had been identified, NMR reported on profound findings. Among them was spin singlet pairing, since some shifts vanished at low temperatures, cf. Fig. 1(a-c). Additionally, the spin- or pseudogap was discovered by NMR: the shifts, \({}^{89}K(T)\) and \({}^{17}K(T)\), measured for \({}^{89}\)Y and \({}^{17}\)O, near or in the CuO\({}_{2}\) plane of YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6+y}\), began to decrease already far above \(T_{\rm c}\), increasingly so as the doping levels decrease [6]. The normal state Knight shift is only metal-like at high doping levels, cf. Fig. 1(a).
Data of planar Cu, expected to show essentially the same behavior, were rather conflicting [7]. For YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6+y}\) and La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\), important early materials, \({}^{63}K_{\perp}(T)\), the shift when the field is in the CuO\({}_{2}\) plane (\(c\bot B_{0}\)) appeared similar in its temperature dependence to the shifts of planar O, but for the other field direction, \({}^{63}K_{\parallel}\) is by and large not at all temperature dependent, in particular for La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\), cf. Fig. 1(b). This missing spin shift in one direction was explained by an accidental cancellation of hyperfine coefficients. Indeed, since transferred spin density from the 4 neighboring Cu atoms appeared mandatory, hyperfine coefficient \(B\), one assumed \(A_{\parallel}+4B\approx 0\) for \(c\parallel B_{0}\). The coefficient \(A_{\alpha}\) for the \(3d(x^{2}-y^{2})\) orbital can be estimated from the atomic structure [8] and is negative, leading to a strong negative spin shift for \(c\parallel B_{0}\) (the coefficient \(B\) is harder to determine with higher precision). \(A_{\perp}\) is significantly smaller in magnitude compared to \(A_{\parallel}\).
This was the reasoning for the famous equation for the NMR spin shift at planar Cu, \({}^{63}K_{\parallel,\perp}(T)=(A_{\parallel,\perp}+4B)\cdot\chi(T)\), with \(A_{\parallel}+4B=0\) and \(A_{\perp}+4B\sim 4B\). With the planar O spin shift \({}^{17}K(T)\) similar to \({}^{63}K_{\perp}(T)\), the cuprate shifts were explained within a single band scenario.
Note that the expression above also serves another purpose, unwittingly. Near the antiferromagnetic wave vector one has an effective sign change, \(A_{\parallel}-4B\approx-8B\), so the chosen hyperfine scenario sets a convenient filter of spin fluctuations contributing to relaxation, i.e., only (the
expected) antiferromagnetic fluctuations can contribute to planar Cu relaxation. Furthermore, these fluctuations would be suppressed at planar O due to symmetry as well.
While a number of inconsistencies remained, this was the prevailing NMR model for the cuprates, which also influenced theory considerably (the chain of accidental consequences of the hyperfine scenario did raise concerns in the community).
Over the years [9; 10; 11], it became apparent that the above phenomenology does not hold for a number of materials. A breakthrough could only be achieved more recently, after collecting and inspecting all literature shift and relaxation data of the cuprates, which luckily were acquired by groups all over the world over the years. For example, from the bare data (only corrected for a common shift reference [4]) one immediately recognizes that a large number of cuprates have a sizable temperature dependent shift for \(c\parallel B_{0}\), \({}^{63}K_{\parallel}(T)\), which is clearly outside what any reasonable variation of hyperfine coefficients can explain, cf. Fig. 1(b) (or [4; 11]). In fact, the Cu shift anisotropy, \({}^{63}K_{\perp}(T)-^{63}K_{\parallel}(T)\), varies with family, doping, and even temperature, but in very restricted ways [4], cf. Fig. 1(f).
Then, from the collection of all planar O shift and relaxation data, fundamental information about the pseudogap appeared readily: all planar O data can be understood with a metallic density of states, universal to the cuprates, which carries a temperature independent but doping dependent gap at the Fermi surface [3], cf. Fig. 1(a,d). The gap closes at higher doping levels and the maximum size at low doping levels is similar to the size of the exchange coupling. Such a pseudogap is very similar to what Loram et al. [12; 13] proposed from the specific heat data. Thus, there are apparently metallic carriers at all relevant doping levels (of the metallic samples) [14]. Interestingly, the planar O shift anisotropy is rather temperature independent, very different from planar Cu, cf. Fig. 1(e,f). Finally, the planar Cu relaxation measured for \(c\bot B_{0}\), data that were hardly available in the early days, shows that this rate is largely metallic and universal near the critical temperature \(T_{\rm c}\), with \(1/T_{1\bot}T_{\rm c}\sim 21/{\rm Ks}\), and this value drops readily below \(T_{\rm c}\) if plotted as \(T/T_{\rm c}\)[15]. It is the temperature independent relaxation anisotropy that varies between materials. Given this proportionality (\(T_{1\bot}/T_{1\parallel}=const.\)) and a doping independent \(1/T_{1\bot}T\), there is no evidence for a pseudogap from Cu nuclear relaxation at all [15]. Rather, for planar Cu the pseudogap leads to a suppression of the shifts so that the Korringa relation fails, while the spin fluctuations remain unchanged [16]. With such pressing evidence from all cuprate data, elements of a different NMR scenario have been proposed recently [17; 18; 19].
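To illustrate the kind of shift suppression such a gapped metallic density of states produces (cf. Fig. 1(d)), a minimal sketch is given below: a flat DOS with a full rectangular gap \(2\Delta\) at the Fermi level, whose uniform spin susceptibility (and hence the spin shift it would give) follows from smearing by the Fermi function. The actual DOS used in Ref. [3] may differ in detail, and the gap values are illustrative.

```python
import numpy as np

def normalized_spin_shift(T, Delta):
    """K(T)/K(Delta=0) for a flat DOS with a full gap of half-width Delta (both in K):
    chi(T) ~ integral g(E) (-df/dE) dE, which for this DOS reduces to 1 - tanh(Delta / 2T)."""
    T = np.asarray(T, dtype=float)
    return 1.0 - np.tanh(Delta / (2.0 * T))

T = np.linspace(4.0, 400.0, 200)          # temperature (K)
for Delta in (0.0, 100.0, 300.0):         # illustrative gap half-widths (K)
    K = normalized_spin_shift(T, Delta)
    # Delta = 0 gives a temperature-independent (Pauli-like) shift; a larger Delta suppresses
    # the shift at low T, and the suppression persists to higher temperatures, pseudogap-like.
```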
Here we report on a remarkably robust property that is indicated in Fig. 1(i) and concerns a linear relation between three independently measured temperature dependent shifts (from groups and samples around the globe). Such a relation cannot follow from a single spin component description of the cuprates. After discussing the basic implications, we describe the findings in the framework of the Cu onsite and planar O hyperfine constants. Furthermore, we just assume a symmetric, second Cu term, but not a particular size of either constant. With two spin components we explain the data and discuss how both behave with temperature and doping. While
Figure 1: (a-c) Typical set of cuprate NMR shifts; (a) planar oxygen temperature dependence with the magnetic field parallel to the crystal \(c\)-axis; (b) and (c), planar Cu total shift (including orbital terms) temperature dependence for \(c\parallel B_{0}\) and \(c\bot B_{0}\); arrows indicate \(T_{\rm c}\). (d) Spin shift scenarios: temperature independent Knight shift of ordinary metal, and for two different sizes of a (pseudo) gap at the Fermi surface; the gap size is indicated in the inset and determines the temperature dependence due to the Fermi function [3]. (e) Two planar O shifts for different field directions (parallel and perpendicular to the \(\sigma\)-bond) plotted against each other for each temperature, both are essentially proportional to each other. (f) Plot of the two Cu shifts from (c) and (b) against each other for each temperature (3 different slopes can be observed indicated by the lines [4]). (g), (h) Plots of the two respective Cu shifts against that of planar O for \(c\parallel B_{0}\), also showing that proportionality is violated. (i) The linear dependence of the 3 shifts at all temperatures according to (1) makes all shifts from (a-c) fall on one line.
one of them disappears at the lowest temperatures, the other turns increasingly negative. The offset in this linear dependence is shown to relate to the planar O hole content and thus the maximum \(T_{\rm c}\)[20; 21; 22].
The shift data used here will be referenced in the Supplementary, but have been published previously. The planar Cu data were collected and summarized in 2017 [4] and 2019 [15], and the planar O data in 2020 and 2022 [3; 23].
## II Shifts and Pseudogap
A representative set of NMR shifts is summarized in Fig. 1(a-c) (for more shifts see [4; 18; 23]). All shifts are in general temperature dependent for superconducting samples. For strongly doped materials the shifts become temperature dependent only below \(T_{\rm c}\), while materials with a pseudogap show the typical temperature dependence already above \(T_{\rm c}\). The planar O shifts for different field directions are nearly proportional to each other, cf. Fig. 1(e), despite the pseudogap that leads to a shift suppression (the planar O shift with the field parallel to the crystal \(c\)-axis is chosen, i.e. perpendicular to the \(\sigma\) bond, since data for this orientation are much more abundant; for more details see [23]). This is not at all the case for the planar Cu shifts, cf. Fig. 1(f). While the effect of the pseudogap is obvious (high-temperature offsets in Fig. 1(a-c)), when the field is along the crystal \(c\)-direction (\({}^{63}K_{\parallel}(T)\)) there are clear differences between materials, which are not easily seen when the field is in the plane, \({}^{63}K_{\perp}\). As can be seen in Fig. 1(f-h), the two Cu shifts are neither proportional to each other nor proportional to that of planar O. Nevertheless, as we show in Fig. 1(i), all 3 shifts obey a simple linear relation at all temperatures.
In Fig. 2 we show this linear dependence in greater detail. If we assume the universal slope to hold over the entire range of planar O shifts, we have,
\[{}^{63}K_{\perp}(T)-{}^{63}K_{\parallel}(T)=1.6\cdot{}^{17}K_{\rm c}(T)+\delta, \tag{1}\]
with the temperature independent offset \(\delta\) that depends on doping and family. The behavior at the lowest temperatures (red shaded area in Fig. 2) is not well known since data are lacking (measurements become difficult due to penetration depth issues, as well as large linewidths and decreasing nuclear relaxation rates). In that sense the offset \(\delta\) is most reliably determined from the data above those temperatures.
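To make the use of (1) concrete, the sketch below fits the universal slope and the offset \(\delta\) by simple linear regression. The numbers are placeholders generated to obey (1); they are not measured values, and the chosen \(\delta\) is purely illustrative.

```python
import numpy as np

# Hypothetical planar O shifts 17K_c(T) in %, orbital part removed (placeholder values).
K_O = np.array([0.20, 0.17, 0.13, 0.10, 0.07])

# Synthetic Cu shift anisotropy 63K_perp - 63K_par, generated to obey relation (1)
# with an assumed offset delta = -0.95 % plus a little noise.
rng = np.random.default_rng(0)
aniso = 1.6 * K_O - 0.95 + rng.normal(0.0, 0.005, K_O.size)

# A linear fit of the anisotropy versus K_O recovers the slope (~1.6) and the offset delta.
slope, delta = np.polyfit(K_O, aniso, 1)
print(round(slope, 2), round(delta, 2))
```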
In order to appreciate the importance of the plot in Fig. 2 or relation (1), one has to recognize that it relates three independent shifts that are in general temperature dependent and that have been measured by groups around the world on samples of different origin. Furthermore, the temperature dependence of a shift critically depends on the size of the pseudogap, so straight lines in Fig. 2 would seem to require that the involved materials have a rather similar pseudogap. None of these factors seem to matter for the relationship (1).
The shift \({}^{63}K_{\parallel}(T)\), originally omitted in the NMR analysis, is obviously a vital ingredient in the universal behavior.
Note that we show the bare shifts for planar Cu in Fig. 2, i.e. no orbital shift is removed, to keep the discussion transparent. As remarked before [4], the low-temperature shift for \(c\bot B_{0}\) is rather similar for all cuprates, \({}^{63}K_{\perp}(T\to 0)\approx 0.35...0.40\%\), and thus likely to be close to the true orbital shift. However, \({}^{63}K_{\parallel}(T\to 0)\) varies among the materials so that the originally chosen value for \({}^{63}K_{\perp}\) for La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) is not reliable. In fact, the orbital shift anisotropy is mainly set by matrix elements that can be used for a more reliable estimate. First principle calculations [24] find a shift anisotropy of about 2.4, and \({}^{63}K_{\rm L\perp}=0.30\%\) and \({}^{63}K_{\rm L\parallel}=0.72\%\). The value for \(c\bot B_{0}\) is similar to the experiment (expected to be slightly lower), but not at all for \(c\parallel B_{0}\). If we assume the orbital shift pair from first principles, the reference on the abscissa in Fig. 2 would
Figure 2: Total planar Cu shift anisotropy (axial shift) plotted against the shift of planar O. The orbital shift for O was removed [23], but for planar Cu the full shifts are shown. The arrow denotes the expected orbital shift anisotropy \({}^{63}\Lambda=-0.42\%\)[24]. For a given doping and family, the anisotropy is proportional to the planar O shift, cf. (1). Note that the relation holds above and below \(T_{\rm c}\), full and open circles, respectively. Full lines are fits to the data, dashed lines their extension to lower temperatures. The offset will be discussed with Fig. 3.
be -0.42% (see horizontal arrow in Fig. 2). Thus, mainly the differences in \({}^{63}K_{\parallel}(T\to 0)\) are responsible for the different offsets \(\delta\) in Fig. 2. Given that the charge distribution in the CuO\({}_{2}\) plane varies between materials [20], the orbital shift could also vary, but likely not just for \(c\parallel B_{0}\) (we doubt that orbital currents could explain the scenario, but we cannot rule this out at this point [25]).
As the pseudogap closes with increasing doping, the high-temperature O shift increases in Fig. 2 for all materials [3]. However, for Cu the influence of the pseudogap is not universal. For most materials (e.g. HgBa\({}_{2}\)CuO\({}_{4+\delta}\)) the pseudogap changes the isotropic Cu shift (\(\Delta_{x}\)\({}^{63}K_{\perp}\approx\Delta_{x}\)\({}^{63}K_{\parallel}\)), not the anisotropy, so one moves horizontally in Fig. 2 as given by the planar O shift. For La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) this is not the case, as \(\Delta_{x}\)\({}^{63}K_{\parallel}\approx 0\) for these materials. Then, only \({}^{63}K_{\perp}\) changes and we move vertically by about 0.35% in Fig. 2 plus horizontally by about 0.10% for planar O (\(\Delta\delta\approx 0.15\%\)).
Another important observation is the following. Relation (1) can be viewed as a relation between the two Cu shifts, parallel and perpendicular to the field. The experimental plot of \({}^{63}K_{\perp}\) vs. \({}^{63}K_{\parallel}\) introduced in 2017 to summarize all Cu shift data [4] describes this relation. There it was found that about 35 materials show only 3 different slopes as a function of temperature or doping, which can also be seen in Fig. 1(f). The slopes are,
\[\Delta^{63}K_{\perp}/\Delta^{63}K_{\parallel}\gtrsim 10,\ \mathrm{or}\ \approx 1,\ \mathrm{or}\ \approx 5/2. \tag{2}\]
For the first slope this means \(\Delta_{T,x}\,^{63}K_{\parallel}\approx 0\) and we have with (1) as a function of temperature \(\Delta_{T}\,^{63}K_{\perp}\approx 1.6\,\Delta_{T}\,^{17}K_{c}\). As a function of doping we have, \(\Delta_{x}\,^{63}K_{\perp}\approx\Delta_{x}\left[1.6\,^{17}K_{c}+\delta\right]\). The slope 1 in temperature follows for \(\Delta_{T}\,^{17}K_{c}=0\) and in doping for \(\Delta_{x}\left[1.6\,^{17}K_{c}+\delta\right]=0\). Finally, the dominant slope of 5/2 that is only found as a function of temperature follows for \(\Delta_{T}\,^{63}K_{\parallel}\approx\Delta_{T}\,^{17}K_{c}\) (or \(\Delta_{T}\,^{63}K_{\perp}\approx 5/2\,\Delta_{T}\,^{17}K_{c}\)).
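To spell out the arithmetic behind the last case (this only restates (1) and the measured slope, with no additional input): combining \(\Delta_{T}\,^{63}K_{\perp}=\tfrac{5}{2}\,\Delta_{T}\,^{63}K_{\parallel}\) with (1) gives

\[\tfrac{3}{2}\,\Delta_{T}\,^{63}K_{\parallel}=1.6\,\Delta_{T}\,^{17}K_{\rm c}\quad\Rightarrow\quad\Delta_{T}\,^{63}K_{\parallel}\approx 1.07\,\Delta_{T}\,^{17}K_{\rm c}\approx\Delta_{T}\,^{17}K_{\rm c},\]

and correspondingly \(\Delta_{T}\,^{63}K_{\perp}\approx 2.7\,\Delta_{T}\,^{17}K_{\rm c}\approx 5/2\,\Delta_{T}\,^{17}K_{\rm c}\); the other two slopes follow from the same bookkeeping.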
In a wider sense, in every region of a given slope the changes in shift are nearly proportional to each other, but the proportionality factors can suddenly change at certain temperatures, as seen in figure 7 of [4] or Fig. 1(f). These different slopes are thus related to the planar O shift, as well. Given that our sets that contain planar O data are typical examples of the general behavior, there is reason to believe that what we discuss here is relevant for the 35 materials from [4].
Note also that La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) has the same slope in Fig. 2 (except for a low-temperature point for \(c\parallel B_{0}\) that we verified experimentally [26]). Thus, one would argue that the shift anisotropy of La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) also increases in proportion to the planar O shift. Of course, we know that \({}^{63}K_{\parallel}\) is temperature (and doping) independent for this material. Thus, \(\Delta^{63}K_{\parallel}=0\) for La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) must be due to a cancellation with the isotropic part. Note that we arrived at this conclusion without any assumptions about a special hyperfine scenario.
## III Spin component description
It is certain that since there is a hole in the Cu \(3d(x^{2}-y^{2})\) orbital, there is a related anisotropic hyperfine constant \(A_{\alpha}\)[8]. There is no doubt that \(A_{\parallel}\) is negative (from the dipolar contribution) and that \(|A_{\parallel}|\) is significantly larger than \(|A_{\perp}|\). \(A_{\alpha}\) is a sum of three terms: an isotropic core polarization, a large traceless dipolar term, and an anisotropic term from spin-orbit coupling (the least well known) [8]. (First principle calculations with estimates of the spin-orbit contribution report that \(a_{\parallel}=-3.0\) and \(a_{\perp}=0.47\) in atomic units [27].) Therefore, a strong negative spin shift was expected for the cuprates, but was never found, as mentioned above.
In addition to the on-site term \(A_{\alpha}\), one expects another isotropic contribution \(4B\) (4 nearest Cu neighbors) since there is super-exchange. The magnitude of \(4B\) is not clear, but it should be positive. Originally, from the apparently missing shift for \({}^{63}K_{\parallel}\) for La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) it was concluded that \(A_{\parallel}=-4B\). Given this uncertainty, we follow a discussion that avoids the isotropic shift term for planar Cu.
For planar O we saw that we need only one hyperfine coefficient [23]. The coefficient consists of core polarization and a dipolar contribution, and we denote the total coefficient with \(C\) (first principle calculations yield \(c_{\mathrm{c}}=1.1\) and \(c_{\sigma}=1.67\) in atomic units [27]; the anisotropy is in agreement with experimental data [23]).
One can be certain that the planar O data exclude a large negative spin at low temperatures. So the planar Cu anisotropy must have its origin in another spin component. To explain Fig. 2 by an accidental correlation between both shifts can also be rejected since both share the same pseudogap and temperature dependence. Furthermore, a negative spin at planar Cu can only be caused by another positive component coupled to it. As we argued before [4; 16], one is forced to assume two coupled spin components with the susceptibilities, \(\chi_{\mathrm{A}}=\chi_{\mathrm{AA}}+\chi_{\mathrm{AB}}\) and \(\chi_{\mathrm{B}}=\chi_{\mathrm{BB}}+\chi_{\mathrm{AB}}\), and we choose for planar O,
\[{}^{17}K_{\mathrm{c}}(T,\Delta_{\mathrm{g}})=C_{\mathrm{c}}\chi_{\mathrm{B}} \equiv C_{\mathrm{c}}[\chi_{\mathrm{BB}}+\chi_{\mathrm{AB}}]. \tag{3}\]
The Cu anisotropic shift must then follow from the other component.
\[{}^{63}K_{\perp}-{}^{63}K_{\parallel}=\left(A_{\perp}-A_{ \parallel}\right)\chi_{\mathrm{A}}+{}^{63}\Lambda\\ \equiv\left(A_{\perp}-A_{\parallel}\right)[\chi_{\mathrm{AA}}+ \chi_{\mathrm{AB}}]+{}^{63}\Lambda, \tag{4}\]
where \({}^{63}\Lambda\) is the Cu orbital shift anisotropy. Fig. 2 is then described by,
\[{}^{63}K_{\perp}-{}^{63}K_{\parallel}=\frac{\left(A_{\perp}-A_{ \parallel}\right)}{C_{\mathrm{c}}}\,{}^{17}K_{\mathrm{c}}\\ +\left(A_{\perp}-A_{\parallel}\right)(\chi_{\mathrm{AA}}-\chi_{ \mathrm{BB}})+{}^{63}\Lambda. \tag{5}\]
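For completeness, the step from (3) and (4) to (5) is only the elimination of \(\chi_{\rm AB}\): solving (3) for \(\chi_{\rm AB}={}^{17}K_{\rm c}/C_{\rm c}-\chi_{\rm BB}\) and inserting this into (4) yields

\[{}^{63}K_{\perp}-{}^{63}K_{\parallel}=\left(A_{\perp}-A_{\parallel}\right)\left[\chi_{\rm AA}-\chi_{\rm BB}+\frac{{}^{17}K_{\rm c}}{C_{\rm c}}\right]+{}^{63}\Lambda,\]

which is (5).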
In this notation we conclude with Fig. 2 that \(\chi_{\mathrm{AA}}\) and \(\chi_{\mathrm{BB}}\) are temperature independent and only \(\chi_{\mathrm{AB}}\) changes,
and the offset follows as,
\[\delta=\left(A_{\perp}-A_{\parallel}\right)(\chi_{\rm AA}-\chi_{\rm BB})+{}^{63}\Lambda. \tag{6}\]
Note also that the different slopes, seen in Fig. 1(f), are in agreement with (1) or Fig. 2, and that we have not made any assumption about the precise size of \(A_{\alpha}\), and did not invoke Cu \(B\)-term.
If we do choose a perhaps reasonable description of the Cu shift by,
\[{}^{63}K_{\alpha}(T)=A_{\alpha}(\chi_{\rm AA}+\chi_{\rm AB})\\ +4B(\chi_{\rm BB}+\chi_{\rm AB})+{}^{63}K_{\rm La}, \tag{7}\]
and investigate the various slopes of Fig. 1(f) based on Cu alone, we would get contradicting constraints for the hyperfine coefficient \(B\). One could argue that it is likely that \(B\) varies; this has to be investigated further. It may be related to stripes [28], which we believe play a role in the NMR data as well [29; 30]).
We now address the offsets in Fig. 2 with (6). Comparison of \(\delta\) with \({}^{63}K_{\parallel}(\sim 300K)\), i.e. the high temperature shift for \(c\parallel B_{0}\), reveals that most materials fall on a straight line through the origin (within error), thus \({}^{63}K_{\parallel}(\sim 300K)\approx-1.5\delta\), cf. Fig. 3(b). Of course, \({}^{63}K_{\parallel}\) also includes the isotropic shift component, cf. (7). At high temperatures with temperature independent shifts we expect \(\chi_{\rm AB}\) to be temperature independent as well, and thus indistinguishable from the other components. With (6) we can try to replace, e.g., \(\chi_{\rm BB}\) by \(\delta\). This gives \({}^{63}K_{\parallel}=-4B/(A_{\perp}-A_{\parallel})\delta\) with other terms suppressed by \((1-|A_{\parallel}|/4B)\). Then, the deviation for La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) suggests that one should replace \(\chi_{\rm AA}\) with (6). In this case if only \(\chi_{\rm AA}\) changes with doping we find \({}^{63}K_{\parallel}=-|A_{\parallel}|/(A_{\perp}-A_{\parallel})\delta\), which should give a similar slope, but that is not observed in Fig. 3(b). Rather \(\chi_{\rm AA}\) and \(\chi_{\rm BB}\) must be changing such that \(\delta\) in (6) changes, but not \({}^{63}K_{\parallel}\) (\(A_{\parallel}\sim-4B\) would serve the purpose).
If we compare \({}^{63}K_{\parallel}(\sim 300K)\) to the oxygen hole content \(2n_{\rm O}\) (as measured with NMR and determined from the oxygen quadrupole splitting [20; 22]) we obtain Fig. 3(a). It appears that samples with a relatively small \(2n_{\rm O}\) have a reduced temperature dependence for \(c\parallel B_{0}\). In addition, we find that the high-temperature shifts (that we connected to the offset \(\delta\) in the inset) increase monotonically with the oxygen hole content. This means that the maximum critical temperature \(T_{\rm c,max}\) of the cuprates is correlated with the magnetic response described above (\(T_{\rm c,max}\approx 200\,{\rm K}\cdot 2n_{\rm O}\)[21]). However, there are some exceptions (Tl2212-OP115 and Tl2212-UN112 vary in the maximum \({}^{63}K_{\parallel}\) by about 0.15 % despite being the same material) that need to be better understood.
As we remarked earlier, the shifts at low temperature are not very reliable for a number of reasons (even classical metals often have residual spin shifts at low temperatures [32]). As an example, we remeasured La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) (\(x=0.15\)) [26] and ascertained that the shift for \(c\parallel B_{0}\) does not change between 300 K and 20 K (the sensitivity for these single crystal experiments prevented acquiring signal at lower temperatures; large linewidths were encountered as well). Therefore, in Fig. 2 La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) has a different low-temperature point. We also know that HgBa\({}_{2}\)CuO\({}_{4+\delta}\) shows special behavior at low temperatures [11], which could change the slope for the overdoped material in Fig. 2. However, there are not enough planar O data available for this material. While the behavior at the lowest temperature may hold important clues about the condensate or pairing, the picture presented here is not affected.
## IV Discussion and conclusion
We have shown that basically all available planar Cu and planar O shift data of the cuprates reveal a very simple, universal property apparent in Fig. 2 or expressed by (1). For a particular family and doping level even data of different groups or sample origin may be used in such a plot. The relation holds above and below \(T_{\rm c}\).
This linear relation (1) between three independently measured, temperature dependent shifts that are in general not proportional to each other rules out a description of the cuprates in terms of a single spin component.
Eq. (1) means that the difference between the two Cu shifts measured for \(c\bot B_{0}\) and \(c\parallel B_{0}\), respectively, i.e. the temperature dependent shift anisotropy, is proportional to the planar O shift, with the same proportionality
Figure 3: (a) Plot of \({}^{63}K_{\parallel}(T)\) vs. the temperature independent oxygen hole content \(2n_{\rm O}\) of the material (as determined from the oxygen quadrupole splitting [20]). The latter is known to be correlated with \(T_{\rm c,max}\)[21; 31]. (b) \({}^{63}K_{\parallel}(T)\) as a function of the offset \(\delta\) determined from Fig. 2, cf. (1). The dotted line is the fit to the highest temperature shift values (near 300 K). It yields \({}^{63}K_{\parallel}\approx-1.50\cdot\delta\) if La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) is excluded.
constant for all materials. Only an additional doping and family dependent (but temperature independent) offset appears between different doping levels or families. It is a low-temperature offset since the planar O shift is monotonically decreasing with temperature. However, the 3 shifts have not been followed to very low temperatures due to experimental constraints. This means we do not know the behavior of the offset at those temperatures. Since the offset depends on doping, and it explains the missing Cu orbital shift in the cuprates for \(c\parallel B_{0}\), one would assume it is caused by electronic spin. Therefore, it does not seem likely that it is caused by special orbital moments (Varma currents) [25], although they would lead to such a positive shift change for \(c\parallel B_{0}\).
For the discussion of the spin polarization we used only the more reliable Cu hyperfine constant \(A_{\alpha}\) from the hole in the \(3d(x^{2}-y^{2})\) shell and \(C_{\rm c}\) from planar O, so we only rely on the symmetry of the transferred hyperfine term (\(B\)), not its size. With it we find that the shift offsets in Fig. 2 are due to negative (with respect to the magnetic field) spin density at planar Cu. This also solves the orbital shift conundrum of the cuprates, i.e. why the previously assumed orbital shift for \(c\parallel B_{0}\) is in stark disagreement with simple estimates as well as first principle calculations [24].
Furthermore, since we know from (1) that a single component description cannot work, we used two coupled spin polarizations, one from the Cu \(3d(x^{2}-y^{2})\) spin and another from metallic carriers (that affect both nuclei). Such a scenario can explain the observations. At very high doping levels, in the absence of the pseudogap, planar O is affected by metallic carriers so that shift and relaxation are even related by the Korringa relation [3]; also the shifts at planar Cu are rather temperature independent (seen as clustering of high-temperature data in Fig. 2). Upon lowering the temperature, as one reaches \(T_{\rm c}\), both spin polarizations decrease. That from the \(3d(x^{2}-y^{2})\) orbital drops in proportion to that from the metallic carriers. When the latter disappears near \(T=0\) the former reaches its peak negative value (low-temperature offset). That means, even deep in the condensed state negative spin remains, however, since Cu and O relaxation nearly vanish as well [15], this negative spin component is certainly not from metallic carriers and carries no entropy, but rather it must be part of the condensate.
If we now turn to a material with a small pseudogap, e.g. by lowering the doping compared to the just discussed case, the same process happens as we see with Fig. 2. It begins at lowered shift values (due to the pseudogap) but at a higher temperature, i.e. the onset temperature where the shifts become temperature dependent. This onset temperature exceeds \(T_{\rm c}\). Apparently, the described behavior continues to occur as the pseudogap increases even further. Now, \(T_{\rm c}\) decreases (on the low doping side of the dome) but the onset temperature, and perhaps the pairing [33] increase with the size of the pseudogap.
Differences between the materials, from a shift point of view, are in the maximum high-temperature and minimum low-temperature spin shifts predominantly at Cu from the \(3d(x^{2}-y^{2})\) spin. For planar O the shifts vary between zero and a maximum of about 0.22% for most materials. It is thus not surprising to find the relations in Fig. 3 since there is also a relation to the total planar O hole density as measured by NMR [20], which was shown to set the maximum \(T_{\rm c}\) of a material [21; 22; 31].
Finally, there is the experimental fact that the planar Cu relaxation, in contrast to that of planar O, is not affected by the pseudogap. In a coupled scenario one would argue that it is the local Cu spin with its coupling to the universal metallic carriers that is responsible for this behavior [17], while the planar O relaxation is affected by the coupling to two neighboring Cu spins and therefore shows pseudogap behavior.
## Acknowledgements
We acknowledge help from Crina Berbecariu (Leipzig) with finalization of the manuscript, and financial support from Leipzig University.
### Author contributions
D.B., J.N., J.H. contributed nearly equally to data analyses, and D.B. and J.H. also in the preparation of the manuscript. J.H. had the main leadership; S.T. contributed new measurements on La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\), and with A.L. helped with readying the manuscript.
|
2309.05801 | No low index critical points for the systole function and sys_T
functions in large M_{g,n} | We show for each $k$, any critical point for the $C^2$-Morse function $\syst$
or the systole function that is topologically Morse on $\mathcal M_{g,n}$ has
index greater than $k$ when $g$ or $n$ is sufficiently large. In other words,
there are no critical points of index $\le k$ in those moduli spaces, and all
critical points for $\syst$ of index $\le k$ live in the Deligne-Mumford
boundary. In the Morse handle decomposition given by $\syst$, all $k'$-handles
live in the boundary of such $\overline{\mathcal M}_{g,n}$ for $k'\le k$. | Changjie Chen | 2023-09-11T20:12:34Z | http://arxiv.org/abs/2309.05801v3 | No low index critical points for \(\mathrm{sys}\) and \(\mathrm{sys}_{\mathrm{T}}\) in large \(\mathcal{M}_{g,n}\)
###### Abstract.
We show for each \(k\), any critical point for the \(C^{2}\)-Morse function \(\mathrm{sys}_{\mathrm{T}}\) or the systole function that is topologically Morse on \(\mathcal{M}_{g,n}\) has index greater than \(k\) when \(g\) or \(n\) is sufficiently large. In other words, there are no critical points of index \(\leq k\) in those moduli spaces, and all critical points for \(\mathrm{sys}_{\mathrm{T}}\) of index \(\leq k\) live in the Deligne-Mumford boundary. In the Morse handle decomposition given by \(\mathrm{sys}_{\mathrm{T}}\), all \(k^{\prime}\)-handles live in the boundary of such \(\overline{\mathcal{M}}_{g,n}\) for \(k^{\prime}\leq k\).
## 1. Introduction
In [1], the author introduces a series of \(C^{2}\)-Morse functions, that is closely related to the systole function. The _systole_ function takes the value on a hyperbolic surface \(X\) of the length of shortest geodesics, namely
\[\mathrm{sys}(X)=\min_{\gamma\;\mathrm{s.c.g.\ on}\;X}l_{\gamma}(X),\]
and
\[\mathrm{sys}_{\mathrm{T}}(X):=-T\log\sum_{\gamma\;\mathrm{s.c.g.\ on}\;X}e^{- \frac{1}{T}l_{\gamma}(X)},\]
where s.c.g. stands for simple closed geodesic. We review the key properties of \(\mathrm{sys}_{\mathrm{T}}\) as the main result of the previous paper in the following section. For Morse theory, one can see [12].
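As a quick numerical illustration (not part of the original argument), \(\operatorname{sys_{T}}\) acts as a smooth "soft minimum" of the geodesic length spectrum: it is always bounded above by \(\operatorname{sys}\), since the sum in its definition exceeds its largest term, and it approaches \(\operatorname{sys}\) as \(T\to 0\). The sketch below simply evaluates the defining formula on a toy list of lengths; the lengths are made-up placeholders.

```python
import math

def sys_T(lengths, T):
    # sys_T = -T * log( sum_gamma exp(-l_gamma / T) ), evaluated on a finite
    # list of geodesic lengths; a smooth lower bound for min(lengths).
    return -T * math.log(sum(math.exp(-l / T) for l in lengths))

lengths = [2.0, 2.0, 2.3, 3.1]             # toy length spectrum (illustrative only)
for T in (1.0, 0.3, 0.1, 0.01):
    print(T, round(sys_T(lengths, T), 4))  # tends to sys = min(lengths) = 2.0 as T -> 0
```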
In \(\mathcal{M}_{g,n}\), the critical points of \(\mathrm{sys}_{\mathrm{T}}\) are in natural bijection with those for \(\mathrm{sys}\) by the critical point attracting property, which enables us to study one through the other. We prove a result on low index critical points in large \(\mathcal{M}_{g,n}\). Let \(\mathrm{Crit}(f,\leq k)\) be the set of critical points of \(f\) of index \(\leq k\), then
**Main Theorem** (\(=\)**Theorem 5.6)**.: _For any \(k\), there exists \(g_{0}=g_{0}(k)\) and \(n_{0}=n_{0}(k)\), such that_
\[\operatorname{Crit}(\operatorname{sys_{T}},\leq k)\cap\mathcal{M}_{g,n}=\emptyset\]
_and_
\[\operatorname{Crit}(\operatorname{sys},\leq k)\cap\mathcal{M}_{g,n}=\emptyset\]
_for \(g\geq g_{0}\) or \(n\geq n_{0}\). As a result,_
\[\operatorname{Crit}(\operatorname{sys_{T}},\leq k)\subset\partial\mathcal{M}_ {g,n}.\]
Another way to state this result is, all critical points in \(\mathcal{M}_{g,n}\) have index greater than \(k\).
One can construct a handle decomposition of the compactified moduli space \(\overline{\mathcal{M}}_{g,n}\) based on the \(C^{2}\)-Morse function \(\operatorname{sys_{T}}\). The main theorem implies that all \(k\)-handles live in the Deligne-Mumford boundary for \(k\) small compared to \(g\) or \(n\).
We prove the main theorem by establishing a more general statement on the rank of gradient vectors of geodesic length functions. The main theorem then follows as a corollary, via Akrout's rank theorem for \(\operatorname{sys}\) and the author's rank theorem comparing \(\operatorname{sys_{T}}\) to \(\operatorname{sys}\).
Let a \(j\)-_curve set_ on a hyperbolic surface be a set of simple closed geodesics that pairwise intersect each other at most \(j\) times, then
**Theorem 1.1** (\(=\)**Theorem 4.7)**.: _For any \(j\), there exists a series \((r_{i})\) such that for any \(j\)-curve set \(S\) of \(r_{i}\) curves on any hyperbolic surface \(X\) with \(g(X)\) or \(n(X)\) large depending on \(i\), we have_
\[\operatorname{rank}\{\nabla\gamma\}_{\gamma\in S}\geq i.\]
The proof is mainly about topological constructions and estimates. Besides the notion of \(j\)-curve set, we will introduce and study _subsurface hull_, _filling sets_, \(j\)-_capacity_, _essentialness of subsurface_.
This paper is organized as follows. Section 2 reviews the Morse properties of the systole function and the \(\operatorname{sys_{T}}\) functions, and results on the index of a critical point. In Section 3 we study some basic concepts that will be used in later proofs, and in Section 4 we show a rank result on curves filling non-essential subsurfaces, which leads to the main theorem after a study of shortest geodesics in Section 5. As an application, we classify all index 0, 1, 2 critical points in the last section.
## 2. Morse Properties of \(\operatorname{sys}\) and \(\operatorname{sys_{T}}\)
Here we review Akrout's eutacticity conditions and his theorem on the systole function, and the author's theorem on \(\operatorname{sys_{T}}\) functions. Definition of the two functions can be found at the very beginning of this paper.
**Definition 2.1** (Eutacticity).: A point \(X\in\mathcal{T}\) is called _eutactic_ (_semieutactic_) if the origin is contained in the interior (boundary) of the convex hull of \(\{\nabla l_{\gamma}\}_{\gamma\in S(X)}\), the set of gradient vectors of the geodesic length functions associated to the shortest geodesics, in the tangent space \(T_{X}\mathcal{T}\).
**Definition 2.2** (Topological Morse function).: Let \(f:M^{n}\to\mathbb{R}\) be a continuous function. A point \(x\in M\) is called _\((C^{0}\)-)ordinary_ if \(f\) is a coordinate function under some \(C^{0}\)-chart near \(x\), otherwise it is called _\((C^{0}\)-)critical_. A critical point \(x\) is _nondegenerate_ if there is a local \(C^{0}\)-chart \((x^{i})\) such that \(f-f(x)=(x^{1})^{2}+\cdots+(x^{r})^{2}-(x^{r+1})^{2}-\cdots-(x^{n})^{2}\). In this case the _index_\(\operatorname{ind}_{f}(x)\) of \(f\) at \(x\) is defined to be \(n-r\). A continuous function is called _topologically Morse_ if all critical points are nondegenerate. For more, see [10].
**Theorem 2.3** ([1]).: _The systole function is topologically Morse on \(\mathcal{M}_{g,n}\). \(X\) is a critical point if and only if \(X\) is eutactic, and in that case the index is equal to \(\operatorname{rank}\{\nabla l_{\gamma}\}_{\gamma\in S(X)}\)._
In [1], we prove
**Theorem 2.4**.: _As \(T\) decreases to 0, \(\operatorname{sys_{T}}\) increases and converges to \(\operatorname{sys}\). Moreover, for all sufficiently small \(T\), \(\operatorname{sys_{T}}\) has the following properties:_
_(1) Every \(\operatorname{sys_{T}}\) is a \(C^{2}\)-Morse function on the Deligne-Mumford compactification \(\overline{\mathcal{M}}_{g,n}\) (with altered differential structure). (2) \(\operatorname{Crit}(\operatorname{sys_{T}}:\mathcal{M}_{g,n}\to\mathbb{R})\) with \(\operatorname{ind}(\operatorname{sys_{T}})\) respects the stratification: More precisely, let \(\mathcal{S}\subset\overline{\mathcal{M}}_{g,n}\) be a stratum that is isomorphic to \(\mathcal{M}_{g^{\prime},n^{\prime}}\), then under the isomorphism,_
\[\operatorname{Crit}(\operatorname{sys_{T}}:\mathcal{S}\to\mathbb{R})\text{ and }\operatorname{Crit}(\operatorname{sys_{T}}:\mathcal{M}_{g^{\prime},n^{\prime}}\to\mathbb{R})\]
_are the same, counted with index. (3) There is a natural stratum-wise correspondence:_
\[\operatorname{Crit}(\operatorname{sys_{T}})\leftrightarrow\operatorname{Crit}(\operatorname{sys}).\]
_More precisely, let \(\mathcal{S}\subset\overline{\mathcal{M}}_{g,n}\) be a stratum that is isomorphic to \(\mathcal{M}_{g^{\prime},n^{\prime}}\), then there is a bijection_
\[\operatorname{Crit}(\operatorname{sys_{T}}|_{\mathcal{S}})\leftrightarrow\operatorname{Crit}(\operatorname{sys}|_{\mathcal{M}_{g^{\prime},n^{\prime}}})\] \[p_{T}\leftrightarrow p\]
_with the properties_
\[d_{\text{WP}}(p,p_{T})<CT,\]
_which implies \(p_{T}\to p\) and consequently \(\operatorname{Crit}(\operatorname{sys_{T}}|_{\mathcal{S}})\to\operatorname{Crit}(\operatorname{sys}|_{\mathcal{M}_{g^{\prime},n^{\prime}}})\), and_
\[\operatorname{ind}_{\operatorname{sys_{T}}}(p_{T})=\operatorname{ind}_{\operatorname{sys}}(p).\]
_(4) The Weil-Petersson gradient flow of \(\operatorname{sys_{T}}\) on \(\overline{\mathcal{M}}_{g,n}\) is well defined._
_Remark 2.5_.: The rank statement in both Akrout's and the author's theorem is what we will use to calculate the index at a critical point.
As the \(\operatorname{sys_{T}}\) functions are Morse on the compactified moduli space, and not only on \(\mathcal{M}_{g,n}\) itself, by (2) and (3) in the theorem we give a description of critical points in the Deligne-Mumford boundary \(\partial\mathcal{M}_{g,n}\). For a stratum \(\mathcal{S}\subset\partial\mathcal{M}_{g,n}\), write \(\mathcal{S}=\oplus\mathcal{S}_{i}\) as a decomposition by connected components of the base surface away from the nodes, with each \(\mathcal{S}_{i}\) isomorphic to some moduli space \(\mathcal{M}_{i}\). Any critical point \(X\in\mathcal{S}\) is a nodal surface that has the decomposition \(X=\cup X_{i}\) plus the nodes, such that each \(X_{i}\) is a critical point in \(\mathcal{M}_{i}\). This way we can decompose a critical
point in the boundary as the union of smaller surfaces that are critical in their respective moduli spaces. We can also construct a critical point by connecting critical points in smaller \(\mathcal{M}_{g,n}\)'s by nodes. Because of that, the study of critical points on \(\overline{\mathcal{M}}_{g,n}\) comes down to the study on each smaller \(\mathcal{M}_{g,n}\).
## 3. Subsurface Hull, Filling Set and \(j\)-capacity
**Convention**.: _A lot of notions on hyperbolic surfaces to be defined below are invariant under diffeomorphisms or hyperbolic isometries. If \(P\) is such a notion and \(X\) is a \([g,n]\)-surface, we make the convention \(P(g,n)=P(X)\) by abuse of notation._
**Definition 3.1**.: A \((g,n)\)-surface is a complete hyperbolic surface of genus \(g\) with \(n\) punctures. A \((g,n,b)\)-surface is a hyperbolic surface of genus \(g\) with \(n\) punctures and \(b\) geodesic boundary components. A \([g,n]\)-surface is a hyperbolic surface of genus \(g\) with the number of punctures and geodesic boundary components totalling \(n\). For convenience, we use \([0,2]\)-surface to refer to a circle or an annulus.
**Definition 3.2**.: A _subsurface_ of a hyperbolic surface \(X\) is some \([g,n]\)-surface whose interior is isometrically embedded in \(X\). The _subsurface hull_\(\operatorname{SSH}(S)\) of a set of simple closed geodesics \(S=\{\gamma_{1},\cdots,\gamma_{r}\}\) on \(X\) is the minimal subsurface that contains \(S\).
_Remark 3.3_.: The definition is well posed in terms of the uniqueness of such a minimal subsurface. If two subsurfaces \(X_{1}\) and \(X_{2}\) intersect, there is a unique subsurface \(X_{0}\) that 'supports' \(X_{1}\cap X_{2}\), obtained by pulling straight the piecewise geodesic boundaries. Note that if a simple closed geodesic \(\gamma\subset X_{1}\cap X_{2}\), then \(\gamma\subset X_{0}\).
**Definition 3.4**.: (1) A _\(j\)-curve set_ on a hyperbolic surface is a set of simple closed geodesics that pairwise intersect at most \(j\) times.
(2) A set of simple closed geodesics _fills_ a surface if every complementary region is a polygon or once-punctured polygon. In the case of the base surface being one with geodesic boundary, a complementary region is also allowed to be a once-holed polygon where the hole is a
boundary component of the surface.
(3) A set of simple closed geodesics is _minimal filling_ if no proper subset is filling.
_Remark 3.5_.: A set \(S\) of curves is minimal filling if and only if for any \(\gamma\in S\), \(S\setminus\{\gamma\}\) is not filling.
**Lemma 3.6**.: _A filling set of simple closed geodesics on a (connected) hyperbolic surface is connected as a graph._
Proof.: Note that the boundary of any complementary region is a path in the graph of the simple closed geodesics. If the graph is not connected, then the surface that can be reassembled with the complementary regions along the graph is not connected.
**Definition 3.7**.: (1) For a subsurface \(Y\) of a hyperbolic surface, let \(\#^{p}(Y)\) be the number of pants in a pants decomposition of \(Y\).
(2) Let \(M(g,n)\) be the maximum cardinality of a minimal filling set, and \(m^{j}(g,n)\) the minimum cardinality of a filling \(j\)-curve set, on a \([g,n]\) surface, when \([g,n]\neq[0,3]\).
_Remark 3.8_.: \(\#^{p}(Y)=-e(Y)\), where \(e\) is the Euler characteristic.
**Lemma 3.9**.: _We have the following estimate on the size of the subsurface hull:_
\[\#^{p}(\mathrm{SSH}(\{\gamma_{1},\cdots,\gamma_{r}\}))\leq j\binom{r}{2}.\]
_for a \(j\)-curve set \(\{\gamma_{1},\cdots,\gamma_{r}\}\)._
Proof.: We calculate its Euler characteristic:
\[\#^{p}=-e=-V+E-F=V-F\leq V\leq j\binom{r}{2},\] where we use that each vertex of the graph formed by the curves is a transverse intersection of two geodesics and hence 4-valent, so \(E=2V\), and that the number of intersection points of a \(j\)-curve set of \(r\) curves satisfies \(V\leq j\binom{r}{2}\).
**Lemma 3.10**.: _We have the following estimate:_
\[m^{j}(g,n)>\sqrt{\frac{4g-4+2n}{j}}.\]
Proof.: Let \(S=\{\gamma_{1},\cdots,\gamma_{r}\}\) be a \(j\)-curve set that is filling a \([g,n]\)-surface \(X\), then \(\operatorname{SSH}(S)=X\). By the remark and lemma above we have
\[2g-2+n\leq j\binom{r}{2},\]
which implies
\[r>\sqrt{\frac{4g-4+2n}{j}}.\]
_Remark 3.11_.: For better estimates, one can see [1] and [15].
**Lemma 3.12**.: _Suppose \([g,n](X)\neq[0,3]\), then there exists a proper \([g^{\prime},n^{\prime}]\)-subsurface of \(X\), unless \([g,n](X)=[0,2]\), such that_
\[M(g,n)\leq 1+M(g^{\prime},n^{\prime}).\]
Proof.: Let \(S=\{\gamma_{1},\cdots,\gamma_{r}\}\) be a minimal filling set such that \(r=M(g,n)\), and set \(Y_{1}=\operatorname{SSH}(S\setminus\{\gamma_{r}\})\), then \(Y_{1}\subsetneqq X\) by minimality. To show the minimality of \(S\setminus\{\gamma_{r}\}\), we take out a curve, say \(\gamma_{r-1}\), and set \(Y_{2}=\operatorname{SSH}(S\setminus\{\gamma_{r-1},\gamma_{r}\})\). Note that \(Y_{2}\subsetneqq Y_{1}\), otherwise we have
\[\operatorname{SSH}(S\setminus\{\gamma_{r-1}\})=\operatorname{SSH} (S\setminus\{\gamma_{r-1},\gamma_{r}\},\gamma_{r})\] \[= \operatorname{SSH}(Y_{2},\gamma_{r})=\operatorname{SSH}(Y_{1}, \gamma_{r})=X,\]
which is contradictory to minimality of \(S\). Let \([g^{\prime},n^{\prime}]\) be the type of \(Y_{1}\), then minimality of \(S\setminus\{\gamma_{r}\}\) implies that
\[M(g,n)=\#(S)=1+\#(S\setminus\{\gamma_{r}\})\leq 1+M(g^{\prime},n^{\prime}).\]
_Remark 3.13_.: This process will not yield any \([0,3]\)-subsurfaces.
**Theorem 3.14**.: _We have the following estimate_
\[M(0,2)=1\]
_and_
\[M(g,n)\leq 3g+n.\]
Proof.: For a \([g,n]\)-surface \(X\), there are two types of maximal proper subsurfaces: \([g-1,n+2]\) and \([g,n-1]\), as long as the numbers are nonnegative. They are obtained by cutting \(X\) along a non-separating or separating curve. Note that any proper subsurface can be obtained through a chain of maximal proper subsurfaces. Let \(f(g,n)=3g+n\), then \(f(Y)<f(X)\) for any proper subsurface \(Y\subset X\). We use Lemma 3.12 to get a sequence of subsurfaces \(Y_{k}\subsetneqq Y_{k-1}\subsetneqq\dots\subsetneqq Y_{1}\subsetneqq X\), where \(Y_{k}\) is a \([0,2]\)-subsurface. Therefore,
\[3g+n=f(X)\geq f(Y_{1})+1\geq\dots\geq f(Y_{k})+k=2+k\]
and
\[M(X)\leq 1+M(Y_{1})\leq\dots\leq k+M(0,2)=k+1\leq 3g+n-1.\]
Note that \([0,3]\) is skipped in the descending process so a modification yields the final estimate
\[M(g,n)\leq 3g+n.\]
**Definition 3.15**.: The _j-capacity_\(\operatorname{Cap}^{\mathrm{j}}(Y)\) of a subsurface \(Y\) is the maximum cardinality of a \(j\)-curve set on \(Y\).
**Theorem 3.16**.: _We have the following estimate on \(j\)-capacity:_
\[\operatorname{Cap}^{\mathrm{j}}(g,n)\leq M(g,n)+(2jM(g,n)(M(g,n)-1))^{jM(g,n)}.\]
Proof.: Note that given a filling \(j\)-curve set, any filling subset of smallest cardinality is a minimal filling subset. Let \(S\) be a \(j\)-curve set on a \([g,n]\)-surface \(X\), then there exists a minimal filling subset \(S_{0}\subset S\), and any \(\gamma\in S\setminus S_{0}\) can be obtained in the following way:
List all the curves in \(S_{0}\) that intersect \(\gamma\) in an order of intersection:
\[\delta_{1},\delta_{2},\cdots,\delta_{l},\]
where \(\delta_{i}\)'s are not necessarily distinct but each appears at most \(j\) times. Consequently, \(l\leq jM\). Let \(\gamma\setminus\cup S_{0}=\cup\gamma_{i}\), where \(\gamma_{i}\) is a segment of
\(\gamma\) that connects \(\delta_{i}\) and \(\delta_{i+1}\). The segment \(\gamma_{i}\) lives in a convex polygon or once punctured convex polygon that has segments of \(\delta_{i}\) and \(\delta_{i+1}\) cut by \(S_{0}\) as two sides. Note that there are at most \(M\cdot j(M-1)\) segments of the graph \(S_{0}\). Given the initial and terminal point of \(\gamma_{i}\), there are at most two topological possibilities for \(\gamma_{i}\) as \(\gamma\) is simple, therefore we get an upper bound of all topological possibilities of such \(\gamma\): \((jM(M-1))^{l}\cdot 2^{l}\), and thus
\[\operatorname{Cap}^{\mathrm{j}}(g,n)\leq M+(2jM(M-1))^{jM}.\]
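The explicit bounds above can be evaluated directly. The following sketch simply codes the inequalities of Lemma 3.9, Lemma 3.10, Theorem 3.14 and Theorem 3.16 (with \(M(g,n)\) replaced by its upper bound \(3g+n\) in the capacity estimate); the example values at the end are only meant to show how quickly the capacity bound grows.

```python
from math import comb, sqrt

def ssh_pants_upper(r, j):
    # Lemma 3.9: a j-curve set of r curves has #p(SSH) <= j * C(r, 2).
    return j * comb(r, 2)

def filling_lower(g, n, j):
    # Lemma 3.10: a filling j-curve set on a [g, n]-surface has more than
    # sqrt((4g - 4 + 2n) / j) curves.
    return sqrt((4 * g - 4 + 2 * n) / j)

def minimal_filling_upper(g, n):
    # Theorem 3.14: M(g, n) <= 3g + n.
    return 3 * g + n

def capacity_upper(g, n, j):
    # Theorem 3.16: Cap^j(g, n) <= M + (2jM(M - 1))^(jM), with M <= 3g + n.
    M = minimal_filling_upper(g, n)
    return M + (2 * j * M * (M - 1)) ** (j * M)

# Closed genus-2 surface, 2-curve sets:
print(filling_lower(2, 0, 2))       # > 1.41..., so at least 2 curves are needed to fill
print(minimal_filling_upper(2, 0))  # <= 6
print(capacity_upper(2, 0, 2))      # <= 6 + 120**12, finite but already enormous
```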
## 4. Non-essential subsurfaces
On a hyperbolic surface \(X\), passing from a subsurface \(Y_{1}\) to another subsurface \(Y_{2}\) in which \(Y_{1}\) is properly contained obviously increases the dimension of the associated tangent subspace of the Teichmuller space. This can be observed by taking enough geodesics and computing the rank of the gradient vectors of the associated geodesic length functions. If we take a curve set \(S_{i}\) on \(Y_{i}\) (a special case is when \(\operatorname{SSH}(S_{i})=Y_{i}\)), with \(S_{1}\subset S_{2}\), we hope to find a way to determine when the rank gets strictly larger, i.e., when we have
\[\operatorname{rank}\{\nabla l_{\gamma}\}_{\gamma\in S_{1}}<\operatorname{ rank}\{\nabla l_{\gamma}\}_{\gamma\in S_{2}}.\]
**Definition 4.1**.: (1) A subsurface is called _essential_ if no complementary region contains a [1,1] or [0,4]-subsurface, otherwise it is _non-essential_. See Figure 1 for an example.
(2) For a subsurface \(Y\subset X\), the _essential closure_\(\overline{Y}\) of \(Y\) is the largest subsurface of \(X\) that \(Y\) is essential in with \(\partial\overline{Y}\subset\partial Y\). We also write \(\overline{\operatorname{SSH}}(\cdot)=\overline{\operatorname{SSH}(\cdot)}\).
**Lemma 4.2**.: _Let \(Y\) be a subsurface, then \(\#^{p}(\overline{Y})<2\#^{p}(Y)\)+2._
Proof.: Note that to get \(\overline{Y}\), one attaches \([0,3]\)-complements to \(Y\) along its boundary components, and every attaching operation decreases the number of boundary components by 1 or 2 and increases \(\#^{p}\) by 1.
Therefore, there can be at most \(n(Y)\) attaching operations, and thus
\[\#^{p}(\overline{Y})\leq n(Y)+\#^{p}(Y)=2g(Y)+2n(Y)-2\leq 2\#^{p}(Y)+2.\]
We show there is a 'leap' of the rank of the gradient vectors of enough geodesic length functions when expanding a subsurface non-essentially.
**Lemma 4.3**.: _Let \(S_{1}\subset S_{2}\) be two sets of curves on \(X\) and \(Y_{i}=\operatorname{SSH}(S_{i})\), \(i=1,2\). Suppose (1) \(Y_{1}\subsetneqq Y_{2}\), (2) \(Y_{1}\) is not essential in \(Y_{2}\). Then_
\[\operatorname{rank}\{\nabla l_{\gamma}\}_{\gamma\in S_{1}}<\operatorname{ rank}\{\nabla l_{\gamma}\}_{\gamma\in S_{2}}.\]
If there are two curves \(\alpha\in S_{2}\setminus S_{1}\) and \(\delta\subset Y_{2}\setminus Y_{1}\) such that they intersect each other exactly once and non-orthogonally, then by Kerckhoff's geodesic length-twist formula that can be found in [10],
\[\langle\nabla l_{\alpha},\tau_{\delta}\rangle=\cos\theta(\alpha,\delta)\neq 0.\]
In plain words, \(\nabla l_{\alpha}\) will create an extra dimension on top of the space spanned by \(S_{1}\). However, that is not always the case for randomly picked \(\alpha\) and \(\delta\). To create such a pair with that nonzero Weil-Petersson pairing, we pick an auxiliary curve \(\lambda\) and do Dehn twists on \(\delta\) along \(\lambda\) until we find an eligible curve.
For this purpose, we have the following lemma:
Figure 1. The right subsurface is non-essential
**Lemma 4.4**.: _Suppose \(\alpha\), \(\delta\) and \(\lambda\) are three simple closed geodesics on a hyperbolic surface, as shown in Figure 1, satisfying: (1) \(\delta\) and \(\alpha\) intersect, (2) \(\delta\) and \(\lambda\) intersect. Let \(\alpha^{\prime}\) be the geodesic arc obtained from \(\alpha\) by twisting the base surface along \(\lambda\) by \(t\), and \(\theta\) be the angle of \(\delta\) and \(\alpha^{\prime}\) at a given intersection, then \(\theta\) is monotone along the earthquake path \(\mathcal{E}_{\lambda}(t)\)._
Note that \(\alpha^{\prime}=\alpha\) if \(\alpha\) and \(\lambda\) are disjoint. This can be seen as a corollary to Lemma 3.6 in [10] where Kerckhoff proved the Nielsen realization theorem. We restate that lemma as follows in our notation. Figure 2 below is modified from Kerckhoff's original picture, in which \(\tilde{\lambda}\) is the preimage of \(\lambda\), \(\tilde{\delta}_{i}\)'s are segments of a lift of \(\delta\) cut by \(\tilde{\lambda}\) and \(\tilde{\alpha}^{\prime}\) is a lift of \(\alpha\). The earthquake is realized on the picture by shearing the components complementary to \(\tilde{\lambda}\) along \(\tilde{\lambda}\) where we fix the component containing \(\tilde{\delta}_{0}\). Let \(\bar{\delta}(t)\) be the corresponding lift of \(\mathcal{E}_{\lambda}(t)(\delta)\), i.e., the geodesic with endpoints being \(\lim_{n\to\pm\infty}\tilde{\delta}_{n}\). \(\theta\) is an intersection angle of \(\bar{\delta}\) and \(\tilde{\alpha}^{\prime}\).
**Lemma 4.5** ([10]).: _The endpoints of \(\bar{\delta}(t)\) move strictly to the left when \(t\) increases._
Figure 2. New endpoints are to the left of the old ones
Proof of Lemma 4.3.: Let \(Z\) be a connected component of \(Y_{2}\setminus Y_{1}\) that contains a \([1,1]\) or \([0,4]\)-subsurface, then for any geodesic on \(Z\), there exists a geodesic intersecting it. As \(Y_{2}\supsetneqq Y_{1}\), pick \(\alpha\in S_{2}\) crossing \(Z\), then there exists \(\delta\) on \(Z\) intersecting \(\alpha\). Pick \(\lambda\) on \(Z\) intersecting \(\delta\). The conditions in Lemma 4.4 are satisfied. Let \(\theta_{i}\) be the intersection angles measured from \(\alpha\) to \(\delta\), then \(\theta_{i}\)'s have the same monotonicity along the earthquake path \(\mathcal{E}_{\lambda}(t)\), and therefore \(\sum\cos\theta_{i}\) is monotone. There are only finitely many \(t\)'s to make \(\sum\cos\theta_{i}=0\), so there exists an integer \(t=n\) such that \(\langle\nabla l_{\alpha},\tau_{\delta_{n}}\rangle=\sum\cos\theta_{i}\neq 0\) on \(X\), where \(\delta_{n}=\mathcal{E}_{\lambda}(n)(\delta)\). On the other hand, \(\langle\nabla l_{\gamma},\tau_{\delta_{n}}\rangle=0\) for any \(\gamma\in S_{1}\). The lemma follows.
The proof implies the following:
**Lemma 4.6**.: _Let \(Y\subset X\) be a subsurface, and \(\gamma\) a simple closed geodesic. Suppose \(\gamma\not\subset\overline{Y}\), then \(\nabla l_{\gamma}\not\in T_{X}^{Y}\mathcal{T}\), the tangent subspace given by \(Y\) of the Teichmuller space of \(X\)._
Given a hyperbolic surface \(X\), we take the set of shortest geodesics on it, denoted by \(S(X)\). To prove the main theorem, we shall show this more general statement below on \(j\)-curve sets, as we will see that \(S(X)\) is a 2-curve set in the following section.
**Theorem 4.7**.: _For any \(j\), there exists a series \((r_{i})\) such that for any \(j\)-curve set \(S\) of \(r_{i}\) curves on any hyperbolic surface \(X\) with \(g(X)\) or \(n(X)\) large depending on \(i\) (and \(j\)), we have_
\[\operatorname{rank}\{\nabla\gamma\}_{\gamma\in S}\geq i.\]
Proof.: We construct the series by induction.
The case of \(i=1\) is trivial for \(r_{1}=1\).
For any \(S_{i}\) of \(r_{i}\) curves on \(X\), when \(g(X)\) or \(n(X)\) is large depending on \(i\), the inductive assumption gives that
\[\operatorname{rank}\{\nabla l_{\gamma}\}_{\gamma\in S_{i}}\geq i.\]
Note that \(\#^{p}\operatorname{SSH}(S_{i})\) is bounded from above in \(r_{i}\) by Lemma 3.9. By Lemma 4.2, there exist \(g(r_{i})\) and \(n(r_{i})\) such that \(\operatorname{SSH}(S_{i})\) is not essential in any \((g,n)\)-surface \(X\) when \(g>g(r_{i})\) or \(n>n(r_{i})\). \(\operatorname{Cap}^{\mathrm{j}}(\overline{\operatorname{SSH}}(S_{i}))\) is bounded in \(r_{i}\) (and \(j\)) uniformly in \(S_{i}\) by Theorem 3.16. Pick
\[r_{i+1}>\max_{S_{i}}\operatorname{Cap}^{\mathrm{j}}(\overline{\operatorname{SSH }}(S_{i})),\]
then for any subset \(S_{i}\subset S_{i+1}\) of \(r_{i}\) curves, by definition of \(j\)-capacity,
\[\operatorname{SSH}(S_{i+1})\supsetneqq\overline{\operatorname{SSH}}(S_{i}),\]
and by Lemma 4.3 and Lemma 4.6,
\[\operatorname{rank}\{\nabla l_{\gamma}\}_{\gamma\in S_{i+1}}>\operatorname{ rank}\{\nabla l_{\gamma}\}_{\gamma\in S_{i}}\geq i.\]
Therefore,
\[\operatorname{rank}\{\nabla l_{\gamma}\}_{\gamma\in S_{i+1}}\geq i+1.\]
Induction completes.
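To make the induction tangible, the sketch below chains the explicit bounds from Sections 3 and 4 into one admissible (far from optimal) choice of the series \((r_{i})\). The conversion from a pants number to \(3g+n\) uses \(\#^{p}=2g+n-2\) (as in the proof of Lemma 4.2) and then crudely maximizes \(g\) under \(n\geq 0\); that last simplification is ours and is only meant to keep the recursion explicit. The terms grow so quickly that only the first nontrivial one is printed.

```python
from math import comb

def pants_of_hull(r, j):
    # Lemma 3.9: #p(SSH(S)) <= j * C(r, 2) for a j-curve set S of r curves.
    return j * comb(r, 2)

def pants_of_closure(p):
    # Lemma 4.2: #p of the essential closure is < 2 * #p + 2, i.e. <= 2 * #p + 1.
    return 2 * p + 1

def max_3g_plus_n(p):
    # #p = 2g + n - 2, so 3g + n = #p + 2 + g; with n >= 0 we have g <= (#p + 2) // 2.
    return p + 2 + (p + 2) // 2

def capacity_upper_from_pants(p, j):
    # Theorems 3.14 and 3.16: Cap^j <= M + (2jM(M - 1))^(jM) with M <= 3g + n.
    M = max_3g_plus_n(p)
    return M + (2 * j * M * (M - 1)) ** (j * M)

def next_r(r, j):
    # r_{i+1} may be any integer exceeding the capacity bound of the essential
    # closure of the hull of a j-curve set of r_i curves.
    return capacity_upper_from_pants(pants_of_closure(pants_of_hull(r, j)), j) + 1

j, r1 = 2, 1
print(next_r(r1, j))   # an admissible r_2 for 2-curve sets (already astronomically large)
```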
## 5. Shortest Geodesics and Main Theorem
Given a hyperbolic surface \(X\), \(S(X)\) denotes the set of shortest geodesics on it. As a curve set, \(S(X)\) satisfies certain conditions for combinatorial and geometric reasons. We say a curve bounds two cusps if the curve and the two cusps together form a pair of pants. Then
**Lemma 5.1**.: _Suppose \(\gamma_{1},\gamma_{2}\in S(X)\), then \(i(\gamma_{1},\gamma_{2})\leq 2\), i.e., \(S(X)\) is a 2-curve set. If \(i(\gamma_{1},\gamma_{2})=2\), then at least one of them bounds two cusps._
For proof one can see [10].
_Remark 5.2_.: \(S(X)\) is a 1-curve set when \(n(X)=0,1\).
**Corollary 5.3**.: _Suppose \(S(X)\) is filling, then \(\gamma\in S(X)\) is separating if and only if it bounds two cusps._
**Lemma 5.4**.: _If two distinct geodesics \(\gamma_{1}\) and \(\gamma_{2}\) bound the same two cusps on a surface \(X\), then \(i(\gamma_{1},\gamma_{2})\geq 4\)._
Proof.: For \(i=1,2\), since \(\gamma_{i}\) bounds two cusps, say \(p\) and \(q\), it separates the surface into two parts. Consider \(p\) and \(q\) as two marked points on the surface. Let \(X_{i}\) denote the closed \([0,3]\)-subsurface bounded by \(\gamma_{i}\) that contains the cusps \(p\) and \(q\), then \(p,q\in X_{1}\cap X_{2}\). Note that \(X_{1}\cap X_{2}\) is not path connected, as \(p\) and \(q\) cannot be joined by a path in \(X_{1}\cap X_{2}\); otherwise both \(X_{1}\) and \(X_{2}\) would contract to that path. Since the boundary of the path component containing \(p\) or \(q\) is contributed by both \(\gamma_{1}\) and \(\gamma_{2}\), it contains at least two intersections of \(\gamma_{1}\) and \(\gamma_{2}\). The lemma follows.
**Corollary 5.5**.: _Let \(\gamma_{1},\gamma_{2}\in S(X)\), then \(\gamma_{1}\) and \(\gamma_{2}\) cannot bound the same two cusps._
Per Remark 2.5, we apply Theorem 4.7 to \(S(X)\), which is a \(2\)-curve set, for a hyperbolic surface \(X\) that is large enough. We have
**Theorem 5.6**.: _For any \(k\), there exists \(g_{0}=g_{0}(k)\) and \(n_{0}=n_{0}(k)\), such that_
\[\operatorname{Crit}(\operatorname{sys_{T}},\leq k)\cap\mathcal{M}_{g,n}=\emptyset\]
_and_
\[\operatorname{Crit}(\operatorname{sys},\leq k)\cap\mathcal{M}_{g,n}=\emptyset\]
_for \(g\geq g_{0}\) or \(n\geq n_{0}\). As a result,_
\[\operatorname{Crit}(\operatorname{sys_{T}},\leq k)\subset\partial\mathcal{M} _{g,n}.\]
Proof.: Let \(p\) be a critical point for the systole function, and \(p_{T}\) a critical point for \(\operatorname{sys_{T}}\), as in Theorem 2.4. Note that
\[\operatorname{ind_{\operatorname{sys_{T}}}}(p_{T})=\operatorname{ind_{ \operatorname{sys}}}(p)=\operatorname{rank}\{\nabla l_{\gamma}\}_{\gamma\in S (p)}.\]
Theorem follows from Theorem 4.7.
## 6. Classification of Low Index Critical Points
Based on the discussion at the end of Section 2, we shall study critical points of the \(\operatorname{sys_{T}}\) functions in the main stratum \(\mathcal{M}_{g,n}\), so we introduce the following definition:
**Definition 6.1**.: A critical point of \(\operatorname{sys_{T}}\) on \(\overline{\mathcal{M}}_{g,n}\) is _primitive_ if it is in \(\mathcal{M}_{g,n}\).
Following Theorem 4.7, we have more delicate results on some special cases below for shortest geodesics, and we will give a classification of primitive critical points of some low indices.
**Corollary 6.2**.: _Suppose \((g,n)(X)\neq(1,1),(0,4)\), then for distinct \(\gamma_{1},\gamma_{2}\in S(X)\),_
\[\operatorname{rank}\{\nabla l_{1},\nabla l_{2}\}=2.\]
Proof.: It is trivial that
\[\operatorname{rank}\{\nabla l_{1}\}=\operatorname{rank}\{\nabla l_{2}\}=1.\]
Consider \(\gamma_{1}=\operatorname{SSH}(\gamma_{1})\). Note that \(\gamma_{1}\) is non-essential in any hyperbolic \(X\) except when \((g,n)(X)=(1,1)\) or \((0,4)\). In any non-exceptional case, following Lemma 4.6, we have
\[\operatorname{rank}\{\nabla l_{1},\nabla l_{2}\}>\operatorname{rank}\{\nabla l _{1}\}=1,\]
i.e., \(\operatorname{rank}\{\nabla l_{1},\nabla l_{2}\}=2\).
**Corollary 6.3**.: _Suppose \((g,n)(X)\neq(1,1),(0,4),(1,2),(0,5)\), then for distinct \(\gamma_{1},\gamma_{2},\gamma_{3}\in S(X)\),_
\[\operatorname{rank}\{\nabla l_{1},\nabla l_{2},\nabla l_{3}\}=3.\]
Proof.: If \(\gamma_{1},\gamma_{2},\gamma_{3}\) are not connected as a graph, it reduces to the case of two curves for the same reason as above. Suppose \(\gamma_{1}\) intersects \(\gamma_{2}\) and \(\gamma_{3}\). We take \(Y_{12}:=\operatorname{SSH}\{\gamma_{1},\gamma_{2}\}\), and consider the following two cases:
(1) When \(\#\gamma_{1}\cap\gamma_{2}=1\), \([g,n](Y_{12})=[1,1]\), and \(Y_{12}\) is non-essential in any \(X\) when \((g,n)(X)\neq(1,2)\).
(2) When \(\#\gamma_{1}\cap\gamma_{2}=2\), \([g,n](Y_{12})=[0,4]\), and at least one of \(\gamma_{1}\) and \(\gamma_{2}\) bounds two cusps, so \(Y_{12}\) has at most two punctures. \(Y_{12}\) is non-essential in any \(X\) when \((g,n)(X)\neq(1,3),(0,5)\) or \((0,6)\).
If \(\gamma_{3}\subset Y_{12}\), then given that the three curves have equal length, \(Y_{12}\) as a \((1,0,1)\) or \((0,3,1)\)-subsurface is determined and has a \(\mathbb{Z}/3\) rotational symmetry, so \(\nabla l_{1},\nabla l_{2},\nabla l_{3}\) have rank 2 when projected onto the 2-dimensional
tangent subspace at \(X\) of \(\mathcal{T}(X)\) given by \(Y_{12}\) (boundary not considered). Let \(\delta\) be the geodesic boundary of \(Y_{12}\), then \(\langle\nabla l_{i},\nabla l_{\delta}\rangle>0\) by Riera's formula, see [10] or [11]. Therefore,
\[\operatorname{rank}\{\nabla l_{1},\nabla l_{2},\nabla l_{3}\}=3.\]
Now suppose \(\gamma_{3}\not\subset Y_{12}\). For any type of \(X\) other than those mentioned above, the conclusion follows from the previous corollary and Lemma 4.3. There are two types still to be considered to complete the proof:
When \(X\) is (1,3): Consider \(S^{2}:=\{\gamma_{i}:\gamma_{i}\text{ bounds 2 cusps}\}\). If \(\#S^{2}=0\) or 1, suppose \(\gamma_{1},\gamma_{2}\not\in S^{2}\), then \(\gamma_{1}\) and \(\gamma_{2}\) are non-separating. If \(\gamma_{1}\) and \(\gamma_{2}\) intersect then \(Y_{12}=\operatorname{SSH}\{\gamma_{1},\gamma_{2}\}\) is non-essential in \(X\) as the complement is \([0,4]\). If \(\gamma_{1}\) and \(\gamma_{2}\) are disjoint, then \(Y_{12}\) is non-essential as a component of the complement is \([0,4]\).
If \(\#S^{2}=2\) or 3, suppose \(\gamma_{1},\gamma_{2}\in S^{2}\), then each of \(\gamma_{1}\) and \(\gamma_{2}\) bounds two cusps, which are not the same two by Lemma 5.5. Therefore, \(Y_{12}\) is a \((0,3,1)\)-surface, and is non-essential in \(X\).
When \(X\) is (0,6): Since any closed geodesic is separating, and in any pair there is at least one that bounds two cusps by Lemma 5.1, at least two of \(\gamma_{i}\)'s bound two cusps, say \(\gamma_{1}\) and \(\gamma_{2}\). Then \(\operatorname{SSH}\{\gamma_{1},\gamma_{2}\}\) is \((0,3,1)\) and therefore is non-essential in \(X\). As \(\gamma_{3}\not\subset\overline{\operatorname{SSH}}\{\gamma_{1},\gamma_{2}\}\),
\[\operatorname{rank}\{\nabla l_{1},\nabla l_{2},\nabla l_{3}\}\geq\operatorname {rank}\{\nabla l_{1},\nabla l_{2}\}+1=3.\]
The two corollaries above imply that no primitive critical points of respective index exist in those non-exceptional moduli spaces. For the exceptional cases, the critical points for the systole function are known thanks to Schmutz-Schaller in his paper [12]. We are going to classify primitive critical points of some low indices, namely 0,1 and 2, by listing all such surfaces. For each figure in the following theorems,
there exists a unique surface with the colored curves as the shortest geodesics, with the given information on intersection or symmetry.
**Theorem 6.4**.: _Index 0 primitive critical points: Figure 3_
**Theorem 6.5**.: _Index 1 primitive critical points: Figure 4, 5_
**Theorem 6.6**.: _Index 2 primitive critical points: Figure 6, 7, 8, 9, 10_
Figure 5. \((g,n)=(0,4),\#S(X)=2\), \(\frac{\pi}{2}\)-intersection
Figure 6. \((g,n)=(1,1),\#S(X)=3\)
Figure 7. \((g,n)=(0,4),\#S(X)=3\)
Figure 9. \((g,n)=(1,2),\#S(X)=3\), \(\mathbb{Z}/2\) rotational and \(\mathbb{Z}/3\) permutational symmetry |
2310.20512 | Near-Petahertz Fieldoscopy of Liquid | Measuring transient optical field is pivotal not only for understanding
ultrafast phenomena but also for quantitative detection of various molecular
species in a sample. In this work, we demonstrate near-petahertz electric field
detection of a few femtosecond pulses with 200 attosecond temporal resolution,
10$^8$ detection dynamic range in electric field and sub-femtojoule detection
sensitivity, exceeding those reported by the current methods. By field-resolved
detection of the impulsively excited molecules in the liquid phase, termed
'femtosecond fieldoscopy', we demonstrate temporal isolation of the response of
the target molecules from those of the environment and the excitation pulse. In
a proof-of-concept analysis of aqueous and liquid samples, we demonstrate
field-sensitive detection of combination bands of 4.13 {\mu}mol ethanol for the
first time. This method expands the scope of aqueous sample analysis to higher
detection sensitivity and dynamic range, while the simultaneous direct
measurements of phase and intensity information pave the path towards
high-resolution biological spectro-microscopy | Anchit Srivastava, Andreas Herbst, Mahdi M. Bidhendi, Max Kieker, Francesco Tani, Hanieh Fattahi | 2023-10-31T14:55:41Z | http://arxiv.org/abs/2310.20512v1 | # Near-Petahertz Fieldoscopy of Liquid
###### Abstract
Measuring transient optical field is pivotal not only for understanding ultrafast phenomena but also for quantitative detection of various molecular species in a sample. In this work, we demonstrate near-petahertz electric field detection of a few femtosecond pulses with 200 attosecond temporal resolution, 10\({}^{\mathbf{8}}\) detection dynamic range in electric field and sub-femtojoule detection sensitivity, exceeding those reported by the current methods. By field-resolved detection of the impulsively excited molecules in the liquid phase, termed "femtosecond fieldoscopy", we demonstrate temporal isolation of the response of the target molecules from those of the environment and the excitation pulse. In a proof-of-concept analysis of aqueous and liquid samples, we demonstrate field-sensitive detection of combination bands of 4.13 \(\upmu\)mol ethanol for the first time. This method expands the scope of aqueous sample analysis to higher detection sensitivity and dynamic range, while the simultaneous direct measurements of phase and intensity information pave the path towards high-resolution biological spectro-microscopy.
**Keywords:** Near-infrared spectroscopy, Field-resolved spectroscopy, ultrashort pulses, time-domain spectroscopy
## 1 Introduction
Laser-based, label-free quantitative determination of sample composition has proven to be a potent tool across a wide spectrum from fundamental research to real-life applications [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. For accurate and delicate spectroscopic measurements, it has been
crucial to isolate the sample from environmental interferences. For instance, water comprises approximately \(60\,\%\) of the human body, envelops \(70\,\%\) of the Earth's surface, and permeates our surroundings through the atmosphere. Water has a broad absorption spectrum spanning from the visible to mid-infrared (MIR) and is a persistent component on our detectors. Due to its strong absorption cross-section at MIR, sensitive spectroscopy of samples at their resonance frequencies is challenging, as water dominates other, more subtle, absorbance features arising from other molecules. Moreover, the excessive absorbed energy by water at MIR is left in the sample as thermal energy, limiting non-invasive analysis. On the other hand, NIR spectroscopy provides a fingerprint of sample constituents similar to MIR spectroscopy. It distinguishes itself by offering higher spatial resolution and enhanced penetration depth, afforded by the lower absorption cross-section of water in the NIR region [12, 13, 14]. This feature makes NIR spectroscopy particularly suitable for the non-invasive and label-free examination of soft matter and large-volume aqueous samples [15, 16].
Overtone and combination vibrations of molecules, which are primarily detected in NIR spectroscopy extend beyond \(0.12\) petahertz (PHz). Spectrometers have been used for frequency domain detection in this range. However, their detection sensitivity is constrained by the excitation light, which manifests as a background within the same spectral range [17, 18, 19]. Since the resonance frequencies of overtone and combination bands surpass the sensitivity of silicon-based detectors, employing a second detector, generally more prone to noise in this range, is necessary to capture the system's entire response. Fourier transform spectroscopy is an alternative method allowing for precise spectroscopic detection. Recent advancements in NIR dual-comb spectroscopy have paved the way for precision Fourier transform spectroscopy, albeit primarily in the gas phase [20, 21, 22, 23, 24]. Nonetheless, the technique remains constrained by detectors' spectral response and the existence of a background signal analogous to frequency domain detection [13, 25].
In contrast, field-resolved detection allows for the direct measurement of light-matter interactions with attosecond precision in a sub-cycle regime, capturing both amplitude and phase information [26, 27]. For decades attosecond streaking was the sole method to probe the electric field of light with a bandwidth approaching the PHz [28, 29]. A significant drawback was its confinement to vacuum operations. Over the past decade, various techniques have been developed that enable the near-PHz field-resolved detection of light in ambient air [30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46]. Among these techniques electro-optic sampling (EOS) stands out for its unparalleled detection sensitivity [27, 47, 48]. In EOS a short probe pulse is employed to resolve the cycles of the electric field of light by up-converting its spectral bandwidth to higher frequencies, making it possible to apply silicon detectors for broadband NIR detection [49]. Moreover, the combination of bright, ultra-short pulses [50, 51, 52, 53] and heterodyne detection allows for higher detection signal-to-noise ratio and higher detection sensitivity, leaving the shot noise of the probe pulse the primary source of noise [54].
In this work, we report on the direct detection of the electric field of light at near-PHz frequencies with unparalleled sensitivity and dynamic range. This has been
enabled by developing a unique laser source delivering broadband pulses with carrier-to-envelope phase (CEP) stability, which were intrinsically synchronized to near-single-cycle pulses at megahertz (MHz) repetition rates. The unique frontend enabled direct electric-field detection of CEP-stable, few-cycle pulses with unprecedented detection sensitivity and dynamic range via EOS with attosecond temporal resolution.
Employing the bright ultrashort pulses, we report on the field-sensitive detection of molecular response at the NIR region for the first time. Few-femtosecond, phase-coherent pulses were utilized for both broadband molecular excitation and the near-PHz electric-field detection of their response. Here, the confinement of the excitation pulses allows for temporal gating of the molecular response, while accessing the electric field enables the precise detection of the response of the target molecules from those of the environment. We evaluated our approach by conducting field-resolved detection of water vibration modes in both gas and liquid phases in the NIR region. Additionally, we detected the subtle combination bands of ethanol in the liquid phase. These results show to the best of our knowledge the first field-detection of the NIR molecular response in ambient air, paving the path for the emergence of innovative, field-sensitive, label-free spectroscopy and microscopy techniques.
## 2 Results
Fig. 1 illustrates the measurement concept. Few-cycle, CEP-stable, NIR pulses, are utilized to extract sensitive spectroscopic information from liquid samples under atmospheric conditions. The electric field of the molecular response, in the wake of the excitation pulse, is resolved via EOS. Due to the high detection sensitivity, we can resolve not only the response of the molecular vibrations of the sample at their overtone and combination resonances but also the response of atmospheric molecules along the beam path. The temporal confinement of the excitation pulses ensures that the molecular responses from both the sample and ambient air are temporally separated from the excitation pulse. Moreover, the response of the liquid sample is temporally distinguished from the ambient air's fingerprint due to the faster dephasing in the liquid phase. Realizing such a concept requires the generation of intrinsically synchronized bright few-cycle pulses with at least one-octave separation and sub-cycle temporal synchronization.
The pink-shaded region in Fig. 2(a) shows the optical setup for near single-cycle pulse generation at MHz repetition rates. Single-ring hollow-core photonic crystal fibers (SR-PCF) are used due to their relatively low-loss, broadband guidance, and tunable dispersion [55, 56], allowing the generation of ultrashort, bright pulses containing tens of microjoules of energy [53, 57]. In the first stage, 20 \(\upmu\)J, 1 MHz laser pulses were compressed from 255 fs to 25 fs at full width at half maximum (FWHM), by self-phase-modulation based spectral broadening in Argon-filled SR-PCF (see supplementary information (SI) Fig. 5 and Fig. 6), followed by group-delay dispersion compensation by a chirped mirror (CM) compressor. Subsequently, in the second stage, the 25 fs pulses were compressed to a near single-cycle duration via soliton-effect self-compression in a similar fiber (Fig. 2(b)). In both stages, the gas species used as a medium were selected
to minimize photoionization and subsequent long-lived effects occurring at MHz repetition rates [58]. The accumulated dispersion on the near single-cycle pulses due to propagation in media after the fiber was compensated in a CM compressor to 4.8 fs at FWHM, limited by the bandwidth of the CM compressor (Fig. 2(c) and Fig. 2(d)). This corresponds to 3.5 GW of peak power with 69 % of the energy in the main pulse. The retrieved spectrum shown in Fig. 2(e) spans over 300 THz bandwidth supporting 3 fs pulses at FWHM.
Figure 1: **Near-Petahertz fieldoscopy.** (a) An ultrashort pulse excites molecules at their NIR resonances. Here, the molecules inside a cuvette represent the sample under scrutiny, while the surrounding molecules represent atmospheric water vapor molecules. The transmitted field contains the global molecular response of both the sample and the environment. A second short pulse at higher frequencies is used for up-conversion and generation of a delay-dependent signal in a nonlinear crystal, where the correlation signal is directly proportional to the electric field of the excitation pulse. The measured electric field contains the ultrashort excitation pulse, the delayed response of the liquid spanning over several picoseconds, and a long-lasting response of atmospheric gases lasting for hundreds of nanoseconds. By time filtering and subsequent data analysis, the molecular response can be decomposed into the short-lived liquid and long-lived gas responses. (b) The biologically relevant vibrational modes in the NIR spectral range. Different compounds like proteins, carbohydrates, lipids, polyphenols, and alcohols are associated with the shown bands. Associated bonds are described in the legend below the plot where “str” refers to stretching vibration, whereas “bend” refers to bending vibration. The numbers one and two indicate the first and second overtones, while the plus sign (+) indicates combination bands. The values were taken from [16].
Intrapulse difference frequency generation (IPDFG) was employed to generate broadband NIR pulses with a passive CEP-stability (blue region in Fig. 2(a)) [59]. A 500 \(\upmu\)m-thick bismuth borate (BiBO) crystal was pumped by 4.8 fs pulses at 1 \(\upmu\)m to generate a broadband spectrum spanning from 0.1 PHz to 0.23 PHz and 160 mW of average power. A custom-made dichroic beam splitter with the cut-off at 0.2 PHz was used to separate the residual pump from the CEP-stable pulses. After dispersion management with a custom-made CM compressor, the NIR pulses along with the fraction of 4.8 fs pulses were sent to a 20 \(\upmu\)m beta barium borate (BBO) crystal for electric-field sampling. The green-shaded region in Fig. 2(a) highlights the schematic of the NIR EOS. The up-converted signal is spectrally filtered with a pass band filter from 0.425 PHz to 0.5 PHz to enhance the detection sensitivity by eliminating the spectral components that do not carry specific field information. The measured electric field of the broadband CEP-stable pulses and its temporal intensity profile with a 15 fs pulse duration at FWHM are shown in Fig. 3(a) and Fig. 3(b), respectively. The corresponding spectral intensity and phase obtained through the Fourier transform shown in Fig. 3(c) reveals residual higher-order dispersion, which can be compensated by optimizing the design of the CM compressor. While the high-frequency cut-off of
Figure 2: **Experimental setup.** (a) The shaded regions highlight different parts of the optical setup: nonlinear fiber stages in pink, IPDFG in blue, and EOS in green. (b) Spectrum of the laser (red), the first fiber stage (blue), and the second fiber stage (black). (c) Measured (top) and retrieved (bottom) spectrograms. The near-single-cycle pulses were measured after the nonlinear compression stages via second-harmonic generation frequency-resolved optical gating (SHG-FROG). (d) The retrieved temporal pulse duration at the output of the second fiber at a 1 MHz repetition rate. (e) Retrieved spectral intensity and phase. SR-PCF: single-ring hollow-core photonic crystal fiber; DM: dichroic mirror; WP: wedge pair; BS: beam splitter; WGP: wire grid polarizer; FEL: long pass filter; FES: short pass filter; QWP: quarter waveplate; BD: balanced photodiode; BBO: beta barium borate; BiBO: bismuth borate; EDFA: erbium-doped fiber amplifier.
the spectrum is limited by the beam splitter roll-off, the crystal absorption constrains the low-frequency cut-off to 0.1\(\,\)PHz. To verify the detection sensitivity and dynamic range of the detector, the energy of the IPDFG pulses before EOS crystal was reduced from 80\(\,\)nJ to sub-femto joule energies by using a series of neutral density filters. Fig. 3(a) inset shows two field-resolved measurements at 8\(\,\)pJ (magenta) and 0.7\(\,\)fJ (in yellow), respectively. The spectral intensity counterpart at the three different pulse energies is shown in Fig. 3(d) corresponding to a 110 dB dynamic range. To establish the ability of the system to detect the response of minute quantities of molecules, we resolved the electric field of the atmospheric water vapor molecules, liquid water, and ethanol in ambient air after excitation by femtosecond NIR pulses. The temporally gated molecular response and their frequency counterparts are shown in Fig. 4.
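As a quick consistency check of these attenuation levels, the quoted pulse energies can be reproduced under the assumption that the ND4 and ND8 labels denote optical densities of 4 and 8 (a sketch; the exact filter transmission was not specified):

```python
# Quick consistency check of the attenuated pulse energies quoted above,
# assuming the ND4 / ND8 labels denote optical densities of 4 and 8.
pulse_energy_nJ = 80.0                                     # IPDFG energy before the EOS crystal

for od in (4, 8):
    attenuated_fJ = pulse_energy_nJ * 10 ** (-od) * 1e6    # 1 nJ = 1e6 fJ
    print(f"OD{od}: {attenuated_fJ:.1f} fJ")

# OD4 -> 8000 fJ (8 pJ) and OD8 -> 0.8 fJ, close to the 8 pJ and 0.7 fJ
# traces shown in Fig. 3(a).
```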
Figure 3: **NIR electric field sampling.** (a) The measured electric field of CEP-stable pulses via EOS. The inset displays two low-energy fields in the presence of ND4 (magenta) and ND8 filters (yellow). Both fields are longer than the blue curve due to additional material dispersion caused by the ND filters. (b) The temporal profile of the CEP-stable pulses with a pulse duration of 15\(\,\)fs FWHM. (c) Retrieved spectrum and phase of the CEP-stable pulses. (d) Retrieved spectrum of the CEP-stable pulses after attenuation with the ND4 and ND8 filters. The spectrum of the unattenuated pulse is shown in blue for comparison. The dashed black line represents the measured noise floor in the absence of the CEP-stable pulses on the balanced detector. The legend displays the acquisition time for each field, along with the ND filter label. This highlights the system’s ability to measure fields with a dynamic range greater than \(10^{8}\) in the electric field. ND: neutral density. The retrieved spectra are not corrected for the spectral response of the filters.
## 3 Discussion
Water has two prominent absorption peaks within the spectral coverage of the excitation pulses: i) an asymmetric stretch centered at 0.115\(\,\)PHz (3836\(\,\)cm\({}^{-1}\)) and ii) a combination resonance of bending and asymmetrical stretch centered at 0.16\(\,\)PHz (5337\(\,\)cm\({}^{-1}\)) [60]. Fig. 4(a) shows the electric field of atmospheric water vapor molecules at two different laboratory relative humidity (RH). The measurements at 50\(\,\)% RH and 8\(\,\)% RH, correspond to 16.5\(\,\)\(\upmu\)mol and 2.64\(\,\)\(\upmu\)mol of atmospheric water vapour molecules interacting with the broadband 15\(\,\)fs excitation pulses, respectively (see SI). The minimum laboratory achievable RH of 8\(\,\)% was reached by purging the beam path with dry air and nitrogen. Fig. 4(b) shows the corresponding absorption frequencies for both concentrations. The spectra are achieved by Fourier-transformation of the temporally
Figure 4: **Benchmarking measurements.** Each row shows the time-gated electric field on the left and its Fourier-transformed spectrum on the right. (a) The light grey field represents atmospheric water molecules at 50\(\,\)% RH (16.5\(\,\)\(\upmu\)mol), while the black curve represents 8\(\,\)% RH (2.64\(\,\)\(\upmu\)mol). (b) Two absorption modes are visible: a fundamental mode at 0.115\(\,\)PHz and a combination band at 0.16\(\,\)PHz. (c) Liquid water molecules at 5.23\(\,\)\(\upmu\)mol (red) and 2.64\(\,\)\(\upmu\)mol (pink) along with pure acetic acid (brown) as a reference measurement. (d) The molecular response of pure acetic acid is compared to that of aqueous solutions. (e) Pure liquid ethanol at different volumes of 4.13\(\,\)\(\upmu\)mol, 16.5\(\,\)\(\upmu\)mol, and 82.5\(\,\)\(\upmu\)mol. (f) The weak combination peak centered at 0.130\(\,\)PHz is observed at all three volumes.
gated molecular response at 1 ps after the excitation pulse for a temporal window of 30 ps and agree with the HITRAN database [61] (see SI Fig. 7). At the 2.64 \(\upmu\)mol level, the absorption peak at 0.16 PHz is barely resolved due to the low absorption cross-section of the water's combination band (65\(\times\)10\({}^{-21}\)cm\({}^{2}\)/molecule) compared to its counterpart at the fundamental resonance (600\(\times\)10\({}^{-21}\)cm\({}^{2}\)/molecule) [61].
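For orientation, the gating-and-Fourier-transform step described here can be sketched numerically as follows (an illustrative sketch only; the function and array names are ours, and a uniformly sampled delay axis is assumed):

```python
import numpy as np

def gated_spectrum(t_ps, field, t_start_ps=1.0, window_ps=30.0):
    """Fourier transform of the time-gated tail of an EOS trace.

    Simplified sketch of the post-processing described in the text; t_ps is
    the (uniformly sampled) delay axis in picoseconds."""
    mask = (t_ps >= t_start_ps) & (t_ps < t_start_ps + window_ps)
    gated = np.where(mask, field, 0.0)              # keep only the molecular free-induction decay
    dt = (t_ps[1] - t_ps[0]) * 1e-12                # sample spacing in seconds
    spectrum = np.abs(np.fft.rfft(gated))
    freq_PHz = np.fft.rfftfreq(len(gated), d=dt) * 1e-15
    wavenumber_per_cm = freq_PHz * 1e15 / 2.998e10  # f / c, to compare with HITRAN line lists
    return freq_PHz, wavenumber_per_cm, spectrum

# e.g. the 0.115 PHz and 0.16 PHz water features map to ~3836 cm^-1 and
# ~5337 cm^-1 on the wavenumber axis returned above.
```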
Fig. 4(c) shows the measured molecular response of 5.23 \(\upmu\)mol and 2.64 \(\upmu\)mol diluted water in the liquid phase, which were prepared by mixing 20 \(\upmu\)L and 10 \(\upmu\)L of deionized water in 1 mL acetic acid. We examined the molecular response in the liquid phase in the temporal window of 0.3 ps to 1 ps, as dephasing occurs faster (Fig. 4(d)) [62, 63]. The absorption amplitude of both concentrations was normalized to the 0.2 PHz peak, which is present at this time scale due to the cut-off of the dichroic beam splitter used in our setup. Water's asymmetric stretch resonance at 0.115 PHz (3836 cm\({}^{-1}\)) in Fig. 4(d) is distinct for 5.23 \(\upmu\)mol (red curve) and 2.64 \(\upmu\)mol (magenta curve). The pure acetic acid response in the presence of water vapor molecules at RH of 8 % is shown in Fig. 4(d) (black curve). To evaluate the detection sensitivity of our setup in environmental conditions, ethanol was measured due to its distinct resonance frequency in comparison to water and its low absorption cross-section. Fig. 4(e) and Fig. 4(f) show the resonance of pure liquid ethanol at different concentrations and 8 % RH, in time and frequency domain, respectively. The weak absorption at 0.13 PHz (4336 cm\({}^{-1}\)) is due to the combination band resonance arising from the C-H stretch and C-H bend modes, which was resolved very clearly in our measurement at the minimum detectable amount of 4.13 \(\upmu\)mol [64].
We define a figure of merit (FOM) to allow detection sensitivity comparison of absorption bands between different species and various absorption cross-sections. The FOM is defined as:
\[\mathrm{FOM=n\times\sigma} \tag{1}\]
where n is the amount of substance in mol and \(\sigma\) is the absorption cross-section of the absorption band in cm\({}^{2}\)/molecule. Given the absorption cross-section values of water (600\(\times\)10\({}^{-21}\) cm\({}^{2}\)/molecule) and ethanol (3.2\(\times\)10\({}^{-21}\) cm\({}^{2}\)/molecule) [61, 65], the calculated FOM value for water at 2.64 \(\upmu\)mol is \(9\times 10^{-28}\) m\({}^{2}\) mol/molecule, while the FOM for ethanol's combination band at 4.13 \(\upmu\)mol is \(1.322\times 10^{-30}\) m\({}^{2}\) mol/molecule with merely 25 \(\upmu\)m path length.
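As a worked check of Eq. (1), the ethanol value quoted above can be reproduced directly (a minimal sketch; the cm\({}^{2}\) cross-section is converted to m\({}^{2}\) for the final unit):

```python
def fom(n_mol, sigma_cm2_per_molecule):
    """Eq. (1): FOM = n * sigma, converted from cm^2 to m^2 (1 cm^2 = 1e-4 m^2)."""
    return n_mol * sigma_cm2_per_molecule * 1e-4

# Ethanol combination band: n = 4.13 umol, sigma = 3.2e-21 cm^2/molecule
print(fom(4.13e-6, 3.2e-21))    # ~1.32e-30 m^2 mol/molecule, as quoted above
```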
In conclusion, we report on field-sensitive, near-PHz detection of molecular fingerprints in the liquid phase for the first time. To this end, bright, intrinsically synchronized CEP-stable, 15 fs pulses at 2 \(\upmu\)m, and 4.8 fs pulses at 1 \(\upmu\)m were generated for impulsive excitation and probing of the molecular response via EOS. Using the MHz ultrashort laser pulses, we demonstrated the ambient air field-resolved detection of femtosecond pulses with sub-femtojoule energy, and 10\({}^{8}\) detection dynamic range in the electric field, enhancing the near-PHz field-detection sensitivity by three orders of magnitude compared to other field-resolved techniques [47]. The source's MHz repetition rate augments both the signal-to-noise ratio and detection sensitivity, while the signal upconversion in our scheme alleviates the bandwidth constraint inherent in silicon-based detectors. In a proof of concept, we reported on the sensitive detection of the vibration modes of atmospheric and aqueous water molecules at 2.64 \(\upmu\)mol, and
ethanol combination band at 4.13 \(\upmu\)mol. To the best of our understanding, these measurements mark the first field-resolved detection of both fundamental and combination bands in the liquid phase.
Femtosecond pump-probe spectroscopy has provided evidence that rapid dynamics occurring within a time scale of fewer than 100 fs of liquid water have a significant impact on chemical reactions taking place in the aqueous phase [66]. This underscores the vital importance of ultrafast processes in comprehending aqueous phase chemistry for example on grasping how water molecules dissipate energy [67]. The electric field measurements presented in Fig. 4 thus establish a foundation for studying aqueous solutions with enhanced sensitivity and dynamic range compared to femtosecond intensity pump-probe techniques [68, 69], rooted in the higher amplitude of the molecular response relative to the excitation pulses in field-resolved detection (see SI Fig. 8) [70, 71]. Furthermore, the high repetition rate of near-single-cycle pulses not only lays the groundwork for single-shot monitoring of chemical reactions in liquids [72] but also presents intriguing possibilities for exploring nonlinear interactions due to the unique combination of peak and average power in the near-single-cycle domain.
Stimulating the molecular composition of a sample with phase-coherent femtosecond excitation pulses leads to temporal gating between the molecular response from the excitation pulses. Moreover, accessing the sub-cycle electric field of light allows for decomposing the short-lived liquid molecular response from the long-lived ambient gas responses (see SI Fig. 9). Femtosecond fieldoscopy expands the scope of aqueous sample analysis and paves the path toward novel methods for multi-dimensional spectroscopy and high-resolution biological spectro-microscopy.
## 4 Materials and Methods
### Near single-cycle pulse generation
A commercially available Yb:KGW amplifier (CARBIDE Light Conversion) delivering 255 fs pulses at 1030 nm with 20 W of average power, and at 1 MHz repetition rate, is used as the source laser. In the first nonlinear fiber stage, we used a 100 mm focal length (Thorlabs LA1509-B-ML) lens to couple 20 \(\upmu\)J circularly polarized pulses into a 50 cm long SR-PCF with a core diameter of 55 \(\upmu\)m (see SI Fig. 10) filled with 15 bar of Argon. The spectrally broadened laser pulses were compressed to 25 fs (FWHM) using a CM compressor (UltraFast Innovations GmBH PC1611) with 12 bounces (see SI Fig. 11). The total group delay dispersion compensated by the CM compressor was -1800 fs\({}^{2}\). Subsequently, the 25 fs compressed pulses were coupled (via Thorlabs LA1509-BML) to a 31 cm long SR-PCF (parameter same as before), filled with 15 bar of Helium. After the second stage, a 2-inch off-axis silver parabola (Edmund 36-598) with an effective focal length of 177.8 mm was used to collimate the beam. The two gas cells were mounted on a 3-axis stage (MDE122) from Elliot Scientific. The two-stage fiber system achieves a total throughput of 85 % before the collimating parabola. Afterward, the soliton-compressed pulses were sent to a CM compressor (UFI PC105) consisting of four double-angle CM (total group delay dispersion of -160 fs\({}^{2}\)) and a pair of wedges (Altechna M0067705) to compensate for the dispersion caused by the two mm-thick
MgF\({}_{2}\) output window of the second gas cell and to pre-compensate for the accumulated dispersion on the pulse before reaching the IPDFG crystal. After the compressor, we placed a 1 mm AR-coated window (UFI AR7203) that reflects 5 % of the beam, to separate the probe pulses for EOS. We employed a home-built second-harmonic generation frequency-resolved optical gating (SHG-FROG) for temporal characterization. To ensure accurate measurements, the device utilized all-reflective dispersion-free optics in a non-collinear geometry with 10 \(\upmu\)m-thick BBO crystal (Castech) cut for type I phase matching.
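For orientation, the per-bounce dispersion of the two chirped-mirror compressors follows directly from the totals quoted above (a bookkeeping sketch that assumes the group-delay dispersion is shared evenly among the bounces):

```python
# Per-bounce dispersion of the two chirped-mirror compressors, assuming the
# quoted total group-delay dispersion splits evenly over the bounces.
stage1_gdd_fs2, stage1_bounces = -1800.0, 12    # UFI PC1611 (after the first fiber)
stage2_gdd_fs2, stage2_bounces = -160.0, 4      # double-angle CM compressor (after the second fiber)

print(stage1_gdd_fs2 / stage1_bounces)   # -150 fs^2 per bounce
print(stage2_gdd_fs2 / stage2_bounces)   # -40 fs^2 per bounce
```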
### CEP-stable, NIR pulse generation
For the IPDFG, we focused 12 \(\upmu\)J of the compressed pulses from the fiber stages to 40 \(\upmu\)m by using a 1-inch off-axis parabola. A type II BiBO crystal with a phase matching angle of 12\({}^{\circ}\) (Castech) was placed a few millimeters behind the focus to avoid white light generation in the crystal. A half-wave plate was introduced into the beam path before the second fiber stage to rotate 1 % of the input 'p' polarisation state to the 's' polarisation state. The generated CEP-stable pulse was collimated to a beam size of 3.2 mm at 1/e\({}^{2}\) employing a 4-inch focal length parabola (Thorlabs MPD254508-90-P01). A custom-built (UFI BS2214-RC2) broadband dichroic beam splitter separated the pump and the NIR beam. The measured power after the beam splitter was 160 mW. A custom-built double-angle CM compressor (UFI IR7202) (see SI Fig. 12) with four bounces was used for temporal compression to 15 fs.
### Field-resolved detection
In EOS, probe and excitation pulses propagate collinearly in a nonlinear crystal, generating spectral components at sum and difference frequencies. The up-converted field-sensitive signal arises from the interference between partially overlapping spectra of the probe pulse and the sum or difference frequency pulse. Through an ellipsometer, direct access is obtained to the electric field of the sampled pulse [37]. By utilizing a lock-in amplifier and balanced detection, the technical noise surplus of the gate pulse was mitigated, thereby making the shot noise of the probe pulse the primary limitation on detection sensitivity. In the EOS, the IPDFG pulses were used as an excitation pulse, whereas a 5 % reflection from the second fiber stage's output was used as a probe pulse. A wire grid polarizer combined the probe and the excitation pulses with orthogonal polarization, collinearly in the EOS crystal. An off-axis parabolic mirror of 3-inch focal length was used to focus the beams in a 20-\(\upmu\)m-thick, type II BBO crystal to generate the sum frequency signal. The sum frequency signal interferes with the high-frequency portion of the probe pulse, which acts as a local oscillator for a heterodyne detection. Appropriate filters (Thorlabs FEL 600 and FES 700) were placed after the EOS crystal. The resulting polarization rotation was measured by an ellipsometer, which included a Wollaston prism, a quarter waveplate, and balanced photodiodes. The quarter-wave plate is adjusted to ensure that both photodiodes receive the same intensity in the presence of the probe pulses. A mechanical chopper modulated the excitation pulses at 5.8 kHz to enable heterodyne lock-in detection. The delay line was based on the linear motorized stage (Physik Instrument V-528.1AA) with a scanning
range of 20 mm, corresponding to a scanning delay of 132 ps. An interferometric delay tracking system [73] was employed to precisely track the delay line and any timing jitter artifacts (see SI).
In EOS, the measured interference of the sampling field with sum frequency field components is convoluted with the detector response. Consequently, it is subject to a complex response function comprising both amplitude and phase components. The response function can be calculated using the methods described in [30]. Based on the wavevector mismatch calculation (see SI Fig. 13), it can be seen that the post-processing of the measured field can be neglected for our spectral range as the nonlinear response remains constant throughout the EOS detection range.
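A minimal conceptual model of the EOS trace, useful for intuition although it deliberately ignores the phase matching and the complex response function discussed above, treats the balanced-detection signal at each delay as the test field cross-correlated with an effective gate set by the probe-pulse intensity (all names below are illustrative):

```python
import numpy as np

def eos_trace(t_fs, test_field, gate_intensity, delays_fs):
    """Idealized EOS trace: at each delay the balanced-detection signal is
    modeled as the test field cross-correlated with the gate-pulse intensity.
    Conceptual sketch only; phase matching and the detector response are ignored."""
    dt = t_fs[1] - t_fs[0]
    signal = []
    for tau in delays_fs:
        shifted_gate = np.interp(t_fs - tau, t_fs, gate_intensity, left=0.0, right=0.0)
        signal.append(np.sum(shifted_gate * test_field) * dt)
    return np.asarray(signal)

# In the limit of a delta-like gate, the trace reduces to the test field
# itself, which is the regime a short probe pulse aims to approach.
```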
### Sample preparation
A Pike Technologies liquid cell (162-1200) with two 3 mm-thick barium fluoride windows and a spacer was used to hold the liquid samples. To examine the liquid water, 10 \(\upmu\)L and 20 \(\upmu\)L of deionized water were mixed with 1 mL acetic acid buffer solution (chemlab - CL00.0119). The samples were placed between two windows using a 0.5 mm Teflon O-ring, corresponding to an irradiation volume of 4.81 \(\upmu\)L. Three cells with different concentrations were mounted side to side on a translation stage to reduce the systematic error. For ethanol measurements, pure ethanol (VWR chemicals - 85033.360) was filled into three liquid cells with Teflon O-ring spacers of different thicknesses (0.5 mm, 0.1 mm, and 0.025 mm). The spacers corresponded to irradiation volumes of 4.81 \(\upmu\)L, 0.962 \(\upmu\)L, and 0.241 \(\upmu\)L, respectively.
## Acknowledgements
We thank PSJ Russell, MH Frosz, and their team for the production of the fibers used in this experiment. We want to express our deep gratitude to Daniel Schade, Wolfgang Schweinberger, Gunnar Arisholm, Nicholas Karpowicz, and Mallika I Suresh for their invaluable guidance and support.
## Declarations
* This work was supported by research funding from the Max Planck Society.
* Conflict of interest/Competing interests: The authors do not declare any competing interests.
* Authors' contribution: H.F. envisioned and designed the experiment. A.S., A.H., F.T., M.K. implemented fiber stages for short pulse generation. A.S, A.H. M.B implemented the data acquisition system. A.S. performed the fieldoscopy measurements. A.S. and H.F performed the data analysis and wrote the manuscript. All authors proofread the manuscript.
## Supplementary Information
### Data acquisition
The experimental configuration incorporates custom-developed software tailored to comprehensively control various components involved in the measurement process, namely the delay stage, PicoScale, and lock-in amplifier. The delay stage employed is a Physik Instrument V-528.1AA, specifically chosen for its suitability within the experimental framework. The PicoScale, an integral setup component, is a commercially available interferometer manufactured by SmarAct. It operates on the principles of sinusoidal phase modulation for precise interferometric displacement measurements. Notably, the PicoScale measures temporal delays and detects any mechanical jitter introduced within the beam path. Integrating the interferometer into the experimental arrangement involves amplifying the PicoScale laser (1550 nm, CW) through an erbium-doped fiber amplifier and directing it parallel to the optical setup. This enables accurate tracking of the optical path difference between the pulses with attosecond precision. To facilitate lock-in measurements, an optical chopper modulates the NIR beam at a frequency of 5.882 kHz. For the presented measurements, the stage velocity is 0.4 mm/s.
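Converting the mechanical stage coordinates into optical delay is straightforward (a sketch assuming the usual double-pass retro-reflector geometry, so 1 mm of stage travel corresponds to 2 mm of optical path):

```python
# Conversion between mechanical stage travel and optical delay, assuming a
# double-pass retro-reflector geometry (1 mm of travel = 2 mm of path).
C_MM_PER_S = 2.998e11

def delay_ps(stage_mm):
    return 2.0 * stage_mm / C_MM_PER_S * 1e12

print(delay_ps(20.0))   # ~133 ps full scan, consistent with the ~132 ps quoted in the Methods
print(delay_ps(0.4))    # ~2.7 ps of optical delay acquired per second at 0.4 mm/s
```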
### Concentration Calculations
The number of molecules is calculated by the ideal gas law:
\[\mathrm{PV}=\mathrm{nRT} \tag{2}\]
where P = pressure, V = volume, n = number of moles, R = ideal gas constant (8.314 J/(mol K)), and T = temperature in Kelvin. Propagation over 3.2 m of air with a beam diameter of 3.5 mm corresponds to a volume of \(3.08\times 10^{-5}\,\mathrm{m}^{3}\) (0.0308 L). At 22 °C (295.15 K), the saturation vapor pressure of water (P\({}_{\mathrm{saturated}}\)) is 2.64 kPa. To calculate the partial pressure of water vapor, P\({}_{\mathrm{water}}\), we use:
\[\mathrm{P}_{\mathrm{water}}=\mathrm{RH}\,\times\,\mathrm{P}_{\mathrm{saturated}} \tag{3}\]
For RHs of 50 % and 8 %, the corresponding values of P\({}_{\mathrm{water}}\) are 1.32 kPa and 0.211 kPa, respectively. Using Equation 2, we obtain n\({}_{\mathrm{RH\,50\%}}\) = 16.5 \(\upmu\)mol and n\({}_{\mathrm{RH\,8\%}}\) = 2.64 \(\upmu\)mol.
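These amounts can be reproduced with a few lines, using the values quoted above and treating the probed air column as a uniform cylinder (a sketch; variable names are ours):

```python
import math

R = 8.314            # J / (mol K)
T = 295.15           # 22 degC in kelvin
P_SAT = 2640.0       # Pa, saturation vapour pressure of water at 22 degC

beam_diameter = 3.5e-3                                   # m
path_length = 3.2                                        # m
V = math.pi * (beam_diameter / 2) ** 2 * path_length     # ~3.08e-5 m^3

for rh in (0.50, 0.08):
    n_umol = rh * P_SAT * V / (R * T) * 1e6              # ideal gas law, n = PV / RT
    print(f"RH {rh:.0%}: {n_umol:.2f} umol")
# -> ~16.6 umol and ~2.65 umol, matching the 16.5 umol and 2.64 umol quoted
# above within rounding.
```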
For the liquid water samples, we dissolved 10 \(\upmu\)L and 20 \(\upmu\)L of deionized water in 1 mL of acetic acid buffer, corresponding to molarities of 0.555 M and 1.088 M, respectively (taking into account the density of water as 1 g/mL and the molar mass of water as 18.02 g/mol). For a beam diameter of 3.5 mm and an O-ring spacer of 0.5 mm, the irradiated volume is calculated to be 4.81 \(\upmu\)L. To determine the number of moles in the irradiation volume, the irradiation volume is multiplied by the molarity of the corresponding solution. For 10 \(\upmu\)L and 20 \(\upmu\)L of dissolved water, we obtain 2.64 \(\upmu\)mol and 5.23 \(\upmu\)mol, respectively.
For calculating the number of moles of ethanol at the three different irradiated volumes of 4.81 \(\upmu\)L, 0.962 \(\upmu\)L, and 0.241 \(\upmu\)L, first the corresponding mass was calculated by multiplying the
volume and density. Afterward, the obtained values were divided by ethanol's molar mass. Taking 0.79 g/mL as the density of ethanol and 46.068 g/mol as its molar mass, the number of moles was calculated to be 82.5 \(\upmu\)mol, 16.5 \(\upmu\)mol, and 4.13 \(\upmu\)mol, respectively. Molar mass and density information for all compounds were retrieved from PubChem [74].
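The same bookkeeping can be written compactly (a sketch using the densities, molar masses, and irradiated volumes quoted above; function and variable names are ours):

```python
RHO_WATER, M_WATER = 1.00, 18.02     # g/mL, g/mol
RHO_ETOH,  M_ETOH  = 0.79, 46.068    # g/mL, g/mol

def water_umol(v_water_uL, v_buffer_mL=1.0, v_irradiated_uL=4.81):
    moles = v_water_uL * 1e-3 * RHO_WATER / M_WATER                  # mol of water added
    molarity = moles / ((v_buffer_mL + v_water_uL * 1e-3) * 1e-3)    # mol/L of the mixture
    return molarity * v_irradiated_uL * 1e-6 * 1e6                   # umol inside the beam

def ethanol_umol(v_irradiated_uL):
    return v_irradiated_uL * 1e-3 * RHO_ETOH / M_ETOH * 1e6

print(round(water_umol(10), 2), round(water_umol(20), 2))            # ~2.64 and ~5.23 umol
print([round(ethanol_umol(v), 2) for v in (4.81, 0.962, 0.241)])     # ~82.5, 16.5, 4.13 umol
```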
### Nonlinear propagation in both fibers
Figure 5: **Numerical simulation for nonlinear pulse propagation in the fiber stages.** (a) The spectral evolution of the first fiber stage based on self-phase modulation. (b) The corresponding time-domain propagation. External methods are required to compress the pulse to its Fourier-transform limit. (c) Spectral evolution of the second fiber stage. (d) The time evolution indicates the temporal compression of the pulses to \(3\,\mathrm{fs}\) at the fiber output, due to soliton-effect self-compression. These values correspond to \(317\,\mathrm{TW/cm^{2}}\) peak intensity. The numerical simulations used a model described in [75].
Figure 6: **Fiber dispersion tuning.** (a) Fiber dispersion for SR-PCF when filled with 15 bar of Argon. (b) Fiber dispersion for SR-PCF when filled with 15 bar of Helium. Both curves are calculated using the model described in this paper [76]. Two discontinuities stretching longitudinally indicate first and second-order resonances inside the fiber due to core-wall capillary thickness. The intersection of the blue curve and the dotted line indicates the zero dispersion wavelength (ZDW). The region to the right of ZDW has anomalous dispersion, whereas the region to the left has normal dispersion.
## Field-resolved detection
Figure 7: **Comparison with HITRAN.** The black spectra obtained from the field-resolved water measurement at 50 % relative humidity, shown in Figure 4(a), match the red HITRAN database spectra. The inset figure displays the combination band at 0.160 PHz, in agreement with the HITRAN spectrum. In the 0.10 PHz to 0.12 PHz spectral range, the measured spectrum appears to be blue-shifted compared to the HITRAN reference spectrum, which is caused by the narrower bandwidth of our excitation spectrum.
**Fig. 8**: **Molecular information in the electric field vs intensity.** (a) Electric field of the molecular response when excited impulsively. (b) The detected intensity counterpart in pump-probe intensity techniques. In field-resolved detection, the measurement signal scales linearly with the electric field. Therefore, the molecular response is recorded with higher sensitivity than in pump-probe techniques.
### Effectiveness of temporal gating in isolating discrete responses
The dephasing duration of the molecular response depends on the phase of the sample and varies from hundreds of femtoseconds in the liquid state to a few nanoseconds in the gas phase. Therefore, effective time filtering enables the responses of the two phases to be isolated from each other. Fig. 9(a) shows the full trace of the measured electric field for ethanol, while insets a1 and a2 show two magnified regions of the trace at the trailing edge of the femtosecond excitation pulse, corresponding to the short-lived liquid response and the long-lived gas response, respectively. The Fourier transform of different temporally gated areas of panel (a) is shown in panel (b). The black spectrum shows the Fourier transform of the entire sampled field, while the blue spectrum shows the absorption peaks of ethanol corresponding to inset a1, and the red spectrum shows the absorption of atmospheric water vapor corresponding to inset a2. The absorption bands for the O-H stretch in ethanol and water overlap.
Figure 9: **Temporal gating in ethanol sample.** (a) The sampled electric field with 82.5 \(\upmu\)mol of liquid ethanol in the beam path. Inset a1 and a2 show two distinct temporal gating times. (b) The black color represents the Fourier transform of the entire waveform, while the blue and red colors denote the Fourier transform of the inset a1 and a2, respectively. The legend illustrates the time frame for which the Fourier transform is applied.
Figure 11: **First fiber stage characterization.** (a) The output spectrum of the first nonlinear fiber stage (red); the dashed line represents the input laser spectrum. (b) The autocorrelation trace of the same stage was measured at the input to the second stage, and the pulse duration at the FWHM was found to be \(25\,\mathrm{fs}\).
Figure 10: **SEM fiber images.** Both nonlinear fiber stages use an SR-PCF. (a) The SEM image shows a \(55\,\mathrm{\SIUnitSymbolMicro m}\) diameter fiber with 8 capillaries. (b) Shows the capillary diameter of \(23\,\mathrm{\SIUnitSymbolMicro m}\). (c) The core-wall thickness of the capillary is \(260\,\mathrm{nm}\).
Figure 12: **UFI IR7202 dispersion curve.** Group delay dispersion of a pair of double-angle chirp mirrors used for NIR pulse compression. In total a pair of 3 double-angled chirp mirrors were used to achieve the compression.
Figure 13: **EOS phase matching bandwidth.** Calculated the wavevector mismatch (\(\Delta\)k) for L = 10 \(\upmu\)m BBO, which is related to the EOS phase-matching bandwidth. The detection bandwidth of the EOS is indicated between two dashed lines, which shows a nearly flat response. The numerical calculations were performed using SISYFOS [77]. |
2309.14918 | Ethical Challenges in Gamified Education Research and Development: An
Umbrella Review and Potential Directions | Gamification is a technological, economic, cultural, and societal development
toward promoting a more game-like reality. As this emergent phenomenon has been
gradually consolidated into our daily lives, especially in educational
settings, many scholars and practitioners face a major challenge ahead: how to
understand and mitigate the unethical impacts of gamification when researching
and developing such educational technologies? Thus, this study explores ethical
challenges in gamified educational applications and proposes potential
solutions to address them based on an umbrella review. After analysing
secondary studies, this study details and proposes recommendations on
addressing some ethical challenges in gamified education, such as power
dynamics and paternalism, lack of voluntarity and confidentiality, cognitive
manipulation, and social comparison. Research and development decision-making
processes affected by such challenges are also elaborated, and potential
actions to mitigate their effects in gamification planning, conducting and
communication are further introduced. Thus, this chapter provides an
understanding of ethical challenges posed by the literature in gamified
education and a set of guidelines for future research and development. | Ana Carolina Tomé Klock, Brenda Salenave Santana, Juho Hamari | 2023-09-26T13:23:50Z | http://arxiv.org/abs/2309.14918v1 | Ethical challenges in gamified education research and development: An umbrella review and potential directions
###### Abstract
Gamification is a technological, economic, cultural, and societal development toward promoting a more game-like reality. As this emergent phenomenon has been gradually consolidated into our daily lives, especially in educational settings, many scholars and practitioners face a major challenge ahead: how to understand and mitigate the unethical impacts of gamification when researching and developing such educational technologies? Thus, this study explores ethical challenges in gamified educational applications and proposes potential solutions to address them based on an umbrella review. After analysing secondary studies, this study details and proposes recommendations on addressing some ethical challenges in gamified education, such as power dynamics and paternalism, lack of voluntarity and confidentiality, cognitive manipulation, and social comparison. Research and development decision-making processes affected by such challenges are also elaborated, and potential actions to mitigate their effects in gamification planning, conducting and communication are further introduced. Thus, this chapter provides an understanding of ethical challenges posed by the literature in gamified education and a set of guidelines for future research and development.
Keywords:Gamification, Teaching-learning processes, Virtue ethics
## 1 Introduction
Using game design elements to promote game-like experiences throughout many daily tasks - namely gamification - has garnered growing attention among scholars and practitioners over recent years [1]. Gamification is an emergent phenomenon in multiple domains, having a meaningful role in upholding many Sustainable Development Goals (SDG), such as good health and well-being (SDG3) [2], decent work and economic growth (SDG8) [3], and climate action (SDG13) [4]. Quality education (SDG4) is prominent for gamification among this variety of possibilities [5], since it offers a gameful way to engage and inspire students during the teaching-learning process. Consequently, there is a continuously increasing interest in both uses and implications of gamified education [6].
Nevertheless, despite contributing towards several improvements to the educational field, gamification also introduces adverse effects when not suitably applied [7]. For instance, while it aims to promote more appealing and rewarding learning, gamification affects diverse students in distinct ways [8]. Such individual differences raise questions on how scholars and practitioners should consider and handle data regarding personal characteristics and preferences - questions that are ethical by nature. Thus, understanding and promoting ways to ethically research and develop gamified applications in all fields, but especially in education, is an essential question yet to be addressed.
Towards an answer to such a question, this chapter investigates and discusses ethical challenges of gamification research methodology and application development in teaching-learning processes. This study is organised as follows: Section 2 provides an overview of normative ethics and its philosophical theories, as well as their relation to gamification research and development, and further decision-making processes regarding planning, conducting and communicating gamification outcomes. Section 3 describes the methodology, research questions, inclusion criteria, search process, screening procedure and data extraction plan. Section 4 details bibliometric information of the secondary studies, while elaborating on how to make ethical gamification and how to make gamification ethical. Finally, Section 5 concludes this chapter by presenting the final remarks and limitations.
## 2 Background
Ethics is an extensive philosophy branch that analyses and conceptualises moral behaviours and determines right from wrong through multiple perspectives (e.g., meta-ethics, normative ethics, and applied ethics) [9]. From these sub-branches, this work focuses on the normative ethics perspective by seeking to establish standards of conduct in a practical manner to promote better gamification research and development for educational settings. Normative ethics is a broad term that describes the moral reasoning (i.e., norms of conduct of what is acceptable or not) from multiple philosophical theories (e.g., consequentialism, deontological, and virtue ethics) and decision-making processes (e.g., planning, conducting and communicating) [10].
Briefly describing some of these **philosophical theories**, the _consequentialism_ perspective follows a utilitarian rationale, prioritising societal over individual interests (i.e., favouring actions that benefit the majority of people), regardless of potential harms. In opposition, the _deontological_ perspective follows standards based on universal moral principles and duties to others (e.g., what would happen if everyone adhered to this standard?), disregarding how these codes vary according to their context and their unexpected results. At last, _virtue ethics_ perspective follows behaviours towards living a virtuous life by practising good traits (e.g., honesty, integrity) and emphasising the interdependency of human beings, which creates a moral obligation to care for dependent groups (e.g., children, older people) and to use emotional virtues (e.g., sensitivity, responsiveness) when interacting with people. Gamification scholars and practitioners usually
focus on enhancing teaching-learning processes with motivational affordances to invoke psychological and behavioural outcomes, in which game elements are justified by their utility towards the so-called common good (i.e., consequentialism) [1]. At the same time, multiple studies discuss the so-called right way to design gamification, following a set of rules defined in a framework (i.e., deontology) [11]. Relatively newer in the gamification field, the virtue ethics viewpoint invites scholars and practitioners to move from coercion to facilitating the best life and from instrumental perfection to critical transformation (i.e., "a critical, transformative, socio-technical systems design practice for motivational affordances in the service of human flourishing" - eudaemonist virtue [12]).
In this sense, scholars and practitioners must ensure that gamification in educational settings focuses on students' flourishing, such as by providing a fulfilling and meaningful gameful learning experience, through a well-thought **decision-making process** throughout its research and development. For instance, in the _planning phase_, those researching or developing gamification for educational domains must evaluate their competence in terms of skills and expertise (or collaborating with those with the necessary complementary abilities), as well as be familiar with relevant ethical guidelines for technology-assisted education through the lenses of cultural relativism and applicable legislation [13]. As another example, the _conduction phase_ should follow principles of fairness, accountability, transparency and ethics (FATE) to avoid gamification designs that are exploitative or addictive [14], while ensuring that data is findable, accessible, interoperable and reusable (FAIR) [15]. As gamification scholars and practitioners, communicating the results of implementing such technology in education should be clear and comprehensive and respect students' privacy, whose data should be anonymised since the earlier stages of the gamified project [16]. Yet, a better understanding of the unethical issues in gamified education and a comprehensive set of guidelines to address them is still needed towards making ethical gamification development and making gamification research ethical.
## 3 Methodology
An umbrella review aims to summarise evidence from multiple research syntheses to provide an overall picture of findings for particular questions or phenomena [17]. According to this goal, the umbrella review might also include studies regarding different conditions and populations [18]. Furthermore, this methodology comprises a protocol detailing investigated research questions, inclusion criteria, search process, screening procedure and data extraction plan [18]. Regarding this study, despite the existing multiple secondary studies on the intersection of gamification and ethics, there is still no comprehensive understanding of how to address and overcome unethical issues in the research and development of gamification in education. Towards this goal, this chapter describes an umbrella review that complies with the following:
### Research questions
Towards understanding and finding ways to mitigate the ethical challenges of gamification when researching and developing such technologies, this chapter focuses on two main research questions:
1. **How to make ethical gamification?** addressing how to design, implement and evaluate the effects of gamified educational applications towards living a virtuous life
2. **How to make gamification ethical?** addressing how to plan, conduct and communicate the outcomes of gamification in educational settings
### Inclusion criteria
Based on these research questions for a high-quality and assertive outcome, this umbrella review adopted the following inclusion criteria:
* **Language:** Studies need to be written in English;
* **Venue:** Studies need to be published as Journal articles, Conference papers or Book chapters;
* **Methodology:** Studies need to conduct a secondary study;
* **Intervention:** Studies need to investigate gamification research, design or implementation; and
* **Outcome:** Studies need to tackle any ethical issues emerging from gamification.
### Search process
The search string was set following the PICOC method, which defines the Population, Intervention, Comparison, Outcome and Context of the desired studies [19]. In this chapter, the Population includes _any secondary study_, using _gamification_ as the Intervention, and focusing on _ethics_ as the main Outcome. Based on the research questions, there is no Comparison to be made among the studies. No limitations were defined based on the Context of these studies, as investigating ethical aspects of gamification in a broader sense also contributes to a deeper understanding and anticipation of potential issues towards their mitigation in the educational domain, especially given that it also employs, reuses and benefits from overall research methodology and development. Therefore, the search was conducted on Scopus, which indexes many of the literature databases available, and considered studies that meet _gamification AND ethic* AND (review OR systematic)_ in their title, abstract, or keywords. The search was conducted in September 2022 and returned a total of 34 works.
### Screening procedure
Based on the selection criteria (described in Subsection 3.2), the authors excluded studies based on their language (n = 1), venue (n = 7), methodology (n = 6), intervention (n = 7), and outcome (n = 8). Accordingly, a total of five secondary studies on the ethical issues and potential harms of gamification were included in this umbrella review [20; 21; 22; 23; 24].
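Purely as bookkeeping, the screening counts reported above can be tallied to confirm the number of included studies (an illustrative sketch, not part of the original review protocol):

```python
# Screening tally for the umbrella review (counts as reported above).
retrieved = 34
excluded = {"language": 1, "venue": 7, "methodology": 6, "intervention": 7, "outcome": 8}

included = retrieved - sum(excluded.values())
print(included)   # 5 secondary studies remain for analysis
```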
### Data extraction plan
Following the guidelines proposed by Aromataris et al. [17], this umbrella review extracted bibliometric information (i.e., citation details, objectives of the included reviews, review type, the context of the application, number of databases searched, publication range, number of studies included, and country of origin), and outcomes and implications reported that are related to the aforementioned research questions.
## 4 Results
In terms of bibliometric information, _Arora and Razavian (2021) [20]_ conducted a systematic review to understand the ethical issues in the existing empirical work on the effects of gamification in health tracking. For this, the authors analysed 23 studies ranging between 2012 and 2021 from six different search engines (i.e., ACM Digital Library, IEEE Xplore, PhilPapers, PubMed, Scopus, and Web of Science). The second study, from _Benner, Schobel and Janson (2021) [21]_, conducted a systematic review to understand the current ethical considerations in persuasive system design. The authors included 17 studies published between 2011 and 2020 from eight search engines (i.e., ACM Digital Library, AISel, Emerald, IEEE Xplore, JSTOR, PubMed, ScienceDirect and Springer-Link). Next, _Hassan and Hamari (2020) [22]_ conducted a systematic review to summarise what has been carried out on gamified e-participation. In this study, a total of 66 papers from Scopus were included, with publication dates ranging from 2012 to 2018. The fourth study, from _Humlung and Haddara (2019) [23]_, was a systematic review of how to apply gamification in business as a means to create an innovative environment. Searching in Google Scholar, the authors analysed 19 studies published between 2013 and 2019. The last included study, from _Hyrynsalmi, Smed and Kimppa (2017) [24]_, conducted a systematic review to understand the perceived negative side effects of applying gamification in a more general context. A total of 22 studies published between 2013 and 2016 were included in this study based on six search engines (i.e., ACM Digital Library, AISel, IEEE Xplore, ScienceDirect and Wiley Online Library). All these five systematic reviews were written by authors affiliated with European universities (i.e., Finland [22; 24], The Netherlands [20], Germany and Switzerland [21], and Norway [23]). Their outcomes and implications are detailed below while addressing our research questions.
### How to make ethical gamification? (RQ1)
Towards achieving such critical and transformative gamification as aimed by virtue ethics, the current means to design, implement, and evaluate it must be rethought. In this sense, the ethical challenges found in the included studies were:
_Power dynamics and Paternalism._ Shaping or reinforcing behaviours through persuasive technologies, such as gamification, _without the proper communication_ of the intentions behind gamification aligns with the consequentialism rationale, in which the benefits for the general audience are perceived to outweigh the potential harms [10, 23]. More than a lack of communication, gamification has paternalistic characteristics that _limit autonomy and freedom of choice_ by positioning scholars and practitioners as authorities of the "correct" behaviour from the deontology rationale, while fostering the stigmatisation of those who are not able to meet these desired behaviours from gamification goals [20, 21]. Thus, despite being generally used to foster "good" habits, gamification design, implementation and evaluation usually focus on top-down approaches that patronise individuals instead of promoting autonomous and voluntary engagement [22]. While gamification scholars and practitioners are all susceptible to their own implicit biases, consequentialism may take over again when the interest of a third party (e.g., companies, schools) contradicts and overcomes the original intention of helping people to achieve their own goals (e.g., dissonance, imbalance, conflict of interest), which now instead focuses on exploiting their real-world vulnerabilities [20, 21]. To address these challenges, _any (digital) nudge should be disclosed to preserve people' autonomy and freedom_, despite potential undesirable outcomes, given that those aware of this persuasion might react differently [21]. On top of that, paternalistic characteristics can be avoided by _educating people on nudging_, so that they would be aware of potential issues by themselves [24]. Furthermore, gamification scholars and practitioners should _include diverse stakeholders to account for multiple user voices during gamification design, implementation and evaluation processes_[22], while ensuring that its persuasive effects are not misused to exploit people physically, financially, emotionally, or psychologically [21]. In educational settings, gamification could also benefit from being based and aligned with transformative learning theories that allow a non-hierarchical dialogue among students and educators, so that individual needs are considered and learning becomes a more autonomous process where knowledge is promoted as collective construct [25].
_Lack of voluntarity._ Following power dynamics from gamification being unfair to one party, another ethical issue relates to people _feeling obliged to use gamified systems_. For instance, gamification might be deeply rooted in educational settings as an efficiency metric, while not being translated as beneficial for the students [20, 21]. Gamification not being entirely voluntary supports the power imbalance by intentionally or accidentally _sugar-coating coercive practices_ (i.e., by questionable means) and the _reinforcement of desirable outcomes_ (i.e., for questionable purposes) [22]. Examples of this can be seen, for instance, in a classroom where the educator gamifies a specific task that is more aligned with
school needs (e.g., for external evaluation purposes) than individual learning meaningfulness, and the assessment of students is conditional on their participation, regardless if they have the means or desire to perform this task in a gamified way. To avoid such misconduct, _gamification should include an opt-in design and proper anonymisation of people's information_[21].
_Confidentiality issues._ Another issue is the interaction between users and gamification providers. Mismanagement of the necessary communication between the parties can inflict fundamental ethical issues related to confidentiality. _Anonymisation and providing information on what data and why they are collected while asking for explicit permission would allow information security and data privacy in gamified systems_[20]. This would prevent dark patterns in the interaction design, such as cookies and consent default options favouring the gamification provider [21], intentionally luring people to share personal data through game elements, sharing or even selling personal data with third parties, and making people uncomfortable, anxious or any other psychological and emotional harm with their data being tracked or shared [20]. In this sense, _special attention should also be given when designing, implementing and evaluating gamification that might not ensure students' privacy_ - such as avatars, challenges and competitions, which recognise and record information on students' characteristics, performance, and opponents [26].
_Cognitive manipulation._ Gamification can also be a means to inhibit autonomy and undermine self-reflection in unjustifiable ways (e.g., distraction, addiction). For instance, gamification requiring instant reaction in some lines of work and job positions, such as medics and firefighters, adds unnecessary steps and _distractions_ that might cause danger, considerable losses, and threatening situations - overall physical and psychological harms [20]. At the same time, the potential moral, ethical and legal challenges of gamblification require further investigation [21], while the overall _addiction_ might have detrimental effects on people, such as obsessively relying on game incentives or choosing goals that are potentially harmful (e.g., dieting based on what other people consider healthy) [20]. Thus, _gamification should provide safe restrictions or warnings against cognitive effects, allow autonomy and personalisation, and focus on facilitating internal motivation (e.g., self-determination and self-reward) for a sustainable behaviour change_[20]. Furthermore, scholars and practitioners need to consider the context and target audience, such as adding gamification in products and services marketed for children or considering their impact on people with a history or tendency towards addiction [24]. This, however, should not be a way to justify hyper-focus on specific target audiences, in which some people would have privilege over others, nor a means to reinforce stereotypes, in which different genders would have different colours or game elements in a reductionist way [20]. Thus, _gamification should be deeply aligned with intended learning outcomes to ensure it is not distracting students from the educational content_. At the same time, scholars and practitioners should _understand the interaction of the students with the learning environment and its game elements to avoid reward dependency in the teaching-learning process_[27].
_Social comparison_. By drawing inspiration from games, gamification also allows competition and rivalry, potentially giving a sense of social overload and straining people [21]. This might lead to a loss of motivation and a feeling of segregation for those who systematically perform worse than their counterparts and, furthermore, might lead to cheating. Thus, gamification scholars and practitioners _should avoid giving people a sense of defeat while also transferring the responsibility to them by allowing cheating to some extent - with more autonomy in defining their own tasks, gamification supports a tolerant community of individual choice-making and acknowledges individual differences_[20]. In educational settings, while considering that students have different learning styles, _a tailor-made gamification_ might be a good alternative to allow everyone to have fun and play the game even though the rules are not the same for everybody [28].
### How to make gamification ethical? (RQ2)
More than rethinking what aspects of gamified educational applications can be designed, implemented and evaluated in an unethical manner (RQ1), scholars and practitioners should follow ethical principles to promote ethical research and development of gamification (RQ2). Thus, gamification research and development are indivisible from ethics, as scholars' and practitioners' choices during planning, conduct and communication have inherently ethical aspects [10].
For instance, whenever _planning_ gamification research and development in educational settings, scholars and practitioners must consider which values and interests are promoted by the research questions and by the purpose of gamification in the educational application. Given supervisors' individual interests and businesses' own agendas, gamification investigation and execution might be loaded with contradictory assumptions (i.e., _conflict of interest_) [10], which very much relate to the _power dynamics and paternalism_ discussed in RQ1. Thus, scholars and practitioners must reflect on the gamification implications in educational settings before starting the project to ensure the research and development integrity [29], especially regarding _sampling_ and _data collection methods_ through ethical lenses. Since the gamified education target audience tends to be quite broad and focuses on human beings, scholars and practitioners should clearly define who the subjects are (i.e., who is included, but especially who is excluded and why), while avoiding presenting people in stereotyped ways (e.g., tailored gamification that generalises preferences based on a single characteristic). More than understanding ethical principles when involving people in research and development, scholars and practitioners should provide informed consent that ensures transparency and _voluntarity_, clearly communicates potential risks and benefits of the participation, guarantees _confidentiality_, privacy and anonymisation, prevents _social comparison_, as well as affords special protections against _addiction, distraction_ and further needs for targeted populations (e.g., children, minorities and the elderly) [29].
Regarding _conducting_ gamification research and development in educational settings, gamification research and development should _ensure integrity and quality_ regardless of the chosen methodologies. For instance, involving participants from co-participatory approaches beyond the design phase allows a further co-production that promotes the inclusion of participants and their social worlds in the analysis (e.g., giving them a voice to agree or disagree with scholars and practitioners' understanding of their data, as opposed to _power dynamics and paternalism_) and dissemination of the results (e.g., giving them credit for the co-production, while guaranteeing _confidentiality_) [10]. Overall, reliability and validity are some examples of measurements that ensure research quality in _quantitative methods_, while transparency and data triangulation are examples of means to ensure quality when applying _qualitative methods_[10]. Nevertheless, doing ethical gamification research and development to the highest possible standard is not only a need but also a moral obligation. Scholars and practitioners should commit to following up-to-date ethics codes (e.g., APA Ethical Principles and Code of Conduct 1) to promote fairness [15] and avoid _misinterpretation_ or _misrepresentation_ of their gamification outcomes in educational settings.
Footnote 1: [https://www.apa.org/ethics/code](https://www.apa.org/ethics/code)
Moreover, gamification outcomes need to be _communicated_ to relevant audiences, such as academia, industry and the general public. Before that happens, it is essential to have an upfront discussion about the _authorship_ with scholars and practitioners involved in the research and development of gamification [10]. While communicating, misconduct in all of its forms should be prevented: scholars and practitioners must not alter data (i.e., _falsification_), nor publish data that were not actually collected (i.e., _fabrication_), and must not steal others' ideas, methods or data without proper attribution (i.e., _plagiarism_) [30]. Finally, communication should be complete (i.e., avoid fragmentary publication) and comprehensive (i.e., with a sufficient description of methods, corrections, and retractions) [29].
## 5 Final remarks
This chapter addressed a series of ethical issues in gamification research and development, with a particular focus on the educational field. From the analysis of secondary studies, this umbrella review explored many ethical challenges of gamified educational applications and proposed potential solutions for future research and development.
Towards designing, implementing and evaluating the effects of gamified educational applications that support living a virtuous life, we elaborated on _how to make ethical gamification_ (RQ1). Based on major unethical outcomes reported by the secondary studies, we propose that scholars and practitioners should ensure that digital nudging is properly disclosed to preserve people's autonomy and freedom, while educating people on nudging to raise awareness of
potential issues from one's perspective. Gamification research and development should also involve diverse stakeholders to account for multiple user voices to further avoid _power dynamics and paternalism_. As a means to avoid coercive practices in the face of the _lack of voluntarity_ from people in using gamification, gamification should include an opt-in design and proper anonymisation of users' information. Data anonymisation is also a good strategy to prevent _confidentiality issues_, while providing information on what data and why they are collected when asking for explicit permission from the students. Gamification scholars and practitioners also need to be careful when designing and implementing some game elements (e.g., avatars, challenges and competitions) that might not ensure students' privacy. Gamification should provide safe restrictions or warnings against _cognitive manipulation_, allowing autonomy and personalisation, and focusing on facilitating internal motivation. Towards preventing distractions and addiction, gamification needs to be aligned with the intended learning outcomes and understand students' interaction with the learning environment to avoid reward dependency. Autonomy is also important to prevent _social comparison_, by allowing students to define their own learning path or even promoting an autonomous tailoring system that understands students' individual preferences and needs. Towards planning, conducting and communicating the outcomes of gamification in educational settings, we elaborated on _how to make gamification ethical_ (RQ2), especially regarding conflicts of interest, sampling and data collection methods, ensuring integrity and quality regardless of the chosen methodologies, misinterpretation and misrepresentation, authorship agreement, and misconduct through falsification, fabrication, and plagiarism.
However, this study has some limitations. As with any umbrella review, it was not possible to ensure the quality assessment of the included primary works, which may vary from one secondary study to another. As with any systematic study, our work and the secondary ones may have issues with the definition of the search string, search engines and selection criteria, which may not have retrieved relevant papers during the search. Finally, since the screening of the study was conducted by one senior researcher in systematic reviews and gamification, albeit only one, we may have missed some relevant papers (false negatives) along the way. Because of that, our results could be slightly different from similar analysis approaches. Still, the ethical challenges in gamified educational research and development, as well as potential directions for future works, might be useful for scholars and practitioners as a first step towards promoting critical transformation of gamified educational applications and making them a tool to facilitate the best life.
#### 5.0.1 Acknowledgements.
This work was supported by the Academy of Finland Flagship Programme [grant No 337653, Forest-Human-Machine Interplay (UNITE)] and the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie [grant No 101029543, GamInclusive]. |
2310.20506 | Revisiting the paper Simulating dynamical features of escape panic: What
have we learnt since then? | The paper "Simulating dynamical features of escape panic" by Helbing, Farkas,
and Vicsek, published over two decades ago in Nature, has left an indelible
mark on the field of crowd dynamics. With nearly 3,000 citations to date,
according to the Web of Science records, and significant influence, it has
shaped the crowd dynamics field. This analysis investigates the overall
influence of this paper through a variety of indicators, mapping its reach
across research areas. The intellectual foundation of the paper is traced,
examining the references cited. The terminological impact is also explored,
showing how the paper made use of terms like "panic" and "herding". Moreover,
the alignment of the assumptions of the paper with empirical evidence is
discussed, finding discrepancies in key assertions about panic behaviour. The
numerical simulations of the paper and observations have significantly
influenced the field, such as for the case of the "faster-is-slower"
phenomenon. The paper remains a key pillar in crowd dynamics, nevertheless, we
advocate for a new course of the field shifting away from the terminology
adopted in the paper and focusing more on empirical evidence. | Milad Haghani, Enrico Ronchi | 2023-10-31T14:51:29Z | http://arxiv.org/abs/2310.20506v1 | Revisiting the paper "Simulating dynamical features of escape panic": What have we learnt since then?
###### Abstract
The paper "Simulating dynamical features of escape panic" by Helbing-Sarkas, and Vicsek, published over two decades ago in Nature, has left an indelible market on the field of crowd dynamics. With nearly 3,000 citations to date, according to the Web of Science records, and significant influence, it has shaped the crowd dynamics field. This analysis investigates the overall influence of this paper through a variety of indicators, mapping its reach across research areas. The intellectual foundation of the paper is traced, examining the references cited. The terminological impact is also explored, showing how the paper made use of terms like "panic" and "herding". Moreover, the alignment of the assumptions of the paper with empirical evidence is discussed, finding discrepancies in key assertions about panic behaviour. The numerical simulations of the paper and observations have significantly influenced the field, such as for the case of the "faster-is-slower" phenomenon. The paper remains a key pillar in crowd dynamics, nevertheless, we advocate for a new course of the field shifting away from the terminology adopted in the paper and focusing more on empirical evidence.
Social force model pedestrian dynamics crowd dynamics evacuation simulation
## 1 Introduction
More than two decades have passed since the publication of one of the most influential papers in the field of crowd dynamics authored by Helbing, Farkas, and Vicsek, which was featured in Nature (Helbing et al. [1]). Remarkably, this paper has obtained nearly 3,000 citations according to Web of Science and 6,000 citations according to Google Scholar up to now. Its influence extends beyond mere numerical counts, shaping the trajectory of the crowd dynamics field.
## 2 Overall Influence

The HFV article was preceded by an earlier publication on the social force model by Helbing and Molnár, which laid out the modelling framework
and introduced the model for the first time. The rate of annual citations for the HFV paper exhibited a consistent linear increase since its publication and that trend continued until 2015, when there was a noticeable change in this growing trend (See Fig. 1). This may coincide with a paradigm shift in crowd dynamics research around that year. Researchers began to shift their focus significantly towards experimental work, diverging from the predominant trend of numerical work up until that point.
One noteworthy aspect is the substantial portion of these citations originating from papers authored by researchers based in China, indicating the exceptional popularity and influence of this paper within this community. The majority of citing articles belong to the fields of Physics and Computer Science, with a notable concentration of these articles in the journal Physica A. Within the field of crowd dynamics, the HFV paper has also left its mark on influential papers such as [3] and [4] which introduced the concept of cellular automata modelling to the field in 2001. To gain a deeper understanding of the areas where the HFV paper has exerted its influence, we conducted a document co-citation analysis among the reference lists of the nearly 3,000 citing papers. This analysis identifies clusters of references that are co-cited by these papers. In other words, it reveals groups of references that tend to appear together in the citations to the HFV paper by its citing articles. We identified ten such clusters, and by analysing the key phrases in the titles of these citing articles, we identified and categorised the themes of studies that have referenced the HFV paper. These themes are listed in App. A in order of significance, showing how the influence of this paper has extended beyond the field of crowd dynamics.
Figure 1: The number of citing articles of the HFV paper every year since its publication (part (a)), the countries of origin of the citing articles (part (b)), and the clusters of citing articles and their relative prominence (part (c)).
Additionally, App. A provides a list of references that have been most frequently co-cited with the HFV paper.
## 3 Intellectual Foundation
The HFV paper cites a total of twenty references. In our analysis, we do not intend to scrutinise every single reference, as not all appear to be equally fundamental in shaping its intellectual foundation. Notably, the initial references cited in the HFV paper have played a pivotal role in setting the stage and justifying the study and modelling of _escape panic_. An intriguing observation concerning these initial references is that not all of them appear to support the argument; in fact, some seem to directly contradict the statements they accompany. For instance, the paper by Keating [5] is cited following a statement asserting _"Sometimes this behaviour [panic] is triggered in life-threatening situations such as fires"_. The reference title [5], _The myth of panic_, and its content strongly suggests that it does not align with the statement, but rather points in the opposite direction. Similarly, the subsequent statement reads, _"At other times, stampedes can arise during the rush for seats"_. One of the two references cited following this statement is the 1987 study by Johnson [6]. However, the content and conclusions of Johnson's paper appear to contradict the statement. This study [6] analyses empirical evidence related to a tragic incident before a rock concert and reports _"evidence showing that panic did not cause the death and injury of numerous young people"_. Furthermore, it highlights that post-disaster interviews and event reconstructions revealed no signs of stampede; instead, most competition stemmed from people's attempts to escape a crowd crush. This contradicts the notion that _panic_ can arise from people's rush for seats, as stated by HFV. Interestingly, in an editorial written by David Low about the HFV paper in the same issue of Nature, Johnson's paper is inaccurately cited following this statement, _"The consequences of crushing, trampling, and panic in crowds are well known"_. References [5] and [6] are repeatedly cited by HFV throughout the paper without them offering support in favour of the arguments that they accompany.
Another example arises in the statement, _"Panicking individuals tend to show maladaptive and relentless mass behaviour like jamming and life-threatening overcrowding, which has often been attributed to social contagion"_. The 1957 work of Quarantelli [7] is frequently cited following many of these statements. Published in 1957, it has been hard to trace this reference in contemporary databases. However, we managed to locate a more recent working paper by Quarantelli from 2001 [8], which we believe would naturally reflect the author's observations and reports from earlier stages of their career. The working paper is titled _The Sociology of Panic_ and it clearly indicates that the Quarantelli's findings could not have possibly been consistent with the portrayal presented in the HFV paper. Taken together, these observations cast doubt on some of the foundational ideas upon which the HFV paper appears to rely. In fact, certain aspects of the work cited seem to fundamentally contradict key premises of the paper or at least reflect a possible inaccurate use of terminology.
## 4 Terminological Influence
An examination of the text within the article reveals the prominent usage of two terms: _panic_ (mentioned 37 times) and _herding_ (mentioned 10 times). The notable prevalence of these terminologies within the field of pedestrian dynamics can be directly attributed to the influential impact of this article. As shown in [9], prior to the publication of this article, there was minimal mention of these two terms in the context of studies on crowds and evacuations. Similarly, the terminology of _crowd dynamics_ (mentioned 3 times), which has been detected in the titles, abstracts, or keywords of more than 700 articles since 2000, was scarcely identifiable in any publications within the aforementioned domains before the publication of this article. It is reasonable to conclude that the nomenclature by which this field is recognised owes its existence to the terminological influence exerted by this article.
## 5 Assumptions vs. Empirical Evidence
The foundation of the HFV paper hinges on the concept of _panic_, which it delineates through a set of defining characteristics. In this analysis, we now reevaluate some of these key assumptions to determine whether they align with the empirical evidence that has emerged in the field since that publication. _People move or try to move considerably faster than normal_. This behaviour does not necessarily indicate panic per se. _Individuals start pushing, and interactions among people become physical in nature_. Empirical evidence does not support the notion that people engage in competitive behaviour when responding to a life-threatening situation [10, 11]. _At exits, arching and clogging are observed_. Arching is not a particular characteristic of panic; rather, it is observable to some extent in laboratory experiments when a crowd of pedestrians is instructed to pass through a narrow bottleneck [12]. _Jams build up_. Similar to the previous point, the formation of a pedestrian traffic jam is linked to a situation where the inflow of pedestrians exceeds the capacity of a passage. _People show a tendency toward mass behaviour, that is, to do what other people do_. This assumption is particularly important because it is readily operationalisable in evacuation models. In many cases, in fact, crowd models have been assessed on their ability to replicate this phenomenon. However, empirical evidence accumulated since the HFV paper suggests that the matter of imitation is complex. Social influence has been observed in several experimental studies, but there are several factors and variables affecting its extent, e.g., crowd size, familiarity with the environment, and perceived urgency [13]. While there is evidence of some level of imitation regarding decision adaptation, it does not lead to blind herd-like behaviour where all individuals follow a single action (e.g., as visually depicted by Figure 1 of the editorial piece by David Low [14] on the HFV paper, marking one of the most influential illustrations produced in this field). In addition, empirical evidence has overwhelmingly suggested that humans take into consideration a range of factors in flight situations and not purely the decisions of others [15].
## 6 Numerical Simulations and Experimental Replication
The HFV article subjected its proposed crowd escape panic model to numerical testing, leading to recommendations that have significantly influenced both the field and public perception. Among these observations, the most consequential is the so-called _faster-is-slower_ phenomenon. The study considered the evacuation of a crowd of 200 agents through a 1m-wide exit while gradually increasing a parameter known as _desired velocity_. The outcome revealed that as the desired velocity increased, system efficiency (in this context, the inverse of total evacuation time) initially improved. However, further increases in this parameter led to reduced system efficiency, implying longer evacuation times. This finding was translated into the recommendation that the most efficient way to evacuate and survive a crowded space with limited exit capacity, relative to occupancy, is for individuals to remain patient and keep lower speeds when passing through bottlenecks. Another suggestion was that asymmetrical placement of columns in front of exits can improve outflow. Several dimensions warrant consideration: _(1) Empirical testing_: Empirical studies have in some instances confirmed and in others contradicted the faster-is-slower and column phenomena [16, 17, 18]. In a controlled experiment, participants were instructed to exit forcefully (without resorting to aggression), resulting in significantly higher flow efficiency compared to exiting in a calm and patient manner [16]. _(2) Model results need context_: Numerical findings could be very parameter-specific. According to Figure 1 (c) of the original paper, the minimum time required for 200 people to evacuate a room with a single 1m-wide exit is approximately 120s. Empirical observations indicate that this number of people may complete this evacuation in different (even faster) times depending on several factors. _(3) Desired velocity in non-free flow conditions_: The concept of _desired velocity_ is hard to grasp physically, except in free-flow conditions. In scenarios with bottlenecks or obstacles, it remains unclear how different levels of desired flow correspond to various levels of escape motivation. It is possible that in reality, assuming that the model components can manifest physically in the real world (at least in terms of desired velocity), we are always within the monotonically decreasing limb of the graph in Figure 1(c) of the HFV article. This would imply that an increase in evacuation time does not manifest for typical (non-aggressive) forces applied by people at a bottleneck. This observation has had profound implications and has influenced both research studies that employed ants and mice as models for human crowds [18] and public perceptions regarding efficient evacuations. The main argument is that the panic concept assumed by the model may be misinterpreted as an encouragement to stay at the very beginning of the left-hand side of the flow/density relationship, whereas we know that flow reaches its maximum at an optimal density and the corresponding speed, not at low speeds.
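To make the nature of such numerical experiments concrete, the sketch below is a minimal, non-authoritative toy (our own parameter choices and a coarse explicit Euler integrator, not the original HFV implementation): a small crowd driven by social-force-style terms leaves a square room through a 1 m exit, and the clearance time is reported for a few values of the desired velocity \(v_0\). Whether a faster-is-slower regime emerges depends strongly on the assumed constants, which illustrates the point above that such findings are parameter-specific.

```python
import numpy as np

# Toy 2D social-force evacuation (assumed parameters; not the original HFV code).
N_AGENTS, ROOM, EXIT_W = 25, 10.0, 1.0        # agents, room side [m], exit width [m]
MASS, TAU, R = 80.0, 0.5, 0.3                 # mass [kg], relaxation time [s], radius [m]
A, B, K, KAPPA = 2000.0, 0.08, 1.2e5, 2.4e5   # HFV-style force constants (assumed values)

def wall_force(p):
    """Exponential + body repulsion from the four walls, with a gap at the exit."""
    f = np.zeros(2)
    for axis in (0, 1):
        for wall, normal in ((0.0, 1.0), (ROOM, -1.0)):
            if axis == 0 and wall == ROOM and abs(p[1] - ROOM / 2) < EXIT_W / 2:
                continue                      # the exit slot exerts no wall force
            d = abs(p[axis] - wall)
            n = np.zeros(2); n[axis] = normal
            f += (A * np.exp((R - d) / B) + K * max(R - d, 0.0)) * n
    return f

def agent_forces(p, v):
    """Pairwise social repulsion, body force and sliding friction (vectorised)."""
    diff = p[:, None, :] - p[None, :, :]
    dist = np.linalg.norm(diff, axis=-1) + 1e-9
    n = diff / dist[..., None]
    overlap = np.clip(2 * R - dist, 0.0, None)
    repulse = A * np.exp((2 * R - dist) / B) + K * overlap
    np.fill_diagonal(repulse, 0.0)
    np.fill_diagonal(overlap, 0.0)
    t_vec = np.stack([-n[..., 1], n[..., 0]], axis=-1)
    dv_t = ((v[None, :, :] - v[:, None, :]) * t_vec).sum(-1)
    return ((repulse[..., None] * n) + (KAPPA * overlap * dv_t)[..., None] * t_vec).sum(axis=1)

def clearance_time(v0, dt=0.005, t_max=120.0):
    """Time until all agents have crossed the right wall (capped at t_max)."""
    xs = np.linspace(1.5, 6.5, 5)
    pos = np.array([[x, y] for x in xs for y in np.linspace(2.5, 7.5, 5)], dtype=float)
    vel = np.zeros_like(pos)
    exit_pt = np.array([ROOM, ROOM / 2])
    t, inside = 0.0, np.ones(N_AGENTS, dtype=bool)
    while inside.any() and t < t_max:
        idx = np.where(inside)[0]
        e = exit_pt - pos[idx]
        e /= np.linalg.norm(e, axis=1, keepdims=True)
        drive = MASS * (v0 * e - vel[idx]) / TAU
        walls = np.array([wall_force(p) for p in pos[idx]])
        acc = (drive + walls + agent_forces(pos[idx], vel[idx])) / MASS
        vel[idx] += acc * dt
        pos[idx] += vel[idx] * dt
        inside[idx] = pos[idx, 0] < ROOM      # crossing the right wall means "out"
        t += dt
    return t

for v0 in (1.0, 2.5, 5.0):                    # desired speeds [m/s]
    print(f"v0 = {v0:.1f} m/s  ->  clearance time ~ {clearance_time(v0):.1f} s")
```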
## 7 Conclusions
Our analysis suggests that the HFV paper stands as one of the most fundamental and influential contributions to the field of crowd dynamics. It has not only profoundly shaped the terminologies employed within this field but has also significantly influenced the trajectory of research in this field.
2305.00460 | A Family of Bipartite Separability Criteria Based on Bloch
Representation of Density Matrices | We study the separability of bipartite quantum systems in arbitrary
dimensions based on the Bloch representation of density matrices. We present
two separability criteria for quantum states in terms of the matrices
$T_{\alpha\beta}(\rho)$ and $W_{ab,\alpha\beta}(\rho)$ constructed from the
correlation tensors in the Bloch representation. These separability criteria
can be simplified and detect more entanglement than the previous separability
criteria. Detailed examples are given to illustrate the advantages of results. | Xue-Na Zhu, Jing Wang, Gui Bao, Ming Li, Shu-Qian Shen, Shao-Ming Fei | 2023-04-30T12:11:51Z | http://arxiv.org/abs/2305.00460v2 | # A Family of Bipartite Separability Criteria Based on Bloch Representation of Density Matrices
###### Abstract
**Abstract** We study the separability of bipartite quantum systems in arbitrary dimensions based on the Bloch representation of density matrices. We present two separability criteria for quantum states in terms of the matrices \(T_{\alpha\beta}(\rho)\) and \(W_{ab,\alpha\beta}(\rho)\) constructed from the correlation tensors in the Bloch representation. These separability criteria can be simplified and detect more entanglement than the previous separability criteria. Detailed examples are given to illustrate the advantages of our results.
## I Introduction
Quantum entanglement [1; 2; 3; 4; 5] lies at the heart of quantum information processing and quantum computation [6]. The quantification of quantum entanglement has drawn much attention in the last decade. A prior question in the study of quantum entanglement is to determine whether a given quantum state is entangled or not. Denote \(H_{M}\) and \(H_{N}\) the vector spaces with dimensions \(M\) and \(N\), respectively. A bipartite \(M\otimes N\) state \(\rho\in H_{M}\otimes H_{N}\) is said to be separable if it can be written as a convex sum of tensor products of the states of subsystems,
\[\rho=\sum_{i}p_{i}\rho_{M}^{i}\otimes\rho_{N}^{i}, \tag{1}\]
where \(p_{i}\geq 0\) and \(\sum_{i}p_{i}=1\). Otherwise \(\rho\) is said to be entangled.
As a consequence, much effort has been devoted to the so-called separability problem. The most well-known one is the positive partial transpose (PPT) criterion [7; 8], which is both necessary and sufficient for low-dimensional systems \(2\otimes 2\) and \(2\otimes 3\). For high-dimensional states, the PPT criterion is only a necessary one. A variety of separability criteria have been proposed so far, such as the realignment criteria [9; 10], the covariance matrix criterion (CMC) [11] and so on [12; 13; 14; 15]. In particular, many subsequent works [16; 17; 18; 19] have been devoted to finding necessary conditions for separability based on the Bloch representation of density matrices.
In terms of the Bloch representation any quantum state \(\rho\in H_{M}\otimes H_{N}\) can be written as,
\[\rho=\frac{1}{MN}\big{(}I_{M}\otimes I_{N}+\sum_{k=1}^{M^{2}-1}r_{k}\lambda_{ k}^{M}\otimes I_{N}+\sum_{l=1}^{N^{2}-1}s_{l}I_{M}\otimes\lambda_{l}^{N}+\sum_{k= 1}^{M^{2}-1}\sum_{l=1}^{N^{2}-1}t_{kl}\lambda_{k}^{M}\otimes\lambda_{l}^{N} \big{)}, \tag{2}\]
where \(I_{i}\) (\(i=M,N\)) denote the \(i\times i\) identity matrix, \(\lambda_{i}^{M}\), \(i=1,2,...,M^{2}-1\), are the generators of \(SU(M)\) given by \(\{\omega_{l},u_{jk},v_{jk}\}\) with \(\omega_{l}=\sqrt{\frac{2}{(l+1)(l+2)}}\left(\sum_{i=0}^{l}|i\rangle\langle i|-(l+1)|l+1\rangle\langle l+1|\right)\), \(u_{jk}=|j\rangle\langle k|+|k\rangle\langle j|\), \(v_{jk}=-i(|j\rangle\langle k|-|k\rangle\langle j|)\), \(0\leq l\leq M-2\) and \(0\leq j<k\leq M-1\), \(r_{i}=\frac{M}{2}Tr(\rho\lambda_{i}^{M}\otimes I_{N})\), \(s_{i}=\frac{N}{2}Tr(\rho I_{M}\otimes\lambda_{i}^{N})\) and \(t_{ij}=\frac{MN}{4}Tr(\rho\lambda_{i}^{M}\otimes\lambda_{j}^{N})\).
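For readers who wish to experiment numerically, the following sketch (assuming only numpy; the helper names `su_generators`, `bloch_data` and `ky_fan` are ours, not the paper's) builds the generators defined above, extracts \(r\), \(s\) and \(T(\rho)\) from a density matrix, and evaluates the Ky Fan norm used throughout.

```python
import numpy as np

def su_generators(M):
    """The SU(M) generators {w_l, u_jk, v_jk} used in Eq. (2)."""
    gens = []
    for l in range(M - 1):                      # w_l, 0 <= l <= M-2
        w = np.zeros((M, M), dtype=complex)
        w[np.arange(l + 1), np.arange(l + 1)] = 1.0
        w[l + 1, l + 1] = -(l + 1)
        gens.append(np.sqrt(2.0 / ((l + 1) * (l + 2))) * w)
    for j in range(M):                          # u_jk, v_jk, 0 <= j < k <= M-1
        for k in range(j + 1, M):
            u = np.zeros((M, M), dtype=complex); u[j, k] = u[k, j] = 1.0
            v = np.zeros((M, M), dtype=complex); v[j, k] = -1j; v[k, j] = 1j
            gens += [u, v]
    return gens                                 # M^2 - 1 traceless Hermitian matrices

def bloch_data(rho, M, N):
    """Return the vectors r, s and the matrix T(rho) of Eq. (2)."""
    lamM, lamN = su_generators(M), su_generators(N)
    r = np.array([M / 2 * np.trace(rho @ np.kron(g, np.eye(N))).real for g in lamM])
    s = np.array([N / 2 * np.trace(rho @ np.kron(np.eye(M), g)).real for g in lamN])
    T = np.array([[M * N / 4 * np.trace(rho @ np.kron(gm, gn)).real for gn in lamN]
                  for gm in lamM])
    return r, s, T

def ky_fan(A):
    """Ky Fan norm: the sum of the singular values of A."""
    return np.linalg.svd(A, compute_uv=False).sum()
```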
Denote \(r=(r_{1},...,r_{M^{2}-1})^{t}\) and \(s=(s_{1},...,s_{N^{2}-1})^{t}\), where \(t\) stands for transpose. Let \(T(\rho)\) be the matrix with entries \(t_{kl}\). If the bipartite state \(\rho\in H_{M}\otimes H_{N}\) with Bloch representation (2) is separable, it has been shown that [16]
\[||T(\rho)||_{KF}\leq\sqrt{\frac{MN(M-1)(N-1)}{4}}, \tag{3}\]
where the Ky Fan matrix norm is defined as the sum of the singular values of the matrix, \(||A||_{KF}=Tr\sqrt{A^{\dagger}A}\). In [18] the authors presented a stronger separability criterion,
\[||T^{{}^{\prime}}(\rho)||_{KF}\leq\frac{\sqrt{(M^{2}-M+2)(N^{2}-N+2)}}{2MN} \tag{4}\]
for separable states, where \(T^{{}^{\prime}}(\rho)=\begin{pmatrix}1&s^{t}\\ r&T(\rho)\end{pmatrix}.\) In [17], the authors constructed the following matrix,
\[S_{ab}^{m}(\rho)=\begin{pmatrix}abE_{m\times m}&aw_{m}^{t}(s)\\ bw_{m}(r)&T(\rho)\end{pmatrix},\]
where \(a\) and \(b\) are nonnegative real numbers, \(E_{m\times m}\) is the \(m\times m\) matrix with all entries being \(1\), \(m\) is a given natural number, and \(w_{m}(x)\) denotes the matrix formed by \(m\) copies of the column vector \(x\), i.e., \(w_{m}(x)=(x...x).\) Theorem 1 of [17] showed that if the state \(\rho\in H_{M}\otimes H_{N}\) is separable, then \(\rho\) satisfies
\[||S_{ab}^{m}(\rho)||_{KF}\leq\frac{1}{2}\sqrt{(2ma^{2}+M^{2}-M)(2mb^{2}+N^{2}- N)}, \tag{5}\]
which is even stronger than the previous criteria.
## II Separability conditions from the Bloch representation based on \(T_{\alpha\beta}(\rho)\)
Denote \(\alpha=(a_{1},...,a_{n})^{t}\) and \(\beta=(b_{1},...,b_{m})^{t}\), where \(a_{i}\) (\(i=1,...,n\)) and \(b_{j}\) (\(j=1,...,m\)) are given real numbers, \(m\) and \(n\) are positive integers. We define the following matrix,
\[T_{\alpha\beta}(\rho)=\begin{pmatrix}\alpha\beta^{t}&\alpha s^{t}\\ r\beta^{t}&T(\rho)\end{pmatrix}. \tag{6}\]
Using \(T_{\alpha\beta}(\rho)\), we have the following separability criterion for bipartite states.
**Theorem 1**: _If the state \(\rho\in H_{M}\otimes H_{N}\) is separable, then_
\[||T_{\alpha\beta}(\rho)||_{KF}\leq\sqrt{||\alpha||_{2}^{2}+\frac{M(M-1)}{2}} \sqrt{||\beta||_{2}^{2}+\frac{N(N-1)}{2}}, \tag{7}\]
_where \(||\cdot||_{2}\) is the Euclidean norm on \(R^{N^{2}-1}\)._
[Proof] A bipartite quantum state with Bloch representation (2) is separable if and only if there exist vectors \(\mu_{i}\in R^{M^{2}-1}\) and \(\nu_{i}\in R^{N^{2}-1}\) with \(||\mu_{i}||_{2}=\sqrt{\frac{M(M-1)}{2}}\) and \(||\nu_{i}||_{2}=\sqrt{\frac{N(N-1)}{2}}\), and \(0<p_{i}\leq 1\) with \(\sum_{i}p_{i}=1\) such that
\[T(\rho)=\sum_{i}p_{i}\mu_{i}\nu_{i}^{t},r=\sum_{i}p_{i}\mu_{i},s=\sum_{i}p_{i} \nu_{i}.\]
The matrix \(T_{\alpha\beta}(\rho)\) can then be written as,
\[T_{\alpha\beta}(\rho) = \begin{pmatrix}\alpha\beta^{t}&\alpha s^{t}\\ r\beta^{t}&T(\rho)\end{pmatrix}\] \[= \sum_{i}p_{i}\begin{pmatrix}\alpha\beta^{t}&\alpha\nu_{i}^{t}\\ \mu_{i}\beta^{t}&\mu_{i}\nu_{i}^{t}\end{pmatrix}\] \[= \sum_{i}p_{i}\begin{pmatrix}\alpha\\ \mu_{i}\end{pmatrix}\left(\beta^{t},\nu_{i}^{t}\right).\]
Therefore,
\[||T_{\alpha\beta}(\rho)||_{KF} \leq \sum_{i}p_{i}\left||\begin{pmatrix}\alpha\\ \mu_{i}\end{pmatrix}\right||_{2}\cdot\left|\left|\left(\beta^{t},\nu_{i}^{t} \right)\right|\right|_{2}\] \[= \sqrt{||\alpha||_{2}^{2}+\frac{M(M-1)}{2}}\sqrt{||\beta||_{2}^{2 }+\frac{N(N-1)}{2}}.\]
It can be seen that if we choose \(a_{i}=a\) and \(b_{j}=b\) for \(i,j=1,...,n\) and \(m=n\), Theorem 1 reduces to the separability criterion (5) given in [17].
Define
\[R(\beta)=\begin{pmatrix}p\beta\beta^{t}&\beta c^{t}\\ c\beta^{t}&\mathrm{W}\end{pmatrix},\]
where \(p\) is a nonzero real number, \(\beta\) (\(c\)) is a nonzero \(n\) (\(m\))-dimensional real vector, \(\mathrm{W}\) is an \(m\times m\) Hermitian matrix. We denote \(\lambda_{i}(R(\beta))\) (\(i=1,...,m+n\)) the singular values of \(R(\beta)\) with \(\lambda_{i}(R(\beta))\leq\lambda_{j}(R(\beta))\) (\(i\leq j\)).
**Lemma 1**: _For \(\beta_{1}\neq\beta_{2}\) but \(||\beta_{1}||_{2}=||\beta_{2}||_{2}\), we have \(\lambda_{i}(R(\beta_{1}))=\lambda_{i}(R(\beta_{2}))\) (\(i=1,...,m+n\))._
[Proof] With respect to any nonzero real vector \(\beta=(b_{1},b_{2},...,b_{n})^{t}\), there exists a unitary matrix \(\mathrm{U}\) such that \(\mathrm{U}\beta=(0,0,...,0,||\beta||_{2})^{t}\). Then we have
\[\begin{pmatrix}U&0\\ 0&I\end{pmatrix}R\begin{pmatrix}U^{\dagger}&0\\ 0&I\end{pmatrix}=\begin{pmatrix}0&0&0\\ 0&p||\beta||_{2}^{2}&||\beta||_{2}c^{t}\\ 0&||\beta||_{2}c&\mathrm{W}\end{pmatrix}.\]
Denote
\[D(\beta)=\begin{pmatrix}p||\beta||_{2}^{2}&||\beta||_{2}c^{t}\\ ||\beta||_{2}c&\mathrm{W}\end{pmatrix}.\]
Since the singular values of an Hermitian matrix do not change under the unitary transformations, we have \(\lambda_{i}(R(\beta))=\lambda_{i}\left(\begin{pmatrix}0&0\\ 0&D(\beta)\end{pmatrix}\right),\) (\(i=1,...,m+n\)). Because of \(D(\beta_{1})=D(\beta_{2})\), we complete the proof. \(\square\)
Since the Ky Fan matrix norm \(||T_{\alpha\beta}(\rho)||_{KF}=Tr\sqrt{T_{\alpha\beta}(\rho)^{\dagger}T_{\alpha\beta}(\rho)}\), \(r\in R^{M^{2}-1}\), \(s\in R^{N^{2}-1}\) and \(T(\rho)\in R^{(M^{2}-1)\times(N^{2}-1)}\), we have
\[T_{\alpha\beta}(\rho)^{\dagger}T_{\alpha\beta}(\rho)=\begin{pmatrix}(||\alpha ||_{2}^{2}+||r||_{2}^{2})\beta\beta^{t}&\beta(||\alpha||_{2}^{2}s^{t}+r^{t}T) \\ (||\alpha||_{2}^{2}s+T^{t}r)\beta^{t}&||\alpha||_{2}^{2}ss^{t}+T^{t}T\end{pmatrix}.\]
By using Lemma 1 we have the following corollary.
**Corollary 1**: _For any quantum state \(\rho\), \(||T_{\alpha\beta}(\rho)||_{KF}=||T_{||\alpha||_{2}||\beta||_{2}}(\rho)||_{KF}.\)_
From Corollary 1, we see that only the norms of \(\alpha\) and \(\beta\) matter when dealing with the norm of \(T_{\alpha\beta}(\rho)\). Hence, we simplify Theorem 1 to the following corollary.
**Corollary 2**: _If the state \(\rho\in H_{M}\otimes H_{N}\) is separable, then_
\[||T_{ab}(\rho)||_{KF}\leq\sqrt{a^{2}+\frac{M(M-1)}{2}}\sqrt{b^{2}+\frac{N(N-1)} {2}}\]
_for any non-negative real numbers \(a\) and \(b\)._
Corollary 2 is equivalent to the Theorem 1 with \(||\alpha||_{2}=a\) and \(||\beta||_{2}=b\).
_Example 1:_ We consider the \(2\otimes 4\) state, \(\rho_{x}=x|\xi\rangle\langle\xi|+(1-x)\rho\), where \(|\xi\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)\), \(\rho\) is the bound entangled state considered in [17; 18],
\[\rho\,=\tfrac{1}{7d+1}\,\begin{pmatrix}d&0&0&0&0&d&0&0\\ 0&d&0&0&0&0&d&0\\ 0&0&d&0&0&0&0&d\\ 0&0&0&d&0&0&0&0\\ 0&0&0&0&\frac{1+d}{2}&0&0&\frac{\sqrt{1-d^{2}}}{2}\\ d&0&0&0&0&d&0&0\\ 0&d&0&0&0&0&d&0\\ 0&0&d&0&\frac{\sqrt{1-d^{2}}}{2}&0&0&\frac{1+d}{2}\end{pmatrix}\]
with \(d\in(0,1)\). For simplicity, set \(d=\frac{9}{10}\) and choose \(\alpha=(\frac{1}{2\sqrt{3}},\frac{1}{2\sqrt{3}})^{t}\) and \(\beta=(1,0)^{t}\). Then Theorem 1 detects the entanglement of \(\rho_{x}\) for \(x\in[0.223406,1]\). One may also choose \(\alpha=(a_{1},..,a_{n})^{t}\) and \(\beta=(b_{1},...,b_{m})^{t}\) in general, where \(\sum_{i=1}^{n}a_{i}^{2}=\frac{1}{6}\) and \(\sum_{i=1}^{m}b_{i}^{2}=1\). The result is the same.
Combining Theorem 1 and Corollary 2, we have the following theorem.
**Theorem 2**: _If a state \(\rho\in H_{M}\otimes H_{N}\) is separable, then_
\[||T_{ab}(\rho)||_{KF}\leq\sqrt{\frac{NM(N-1)(M-1)}{4}}+|ab|, \tag{8}\]
_where \(a,b\in R\) and \(|b|=|a|\sqrt{\frac{N(N-1)}{M(M-1)}}\)._
[Proof] For a state \(\rho\in H_{M}\otimes H_{N}\), we have
\[||T_{ab}(\rho)||_{KF}=||\begin{pmatrix}ab&as^{t}\\ br&T(\rho)\end{pmatrix}||_{KF}\geq|ab|+||T(\rho)||_{KF},\]
where the inequality is due to \(||\begin{pmatrix}A&B\\ C&D\end{pmatrix}||_{KF}\geq||A||_{KF}+||D||_{KF}\) for any complex matrices \(A,B,C\) and \(D\) with adequate dimensions [16]. If \(\rho\) is separable, we have
\[||T_{ab}(\rho)||_{KF}\leq\sqrt{a^{2}+\frac{M(M-1)}{2}}\sqrt{b^{2}+\frac{N(N-1) }{2}}\]
and
\[||T(\rho)||_{KF}\leq\sqrt{\frac{MN(M-1)(N-1)}{4}}.\]
Setting
\[\sqrt{a^{2}+\frac{M(M-1)}{2}}\sqrt{b^{2}+\frac{N(N-1)}{2}}=|ab|+\sqrt{\frac{ MN(M-1)(N-1)}{4}},\]
we have \(|b|=|a|\sqrt{\frac{N(N-1)}{M(M-1)}}\). \(\square\)
From the proof of Theorem 2, for the separable quantum states one has
\[||T(\rho)||_{KF}\leq||T_{ab}(\rho)||_{KF}-|ab|\leq\sqrt{\frac{MN(M-1)(N-1)}{4 }}.\]
Theorem 2 can detect more entanglement than Theorem 1 of [16]; see the following example.
_Example 2:_ Consider the following two-qubit state, \(\rho=p|\psi\rangle\langle\psi|+(1-p)|00\rangle\langle 00|\), where \(p\in[0,1]\) and \(|\psi\rangle=\frac{1}{\sqrt{2}}(|01\rangle+|10\rangle).\) Let \(b=a\neq 0\). We have \(||T_{aa}(\rho)||_{KF}=2p+\sqrt{4a^{2}p^{2}+(2p-1-a^{2})^{2}}\), which implies that \(||T_{aa}(\rho)||_{KF}>1+a^{2}\) for \(p\in(0,1]\). Namely, the entanglement is detected for \(p\in(0,1]\), which is better than the result \(p\in(\frac{1}{2},1]\) from Theorem 1 in [16].
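The closed form above is easy to cross-check numerically. The following sketch is only an illustration; it assumes the standard qubit Bloch convention for (2), namely \(r_{i}=Tr(\rho\,\sigma_{i}\otimes I)\), \(s_{j}=Tr(\rho\,I\otimes\sigma_{j})\) and \(t_{ij}=Tr(\rho\,\sigma_{i}\otimes\sigma_{j})\) with the Pauli matrices \(\sigma_{i}\), and all function names are ours.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def bloch_data(rho):
    """Two-qubit Bloch data: r_i = Tr(rho sigma_i x I), s_j = Tr(rho I x sigma_j),
    T_ij = Tr(rho sigma_i x sigma_j)."""
    r = np.array([np.trace(rho @ np.kron(p, I2)).real for p in paulis])
    s = np.array([np.trace(rho @ np.kron(I2, p)).real for p in paulis])
    T = np.array([[np.trace(rho @ np.kron(p, q)).real for q in paulis] for p in paulis])
    return r, s, T

def T_aa(a, r, s, T):
    """The matrix T_{ab}(rho) of Eq. (6) for scalar alpha = beta = a."""
    return np.vstack([np.hstack([[a * a], a * s]), np.hstack([a * r.reshape(3, 1), T])])

psi = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)   # (|01> + |10>)/sqrt(2)
e00 = np.zeros(4, dtype=complex); e00[0] = 1                # |00>
a = 0.7
for p in [0.05, 0.3, 0.8]:
    rho = p * np.outer(psi, psi.conj()) + (1 - p) * np.outer(e00, e00.conj())
    r, s, T = bloch_data(rho)
    lhs = np.linalg.norm(T_aa(a, r, s, T), ord='nuc')            # Ky Fan norm
    closed_form = 2 * p + np.sqrt(4 * a**2 * p**2 + (2 * p - 1 - a**2)**2)
    print(p, lhs, closed_form, lhs > 1 + a**2)   # detection whenever lhs > 1 + a^2
```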
## III Separability conditions from the Bloch representation based on \(W_{ab,\alpha\beta}(\rho)\)
Next we define
\[W_{ab,\alpha\beta}(\rho)=\begin{pmatrix}ab&a\alpha^{t}\otimes s^{t}\\ b\beta\otimes r&\beta\alpha^{t}\otimes T(\rho)\end{pmatrix}, \tag{9}\]
where \(a\) and \(b\) are real numbers. Using \(W_{ab,\alpha\beta}(\rho)\), we get the following separability criterion for bipartite states.
**Theorem 3**: _If the state \(\rho\in H_{M}\otimes H_{N}\) is separable, then_
\[||W_{ab,\alpha\beta}(\rho)||_{KF}\leq\sqrt{a^{2}+||\beta||_{2}^{2}\frac{M(M-1) }{2}}\sqrt{b^{2}+||\alpha||_{2}^{2}\frac{N(N-1)}{2}}, \tag{10}\]
_where \(||\cdot||_{2}\) is the Euclidean norm on \(R^{N^{2}-1}\)._
[Proof] A bipartite quantum state with Bloch representation (2) is separable if and only if there exist vectors \(\mu_{i}\in R^{M^{2}-1}\) and \(\nu_{i}\in R^{N^{2}-1}\) with \(||\mu_{i}||_{2}=\sqrt{\frac{M(M-1)}{2}}\) and \(||\nu_{i}||_{2}=\sqrt{\frac{N(N-1)}{2}}\), \(0<p_{i}\leq 1\) with \(\sum_{i}p_{i}=1\) such that \(T(\rho)=\sum_{i}p_{i}\mu_{i}\nu_{i}^{t}\), \(r=\sum_{i}p_{i}\mu_{i}\) and \(s=\sum_{i}p_{i}\nu_{i}\). Therefore, for separable states \(\rho\) the matrix \(W_{ab,\alpha\beta}(\rho)\) reduces to
\[W_{ab,\alpha\beta}(\rho) = \begin{pmatrix}ab&a\alpha^{t}\otimes s^{t}\\ b\beta\otimes r&\beta\alpha^{t}\otimes T(\rho)\end{pmatrix}\] \[= \sum_{i}p_{i}\begin{pmatrix}ab&a\alpha^{t}\otimes\nu_{i}^{t}\\ b\beta\otimes\mu_{i}&\beta\alpha^{t}\otimes\mu_{i}\nu_{i}^{t}\end{pmatrix}\] \[= \sum_{i}p_{i}\begin{pmatrix}a\\ \beta\otimes\mu_{i}\end{pmatrix}\begin{pmatrix}b&\alpha^{t}\otimes\nu_{i}^{t} \end{pmatrix}.\]
Hence one gets
\[||W_{ab,\alpha\beta}(\rho)||_{KF} \leq \sum_{i}p_{i}\left|\left|\begin{pmatrix}a\\ \beta\otimes\mu_{i}\end{pmatrix}\right|\right|_{2}\cdot\left|\left|\begin{pmatrix} b&\alpha^{t}\otimes\nu_{i}^{t}\end{pmatrix}\right|\right|_{2}\] \[= \sqrt{a^{2}+||\beta||_{2}^{2}\frac{M(M-1)}{2}}\sqrt{b^{2}+||\alpha ||_{2}^{2}\frac{N(N-1)}{2}},\]
which proves the theorem. \(\square\)
_Example 3:_ For the quantum state \(\rho_{x}\) with \(d=\frac{9}{10}\) in Example 1, if we take \(a=\frac{1}{\sqrt{6}}\), \(b=1\), \(\beta^{t}=(1,-2)\) and \(\alpha^{t}=(1,3)\), Theorem 3 can detect the entanglement of \(\rho_{x}\) for \(x\in[0.22325,1]\), which is better than the result \(x\in[0.2234,1]\) from [20].
Below we provide another example of PPT state whose entanglement is not detected by the filtered CMC [11] but detected by our Theorem 3.
_Example 4:_ Consider a two qubit state,
\[\rho\ =\tfrac{1}{2}\ \begin{pmatrix}1+a_{1}&0&0&a_{3}\\ 0&0&0&0\\ 0&0&a_{2}-a_{1}&0\\ a_{3}&0&0&1-a_{2}\end{pmatrix},\]
where the real parameters \(\{a_{1},a_{2},a_{3}\}\) are taken such that \(\rho\geq 0.\) We choose \(\alpha=(1,1)^{t}\), \(\beta=(1,1)^{t}\), \(a=\sqrt{2}x\) and \(b=\sqrt{2}y\) in \(W_{ab,\alpha\beta}(\rho)\). From Theorem 3, we have that if \(\rho\) is separable, then
\[|a_{3}|+\sqrt{\lambda_{+}}+\sqrt{\lambda_{-}}\leq\sqrt{\frac{1+x^{2}}{2}}\sqrt {\frac{1+y^{2}}{2}}, \tag{11}\]
where
\(\lambda_{\pm}=\frac{1}{8}\left((1+a_{1}-a_{2})^{2}+a_{2}^{2}x^{2}+a_{1}^{2}y^{2}+ x^{2}y^{2}\pm\sqrt{((1+a_{1}-a_{2})^{2}+a_{2}^{2}x^{2}+a_{1}^{2}y^{2}+x^{2}y^{2})^{2} -4(1+a_{1})^{2}(1-a_{2})^{2}x^{2}y^{2}}\right).\)
The inequality (11) is the same as the one from [19], which recovers the \(PPT\) condition for \(\rho\).
Furthermore, we consider the family of \(3\otimes 3\) bound entangled states \(\rho_{PH}^{x}\) introduced by P. Horodecki[15, 21].
_Example 5:_ Consider the mixtures of \(\rho_{PH}^{x}\) with the white noise, \(\rho(x,q)=q\rho_{PH}^{x}+(1-q)\frac{I}{9}\), where \(0\leq q\leq 1\) and
\[\rho_{PH}^{x}\ =\frac{1}{8x+1}\ \begin{pmatrix}x&0&0&0&x&0&0&0&x\\ 0&x&0&0&0&0&0&0&0\\ 0&0&x&0&0&0&0&0&0\\ 0&0&0&x&0&0&0&0&0\\ x&0&0&0&x&0&0&0&x\\ 0&0&0&0&0&x&0&0&0\\ 0&0&0&0&0&0&\frac{1+x}{2}&0&\frac{\sqrt{1-x^{2}}}{2}\\ 0&0&0&0&0&0&0&x&0\\ x&0&0&0&x&0&\frac{\sqrt{1-x^{2}}}{2}&0&\frac{1+x}{2}\end{pmatrix}.\]
For simplicity, we let \(x=0.9\). From Fig. 4 of [15], \(\rho(0.9,q)\) is entangled for \(q>0.997\). We take \(a=\frac{1}{12}\), \(b=\frac{1}{6}\), \(\alpha=(\frac{1}{8},\frac{1}{8})^{t}\) and \(\beta=\frac{1}{8}\) in \(W_{ab,\alpha\beta}(\rho(0.9,q))\). From our Theorem 3, \(\rho(0.9,q)\) is entangled when \(q>0.9867\), which improves on [15]. See Fig. 1, where \(\Delta=||W_{ab,\alpha\beta}(\rho(0.9,q))||_{KF}-\sqrt{a^{2}+3||\beta||_{2}^{2}}\sqrt{b^{2}+3||\alpha||_{2}^{2}}.\)
Next, we give the relation between Corollary 2 and Theorem 3.
**Corollary 3**: _For any quantum state \(\rho\in H_{M}\otimes H_{N}\), \(||W_{ab,\alpha\beta}(\rho)||_{KF}=||W_{ab,||\alpha||_{2}||\beta||_{2}}(\rho)||_{KF}\) for any non-negative real numbers \(a\) and \(b\)._
[Proof] For \(W_{ab,\alpha\beta}(\rho)\), we have
\[W_{ab,\alpha\beta}^{\dagger}(\rho)W_{ab,\alpha\beta}(\rho)\ =\ \begin{pmatrix}a^{2}b^{2}+b^{2}||\beta||_{2}^{2}||r||_{2}^{2}&b\alpha^{t}\otimes(a^{2}s^{t}+||\beta||_{2}^{2}r^{t}T)\\ b\alpha\otimes(a^{2}s+||\beta||_{2}^{2}T^{t}r)&\alpha\alpha^{t}\otimes(a^{2}ss^{t}+||\beta||_{2}^{2}T^{t}T)\end{pmatrix}. \tag{12}\]
From (12), the right-hand side depends on \(\beta\) only through \(||\beta||_{2}\), so \(||W_{ab,\alpha\beta}(\rho)||_{KF}=||W_{ab,\alpha||\beta||_{2}}(\rho)||_{KF}\). For a given matrix \(A\), one has \(||A||_{KF}=Tr\sqrt{A^{\dagger}A}=Tr\sqrt{AA^{\dagger}}\). Considering \(W_{ab,\alpha\beta}(\rho)W_{ab,\alpha\beta}^{\dagger}(\rho)\) in the same way, which depends on \(\alpha\) only through \(||\alpha||_{2}\), we get \(||W_{ab,\alpha\beta}(\rho)||_{KF}=||W_{ab,||\alpha||_{2}\beta}(\rho)||_{KF}\). Then we obtain \(||W_{ab,\alpha\beta}(\rho)||_{KF}=||W_{ab,||\alpha||_{2}||\beta||_{2}}(\rho)||_{KF}\).
For two positive numbers \(k\) and \(l\), we have
\[W_{ab,kl}(\rho)\ =\ \begin{pmatrix}ab&aks^{t}\\ blr&klT\end{pmatrix}=kl\,T_{\frac{a}{l}\frac{b}{k}}(\rho).\]
If the state \(\rho\in H_{M}\otimes H_{N}\) is separable, from Corollary 2, we have
\[||T_{\frac{a}{l}\frac{b}{k}}(\rho)||_{KF}\leq\sqrt{\left(\frac{a}{l}\right)^{2}+\frac{M(M-1)}{2}}\sqrt{\left(\frac{b}{k}\right)^{2}+\frac{N(N-1)}{2}}, \tag{13}\]
Fig. 1: Entanglement detection of \(\rho(0.9,q)\).
and from Theorem 3, we have
\[||W_{ab,kl}(\rho)||_{KF}\leq\sqrt{a^{2}+l^{2}\frac{M(M-1)}{2}}\sqrt{b^{2}+k^{2} \frac{N(N-1)}{2}}. \tag{14}\]
From (13) and (14), one has that Theorem 3 is equivalent to Corollary 2 in detecting entanglement.
Note that the family of bipartite separability criteria based on \(T_{\alpha\beta}(\rho)\) and \(W_{ab,\alpha\beta}(\rho)\) reduces to Corollary 2, which depends only on the real parameters \(a\) and \(b\). Proposition 1 of Ref. [17] showed that the result of [17] becomes more effective when \(m\) gets larger. From Corollary 2 and Proposition 1 of Ref. [17], we know that Corollary 2 becomes more effective when \(a\) and \(b\) are chosen large enough and satisfy \(b\sqrt{M(M-1)}=a\sqrt{N(N-1)}\).
_Example 6:_ Let us consider a generalization of the well-known \(d_{1}\otimes d_{2}\) isotropic states [22]
\[\rho_{p}=\frac{1-p}{d_{1}d_{2}}I_{d_{1}}\otimes I_{d_{2}}+p|\psi_{d_{1}}^{+} \rangle\langle\psi_{d_{1}}^{+}|, \tag{15}\]
where \(|\psi_{d_{1}}^{+}\rangle=\frac{1}{\sqrt{d_{1}}}\sum_{i=1}^{d_{1}}|e_{i}\otimes f_{i}\rangle\), \(\{|e_{i}\rangle\}\) is an orthonormal basis of \(H_{d_{1}}\) and \(\{|f_{i}\rangle\}\) is an orthonormal set in \(H_{d_{2}}\).
It is well known that this state is separable if and only if it is \(PPT\), which is equivalent to \(p\leq\frac{1}{d_{2}+1}\). For simplicity, we take \(d_{1}=2\) and \(d_{2}=3\) for \(\rho_{p}\) in this example. We show that Corollary 2 detects more entangled states \(\rho_{p}\) than the de Vicente criterion [16], the realignment criterion [9; 10] and the criterion based on SIC POVMs (ESIC) [15]. Moreover, we show that Corollary 2 becomes more effective when \(a\) and \(b\) get larger with \(\frac{b}{a}=\sqrt{3}\).
We take \(a=\sqrt{2}\) and \(b=\sqrt{6}\) in Corollary 2 for \(\rho_{p}\). Then Corollary 2 can detect the entanglement in \(\rho_{p}\) for \(p\geq 0.378054\), while the de Vicente criterion, the realignment criterion and the ESIC criterion can only detect the entanglement in \(\rho_{p}\) for \(p\geq 0.3849\), \(p\geq 0.3846\) and \(p\geq 0.3819\), respectively. Finally, we choose \(a=\sqrt{2}t\) and \(b=\sqrt{6}t\) with \(t>0\). Then Corollary 2 can detect the entanglement in \(\rho_{p}\) for \(p\geq 0.379712\) with \(t=\frac{1}{10}\), \(p\geq 0.378139\) with \(t=\frac{1}{2}\), \(p\geq 0.378032\) with \(t=2\), and \(p\geq 0.378025\) with \(t=10\), respectively.
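The detection thresholds quoted above are taken from the computation in the text; as a purely illustrative sketch, they can be checked numerically. The code below assumes a Bloch convention in which the generators satisfy \(Tr(\lambda_{i}\lambda_{j})=2\delta_{ij}\) and \(r_{i}=\frac{M}{2}Tr(\rho\,\lambda_{i}\otimes I)\), \(s_{j}=\frac{N}{2}Tr(\rho\,I\otimes\lambda_{j})\), \(t_{ij}=\frac{MN}{4}Tr(\rho\,\lambda_{i}\otimes\lambda_{j})\), which is consistent with the normalization \(||\mu_{i}||_{2}=\sqrt{M(M-1)/2}\) used above; all function names are ours.

```python
import numpy as np

def gell_mann(d):
    """Generalized Gell-Mann generators of SU(d), normalized so Tr(G_a G_b) = 2 delta_ab."""
    G = []
    for j in range(d):
        for k in range(j + 1, d):
            S = np.zeros((d, d), dtype=complex); S[j, k] = S[k, j] = 1
            A = np.zeros((d, d), dtype=complex); A[j, k] = -1j; A[k, j] = 1j
            G += [S, A]
    for l in range(1, d):
        D = np.zeros((d, d), dtype=complex)
        D[np.arange(l), np.arange(l)] = 1
        D[l, l] = -l
        G.append(np.sqrt(2.0 / (l * (l + 1))) * D)
    return G

def bloch_data(rho, M, N):
    """Bloch data (r, s, T) of rho under the assumed normalization of representation (2)."""
    GA, GB = gell_mann(M), gell_mann(N)
    r = np.array([(M / 2) * np.trace(rho @ np.kron(g, np.eye(N))).real for g in GA])
    s = np.array([(N / 2) * np.trace(rho @ np.kron(np.eye(M), g)).real for g in GB])
    T = np.array([[(M * N / 4) * np.trace(rho @ np.kron(g, h)).real for h in GB] for g in GA])
    return r, s, T

M, N = 2, 3
psi = np.zeros(M * N); psi[0] = psi[N + 1] = 1 / np.sqrt(2)   # (|e1 f1> + |e2 f2>)/sqrt(2)
a, b = np.sqrt(2), np.sqrt(6)
bound = np.sqrt(a**2 + M * (M - 1) / 2) * np.sqrt(b**2 + N * (N - 1) / 2)
for p in np.linspace(0.370, 0.390, 21):
    rho_p = (1 - p) * np.eye(M * N) / (M * N) + p * np.outer(psi, psi)
    r, s, T = bloch_data(rho_p, M, N)
    T_ab = np.vstack([np.hstack([[a * b], a * s]),
                      np.hstack([b * r.reshape(-1, 1), T])])
    print(round(p, 3), np.linalg.norm(T_ab, ord='nuc') > bound)   # True once detected
```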
## IV Conclusions and remarks
In summary, based on the Bloch representation of a bipartite quantum state \(\rho\), we have introduced the matrices \(T_{\alpha\beta}(\rho)\) and shown that \(||T_{\alpha\beta}(\rho)||_{KF}=||T_{||\alpha||_{2}||\beta||_{2}}(\rho)||_{KF}\), i.e., the value of \(||T_{\alpha\beta}(\rho)||_{KF}\) only depends on the norms of \(\alpha\) and \(\beta\). Thus Theorem 1 is equivalent to Theorem 1 of [17] and can be further simplified to Corollary 2, which has a much simpler form. Meanwhile we have shown that Corollary 2 is more effective than the existing formula (3). In addition, we have presented a separability criterion based on \(W_{ab,\alpha\beta}(\rho)\), and shown that \(||W_{ab,\alpha\beta}(\rho)||_{KF}=||W_{ab,||\alpha||_{2}||\beta||_{2}}(\rho)||_{KF}\), i.e., the value of \(||W_{ab,\alpha\beta}(\rho)||_{KF}\) only depends on \(a\), \(b\) and the norms of \(\alpha\) and \(\beta\). Finally, the three separability criteria, namely Theorem 1 of [17], our Theorem 1 and our Theorem 3, can all be simplified to Corollary 2, which has a much simpler form.
Acknowledgments and Data Availability Statements. This work is supported by the Research Award Fund for Natural Science Foundation of Shandong Province No. ZR2021LLZ002, National Natural Science Foundation of China under grant Nos. 12075159 and 12171044, Beijing Natural Science Foundation (Z190005), Academy for Multidisciplinary Studies, Capital Normal University, Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology (SIQSE202001), and the Academician Innovation Platform of Hainan Province. All data generated or analysed during this study are included in this published article.
Conflict of interest statement. The authors declare that they have no conflicts of interest related to this work.
References
|
2306.00130 | The ancestral selection graph for a $Λ$-asymmetric Moran model | Motivated by the question of the impact of selective advantage in populations
with skewed reproduction mechanisms, we study a Moran model with selection. We
assume that there are two types of individuals, where the reproductive success
of one type is larger than the other. The higher reproductive success may stem
from either more frequent reproduction, or from larger numbers of offspring,
and is encoded in a measure $\Lambda$ for each of the two types. Our approach
consists of constructing a $\Lambda$-asymmetric Moran model in which
individuals of the two populations compete, rather than considering a Moran
model for each population. Under certain conditions, that we call the "partial
order of adaptation", we can couple these measures. This allows us to construct
the central object of this paper, the $\Lambda-$asymmetric ancestral selection
graph, leading to a pathwise duality of the forward in time
$\Lambda$-asymmetric Moran model with its ancestral process. Interestingly, the
construction also provides a connection to the theory of optimal transport. We
apply the ancestral selection graph in order to obtain scaling limits of the
forward and backward processes, and note that the frequency process converges
to the solution of an SDE with discontinuous paths. Finally, we derive a
Griffiths representation for the generator of the SDE and use it to find a
semi-explicit formula for the probability of fixation of the less beneficial of
the two types. | Adrián González Casanova, Noemi Kurt, José Luis Pérez | 2023-05-31T19:08:27Z | http://arxiv.org/abs/2306.00130v2 | # The ancestral selection graph for a \(\Lambda\)-asymmetric Moran model
###### Abstract
Motivated by the question of the impact of selective advantage in populations with skewed reproduction mechanisms, we study a Moran model with selection. We assume that there are two types of individuals, where the reproductive success of one type is larger than the other. The higher reproductive success may stem from either more frequent reproduction, or from larger numbers of offspring, and is encoded in a measure \(\Lambda\) for each of the two types. Our approach consists of constructing a \(\Lambda\)-asymmetric Moran model in which individuals of the two populations compete, rather than considering a Moran model for each population. Under certain conditions, that we call the "partial order of adaptation", we can couple these measures. This allows us to construct the central object of this paper, the \(\Lambda-\)asymmetric ancestral selection graph, leading to a pathwise duality of the forward in time \(\Lambda\)-asymmetric Moran model with its ancestral process. Interestingly, the construction also provides a connection to the theory of optimal transport. We apply the ancestral selection graph in order to obtain scaling limits of the forward and backward processes, and note that the frequency process converges to the solution of an SDE with discontinuous paths. Finally, we derive a Griffiths representation for the generator of the SDE and use it to find a semi-explicit formula for the probability of fixation of the less beneficial of the two types.
keywords: Moran model, ancestral selection graph, duality, \(\Lambda-\)coalescent, fixation probability, optimal transport. MSC [2020]: 92D15, 60J28, 60J90
## 1 Introduction
There is a deep connection between the ancestry of a population and the dynamics of its genetic configuration. Mathematical population genetics exploits and formalises this connection, see for example [9] for an overview. Its most classical instance is the moment duality relation between the block counting process of the Kingman coalescent and the Wright Fisher diffusion, which relates the past and future of a population that evolves in the absence of selection and in which the number of offspring of a mother is of a smaller order of magnitude than the population size.
This connection between the past and the future is ubiquitous and occurs in many biologically inspired mathematical models. For example, the ancestry of populations with skewed offspring distributions has been modelled by \(\Lambda\)-coalescents, or multiple merger coalescents [27; 30], which differ from the Kingman coalescent but still have a moment dual, namely the \(\Lambda\)-Fleming Viot process (see [3; 5; 6]). Skewed offspring distributions occur in populations where the number of offspring of one mother can be of the order of magnitude of the population size. It is believed that they may be
relevant for certain marine species such as the Atlantic cod, where one mother may lay millions of eggs. See e.g. [1, 11] and references therein for a more detailed discussion.
In the presence of weak selection, for example when one type of individuals reproduces slightly faster than the others, it is cumbersome to formally connect the ancestry and the forward in time changes in a population configuration. Remarkably, there is a notion of potential ancestry that overcomes this difficulty for populations in the universality class of the Kingman coalescent: The celebrated ancestral selection graph (ASG) of Krone and Neuhauser [22, 25]. Heuristically, in a population with only two types, Krone and Neuhauser were able to describe the number of potential ancestors of a sample as a branching coalescing process. Under the rule that an individual is of the selective type if at least one of its ancestors has selective advantage, the forward in time propagation of types can be specified in terms of the potential ancestry and the types at time zero. As a consequence, the frequency of individuals with selective disadvantage is a Markov process, which is moment dual to the process of the number of potential ancestors (backward in time). The graphical construction of the ancestral selection graph is very strong and provides a pathwise duality relation of the forward and backward processes, which for example links fixation probabilities with the ancestral process, see e.g. [28, 21].
However, the classical ancestral selection graph couldn't capture the genetic dynamics of a population of, say, cods in which a subpopulation is capable of reproducing faster, as this leads to selective events individually affecting large portions of the population. What is then a good model for populations with skewed offspring distributions in the presence of selection? This was one of the main motivating questions of work by Griffiths, Etheridge and Taylor [8], see also Etheridge and Griffiths [7]. These authors showed that the process of potential ancestors in this case should be a nonlinear branching and coalescing process. They equipped their model with parent independent mutation and found a duality in terms of the stationary distribution of the forward process, in the case that the stationary distribution exists. They also spelled out the reason for this mysterious nonlinear branching.
Furthermore, it was theorised by Gillespie [15, 16] that subpopulations with different reproduction mechanisms competing in a sequential sampling experiment (for example, in the Lenski experiment [23, 18]) lead to different macroscopic behaviours, which scale beyond the Wright Fisher diffusion universality class. A detailed mathematical analysis of these _asymmetric_ models was carried out in [4]. What does it mean that one subpopulation has selective advantage over another if they have very different reproduction behaviour? Can one compare the strategy of the cod with the strategy of the rabbit?
The goal of the present paper is to revisit the motivating questions of [8] and [15] to give a graphical representation of a Moran model with \(\Lambda\)-reproduction in the presence of selection and asymmetry, following the structure of the ancestral selection graph. Our representation provides a pathwise duality between what we will call below the \(\Lambda\)-asymmetric Moran model and its ancestral line counting process, from which the generator duality can be derived. Contrary to Etheridge, Griffiths and Taylor we don't need to start from the invariant distribution in our construction and therefore don't need to include mutations in the model. Moreover, and probably most importantly, our construction works without previously assigning types to the individuals, since we can construct the ancestral selection graph independently, in the spirit of Neuhauser and Krone.
Our construction is based on a coupling of the measures governing the reproduction mechanisms of two subpopulations, which has connections to the theory of optimal transport. However, not every pair of measures can be used to construct a \(\Lambda\)-ancestral selection graph, and we provide explicit sufficient conditions. This is done by introducing the _partial order of adaptation_ and showing that a pair of measures that is comparable with respect to this partial order can be coupled to construct an ancestral selection graph.
Finally, we derive a representation inspired by Griffiths [19] for the processes that arise as scaling limits of our \(\Lambda\)-Moran model with selection. As an application we compute a recursion for the fixation probabilities.
The paper is organised as follows. In the next section, we will define the \(\Lambda\)-asymmetric Moran model and provide the generator of its frequency process. In Section 3 we will state the central
coupling lemma and construct the ancestral selection graph. Using this construction, we will consider the ancestral process and show our duality result in Section 4. The coupling lemma will be proved and discussed in Section 5. Finally, we will discuss scaling limits in Section 6, and fixation probabilities via Griffiths' representation in Section 7.
## 2 \(\Lambda-\)asymmetric Moran model
In this section, we define our main object of interest, the \(\Lambda\)-asymmetric Moran model. It is related to the Moran model with viability selection of Etheridge, Griffiths and Taylor [8], but in the present paper we restrict ourselves to the case of only two types, and we don't include mutation.
We consider a continuous time Moran model with fixed population size \(N.\) The two types will be denoted by \(+\) and \(-,\) we write \(\tau(i,t)\in\{+,-\}\) for the type of individual \(i\in[N]=\{1,...,N\}\) at time \(t\geq 0.\) Denote by \(\mathcal{M}[0,1]\) the set of finite measures on \([0,1]\) equipped with the Borel sigma field. Fix measures \(\Lambda^{+},\Lambda^{-}\in\mathcal{M}[0,1].\) These measures will provide the reproduction rates and the strength of the selection at a reproductive event. We write \(\|\Lambda^{\tau}\|\) for the total mass of the measure \(\Lambda^{\tau},\tau\in\{-,+\}.\)
**Definition 2.1** (\(\Lambda\)-asymmetric Moran model).: _Each of the \(N\) individuals reproduces independently at rate \(N^{-1}\|\Lambda^{\tau(i,t)}\|\) if it currently has type \(\tau(i,t).\) If at time \(t\) individual \(i\) reproduces, then the selective strength of the reproductive event is provided as a random variable \(Y\) sampled independently of everything else from the probability measure \(\Lambda^{\tau(i,t)}/\|\Lambda^{\tau(i,t)}\|\). Conditional on \(Y,\) each of the \(N-1\) individuals in \([N]\setminus\{i\}\) participates in the reproduction event with probability \(Y,\) meaning that the participating individuals die and are replaced by offspring of the reproducing individual, carrying the type of individual \(i.\)_
**Remark 2.2**.: _In [8], a Moran model including viability selection was defined in a similar manner. There, individuals reproduce at fixed rate \(\lambda\) and produce a number of offspring, among which only some of the children survive to maturity. The probability of an individual of type \(\tau\) to have \(b\) mature offspring is given as \(p_{\tau b}=\sum_{a=b}^{N-1}r_{a}v_{rab},\) with \(r_{a}\) the number of children, and \(v_{rab}\) the probability that \(b\) out of a children survive to maturity. In particular \(v_{rab}=\int_{[0,1]}\binom{a}{b}p^{b}(1-p)^{a-b}v_{\tau}(dp)\) for some probability measure \(\nu_{\tau}\) may be considered. This corresponds to our model if we set \(\Lambda^{\tau}=\lambda\nu_{\tau}\) and \(r_{N-1}=1.\) As it stands, our model doesn't directly lead to an interpretation in terms of viability selection. On the other hand, we can at least implicitly incorporate various selective mechanisms in our finite measures \(\Lambda^{\tau}.\) See also Example 3.3 below for an interpretation of the \(\Lambda\)-measures in some special cases. Moran models with other types of \(\Lambda\) reproduction undergoing selection have been investigated by Schweinsberg and studied further by Bah and Pardoux and Ged [31; 32; 2; 14]. However, the role of the \(\Lambda\) measure in these works differs from our construction._
We denote by
\[X_{t}^{N}=\frac{1}{N}\sum_{i=1}^{N}1_{\{\tau((t,i))=-\}},\qquad t\geq 0\]
the frequency at time \(t\) of individuals of type \(-,\) which will later be the less fit type. Clearly the process \(X^{N}=(X_{t}^{N})_{t\geq 0}\) is a continuous-time Markov chain with state space \(\{0,1/N,...,(N-1)/N,1\}.\) Its transitions are given by
\[x\mapsto\begin{cases}x+\frac{k}{N}&\text{ at rate }x\int_{0}^{1}\binom{(1-x)N}{k}y ^{k}(1-y)^{(1-x)N-k}\Lambda^{-}(dy),\quad k=1,...,(1-x)N\\ x-\frac{k}{N}&\text{ at rate }(1-x)\int_{0}^{1}\binom{xN}{k}y^{k}(1-y)^{xN-k} \Lambda^{+}(dy),\quad k=1,...,xN.\end{cases}\]
In other words, upon a reproduction event of a type \(-\) individual, which happens at a total rate of \(x\|\Lambda^{-}\|\) in the population, the strength \(y\) of the reproduction is determined according to the measure \(\Lambda^{-}/\|\Lambda^{-}\|\) (thus the total mass \(\|\Lambda^{-}\|\) cancels out in the expression of the transition rate). Then, independently with probability \(y,\) each of the \(N-1\) non-reproducing individuals dies and is replaced by an offspring of type \(-\). With probability \(1-y\) it remains. Only a replacement of a type \(+\) individual
by an offspring of type \(-\) leads to an increase in type \(-\) individuals, thus there are \((1-x)N\) individuals that may swap type from \(+\) to \(-\) at such an event. The number of individuals switching type from \(+\) to \(-\) is thus binomial with success probability \(y\) and \(N(1-x)\) trials. A reproduction of a type \(+\) individual works vice versa. See Figure 1 for an example.
Hence, the generator of the frequency process of the \(\Lambda\)-asymmetric Moran model acts on bounded measurable functions \(f:[0,1]\rightarrow[0,1]\) as
\[\mathcal{B}^{N}f(x) =x\|\Lambda^{-}\|\mathbb{E}\left[f\left(x+\frac{1}{N}\text{Binom }(N(1-x),Y^{-})\right)-f(x)\right]\] \[+(1-x)\|\Lambda^{+}\|\mathbb{E}\left[f\left(x-\frac{1}{N}\text{ Binom}(Nx,Y^{+})\right)-f(x)\right].\]
Here, \(x\in[N_{0}]/N:=\{0,1/N,...,(N-1)/N,1\}\) with \([N_{0}]:=\{0,1,...,N\}\), and the expectation is taken with respect to the random variables \(Y^{-}\) resp. \(Y^{+}\), which are distributed according to the probability measures \(\Lambda^{-}/\|\Lambda^{-}\|\) resp. \(\Lambda^{+}/\|\Lambda^{+}\|\).
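The dynamics just described are straightforward to simulate. The following sketch is purely illustrative: it uses two discrete measures with finitely many atoms as stand-ins for \(\Lambda^{-}\) and \(\Lambda^{+}\) (chosen so that the reproduction of type \(+\) dominates, in the sense of the partial order introduced in the next section) and runs the jump chain of \(X^{N}\) according to the transition rates above; all parameter values and function names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete stand-ins for the finite measures: atoms y_k with weights w_k (illustrative).
lam_minus = {"atoms": np.array([0.05, 0.2]), "weights": np.array([0.8, 0.2])}
lam_plus = {"atoms": np.array([0.05, 0.5]), "weights": np.array([0.8, 0.4])}

def total_mass(lam):
    return lam["weights"].sum()

def sample_strength(lam):
    return rng.choice(lam["atoms"], p=lam["weights"] / total_mass(lam))

def simulate_frequency(N, x0, t_max):
    """Jump-chain simulation of X^N following the transition rates above."""
    t, x = 0.0, x0
    path = [(t, x)]
    while t < t_max and 0.0 < x < 1.0:
        rate_minus = x * total_mass(lam_minus)        # a type - individual reproduces
        rate_plus = (1 - x) * total_mass(lam_plus)    # a type + individual reproduces
        t += rng.exponential(1.0 / (rate_minus + rate_plus))
        if rng.random() < rate_minus / (rate_minus + rate_plus):
            y = sample_strength(lam_minus)
            x += rng.binomial(int(round(N * (1 - x))), y) / N   # + individuals replaced
        else:
            y = sample_strength(lam_plus)
            x -= rng.binomial(int(round(N * x)), y) / N         # - individuals replaced
        path.append((t, x))
    return path

print(simulate_frequency(N=100, x0=0.5, t_max=5.0)[-1])
```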
## 3 Partial order of adaptation, and the \(\Lambda\)-asymmetric ancestral selection graph
The next aim is to construct an ancestral selection graph corresponding to the \(\Lambda\)-asymmetric Moran model defined in the previous section. We will give a graphical representation which forward in time provides a construction of the \(\Lambda-\)asymmetric Moran model, and backward in time gives the ancestry of a sample of such a population. Our construction differs from the graphical representation of Etheridge, Griffiths and Taylor in several ways. In particular, the construction works for any assignment of types to the individuals, and it doesn't need an invariant distribution. It thus provides a pathwise construction of the duality found in [8], see Corollary 4.6 below.
As a trade-off, it only works for measures \(\Lambda^{-}\) and \(\Lambda^{+}\) that are ordered in a relatively general way, that we call the partial order of adaptation. This assumption will allow us to construct a coupling that will play a crucial role in the construction. It turns out that this coupling also provides an interesting connection with the theory of optimal transport, as we will explain later.
**Definition 3.1** (The partial order of adaptation).: _For any pair of finite measures on \([0,1]\), we say that \(\mu_{0}\leq\mu_{1}\) if for every \(x\in[0,1]\) it holds that \(\mu_{0}[x,1]\leq\mu_{1}[x,1]\)._
**Proposition 3.2**.: _The partial order of adaptation is a partial order on the finite measures of \([0,1]\)._
Proof.: Let \(\mu_{0},\mu_{1},\mu_{2}\) be finite measures on \([0,1]\). It is immediate that \(\mu_{0}\leq\mu_{0}\), so the relation is reflexive. If \(\mu_{0}\leq\mu_{1}\) and \(\mu_{1}\leq\mu_{0}\), then for every \(x\in[0,1]\), \(\mu_{0}[x,1]=\mu_{1}[x,1]\), which in turn implies that \(\mu_{0}=\mu_{1}\). Finally, if \(\mu_{0}\leq\mu_{1}\) and \(\mu_{1}\leq\mu_{2}\), then for every \(x\in[0,1]\), \(\mu_{0}[x,1]\leq\mu_{1}[x,1]\leq\mu_{2}[x,1]\), which implies that \(\mu_{0}\leq\mu_{2}\).
In terms of the measures \(\Lambda^{+},\Lambda^{-}\) we used above in the construction of the \(\Lambda-\)asymmetric Moran model, the partial order of adaptation, which is known as the partial order of stochastic domination in other contexts, is readily interpreted in terms of the selective advantage of the reproduction mechanisms. In our simplistic setting, we say that a mutation _contributes to the adaptation_ or that it is a _selective mutation_ if it increases the reproduction rate or the typical number of offsprings per reproduction event, or both.
**Example 3.3**.: Indeed, \(\Lambda^{-}\leq\Lambda^{+}\) in particular in the following cases:
1. (Faster reproduction) if \(\Lambda^{+}=(1+\alpha)\Lambda^{-}\) for some \(\alpha>0\).
2. (Bigger reproductive events) There exists a function \(s:[0,1]\mapsto[0,1]\) such that \(s(x)-x\geq 0\) and \(\Lambda^{+}(s(A))=\Lambda^{-}(A)\), i.e. \(\Lambda^{+}\) is obtained from \(\Lambda^{-}\) by pushing the reproductive events upwards via \(s\).
In the first case, in particular \(\|\Lambda^{+}\|=(1+\alpha)\|\Lambda^{-}\|\), hence we are in the situation of the classical ancestral selection graph by Krone and Neuhauser [22; 25]. The second case is satisfied for example if \(\Lambda^{-}=\delta_{a}\) and \(\Lambda^{+}=\delta_{b}\), with \(0\leq a\leq b\leq 1\) where \(\Lambda^{-}\) generates reproductive events of size \(a\), and \(\Lambda^{+}\) of size \(b\).
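For discrete measures with finitely many atoms, the partial order of adaptation can be checked directly by comparing tail masses on the atoms. The following small sketch is only an illustration of Definition 3.1 and Example 3.3, with arbitrarily chosen atoms and weights.

```python
import numpy as np

def leq_adaptation(lam0, lam1, grid=None):
    """Check lam0 <= lam1 in the partial order of adaptation for discrete measures
    given as (atoms, weights): compare the tail masses lam[x, 1] on a grid of x values."""
    a0, w0 = lam0
    a1, w1 = lam1
    if grid is None:
        grid = np.unique(np.concatenate([a0, a1]))   # the tails only jump at atoms
    tail = lambda a, w, x: w[a >= x].sum()
    return all(tail(a0, w0, x) <= tail(a1, w1, x) + 1e-12 for x in grid)

# The two cases of Example 3.3:
lam_minus = (np.array([0.1, 0.3]), np.array([0.5, 0.5]))
lam_faster = (lam_minus[0], 1.5 * lam_minus[1])       # case 1: (1 + alpha) Lambda^-
lam_bigger = (lam_minus[0] + 0.2, lam_minus[1])       # case 2: atoms pushed to the right
print(leq_adaptation(lam_minus, lam_faster), leq_adaptation(lam_minus, lam_bigger))
```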
The crucial step in our construction is the following coupling lemma. It turns out that if the finite measures \(\Lambda^{-}\) and \(\Lambda^{+}\) we used in our construction of the \(\Lambda-\)asymmetric Moran models are ordered according to the partial order of adaptation, we can equivalently consider a Moran model constructed from just one measure \(\Lambda\) that contains the information of a particular coupling of the pair \((\Lambda^{-},\Lambda^{+})\).
**Lemma 3.4** (Adaptation coupling).: _Let \(\Delta=\{(y,z)\in[0,1]^{2}:y+z\in[0,1]\}\) and consider two finite measures \(\Lambda^{+},\Lambda^{-}\) on \([0,1]\). If \(\Lambda^{-}\leq\Lambda^{+}\), in the sense of Definition 3.1 then there exists a finite measure \(\Lambda^{1}\) on \(\Delta\) and two finite measures \(\Lambda^{+,1}\) and \(\Lambda^{+,2}\) on \([0,1]\) such that \(\Lambda^{+}=\Lambda^{+,1}+\Lambda^{+,2}\), and such that the following are satisfied:_
* \(\Lambda^{-}(A)=\Lambda^{1}(\{(y,z):y\in A\})\) _for any_ \(A\in\mathcal{B}([0,1])\)_._
* \(\Lambda^{+,1}(A)=\Lambda^{1}(\{(y,z):y+z\in A\})\) _for any_ \(A\in\mathcal{B}([0,1])\)_._
* \(\Lambda^{+}(A)=\Lambda(\{(y,z):y+z\in A\})\)_, where the measure_ \(\Lambda\) _on_ \(\Delta\) _is defined by_ \[\Lambda(dy,dz)=\Lambda^{1}(dy,dz)+\delta_{0}(dy)\otimes\Lambda^{+,2}(dz).\]
_In particular, if \(\|\Lambda^{-}\|=\|\Lambda^{+}\|\), then we can take \(\Lambda^{+}=\Lambda^{+,1}\), \(\Lambda=\Lambda^{1}\), and the measure \(\rho\) on \([0,1]^{2}\) defined by_
\[\rho(A\times B)=\Lambda(\{(y,z):y\in A,y+z\in B\}),\qquad A,B\in\mathcal{B}([0,1]),\]
_is a coupling of \(\Lambda^{-}\) and \(\Lambda^{+}\) such that \(\rho\{(y,z):y>z\}=0\)._
Figure 1: A realisation of the \(\Lambda\)-asymmetric frequency process. Filled dots represent the reproducing individuals, filled squares the offspring. In the first reproductive event, a type \(+\) individual has three children, in the second one a type \(-\) individual has three children, and in the last event a type \(-\) individual has no offspring. The role of selection in this construction will be discussed later in Section 3, cf. also Figure 2.
We refer to the measure \(\Lambda\) as the adaptation coupling of \((\Lambda^{-},\Lambda^{+})\), although strictly speaking, only the measure \(\rho\) is a coupling. The proof of Lemma 3.4 is postponed to Section 5 below. The idea behind it may be interpreted as splitting the measure \(\Lambda^{+}\) into two parts \(\Lambda^{+,1}\) and \(\Lambda^{+,2}\), where the mass of \(\Lambda^{+,1}\) is transported to \(\Lambda^{-}\), and \(\Lambda^{+,2}\) contains the excess of mass. In this sense, \(\Lambda^{+,1}\) corresponds to the case of bigger reproductive events of Example 3.3, while \(\Lambda^{+,2}\) is related to the \(\alpha\) in case 1 of this example. The idea of mass transport is also further discussed in Section 5.
**Remark 3.5**.: _By the definition of \(\Lambda\) as in Lemma 3.4, we note that for any measurable function \(f:[0,1]\mapsto[0,1]\) such that \(f(0)=0\),_
\[\int_{\Delta}f(y)\Lambda(dy,dz)=\int_{[0,1]}f(y)\Lambda^{-}(dy),\text{ and }\int_{\Delta}f(y+z)\Lambda(dy,dz)=\int_{[0,1]}f(z)\Lambda^{+}(dz).\]
With this coupling, we can now construct the \(\Lambda\)-asymmetric ancestral selection graph. In order to make the construction more transparent, we first provide an alternative description of the \(\Lambda-\)asymmetric Moran model, using the above partial order of adaptation. Assume that the measures \(\Lambda^{+}\) and \(\Lambda^{-}\) satisfy \(\Lambda^{-}\leq\Lambda^{+}\). Let \(\Lambda\) be its adaptation coupling in the sense of Lemma 3.4. This means that the random variable \(Y\) that was sampled in Definition 2.1 according to either \(\Lambda^{+}\) or \(\Lambda^{-}\), depending on the type of the reproducing individual, can now be sampled instead as a pair \((Y,Z)\) according to \(\Lambda\) from \(\Delta\), such that individuals are replaced with probability \(Y\) if a type \(-\) individual reproduces, and individuals are replaced with probability \(Y+Z\) if a type \(+\) individual reproduces. The rates of the frequency process can thus be rewritten as (cf. Remark 3.5)
\[x\mapsto\begin{cases}x+\frac{k}{N}&\text{ at rate }x\int_{\Delta}\binom{(1-x)N}{k}y^{k}(1-y)^{(1-x)N-k}\Lambda(dy,dz),\quad k=1,...,(1-x)N\\ x-\frac{k}{N}&\text{ at rate }(1-x)\int_{\Delta}\binom{xN}{k}(y+z)^{k}(1-(y+z))^{xN-k}\Lambda(dy,dz),\quad k=1,...,Nx\end{cases}\]
and the generator becomes
\[\mathcal{B}^{N}f(x) =x\int_{\Delta}\mathbb{E}\left[f\left(x+\frac{1}{N}\text{Binom}(N (1-x),y)\right)-f(x)\right]\Lambda(dy,dz)\] \[+(1-x)\int_{\Delta}\mathbb{E}\left[f\left(x-\frac{1}{N}\text{ Binom}(Nx,y+z)\right)-f(x)\right]\Lambda(dy,dz).\]
We are now ready to define the central object of this paper, which is the ancestral selection graph for the \(\Lambda-\)asymmetric Moran model. We give a construction in the spirit of Neuhauser and Krone in terms of a Poisson process driven by the adaptation coupling measure \(\Lambda\).
**Definition 3.6** (\(\Lambda-\)asymmetric ancestral selection graph, ASG).: _Consider a Poisson process \(M^{N}\) with values in \(\mathbb{R}_{+}\times[N]\times\Delta\times[0,1]^{N}\) and intensity measure \(dt\times dm\times\Lambda(dy,dz)\times du_{1}\times du_{2}...\times du_{N}\), where \(dm\) denotes the uniform measure on \([N]\). Each point \((t,i)\in\mathbb{R}_{+}\times[N]\) represents the \(i\)-th individual alive at time \(t\)._
* _We say that at time_ \(t\) _there is a_ neutral arrow _between_ \(i\) _and_ \(j\) _if there is a point_ \((t,i,y,z,u_{1},u_{2},...,u_{N})\in M^{N}\) _such that_ \(u_{j}\in[0,y]\)_._
* _We say that at time_ \(t\) _there is a_ selective arrow _between_ \(i\) _and_ \(j\) _if there is a point_ \((t,i,y,z,u_{1},u_{2},...,u_{N})\in M^{N}\) _such that_ \(u_{j}\in[0,y+z]\)_._
_The ancestral selection graph is then given by \((\mathbb{R}_{+}\times[N],M^{N})\)._
In the graphical representation of the ancestral selection graph, as usual, individuals are represented by lines, and reproductive events are denoted by arrows, where the arrow starts at the reproducing individual, and the tip points to the line of (potential) offspring. Here we represent the reproducing individuals by filled dots, the individuals at the tips of neutral arrows by filled squares, and the individuals at the tips of selective arrows by squares with a question mark, see Figure 2. Neutral and
selective arrows may occur at the same time. Observe that formally we can have both neutral and selective arrows from \(i\) to itself, but they won't have an effect on the frequency process.
From the graphical representation and the Poisson point process construction we can derive the frequency process. Heuristically, at an arrival of the Poisson process, the reproducing individual \(i\) is chosen uniformly at random, and the reproductive strength is determined by the measure \(\Lambda\). The uniform values \(u_{j}\) then determine whether individual \(j\) participates in a neutral or selective reproduction event. After introducing types, type \(+\) will be propagated (that is, the individual at the tip of the arrow will be replaced) both through neutral and selective arrows, while type \(-\) will be propagated only through neutral arrows. Hence we can construct the frequency process from the ASG by distributing types at time \(t=0\), and propagating them forward in time according to the arrows on the ASG.
**Proposition 3.7**.: _The frequency process constructed from the \(\Lambda\)-asymmetric ancestral selection graph by propagating type \(+\) through both neutral and selective arrows, and type \(-\) through neutral arrows only, is the same as the frequency process of the \(\Lambda\)-asymmetric Moran model._
We will give a more formal definition of the frequency process and a proof of this result below.
**Remark 3.8**.: _Note that it's straightforward to extend our construction to include parent independent mutations just as the classical ASG can be modified easily to incorporate them. Generalising our construction to multiple types would require a careful application of the order of adaptation and is left for future work._
## 4 Ancestral process, sampling function and duality
In this section we will formalize the connection between the \(\Lambda\)-asymmetric Moran model and the ancestral selection graph from Definition 3.6. We will also define the ancestral line counting process and state its pathwise sampling duality with the frequency process.
Figure 2: The realisation of the \(\Lambda\)-asymmetric frequency process from Figure 1 now in terms of the coupling construction. Neutral arrows are black with filled squares at the tips, selective arrows grey with question marks at the tips. Individuals of type \(+\) may reproduce through any arrow, individuals of type \(-\) only through neutral arrows. Therefore some of the arrows weren’t present in Figure 1, where the measures applied to determine the reproduction depended on the type of the reproducing individuals.
We start by introducing suitable forward and backward filtrations. For any subset of individuals \(S\subset[N]\) and any time \(T\in\mathbb{R}_{+}\) we will write \(S_{T}=\{(T,i):i\in S\}.\) We think of \(S_{T}\) as a (random) sample of individuals taken at time \(T\). We will follow the construction detailed in [17] and define the forward and backward filtrations of our ancestral selection graph.
**Definition 4.1**.: _Let \((\mathbb{R}_{+}\times[N],M^{N})\) be the \(\Lambda\)-asymmetric ancestral selection graph, and let \(S_{T}\) be a sample as defined above._
* _Let_ \(\mathcal{F}_{t}^{T}\) _denote the sigma algebra generated by_ \(S_{T}\) _and the restriction of_ \(M^{N}\) _to_ \([T,T+t]\)_. Then_ \(\{\mathcal{F}_{t}^{T}\}_{t\geq 0}\) _is the_ forward filtration_._
* _Let_ \(\mathcal{P}_{t}^{T}\) _denote the sigma algebra generated by_ \(S_{T}\) _and the restriction of_ \(M^{N}\) _to_ \([T,T-t]\)_. Then_ \(\{\mathcal{P}_{t}^{T}\}_{t\geq 0}\) _is the_ backward filtration_._
The ancestral selection graph is particularly useful as a construction that can be used both forward and backward in time. In order to utilize this, we now introduce the notion of ancestral path in the ancestral selection graph, which will go backward in time.
**Definition 4.2**.: _Let \(t,s\in\mathbb{R}_{+},i,j\in[N].\) An ancestral path going from \((t+s,i)\) to \((t,j)\) is the graph of a cadlag function \(f:[t,t+s]\mapsto[N]\) such that:_
1. \(f(t)=j\)_,_ \(f(t+s)=i\)_._
2. _If_ \(f(u^{-})=k\neq f(u)=l\) _then there is an arrow (neutral or selective) in the ancestral selection graph_ \(M^{N}\) _from_ \((u,k)\) _to_ \((u,l)\)_._
3. _If there is a neutral arrow from \((u,k)\) to \((u,l)\), then \(f(u^{-})\neq l\)._
_We say that an individual \((t,j)\) is a potential ancestor of \((t+s,i)\) if there exists an ancestral path going from \((t+s,i)\) to \((t,j)\) and in this case we write \((t,j)\sim(t+s,i)\)._
Graphically, it is easy to understand the concept of potential ancestor and of ancestral paths (see Figure 3).
Note that, as opposed to the Moran model, an individual in this model can have many potential ancestors at any given time in its past. Moreover, recall that we introduced neutral and selective arrows, which correspond to the measures \(\Lambda^{+}\) and \(\Lambda^{-}\) in our original definition of the \(\Lambda\)-asymmetric Moran model, where individuals of type \(+\) have a selective advantage by reproducing according to \(\Lambda^{+}\) and individuals of type \(-\) reproduce according to \(\Lambda^{-}\), with \(\Lambda^{-}\leq\Lambda^{+}.\) This means that if we assign type \(-\) to all individuals of a sample \(S_{T}\) taken at time \(T,\) and type \(+\) to the individuals of \(S_{T}^{c},\) then an individual \(i\) at time \(T+s\) is of type \(+\) if and only if there exists an ancestral path from \((T+s,i)\) to at least one individual in \(S_{T}^{c}.\) In this case, we write \(\tau((T+s,i))=+\). Otherwise, if all the potential ancestors of \((T+s,i)\) at time \(T\) belong to \(S_{T},\) we write \(\tau((T+s,i))=-\).
We therefore formally define the _frequency process in the \(\Lambda-\)asymmetric ancestral selection graph_ by assigning types \(-,+\) to all individuals at time \(T>0,\) and denote by \(X^{N,T}=\{X_{t}^{N,T}:t\geq T\}\) the frequency of type \(-\) individuals at time \(t>T\). Thus,
\[X_{t}^{N,T}=\frac{1}{N}\sum_{i=1}^{N}1_{\{\tau((t,i))=-\}},\qquad t\geq T,\]
where \(\tau(t,i)\) is constructed as explained above using the ancestral lines of the ASG.
By construction, it is straightforward to check that the process \((X_{t}^{N,T})_{t\geq T}\) satisfies
\[X_{t}^{N,T} =X_{T}^{N,T}+\sum_{i=1}^{N}\int_{T}^{t}\int_{\Delta}\int_{[0,1]^{N}}1_{\{\tau(s,i)=-\}}\frac{1}{N}\sum_{j=1}^{N}1_{\{\tau(s,j)=+,u_{j}\leq y\}}M^{N}(di,ds,dy,dz,du)\] \[-\sum_{i=1}^{N}\int_{T}^{t}\int_{\Delta}\int_{[0,1]^{N}}1_{\{\tau(s,i)=+\}}\frac{1}{N}\sum_{j=1}^{N}1_{\{\tau(s,j)=-,u_{j}\leq y+z\}}M^{N}(di,ds,dy,dz,du),\quad t\geq T. \tag{1}\]
**Proposition 4.3**.: _Fix \(x\in[0,1]\) and assume that \(X_{T}^{N,T}=x\), then the process \(X^{N,T}\) is a \(\{\mathcal{F}_{t}^{T}\}_{t\geq 0}\) measurable continuous-time Markov chain with values in \([N_{0}]/N\), and its infinitesimal generator \(\mathcal{B}^{N,T}\) is given for any \(f\in\mathcal{C}^{2}([0,1])\) by_
\[\mathcal{B}^{N,T}f(x) =x\int_{\Delta}\mathbb{E}\left[f\left(x+\frac{1}{N}\mathrm{Binom} (N(1-x),y)\right)-f(x)\right]\Lambda(dy,dz)\] \[+(1-x)\int_{\Delta}\mathbb{E}\left[f\left(x-\frac{1}{N}\mathrm{ Binom}(Nx,y+z)\right)-f(x)\right]\Lambda(dy,dz). \tag{2}\]
Proof.: The first statement of the proposition follows from (1). Now, using (1) and an application of Ito's formula we obtain for any \(f\in\mathcal{C}^{2}([0,1])\) and \(t\geq T\)
\[f(X_{t}^{N,T})=f(X_{T}^{N,T})\] \[+\sum_{i=1}^{N}\int_{T}^{t}\int_{\Delta}\int_{[0,1]^{N}}\Bigg{[}f \left(X_{s-}^{N,T}+1_{\{\tau(s,i)=-\}}\frac{1}{N}\sum_{j=1}^{N}1_{\{\tau(s,j)= +,u_{j}\leq y\}}\right)-f(X_{s-}^{N,T})\Bigg{]}M^{N}(di,ds,dy,dz,du)\] \[+\sum_{i=1}^{N}\int_{T}^{t}\int_{\Delta}\int_{[0,1]^{N}}\Bigg{[}f \left(X_{s-}^{N,T}-1_{\{\tau(s,i)=+\}}\frac{1}{N}\sum_{j=1}^{N}1_{\{\tau(s,j)= -,u_{j}\leq y+z\}}\right)-f(X_{s-}^{N,T})\Bigg{]}M^{N}(di,ds,dy,dz,du).\]
Taking expectations in the previous identity gives for \(t\geq T\)
\[\mathbb{E}\left[f(X_{t}^{N,T})\right]=\mathbb{E}[f(X_{T}^{N,T})]\] \[+\mathbb{E}\Bigg{[}\int_{T}^{t}\int_{\Delta}\int_{[0,1]^{N}}\frac {1}{N}\sum_{i=1}^{N}1_{\{\tau(s,i)=-\}}\Bigg{[}f\left(X_{s-}^{N,T}+\frac{1}{N }\sum_{j=1}^{N}1_{\{\tau(s,j)=+,u_{j}\leq y\}}\right)-f(X_{s-}^{N,T})\Bigg{]} ds\Lambda(dy,dz)du\]
Figure 3: The possible ancestries of two different samples, using the realisation of the \(\Lambda\)-asymmetric ancestral selection graph from Figure 2. On the left, the potential ancestry of a sample of four individuals (of type \(-\)) is represented, on the right the ancestry of two individuals (of type \(+\)). We see that at a reproductive event, lines from individuals with a filled square (black arrows) are always merged with the reproducing line (filled circle), whereas for question marks (grey arrows), both lines are continued.
\[+\mathbb{E}\Bigg{[}\int_{T}^{t}\int_{\Delta}\int_{[0,1]^{N}}\frac{1}{N} \sum_{i=1}^{N}1_{\{\tau(s,i)=+\}}\Bigg{[}f\left(X_{s-}^{N,T}-\frac{1}{N}\sum_{j= 1}^{N}1_{\{\tau(s,j)=-,u_{j}\leq y+z\}}\right)-f(X_{s-}^{N,T})\Bigg{]}ds\Lambda( dy,dz)du\] \[=\mathbb{E}[f(X_{T}^{N,T})]+\mathbb{E}\Bigg{[}\int_{T}^{t}\int_{ \Delta}X_{s-}^{N,T}\Bigg{[}f\left(X_{s-}^{N,T}+\frac{1}{N}\mathrm{Binom}(N(1-X_ {s-}^{N,T}),y)\right)-f(X_{s-}^{N,T})\Bigg{]}ds\Lambda(dy,dz)\Bigg{]}\] \[+\mathbb{E}\Bigg{[}\int_{T}^{t}\int_{\Delta}\int_{[0,1]^{N}}(1-X_ {s-}^{N,T})\Bigg{[}f\left(X_{s-}^{N,T}-\frac{1}{N}\mathrm{Binom}(NX_{s-}^{N,T },y+z)\right)-f(X_{s-}^{N,T})\Bigg{]}ds\Lambda(dy,dz)\Bigg{]}. \tag{3}\]
Finally, differentiating (3) with respect to \(t\) and evaluating at \(t=T\) we obtain (2).
Proof of Proposition 3.7.: The generator \(\mathcal{B}^{N,T}\) has no true dependence on \(T\); the sample \(S_{T}\) of individuals of type \(-\) only enters through the initial frequency \(|S_{T}|/N\) of individuals of type \(-\). Comparing \(\mathcal{B}^{N,T}\) and \(\mathcal{B}^{N}\) shows that they are equal, hence the processes they generate are equal in distribution.
We may thus identify the two frequency processes \((X_{t}^{N})_{t\geq 0}\) and \((X_{t}^{N,T})_{t\geq T}\) with \(X_{0}^{N}=|S_{T}|/N\), and call it the frequency process associated with the \(\Lambda\)- asymmetric ancestral selection graph.
One of the big advantages of the \(\Lambda\)-ancestral selection graph is that it also allows for a \(\{\mathcal{P}_{t}^{T}\}_{t\geq 0}\) measurable ancestral process, the ancestral line counting process, which satisfies a duality relation with the frequency process.
**Definition 4.4** (Ancestral process).: _Fix a sample \(S_{T}\) of individuals at some time \(T\in\mathbb{R}\). The ancestral line counting process \((A_{t}^{N,T})_{0\leq t\leq T}\) of a sample taken at time \(T\) is given by_
\[A_{t}^{N,T}=\sum_{i=1}^{N}1_{\{(T-t,i)\sim(T,j),\;\text{for some $(T,j)\in S_{T}$}\}}, \qquad t\in[0,T].\]
Hence by definition \(A_{t}^{N,T}\) counts the number of potential ancestors, living at time \(T-t\) for \(t\in[0,T]\), of the individuals from the sample \(S_{T}\).
**Proposition 4.5**.: _The process \((A_{t}^{N,T})_{0\leq t\leq T}\) is a \(\{\mathcal{P}_{t}\}_{t\geq 0}\) measurable continuous-time Markov chain with values in \([N]\) starting at \(A_{0}^{N,T}=|S_{T}|\) and transition rates_
\[n\mapsto\begin{cases}n-k&\text{at rate $\frac{n}{N}\int_{\Delta}\binom{n-1}{k}y ^{k}(1-y)^{n-1-k}\Lambda(dy,dz)$,}\quad k=1,...,n-1\\ n-k+1&\text{at rate $(1-\frac{n}{N})\int_{\Delta}\binom{n}{k}y^{k}(1-y)^{n-k} \Lambda(dy,dz)$,}\quad k=2,...,n\\ n+1&\text{at rate $(1-\frac{n}{N})\int_{\Delta}[(1-y)^{n}-(1-y-z)^{n}] \Lambda(dy,dz)$.}\end{cases}\]
Since again the process doesn't depend on \(T\) but only on \(|S_{T}|\), we drop the \(T\) from the notation and call \((A_{t}^{N})_{t\geq 0}\) the ancestral line counting process or process of potential ancestors.
Proof.: Measurability is clear, as well as the initial value. In order to understand the rates, we consider the graphical representation and go backward from sampling time \(T.\) If we have currently \(n\) potential ancestors, then the next event backward in time is one of the following three cases (cf. Figure 4):
1. From one of the \(n\) potential ancestors, \(k\) neutral arrows emerge and hit \(k\) of the remaining \(n-1\) lines of potential ancestors. In that case, the \(k\) lines coalesce with the reproducing line, reducing the number of potential ancestors by \(k\). According to the construction of the ancestral selection graph, this happens at rate \(\frac{n}{N}\int_{\Delta}\binom{n-1}{k}y^{k}(1-y)^{n-1-k}\Lambda(dy,dz)\), leading to the first line in the above transition rates. There might be selective arrows at the same event, but these don't change the number of potential ancestors, since in that case both lines continue to be potential ancestors.
2. From one of the \(N-n\) lines that don't currently belong to the set of potential ancestors there are neutral arrows to \(k\geq 2\) out of the \(n\) current potential ancestors. In that case those \(k\) lines merge, but the line of the reproducing individual is added to the set of potential ancestors. This happens at rate \((1-\frac{n}{N})\int_{\Delta}{n\choose k}y^{k}(1-y)^{n-k}\Lambda(dy,dz)\).
3. From one of the \(N-n\) lines outside the current set of potential ancestors there is a selective arrow to at least one of the \(n\) individuals from the current set of ancestors, while at the same time there are no neutral arrows to any of the \(n\) current ancestors in the sample. In that case, the reproducing line is added to the set of potential ancestors, while all the other lines remain. This happens at rate \((1-\frac{n}{N})\int_{\Delta}[(1-y)^{n}-(1-y-z)^{n}]\Lambda(dy,dz)\), since \((1-y)^{n}-(1-y-z)^{n}\) is the probability that there are no neutral arrows to individuals of the ancestral process, while there is at least one selective arrow.
There are no further events in the graphical construction that could change the number of potential ancestors.
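The transition rates of Proposition 4.5 can be simulated directly when \(\Lambda\) is a discrete measure on \(\Delta\) with finitely many atoms. The sketch below is purely illustrative; the atoms, weights and function names are ours, and the code simply runs the jump chain of the ancestral line counting process with the three types of transitions above.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)

# Discrete stand-in for the adaptation coupling Lambda on Delta:
# atoms (y_k, z_k) with y_k + z_k <= 1 and weights w_k (illustrative values only).
atoms = np.array([[0.05, 0.0], [0.2, 0.3]])
weights = np.array([0.8, 0.4])

def transition_rates(n, N):
    """Rates of A^N out of state n, following Proposition 4.5."""
    rates = {}  # target state -> total rate
    for (y, z), w in zip(atoms, weights):
        for k in range(1, n):          # reproducer inside the sample, k neutral arrows
            rates[n - k] = rates.get(n - k, 0.0) + \
                (n / N) * comb(n - 1, k) * y**k * (1 - y)**(n - 1 - k) * w
        for k in range(2, n + 1):      # reproducer outside, k >= 2 neutral arrows
            rates[n - k + 1] = rates.get(n - k + 1, 0.0) + \
                (1 - n / N) * comb(n, k) * y**k * (1 - y)**(n - k) * w
        rates[n + 1] = rates.get(n + 1, 0.0) + \
            (1 - n / N) * ((1 - y)**n - (1 - y - z)**n) * w  # only selective arrows hit
    return rates

def simulate_ancestors(n0, N, t_max):
    t, n, path = 0.0, n0, [(0.0, n0)]
    while t < t_max:
        rates = transition_rates(n, N)
        total = sum(rates.values())
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)
        targets = list(rates.keys())
        n = int(rng.choice(targets, p=np.array([rates[m] for m in targets]) / total))
        path.append((t, n))
    return path

print(simulate_ancestors(n0=5, N=100, t_max=3.0)[-1])
```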
For \(x\in[N_{0}]/N\), \(n\in[N]\), and \(t\geq 0\) we define the _sampling function_ \(S_{t}(x,n)\) as follows. Let \(\{u_{i}\}_{i\in[n]}\) be a uniformly sampled subset of \([N]\) of size \(n\), and let \(S_{t}(x,n)\) be the probability that all individuals in the sample \(S_{t}=\{(u_{i},t)\}\) are of type \(-\), conditional on the initial frequency of type \(-\) individuals at time \(0\) being \(x\). Note that for \(t=0\)
\[S_{0}(x,n)=\prod_{i=1}^{n}\frac{Nx+1-i}{N+1-i}.\]
Observe that for fixed \(n\) and large \(N\), \(S_{0}(x,n)=x^{n}+O(1/N)\), a fact that we will use in Proposition 6.5 below.
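As a quick illustration of this approximation (nothing here is needed for the arguments that follow), \(S_{0}\) can be evaluated directly; the assumption is that \(Nx\) is an integer.

```python
from math import prod

def S0(x, n, N):
    # S_0(x, n): probability that n individuals sampled without replacement are all of
    # type -, given that Nx of the N individuals are of type - (Nx assumed integer).
    return prod((N * x + 1 - i) / (N + 1 - i) for i in range(1, n + 1))

for N in [50, 500, 5000]:
    print(N, S0(0.3, 3, N), 0.3 ** 3)   # approaches x^n as N grows
```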
Now, using the graphical representation, for any \(t>0\) we can write \(S_{t}(x,n)\) in terms of \(S_{0}(x,n)\) and \(A_{t}^{N}\) started at \(A_{0}^{N}=n\). Recall that an individual at time \(t\) has type \(-\) if and only if all its potential ancestors at time \(0\) have type \(-\). Thus the probability that \(n\) individuals sampled at time \(t\) are all of type \(-\) is given in terms of the number of potential ancestors of these individuals at time \(0\), and we obtain
\[S_{t}(x,n)=\mathbb{E}[S_{0}(x,A_{t}^{N})\,|\,A_{0}^{N}=n]=\mathbb{E}_{n}[S_{0 }(x,A_{t}^{N})].\]
Here, \(\mathbb{E}\) denotes the expectation with respect to the law of the ancestral selection graph, and \(\mathbb{E}_{n}\) the expectation with respect to the Markov chain \((A_{t}^{N})_{t\geq 0}\) started at \(n.\) Similarly, we can write \(S_{t}(x,n)\) in terms of \(S_{0}(x,n)\) and \(X_{t}^{N}\) started at \(X_{0}^{N}=x\) as
\[S_{t}(x,n)=\mathbb{E}[S_{0}(X_{t}^{N},n)\,|\,X_{0}=x]=\mathbb{E}_{x}[S_{0}(X_ {t}^{N},n)].\]
Hence we have shown
**Corollary 4.6**.: _The processes \((X_{t}^{N})_{t\geq 0}\) and \((A_{t}^{N})_{t\geq 0}\) are dual with respect to the duality function \(S_{0},\) that is,_
\[\mathbb{E}_{x}[S_{0}(X_{t}^{N},n)]=\mathbb{E}_{n}[S_{0}(x,A_{t}^{N})]\quad \forall t\geq 0,x\in[N_{0}]/N,n\in\mathbb{N}.\]
## 5 Proof of the coupling lemma, and optimal transport connection
We are now going to prove the adaptation coupling that lies at the heart of our construction, and discuss its connection to the theory of optimal transport.
Proof of Lemma 3.4.: _Step 1.-_ First assume that \(\Lambda^{+}[0,1]=\Lambda^{-}[0,1]=c^{-1}\). By assumption we know that \(\Lambda^{-}[0,x]-\Lambda^{+}[0,x]\geq 0\) for any \(x\in[0,1]\). Therefore, by a version of Strassen's Theorem (see (3) in [24]), there exist two random variables \(Z^{-}\) and \(Z^{+}\) on a probability space such that
\[Z^{-}\sim c\Lambda^{-}(\cdot),\qquad\text{and,}\qquad Z^{+}\sim c\Lambda^{+} (\cdot),\]
and \(Z^{-}\leq Z^{+}\) a.s. Here \(\sim\) means that the random variable is distributed according to the respective probability measure. Moreover, the joint distribution of \((Z^{-},Z^{+})\) is given as follows
\[\mathbb{P}\left((Z^{-},Z^{+})\in A\right)=\mathbb{P}\left((F_{-}^{-1}(U),F_{+}^ {-1}(U))\in A\right),\qquad A\in\mathcal{B}([0,1]^{2}),\]
where \(F_{-}\) (resp. \(F_{+}\)) is the distribution function of the random variable \(Z^{-}\) (resp. \(Z^{+}\)) and \(U\) is a uniform random variable on \([0,1]\). Hence, let us define the following measure on \(\Delta\)
\[\Lambda(A):=c^{-1}\mathbb{P}\left((Z^{-},(Z^{+}-Z^{-}))\in A\right),\qquad \text{for any }A\in\mathcal{B}(\Delta).\]
Then we note that for any Borel set \(B\subset[0,1]\)
\[\Lambda(B\times[0,1])=c^{-1}\mathbb{P}\left((Z^{-},Z^{+}-Z^{-})\in B\times[0, 1]\right)=c^{-1}\mathbb{P}(Z^{-}\in B)=\Lambda^{-}(B),\]
additionally
\[\Lambda\left(\{(y,z):y+z\in A\}\right)=c^{-1}\mathbb{P}\left(Z^{-}+Z^{+}-Z^{- }\in A\right)=c^{-1}\mathbb{P}\left(Z^{+}\in A\right)=\Lambda^{+}(A).\]
_Step 2.-_ Now we assume that \(\|\Lambda^{-}\|<\|\Lambda^{+}\|\). The fact that \(\Lambda^{-}\leq\Lambda^{+}\) implies that for any \(x\in[0,1]\),
\[\Lambda^{-}([x,1])\leq\Lambda^{+}([x,1]). \tag{4}\]
Without loss of generality let us assume that \(\Lambda^{-}\) is a probability measure on \([0,1]\), as the general case follows by normalizing the measure \(\Lambda^{-}\).
We define \(x^{*}=\sup\{x\in[0,1]:\Lambda^{+}([x,1])\geq 1\}\), then
* If \(\Lambda^{+}([x^{*},1])=1\) we define the measures \(\Lambda^{+,1}(A)=\Lambda^{+}(A\cap[x^{*},1])\) and \(\Lambda^{+,2}(A)=\Lambda^{+}(A\cap[0,x^{*}))\) for any \(A\in\mathcal{B}([0,1])\).
Figure 4: The three possible types of transitions of the ancestral process. On the left, the reproducing individuals belongs to the current sample, there is one neutral and two selective arrow. The line of the individual at the end of the neutral arrow is discarded, resp. merges with the reproducing line, while the lines with a question mark (selective) remain. In the middle, the reproducing individuals doesn’t belong to the current sample, there are two neutral and two selctive arrows. The line of the reproducing individual is thus added, the lines of the individuals at the end of a neutral arrow are discarded, resp. merge with the reproducing line. On the right, only selective arrows are present, where the incoming lines are kept, but the reproducing line is added. We therefore see a branching.
* if \(\Lambda^{+}([x^{*},1])>1\) then we have that \(\Lambda^{+}([x^{*},1])-\Lambda^{+}((x^{*},1])>0\) which implies that \(\Lambda^{+}(\{x^{*}\})>0\). In this case we define \(B:=\Lambda^{+}([x^{*},1])-\Lambda^{+}((x^{*},1])\), \(C:=1-\Lambda^{+}((x^{*},1])\) and \[\Lambda^{+,1}(A)=\Lambda^{+}((x^{*},1]\cap A)+C\delta_{x^{*}}(A).\] Then if we denote \(D:=\Lambda^{+}([x^{*},1])-1\) \[\Lambda^{+}(A) =\Lambda^{+}((x^{*},1]\cap A)+\Lambda^{+}([0,x^{*})\cap A)+B \delta_{x^{*}}(A)\] \[=\Lambda^{+}((x^{*},1]\cap A)+\Lambda^{+}([0,x^{*})\cap A)+C \delta_{x^{*}}(A)+D\delta_{x^{*}}(A)\] \[=\Lambda^{+,1}(A)+\Lambda^{+,2}(A),\] (5)
where \(\Lambda^{+,2}(A)=\Lambda^{+}([0,x^{*})\cap A)+D\delta_{x^{*}}(A)\).
By construction we have that \(\Lambda^{+}=\Lambda^{+,1}+\Lambda^{+,2}\) and that
\[\Lambda^{+,1}[0,1]=\Lambda^{+}((x^{*},1])+C=\Lambda^{+}((x^{*},1])+1-\Lambda^ {+}((x^{*},1])=1.\]
Additionally, assume that \(\Lambda^{+}([x^{*},1])>1\) and consider \(x\in[0,1]\) such that \(x>x^{*}\) then
\[\Lambda^{+,1}([x,1])=\Lambda^{+}([x,1])\geq\Lambda^{-}[x,1],\]
where the last inequality follows from (4). If we consider that \(x\leq x^{*}\) we obtain
\[\Lambda^{+,1}([x,1])=\Lambda^{+}((x^{*},1])+1-\Lambda^{+}((x^{*},1])=1\geq \Lambda^{-}([x,1]).\]
Hence, \(\Lambda^{-}\leq\Lambda^{+,1}\). The case in which \(\Lambda^{+}([x^{*},1])=1\) follows analogously.
Noting that by construction \(\|\Lambda^{+,1}\|=\|\Lambda^{-}\|=1\), we have by Step 1 that there exists a measure \(\Lambda^{1}\) on \(\Delta\) such that \(\Lambda^{-}(A)=\Lambda^{1}(\{(y,z):y\in A\})\) and \(\Lambda^{+,1}(A)=\Lambda^{1}(\{(y,z):y+z\in A\})\). Hence, setting \(\Lambda:=\Lambda^{1}+\delta_{0}\otimes\Lambda^{+,2}\), by (5) we get for any \(A\in\mathcal{B}(\Delta)\)
\[\Lambda\left(\{(y,z):y+z\in A\}\right)=\Lambda^{+,1}(A)+\left(\delta_{0}\otimes\Lambda^{+,2}\right)(\{(y,z):y+z\in A\})=\Lambda^{+,1}(A)+\Lambda^{+,2}(A)=\Lambda^{+}(A).\]
Keeping in mind the proof of Lemma 3.4, we will now show that the representation of the frequency process by means of the coupling \(\rho\) introduced in Lemma 3.4 minimises the number of potential ancestors in the construction of the \(\Lambda\)-asymmetric ancestral graph given in Definition 3.6 in the case in which \(\Lambda^{-}\) and \(\Lambda^{+}\) are probability measures.
To be more precise, denote, for \(n\in\mathbb{N}\),
\[c(y,z):=(1-y)^{n}-(1-z)^{n},\qquad (y,z)\in[0,1]^{2},\]
and consider the following optimal transport problem, consisting in finding a probability measure \(\gamma^{*}\) on \([0,1]^{2}\) such that the following infimum is achieved
\[V(n,\Lambda^{-},\Lambda^{+}):=\inf\left\{\int_{[0,1]^{2}}c(y,z)\gamma(dy,dz): \gamma\in\Gamma(\Lambda^{-},\Lambda^{+})\right\}, \tag{6}\]
where \(\Gamma(\Lambda^{-},\Lambda^{+})\) is the set of probability measures on \([0,1]^{2}\) with marginals \(\Lambda^{-}\), \(\Lambda^{+}\) on \([0,1]\).
We first note that by construction
\[\int_{\Delta}\left[(1-y)^{n}-(1-y-z)^{n}\right]\Lambda(dy,dz)=\int_{[0,1]^{2}} \left[(1-y)^{n}-(1-z)^{n}\right]\rho(dy,dz)=\mathbb{E}\left[c(Z^{-},Z^{+}) \right], \tag{7}\]
where \(Z^{-},Z^{+}\), given in the proof of Lemma 3.4, satisfy that \((Z^{-},Z^{+})=(F_{-}^{-1}(U),F_{+}^{-1}(U))\).
On the other hand, the function \(c\) satisfies the "Monge" conditions, i.e. \(c\) is continuous and satisfies that for \(y^{\prime}\geq y\) and \(z^{\prime}\geq z\)
\[c(y^{\prime},z^{\prime})-c(y,z^{\prime})-c(y^{\prime},z)+c(y,z)=0.\]
Hence, as in Section 7.1 in [29] we have that the solution to the optimization problem given in (6) is given by a measure \(\gamma^{*}\) with distribution function \(F^{*}(y,z):=\min\{F_{-}(y),F_{+}(z)\}\) for \((y,z)\in[0,1]^{2}\), or in terms of random variables, \(\gamma^{*}\) is the distribution of the random vector \((F_{-}^{-1}(U),F_{+}^{-1}(U))\), where \(U\) is a uniform random variable.
Hence comparing with (7) we obtain that \(\gamma^{*}=\rho\) and therefore the measure \(\rho\) from our coupling is the solution to the minimisation problem (6) i.e.
\[V(n,\Lambda^{-},\Lambda^{+})=\int_{[0,1]^{2}}\left[(1-y)^{n}-(1-z)^{n}\right]\rho(dy,dz),\]
for each \(n\in\mathbb{N}\). In that sense, the adaptation coupling \(\rho\) minimises the number of selective arrows resp. the number of potential ancestors.
## 6 Scaling Limits
In this section we will study the scaling limit of the frequency process associated to the \(\Lambda\)-asymmetric ancestral selection graph introduced in Section 3. We start by showing that the limiting object is well defined, which is the content of the next result.
**Proposition 6.1**.: _Assume that \(\Lambda\) is a measure on \(\Delta=\{(y,z)\in[0,1]^{2}:y+z\in[0,1]\}\) such that_
\[\int_{\Delta}(y^{2}+z)\Lambda(dy,dz)<\infty. \tag{8}\]
_Then, for any \(x\in[0,1]\) there exists a unique strong solution \(Y=\{Y_{t}:t\geq 0\}\) to the following stochastic differential equation_
\[Y_{t} =x+\int_{0}^{t}\int_{0}^{1}\int_{\Delta}\left(y(1-Y_{s-})1_{\{u\leq Y_{s-}\}}-(y+z)Y_{s-}1_{\{u\geq Y_{s-}\}}\right)1_{\{Y_{s-}\in[0,1]\}}\tilde{N}(ds,du,dy,dz)\] \[-\int_{\Delta}z\Lambda(dy,dz)\int_{0}^{t}(1-Y_{s-})Y_{s-}1_{\{Y_{s-}\in[0,1]\}}ds, \tag{9}\]
_where \(N\) is a Poisson random measure on \((0,\infty)\times[0,1]\times\Delta\) with intensity measure \(dt\times du\times\Lambda(dy,dz)\) and \(\tilde{N}(ds,du,dy,dz):=N(ds,du,dy,dz)-dsdu\Lambda(dy,dz)\) denotes the compensated random measure associated to \(N\). We refer to the process \(Y\) as the \(\Lambda\)-asymmetric frequency process._
Proof.: Let us denote for \((x,y,z,u)\in[0,1]\times\Delta\times[0,1]\)
\[g(x,y,z,u):=y(1-x)1_{\{u<x\}}-(y+z)x1_{\{u\geq x\}}.\]
For fixed \((y,z,u)\in\Delta\times[0,1]\) we have
\[x+g(x,y,z,u)=\begin{cases}y+x(1-y)&\text{if }u<x,\\ x(1-(y+z))&\text{if }u\geq x.\end{cases} \tag{10}\]
First, we note that for fixed \((y,z,u)\in\Delta\times[0,1]\)
\[0\leq x+g(x,y,z,u)=x+y(1-x)1_{\{u<x\}}-(y+z)x1_{\{u\geq x\}}\leq 1.\]
Hence, by a modification of Proposition 2.1 in [13] (see also Corollary 6.2 in [26]) we obtain that \(\mathbb{P}\left(Y_{t}\in[0,1]\text{ for all }t\geq 0\right)=1\).
On the other hand, (10) implies that for fixed \((y,z,u)\in\Delta\times[0,1]\) the mapping \(x\mapsto x+g(x,y,z,u)\) is non-decreasing for \(x\in[0,1]\).
Denote for \(x\in[0,1]\)
\[b(x):=\left(\int_{\Delta}z\Lambda(dy,dz)\right)x(1-x).\]
Then, for \(x_{1},x_{2}\in[0,1]\)
\[|b(x_{1})-b(x_{2})|=|x_{2}(1-x_{2})-x_{1}(1-x_{1})|\int_{\Delta}z\Lambda(dy,dz )\leq 2\int_{\Delta}z\Lambda(dy,dz)|x_{2}-x_{1}|. \tag{11}\]
Similarly, for \(x_{1},x_{2}\in[0,1]\)
\[g(x_{1},y,z,u)-g(x_{2},y,z,u) =y(x_{2}-x_{1})1_{\{u\leq x_{1}\wedge x_{2}\}}-(y(1-x_{2})+(y+z)x_{1})1_{\{x_{1}\leq u\leq x_{2}\}}\] \[+(y(1-x_{1})+(y+z)x_{2})1_{\{x_{2}\leq u\leq x_{1}\}}-(y+z)(x_{1}-x_{2})1_{\{x_{1}\lor x_{2}\leq u\}}. \tag{12}\]
Using (12) we obtain for \(x_{1},x_{2}\in[0,1]\)
\[\int_{\Delta}\int_{0}^{1} |g(x_{1},y,z,u)-g(x_{2},y,z,u)|^{2}du\Lambda(dy,dz)\] \[\leq\int_{\Delta}\int_{0}^{1}\Big{[}y^{2}|x_{2}-x_{1}|^{2}1_{\{u \leq x_{1}\wedge x_{2}\}}+(y(1-x_{2})+(y+z)x_{1})^{2}1_{\{x_{1}\leq u\leq x_{ 2}\}}\] \[+(y(1-x_{1})+(y+z)x_{2})^{2}1_{\{x_{2}\leq u\leq x_{1}\}}+(y+z)^{ 2}|x_{1}-x_{2}|^{2}1_{\{x_{1}\lor x_{2}\leq u\}}\Big{]}du\Lambda(dy,dz)\] \[\leq 6|x_{2}-x_{1}|\int_{\Delta}(y^{2}+z)\Lambda(dy,dz). \tag{13}\]
Finally we note that for \(x\in[0,1]\)
\[|b(x)|^{2}=\left(\int_{\Delta}z\Lambda(dy,dz)\right)^{2}x^{2}(1-x)^{2}\leq \left(\int_{\Delta}z\Lambda(dy,dz)\right)^{2}x^{2}, \tag{14}\]
and
\[\int_{\Delta}\int_{0}^{1}|g(x,y,z,u)|^{2}du\Lambda(dy,dz) =\int_{\Delta}\int_{0}^{1}\big{[}y^{2}(1-x)^{2}1_{\{u<x\}}+(y+z)^ {2}x^{2}1_{\{u\geq x\}}\big{]}\,du\Lambda(dy,dz)\] \[\leq\int_{\Delta}\big{[}y^{2}(1-x)^{2}x+x^{2}(y+z)^{2}(1-x)\big{]} \,\Lambda(dy,dz)\] \[\leq(1+x^{2})\int_{\Delta}(y^{2}+z)\Lambda(dy,dz). \tag{15}\]
Therefore, there exists \(K>0\) such that for \(x\in[0,1]\)
\[|b(x)|^{2}+\int_{\Delta}\int_{0}^{1}|g(x,y,z,u)|^{2}du\Lambda(dy,dz)\leq K(1+| x|^{2}). \tag{16}\]
Hence using (11), (13) and (15) together with the fact that the mapping \(x\mapsto x+g(x,y,z,u)\) is non-decreasing for \(x\in[0,1]\), we obtain by a slight modification of Theorem 5.1 in [26] that there exists a unique strong solution to (9).
**Remark 6.2**.: _Note that if the weaker condition_
\[\int_{\Delta}(y+z)\Lambda(dy,dz)<\infty,\]
_holds, then the process \(Y\), solution to the SDE given in (9), can be described as the solution to the simpler SDE given by_
\[Y_{t}=x+\int_{0}^{t}\int_{\Delta}\int_{0}^{1}\left(y(1-Y_{s-})1_{\{u\leq Y_{s-}\}}-(y+z)Y_{s-}1_{\{u\geq Y_{s-}\}}\right)1_{\{Y_{s-}\in[0,1]\}}N(ds,du,dy,dz).\]
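When \(\Lambda\) is a finite measure, the simpler SDE above can be simulated jump by jump: atoms of \(N\) arrive at rate \(\Lambda(\Delta)\), and at each atom the frequency jumps to \(x+y(1-x)\) or to \(x(1-(y+z))\) according to whether \(u\leq x\) or not. The following Python sketch is only an illustration of this mechanism; the toy reproduction measure (uniform on \(\{y+z\leq 1/2\}\) with total mass 3) and all function names are assumptions, not objects from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_frequency(x0, T, lam_mass, sample_yz):
    """Jump-by-jump simulation of the Lambda-asymmetric frequency process, assuming the
    reproduction measure Lambda is *finite* so that the simpler SDE of Remark 6.2 applies:
    events arrive at rate Lambda(Delta); given (y, z) and an independent uniform u, the
    frequency jumps to x + y(1-x) if u <= x and to x(1-(y+z)) otherwise."""
    t, x = 0.0, x0
    times, values = [0.0], [x0]
    while True:
        t += rng.exponential(1.0 / lam_mass)     # next atom of the Poisson random measure
        if t > T:
            return np.array(times), np.array(values)
        y, z = sample_yz()
        u = rng.random()
        x = x + y * (1.0 - x) if u <= x else x * (1.0 - (y + z))
        times.append(t)
        values.append(x)

# toy reproduction mechanism (an assumption): Lambda uniform on {y+z <= 1/2}, total mass 3
def sample_yz():
    while True:
        y, z = rng.random(2) / 2
        if y + z <= 0.5:
            return y, z

ts, xs = simulate_frequency(x0=0.3, T=50.0, lam_mass=3.0, sample_yz=sample_yz)
print("number of jumps:", len(ts) - 1, " final frequency of type '-':", round(xs[-1], 4))
```

Note that both jump maps send \([0,1]\) into itself, which is the pathwise counterpart of the statement \(\mathbb{P}(Y_{t}\in[0,1]\text{ for all }t\geq 0)=1\) in the proof of Proposition 6.1.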
In the following result we derive the convergence of the frequency processes associated to the \(\Lambda\)-ancestral selection graph introduced in Section 3. In doing so we need to take some care in dealing with the possible singularity at \(0\) of the measure \(\Lambda\). To this end, we will apply a truncation procedure at \(0\), and study a sequence of frequency processes \(Y^{N}\) that correspond to the truncated \(\Lambda\)-measures.
Fix a measure \(\Lambda\) on \(\Delta\) such that \(\int_{\Delta}(y^{2}+z)\Lambda(dy,dz)<\infty\), an initial frequency \(x\in[0,1]\) and a number \(\alpha\in(0,1/2)\). For each \(N\in\mathbb{N}\) we define the truncated frequency process \(Y^{N}=(Y^{N}_{t})_{t\geq 0}\) with initial value \(Y^{N}_{0}=\lfloor xN\rfloor/N\) as the frequency process of an asymmetric \(\Lambda\)-Moran model with reproduction mechanism
\[\Lambda^{(N)}(A):=\int_{\Delta^{N}}1_{A}(y,z)\Lambda(dy,dz),\]
where
\[\Delta^{N}:=\{(y,z)\in\Delta:y^{2}>1/N^{\alpha}\}.\]
Note that the measure \(\Lambda^{(N)}\) is a finite measure on \(\Delta\), indeed by the definition of \(\Delta^{N}\) we have
\[\Lambda^{(N)}(\Delta)=\Lambda(\Delta^{N})\leq N^{\alpha}\int_{\Delta}(y^{2}+z )\Lambda(dy,dz)<\infty.\]
Hence \(Y^{N}\) is well defined. We will now show that, as the total size of the population \(N\) grows to infinity, the sequence of processes \((Y^{N})_{N\in\mathbb{N}}\) converges to the \(\Lambda\)-asymmetric frequency process \(Y\) defined in Proposition 6.1.
**Proposition 6.3**.: _The sequence \((Y^{N})_{N\in\mathbb{N}}\) converges weakly in \(\mathbb{D}([0,T],[0,1])\) to a limit \(Y\), where \(Y\) is the \(\Lambda\)-asymmetric frequency process given in (9)._
Proof.: We recall that the infinitesimal generator \(\mathcal{B}^{N}\) of the process \(Y^{N}\) is given for any \(f\in\mathcal{C}^{2}([0,1])\) by
\[\mathcal{B}^{N}f(x)=\int_{\Delta^{N}}\Bigg{\{}\frac{\lfloor xN \rfloor}{N}\mathbb{E}\Bigg{[}f\left(\frac{\lfloor xN\rfloor}{N}+\frac{1}{N}B_ {N}\right)-f\left(\frac{\lfloor xN\rfloor}{N}\right)\Bigg{]}\] \[+\left(1-\frac{\lfloor xN\rfloor}{N}\right)\mathbb{E}\Bigg{[}f \left(\frac{\lfloor xN\rfloor}{N}-\frac{1}{N}\tilde{B}_{N}\right)-f\left( \frac{\lfloor xN\rfloor}{N}\right)\Bigg{]}\Bigg{\}}\Lambda(dy,dz),\]
where \(B_{N}:=\text{Binom}\left(N\left(1-\frac{\lfloor xN\rfloor}{N}\right);y\right)\) and \(\tilde{B}_{N}:=\text{Binom}\left(\lfloor xN\rfloor;y+z\right)\).
Now, for any \(f\in\mathcal{C}^{2}([0,1])\) and \(x\in[0,1]\) we have by Taylor's Theorem that
\[f\left(\frac{\lfloor xN\rfloor}{N}+\frac{1}{N}B_{N}\right) -f\left(\frac{\lfloor xN\rfloor}{N}\right)-(f(x+(1-x)y)-f(x))\] \[=R\left(\frac{\lfloor xN\rfloor}{N},\frac{\lfloor xN\rfloor}{N}+ \frac{1}{N}B_{N}\right)-R(x,x+(1-x)y). \tag{17}\]
where \(R(u,v):=\int_{u}^{v}f^{\prime}(t)dt\).
Now, we note that
\[R\Bigg{(}\frac{\lfloor xN\rfloor}{N},\frac{\lfloor xN\rfloor}{N}+ \frac{1}{N}B_{N}\Bigg{)}-R(x,x+(1-x)y) =\int_{\frac{\lfloor xN\rfloor}{N}}^{\frac{\lfloor xN\rfloor}{N}+ \frac{1}{N}B_{N}}f^{\prime}(t)dt-\int_{x}^{x+(1-x)y}f^{\prime}(t)dt\] \[\leq\int_{\frac{\lfloor xN\rfloor}{N}}^{x}f^{\prime}(t)dt+\int_{x +(1-x)y}^{\frac{\lfloor xN\rfloor}{N}+\frac{1}{N}B_{N}}f^{\prime}(t)dt. \tag{18}\]
Therefore, using (18)
\[\Bigg{|}R\Bigg{(}\frac{\lfloor xN\rfloor}{N},\frac{\lfloor xN \rfloor}{N}+\frac{1}{N}B_{N}\Bigg{)} -R(x,x+(1-x)y)\Bigg{|}\] \[\leq\|f^{\prime}\|_{[0,1]}\left|x-\frac{\lfloor xN\rfloor}{N} \right|+\|f^{\prime}\|_{[0,1]}\left|\frac{\lfloor xN\rfloor}{N}+\frac{1}{N}B_ {N}-x-(1-x)y\right|\] \[=\|f^{\prime}\|_{[0,1]}\frac{1}{N}+\|f^{\prime}\|_{[0,1]}\left|( 1-y)\left(\frac{\lfloor xN\rfloor}{N}-x\right)+\frac{1}{N}B_{N}-\left(1- \frac{\lfloor xN\rfloor}{N}\right)y\right|\] \[\leq\|f^{\prime}\|_{[0,1]}\frac{1}{N}+\|f^{\prime}\|_{[0,1]} \left[\frac{1}{N}(1-y)+\left|\frac{1}{N}B_{N}-\left(1-\frac{\lfloor xN\rfloor }{N}\right)y\right|\right], \tag{19}\]
where \(\|g\|_{[0,1]}:=\sup_{x\in[0,1]}|g(x)|\) for any \(g\in\mathcal{C}^{2}([0,1])\).
On the other hand, we note that
\[\mathbb{E}\left[\left|\frac{1}{N}B_{N}-\left(1-\frac{\lfloor xN\rfloor}{N} \right)y\right|^{2}\right]=\frac{1}{N^{2}}\mathbb{E}\left[\left|B_{N}-N\left( 1-\frac{\lfloor xN\rfloor}{N}\right)y\right|^{2}\right]=\frac{1}{N}\left(1- \frac{\lfloor xN\rfloor}{N}\right)y(1-y),\]
and by the Cauchy-Schwarz inequality we have
\[\mathbb{E}\left[\left|\frac{1}{N}B_{N}-\left(1-\frac{\lfloor xN\rfloor}{N}\right)y\right|\right]\leq\left[\frac{1}{N}\left(1-\frac{\lfloor xN\rfloor}{N}\right)y(1-y)\right]^{1/2}.\]
Hence, by taking expectations on (19), we can find a constant \(C_{f}>0\) only dependent on \(f\) such that
\[\mathbb{E}\Bigg{[}\Bigg{|}R\Bigg{(}\frac{\lfloor xN\rfloor}{N},\frac{\lfloor xN\rfloor}{N}+\frac{1}{N}B_{N}\Bigg{)}-R(x,x+(1-x)y)\Bigg{|}\Bigg{]}\leq C_{f}\left(\frac{1}{N}+\frac{1}{N^{1/2}}y^{1/2}\right). \tag{20}\]
Hence, by (17) together with (20) there exists a constant \(C_{f}>0\) only dependent on \(f\) such that
\[\Bigg{|}f\left(\frac{\lfloor xN\rfloor}{N}+\frac{1}{N}B_{N}\right)-f\left( \frac{\lfloor xN\rfloor}{N}\right)-\left(f(x+(1-x)y)-f(x)\right)\Bigg{|}\leq C _{f}\left(\frac{1}{N}+\frac{1}{N^{1/2}}y^{1/2}\right). \tag{21}\]
Very similarly we can deal with the other term in the generator involving \(\tilde{B}_{N}\), and after the same kind of calculations as before we arrive at
\[\Bigg{|}f\left(\frac{\lfloor xN\rfloor}{N}-\frac{1}{N}\tilde{B}_{N}\right)-f \left(\frac{\lfloor xN\rfloor}{N}\right)-\left(f(x-x(y+z))-f(x)\right)\Bigg{|} \leq\tilde{C}_{f}\left(\frac{1}{N}+\frac{1}{N^{1/2}}(y+z)^{1/2}\right). \tag{22}\]
Using (21) together with (22) we obtain, for \(x\in[0,1]\),
\[\sup_{x\in[0,1]}\Bigg{|}\int_{\Delta^{N}}\Bigg{\{}x\mathbb{E}\Bigg{[} f\left(\frac{\lfloor xN\rfloor}{N}+\frac{1}{N}B_{N}\right)-f \left(\frac{\lfloor xN\rfloor}{N}\right)\Bigg{]}+(1-x)\mathbb{E}\Bigg{[}f\left( \frac{\lfloor xN\rfloor}{N}-\frac{1}{N}\tilde{B}_{N}\right)-f\left(\frac{ \lfloor xN\rfloor}{N}\right)\Bigg{]}\Bigg{\}}\Lambda(dy,dz) \tag{23}\] \[-\int_{\Delta}\Bigg{\{}xf(x+y(1-x))+(1-x)f(x-(y+z)x)-f(x)\Bigg{\}} \Lambda(dy,dz)\Bigg{|}\] \[\leq 2(C_{f}+\tilde{C}_{f})\frac{1}{N^{1/2}}\int_{\Delta^{N}} \Lambda(dy,dz)\] \[+\sup_{x\in[0,1]}\int_{\Delta\setminus\Delta^{N}}|xf(x+y(1-x))+(1 -x)f(x-(y+z)x)-f(x)|\,\Lambda(dy,dz).\]
Expanding the integrand in the right-hand side of (23) gives
\[|xf(x+y(1-x)) +(1-x)f(x-(y+z)x)-f(x)|=\Bigg{|}x\int_{x}^{x+y(1-x)}\frac{f^{\prime \prime}(t)}{2}(x+y(1-x)-t)dt \tag{24}\] \[+(1-x)\int_{x}^{x-(y+z)x}\frac{f^{\prime\prime}(t)}{2}(x-(y+z)x- t)dt-x(1-x)zf^{\prime}(x)\Bigg{|}\] \[\leq\frac{\|f^{\prime\prime}\|_{[0,1]}}{4}\left(x(1-x)^{2}y^{2}+( 1-x)(y+z)^{2}x^{2}\right)+\|f^{\prime}\|_{[0,1]}x(1-x)z\leq K_{f}(y^{2}+z),\]
where \(K_{f}\) is positive constant only dependent on \(f\). Hence, using (24) in (23)
\[\lim_{N\to\infty}\sup_{x\in[0,1]}\Bigg{|}\int_{\Delta^{N}}\Bigg{\{}x\mathbb{E}\Bigg{[}f\left(\frac{\lfloor xN\rfloor}{N}+\frac{1}{N}B_{N}\right)-f\left(\frac{\lfloor xN\rfloor}{N}\right)\Bigg{]} \tag{25}\] \[+(1-x)\mathbb{E}\Bigg{[}f\left(\frac{\lfloor xN\rfloor}{N}-\frac{1}{N}\tilde{B}_{N}\right)-f\left(\frac{\lfloor xN\rfloor}{N}\right)\Bigg{]}\Bigg{\}}\Lambda(dy,dz)\] \[-\int_{\Delta}\Bigg{\{}xf(x+y(1-x))+(1-x)f(x-(y+z)x)-f(x)\Bigg{\}}\Lambda(dy,dz)\Bigg{|}\] \[\leq\lim_{N\to\infty}\Bigg{[}(C_{f}+\tilde{C}_{f})\frac{1}{N^{1/2-\alpha}}\int_{\Delta}(y^{2}+z)\Lambda(dy,dz)+K_{f}\int_{\Delta\setminus\Delta^{N}}(y^{2}+z)\Lambda(dy,dz)\Bigg{]}=0.\]
Finally, we note that
\[\lim_{N\to\infty} \sup_{x\in[0,1]}\Bigg{|}\int_{\Delta^{N}}\Bigg{\{}\frac{\lfloor xN \rfloor}{N}\mathbb{E}\Bigg{[}f\left(\frac{\lfloor xN\rfloor}{N}+\frac{1}{N}B_ {N}\right)-f\left(\frac{\lfloor xN\rfloor}{N}\right)\Bigg{]} \tag{26}\] \[+\left(1-\frac{\lfloor xN\rfloor}{N}\right)\mathbb{E}\Bigg{[}f \left(\frac{\lfloor xN\rfloor}{N}-\frac{1}{N}\tilde{B}_{N}\right)-f\left(\frac {\lfloor xN\rfloor}{N}\right)\Bigg{]}\Bigg{\}}\Lambda(dy,dz)\] \[-\int_{\Delta^{N}}\Bigg{\{}x\mathbb{E}\Bigg{[}f\left(\frac{ \lfloor xN\rfloor}{N}+\frac{1}{N}B_{N}\right)-f\left(\frac{\lfloor xN\rfloor}{N }\right)\Bigg{]}+(1-x)\mathbb{E}\Bigg{[}f\left(\frac{\lfloor xN\rfloor}{N}- \frac{1}{N}\tilde{B}_{N}\right)-f\left(\frac{\lfloor xN\rfloor}{N}\right) \Bigg{]}\Bigg{\}}\Lambda(dy,dz)\Bigg{|}\] \[\leq\lim_{N\to\infty}2\|f\|_{[0,1]}\frac{1}{N^{1-\alpha}}\int_{ \Delta}(y^{2}+z)\Lambda(dy,dz)=0.\]
Following the steps of Proposition 4.2 in [4] we have that the infinitesimal generator \(\mathcal{B}\) of the \(\Lambda\)-asymmetric frequency process \(Y\) is given for any \(f\in\mathcal{C}^{2}([0,1])\) and \(x\in[0,1]\) by
\[\mathcal{B}f(x)=\int_{\Delta}\left[xf(x+y(1-x))+(1-x)f(x-(y+z)x)-f(x)\right] \Lambda(dy,dz),\qquad x\in[0,1]. \tag{27}\]
Hence, by (25) together with (26) we obtain that \(\mathcal{B}^{N}f\to\mathcal{B}f\) as \(N\to\infty\), uniformly on \([0,1]\). Therefore, Theorem 17.25 in [20] implies that \(Y^{N}\to Y\) as \(N\to\infty\), weakly in the space \(\mathbb{D}([0,T],[0,1])\).
Let \(A^{N}:=\{A^{N}_{t}:t>0\}\) denote the ancestral process associated to the frequency process \(Y^{N}\) defined in Proposition 6.3. In the next result we show the convergence of the sequence of processes \((A^{N})_{N\in\mathbb{N}}\) as the total size of the population \(N\) grows to infinity.
**Proposition 6.4**.: _The sequence \((A^{N})_{N\in\mathbb{N}}\) converges weakly in \(\mathbb{D}([0,T],\mathbb{N})\) to a limit \(A\), where \(A\) is a continuous-time Markov chain with values in \(\mathbb{N}\), whose generator has the following transition rates_
\[m\mapsto\begin{cases}m-k+1&\text{at rate }\int_{\Delta}\binom{m}{k}y^{k}(1-y)^ {m-k}\Lambda(dy,dz),\quad k=2,...,m\\ m+1&\text{at rate }\int_{\Delta}[(1-y)^{m}-(1-y-z)^{m}]\Lambda(dy,dz).\end{cases}\]
Proof.: Fix \(m\in\mathbb{N}\). Using that \(|(1-y)^{m}-(1-y-z)^{m}|\leq mz\) for \((y,z)\in\Delta\), condition (8) together with dominated convergence gives
\[\lim_{N\to\infty}\left(1-\frac{m}{N}\right)\int_{\Delta^{N}}\left[(1-y)^{m}- (1-y-z)^{m}\right]\Lambda(dy,dz)=\int_{\Delta}\left[(1-y)^{m}-(1-y-z)^{m} \right]\Lambda(dy,dz).\]
On the other hand, for \(k\geq 2\), by (8) and dominated convergence
\[\lim_{N\to\infty}\left(1-\frac{m}{N}\right)\int_{\Delta^{N}}\binom{m}{k}y^{k }(1-y)^{m-k}\Lambda(dy,dz)=\int_{\Delta}\binom{m}{k}y^{k}(1-y)^{m-k}\Lambda(dy,dz).\]
By the definition of \(\Delta^{N}\) we note that dominated convergence gives
\[\lim_{N\to\infty}\frac{m}{N}\int_{\Delta^{N}}my(1-y)^{m-1}\Lambda(dy,dz)\leq \lim_{N\to\infty}m^{2}\frac{1}{N^{1-\alpha/2}}\int_{\Delta^{N}}y^{2}\Lambda(dy,dz)=0.\]
Similarly, for \(k\geq 2\), proceeding as in the previous identities we get
\[\lim_{N\to\infty}\frac{m}{N}\int_{\Delta^{N}}\binom{m}{k}y^{k}(1-y)^{m-k} \Lambda(dy,dz)=0.\]
Therefore, by Problem 3(ii) in [10] we obtain the result.
As expected, we show in the next result that the processes \(Y\) and \(A\) are moment duals. To this end, we note that the infinitesimal generator \(\mathcal{A}\) of the process \(A\) is given for any bounded \(f:\mathbb{N}\to\mathbb{R}\) by
\[\mathcal{A}f(n) =\sum_{k=1}^{n}(f(n-k+1)-f(n))\int_{\Delta}\binom{n}{k}y^{k}(1-y)^ {n-k}\Lambda(dy,dz)\] \[+(f(n+1)-f(n))\int_{\Delta}[(1-y)^{n}-(1-y-z)^{n}]\Lambda(dy,dz). \tag{28}\]
**Proposition 6.5**.: _Consider the \(\Lambda\)-asymmetric frequency process \(Y\) defined in Proposition 6.1 and the ancestry process \(A\) with rates given in Proposition 6.4. Then the processes \(Y\) and \(A\) are moment duals, i.e. for any \(x\in[0,1]\) and \(n\in\mathbb{N}\)_
\[\mathbb{E}_{x}\left[Y^{n}_{t}\right]=\mathbb{E}_{n}\left[x^{A_{t}}\right],\qquad\text{for all }t>0.\]
Proof.: We will consider \(\mathbb{N}\) endowed with the discrete topology and \(\mathbb{N}\times[0,1]\) with the product topology. We denote for every fixed \(x\in[0,1]\), \(H(n,x)=x^{n}\), which is bounded and continuous. In addition, for every fixed \(k\in\mathbb{N}\), \(H(k,x)=x^{k}\) is continuous. Therefore, we conclude that \(H:\mathbb{N}\times[0,1]\mapsto[0,1]\) is continuous.
We observe that \(H(n,\cdot)\) is a polynomial on \([0,1]\) for fixed \(n\in\mathbb{N}\). This fact clearly implies that \(H(n,\cdot)\in\mathcal{C}^{2}([0,1])\) and hence it lies in the domain of the generator \(\mathcal{B}\) of the \(\Lambda\)-asymmetric frequency process \(Y\) as in (27). Therefore, the process
\[H(n,Y_{t})-\int_{0}^{t}\mathcal{B}H(n,Y_{s})ds,\]
is a martingale.
Additionally, we have that for fixed \(x\in[0,1]\) the function \(H(\cdot,x)\) lies in the domain of the generator \(\mathcal{A}\), which implies that the process
\[H(A_{t},x)-\int_{0}^{t}\mathcal{A}H(A_{s},x)ds,\]
is also a martingale.
We will compute \(\mathcal{B}H(n,x)\) for \(x\in[0,1]\) and \(n\in\mathbb{N}\), then using (27)
\[\mathcal{B}H(n,x)=\int_{\Delta}\left[x(x+y(1-x))^{n}+(1-x)(x-(y+z)x)^{n}-x^{n} \right]\Lambda(dy,dz),\ x\in[0,1],n\in\mathbb{N}. \tag{29}\]
We note that
\[x(x+y(1-x))^{n}=x\sum_{k=0}^{n}\binom{n}{k}y^{k}x^{n-k}(1-y)^{n-k}=x^{n+1}(1-y) ^{n}+\sum_{k=1}^{n}\binom{n}{k}y^{k}x^{n-k+1}(1-y)^{n-k}.\]
Additionally,
\[(1-x)(x-(y+z)x)^{n}=(x^{n}-x^{n+1})(1-y-z)^{n},\]
and finally
\[x^{n}=x^{n}(1-y)^{n}+x^{n}\sum_{k=1}^{n}\binom{n}{k}y^{k}(1-y)^{n-k}.\]
Hence, using the previous identities gives
\[x(x+y(1-x))^{n}+(1-x)(x-(y+z)x)^{n}-x^{n} =\sum_{k=2}^{n}\binom{n}{k}y^{k}(1-y)^{n-k}(x^{n-k+1}-x^{n})\] \[+(x^{n+1}-x^{n})\left[(1-y)^{n}-(1-y-z)^{n}\right]. \tag{30}\]
Using (30) in (29) we obtain
\[\mathcal{B}H(n,x) =\sum_{k=2}^{n}(x^{n-k+1}-x^{n})\int_{\Delta}\binom{n}{k}y^{k}(1- y)^{n-k}\Lambda(dy,dz)\] \[+(x^{n+1}-x^{n})\int_{\Delta}\left[(1-y)^{n}-(1-y-z)^{n}\right] \Lambda(dy,dz)=\mathcal{A}H(n,x).\]
Finally an application of Proposition 6.1 in [4] gives the result.
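The moment duality can also be checked by simulation when \(\Lambda\) is finite. The sketch below is an illustration under an assumed toy \(\Lambda\) (uniform on \(\{y+z\leq 1/2\}\) with mass 3, repeated here so that the block runs on its own). It simulates the frequency process jump by jump as in Remark 6.2 and the ancestral process with the rates of Proposition 6.4: at each event every line receives a neutral arrow with probability \(y\) or a selective arrow with probability \(z\); at least one neutral hit produces a coalescence \(m\mapsto m-k+1\), otherwise at least one selective hit produces a branching \(m\mapsto m+1\). It then compares Monte Carlo estimates of the two sides of the duality relation.

```python
import numpy as np

rng = np.random.default_rng(3)
LAM_MASS = 3.0                         # total mass of the toy Lambda (assumption)

def sample_yz():
    # draw (y, z) from Lambda / ||Lambda||: uniform on {y, z >= 0, y + z <= 1/2}
    while True:
        y, z = rng.random(2) / 2
        if y + z <= 0.5:
            return y, z

def Y_T(x, T):
    # terminal value of the Lambda-asymmetric frequency process (finite Lambda, Remark 6.2)
    t = rng.exponential(1.0 / LAM_MASS)
    while t <= T:
        y, z = sample_yz()
        x = x + y * (1.0 - x) if rng.random() <= x else x * (1.0 - (y + z))
        t += rng.exponential(1.0 / LAM_MASS)
    return x

def A_T(n, T):
    # terminal value of the ancestral line-counting process with the rates of Proposition 6.4
    t, m = rng.exponential(1.0 / LAM_MASS), n
    while t <= T:
        y, z = sample_yz()
        hits = rng.random(m)
        k = int(np.sum(hits < y))                        # lines hit by a neutral arrow
        j = int(np.sum((hits >= y) & (hits < y + z)))    # lines hit by a selective arrow
        if k >= 1:
            m = m - k + 1                                # coalescence (no change when k = 1)
        elif j >= 1:
            m += 1                                       # branching
        t += rng.exponential(1.0 / LAM_MASS)
    return m

x0, n0, T, reps = 0.3, 4, 2.0, 20_000
lhs = np.mean([Y_T(x0, T) ** n0 for _ in range(reps)])
rhs = np.mean([x0 ** A_T(n0, T) for _ in range(reps)])
print(f"E_x[Y_T^n] ~ {lhs:.3f}    E_n[x^A_T] ~ {rhs:.3f}")
```

The two printed estimates should agree up to Monte Carlo error; the per-event description above is only one way to realize the stated rates and is not taken verbatim from the construction of Section 3.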
## 7 Griffiths representation and the probability of fixation
We motivate our next result by recalling a result for the two-type \(\Lambda\)-Fleming Viot process, which is defined as the solution to the SDE
\[Z_{t} =y+\int_{0}^{t}\int_{0}^{1}\int_{\Delta}\left(y(1-Z_{s-})1_{\{u\leq Z_{s-}\}}-yZ_{s-}1_{\{u\geq Z_{s-}\}}\right)1_{\{Z_{s-}\in[0,1]\}}\tilde{N}(ds,du,dy)\] \[-\int_{0}^{t}s(1-Z_{s-})Z_{s-}1_{\{Z_{s-}\in[0,1]\}}ds,\quad t\geq 0.\]
Independently and using different techniques, Foucart [12] and Griffiths [19] obtained explicit conditions for the two-type \(\Lambda\)-Fleming Viot process to be absorbed at \(0\) almost surely, or to be absorbed in either of the boundaries \(\{0,1\}\) with positive probability. In particular, they observed that for every \(\Lambda\) it is always possible to find a small enough selection parameter \(s\) such that \(\mathbb{P}(\lim_{t\to\infty}Z_{t}=1)\mathbb{P}(\lim_{t\to\infty}Z_{t}=0)>0\). Foucart uses a duality technique which relies on the observation that the \(\Lambda\)-Fleming Viot process can be absorbed at \(1\) if and only if its dual, which we denote \((F_{t})_{t\geq 0}\), is positive recurrent. The process \((F_{t})_{t\geq 0}\) coalesces like the block counting process of the \(\Lambda\)-coalescent and branches at rate \(ns\). In our model, the dual process is given by the ancestral line counting process \((A_{t})_{t\geq 0}\). It coalesces at the same rate and branches at rate \(\int_{0}^{1}\int_{0}^{1}\left[(1-y)^{n-1}-(1-y-z)^{n-1}\right]\Lambda(dy,dz)\), which is smaller than \(ns\) for all \(n>n(s)\), for some \(n(s)\geq 1\), which exists for every \(s\). This implies that \((F_{t})_{t\geq 0}\) reflected at \(n(s)\) stochastically dominates \((A_{t})_{t\geq 0}\). In turn, this implies that \((A_{t})_{t\geq 0}\) is positive recurrent. A duality argument allows us to conclude that the limiting frequency process \((Y_{t})_{t\geq 0}\) defined in Proposition 6.1 fulfils \(\mathbb{P}(\lim_{t\to\infty}Y_{t}=1)\mathbb{P}(\lim_{t\to\infty}Y_{t}=0)>0\).
Here, instead of formalising the previous argument, we will exploit Griffiths' technique in order to compute a semi-explicit expression for the probability of fixation of type \(-\) individuals in a \(\Lambda\)-asymmetric frequency process. To this end, we will start by obtaining an alternative expression for its generator \(\mathcal{B}\), given in (27), following the ideas of Theorem 1 in [19]. This is given in the next result. We define the measure \(\tilde{\Lambda}\) on the simplex \(\Delta\) by
\[\tilde{\Lambda}(dy,dz):=(y^{2}+z)\Lambda(dy,dz).\]
**Proposition 7.1**.: _Let \(\mathcal{B}\) be the infinitesimal generator of a \(\Lambda\)-asymmetric frequency process as in (27). Let \(U,V,Y,Z\) be independent random variables, such that \(V\) has a uniform distribution on \([0,1]\), \(U\) has density \(2u\) with respect to the Lebesgue measure, and \((Y,Z)\) is distributed according to \(\tilde{\Lambda}/\|\tilde{\Lambda}\|\). Then, for any \(f\in\mathcal{C}^{2}([0,1])\),_
\[\mathcal{B}f(x)=\frac{\|\tilde{\Lambda}\|}{2}x(1-x)\mathbb{E}\left[f^{\prime \prime}(x(1-UY)+UYV)\frac{Y^{2}}{Z+Y^{2}}-2f^{\prime}(x-xY-xZV)\frac{Z}{Z+Y^{2 }}\right].\]
Proof.: Using that \(V\) has a uniform distribution on \([0,1]\) we obtain that
\[\mathbb{E}\left[f^{\prime\prime}(x(1-UY)+UYV)\frac{Y^{2}}{Z+Y^{2 }}\right] =\mathbb{E}\left[\frac{Y^{2}}{Z+Y^{2}}\int_{0}^{1}f^{\prime\prime }(x(1-UY)+UYv)dv\right]\] \[=\mathbb{E}\left[\frac{Y}{Z+Y^{2}}\left[f^{\prime}(x(1-UY)+UY)- f^{\prime}(x(1-UY))\right]\frac{1}{U}\right]\] \[=\mathbb{E}\left[\frac{2}{Z+Y^{2}}\left(\frac{f(x(1-Y)+Y)-f(x)}{ 1-x}+\frac{f(x(1-Y))-f(x)}{x}\right)\right]\] \[=\frac{2}{\|\tilde{\Lambda}\|}\int_{\Delta}\left[\frac{f(x(1-y)+y )-f(x)}{1-x}+\frac{f(x(1-y))-f(x)}{x}\right]\frac{\tilde{\Lambda}(dy,dz)}{z+y^ {2}}. \tag{31}\]
where in the third equality we used that \(U\) has density given by \(2u\) on \([0,1]\).
In a similar way, we obtain
\[\mathbb{E}\left[\frac{Z}{Z+Y^{2}}f^{\prime}(x-xY-xZV)\right] =-\mathbb{E}\left[\frac{Z}{Z+Y^{2}}\left(f(x-xZ-xY)-f(x-xY)\right) \frac{1}{xZ}\right]\] \[=-\frac{1}{\|\tilde{\Lambda}\|}\int_{\Delta}\left[\frac{f(x-xy- xz)-f(x-xy)}{x}\right]\frac{\tilde{\Lambda}(dy,dz)}{z+y^{2}}. \tag{32}\]
Therefore, using (31) and (32) we obtain for \(x\in[0,1]\)
\[\frac{\|\tilde{\Lambda}\|}{2}x(1-x)\mathbb{E}\left[f^{\prime\prime}(x(1-UY)+UYV)\frac{Y^{2}}{Z+Y^{2}}-2f^{\prime}(x-xY-xZV)\frac{Z}{Z+Y^{2}}\right]\] \[=\int_{\Delta}\left[x(f(x(1-y)+y)-f(x))+(1-x)(f(x(1-y))-f(x))\right]\frac{\tilde{\Lambda}(dy,dz)}{z+y^{2}}\] \[+\int_{\Delta}(1-x)\left[f(x(1-y-z))-f(x(1-y))\right]\frac{\tilde{\Lambda}(dy,dz)}{z+y^{2}}\] \[=\int_{\Delta}\left[x(f(x(1-y)+y)-f(x))+(1-x)(f(x(1-y-z))-f(x))\right]\frac{\tilde{\Lambda}(dy,dz)}{z+y^{2}}\] \[=\mathcal{B}f(x).\]
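The identity of Proposition 7.1 can be sanity-checked numerically for a concrete choice of \(f\) and \(\Lambda\). In the Python sketch below (an illustration only, not part of the argument), \(\Lambda\) is assumed to be the uniform probability measure on \(\Delta\) and \(f(x)=x^{3}\); both the integral form (27) of \(\mathcal{B}f(x)\) and the expectation form of the proposition are estimated by Monte Carlo, with the sampler for \(\tilde{\Lambda}/\|\tilde{\Lambda}\|\) using rejection with acceptance probability \(y^{2}+z\), which is at most \(1\) on \(\Delta\).

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_simplex(n, weight=None):
    """Rejection sampling of n points on Delta, optionally reweighted by weight(y, z) <= 1."""
    out = np.empty((0, 2))
    while out.shape[0] < n:
        cand = rng.random((2 * n, 2))
        keep = cand.sum(axis=1) <= 1.0
        if weight is not None:
            keep &= rng.random(2 * n) < weight(cand[:, 0], cand[:, 1])
        out = np.vstack([out, cand[keep]])
    return out[:n, 0], out[:n, 1]

f, fp, fpp = (lambda x: x ** 3), (lambda x: 3 * x ** 2), (lambda x: 6 * x)
x, n_mc = 0.4, 200_000

# integral form (27) of B f(x), with Lambda the uniform probability measure on Delta
y, z = sample_simplex(n_mc)
lhs = np.mean(x * f(x + y * (1 - x)) + (1 - x) * f(x - (y + z) * x) - f(x))

# expectation form of Proposition 7.1, with (Y, Z) ~ Lambda~/||Lambda~||
Y, Z = sample_simplex(n_mc, weight=lambda a, b: a ** 2 + b)   # density prop. to y^2 + z
U, V = np.sqrt(rng.random(n_mc)), rng.random(n_mc)            # U has density 2u on [0,1]
norm_tilde = np.mean(y ** 2 + z)                              # ||Lambda~|| = int (y^2+z) dLambda
rhs = 0.5 * norm_tilde * x * (1 - x) * np.mean(
    fpp(x * (1 - U * Y) + U * Y * V) * Y ** 2 / (Z + Y ** 2)
    - 2 * fp(x - x * Y - x * Z * V) * Z / (Z + Y ** 2)
)
print(lhs, rhs)    # the two Monte Carlo estimates should agree up to sampling error
```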
Denote by \(p(x)\) for \(x\in[0,1]\) the probability of fixation of type \(-\) in the \(\Lambda\)-asymmetric frequency process, where \(x\) denotes the initial frequency of type \(-\) individuals. Since \(p\) is a harmonic function for the generator \(\mathcal{B}\), we see from the previous proposition that it satisfies for \(x\in(0,1)\)
\[\mathbb{E}\left[p^{\prime\prime}(x(1-UY)+UYV)\frac{Y^{2}}{Z+Y^{2}}-2p^{\prime }(x-xY-xZV)\frac{Z}{Z+Y^{2}}\right]=0, \tag{33}\]
with boundary conditions given by \(p(0)=0\) and \(p(1)=1\).
We note that
\[\mathbb{E}\left[p^{\prime\prime}(x(1-UY)+UYV)\frac{Y^{2}}{Z+Y^{2}}\right]= \mathbb{E}\left[\frac{p^{\prime}(x(1-W)+W)-p^{\prime}(x(1-W))}{W}\frac{Y^{2}} {Z+Y^{2}}\right], \tag{34}\]
with \(W:=UY\).
In order to find the solution to (33), following [19], we will consider polynomials of the form
\[h_{n}(x)=\sum_{r=0}^{n}a_{n,r}x^{r}, \tag{35}\]
with \(h_{0}(x)=1\).
Then, we will prove that we can take a choice of coefficients \(\{a_{n,r}\}_{0\leq r\leq n}\) such that
\[\mathbb{E}\left[\frac{h_{n}(x(1-W)+W)-h_{n}(x(1-W))}{W}\frac{Y^{2}}{Z+Y^{2}} \right]=n\mathbb{E}\left[h_{n-1}\left(x(1-Y-ZV)\right)\frac{Z}{Z+Y^{2}}\right]. \tag{36}\]
with the choice
\[a_{n,n}=\prod_{i=1}^{n-1}\frac{\mathbb{E}\left[(1-Y-ZV)^{i}\frac{Z}{Z+Y^{2}} \right]}{\mathbb{E}\left[(1-W)^{i}\frac{Y^{2}}{Z+Y^{2}}\right]},\qquad n=0,1\ldots \tag{37}\]
for the diagonal elements. Indeed, by (35)
\[\mathbb{E}\left[\frac{h_{n}(x(1-W)+W)-h_{n}(x(1-W))}{W}\frac{Y^{2}}{Z +Y^{2}}\right] =\mathbb{E}\left[\frac{Y^{2}}{Z+Y^{2}}\sum_{r=0}^{n}\frac{a_{n,r}} {W}\left[(x(1-W)+W)^{r}-(x(1-W))^{r}\right]\right]\] \[=\sum_{r=1}^{n}a_{n,r}\sum_{j=0}^{r-1}\binom{r}{j}\mathbb{E}\left[ (1-W)^{j}W^{r-j-1}\frac{Y^{2}}{Z+Y^{2}}\right]x^{j}\] \[=\sum_{j=0}^{n-1}\sum_{r=j+1}^{n}\binom{r}{j}\mathbb{E}\left[(1-W )^{j}W^{r-j-1}\frac{Y^{2}}{Z+Y^{2}}\right]a_{n,r}x^{j}.\]
Hence in order for (36) to hold we need that
\[\sum_{j=0}^{n-1}\sum_{r=j+1}^{n}\binom{r}{j}\mathbb{E}\left[(1-W)^{j}W^{r-j-1} \frac{Y^{2}}{Z+Y^{2}}\right]a_{n,r}x^{j}=\sum_{j=0}^{n-1}na_{n-1,j}\mathbb{E} \left[(1-Y-ZV)^{j}\frac{Z}{Z+Y^{2}}\right]x^{j},\]
or equivalently that
\[a_{n-1,j}=\sum_{r=j+1}^{n}\binom{r}{j}\frac{\mathbb{E}\left[(1-W)^{j}W^{r-j-1} \frac{Y^{2}}{Z+Y^{2}}\right]}{n\mathbb{E}\left[(1-Y-ZV)^{j}\frac{Z}{Z+Y^{2}} \right]}a_{n,r}. \tag{38}\]
Following the discussion in [19], we can determine the coefficients \(\{a_{n,j}\}_{j=1}^{n}\) of the polynomial \(h_{n}\) from the coefficients \(\{a_{n-1,j}\}_{j=0}^{n-1}\) of the polynomial \(h_{n-1}\). Indeed, we can take \(a_{n,n}\) as in (37) and then, using (38), recursively obtain \(a_{n,j}\) for \(j=n-1,\ldots,1\). We notice that the coefficient \(a_{n,0}\) can be chosen arbitrarily, and will be specified later in the construction of the probability of fixation \(p\).
Now let us return to the probability of fixation \(p\), where for the first derivative \(p^{\prime}(x)\) we make the ansatz
\[p^{\prime}(x)=A\sum_{n=1}^{\infty}2^{n}c_{n}h_{n-1}(x), \tag{39}\]
with \(p(1)=1\) and \(p(0)=0\). Using (39) in (33), together with (34) and (36) gives
\[\mathbb{E}\left[\frac{p^{\prime}(x(1-W)+W)-p^{\prime}(x(1-W))}{W} \frac{Y^{2}}{Z+Y^{2}}-2p^{\prime}(x(1-Y-ZV))\frac{Z}{Z+Y^{2}}\right]\] \[=A\sum_{n=1}^{\infty}2^{n}c_{n}(n-1)\mathbb{E}\left[h_{n-2}\left( x(1-Y-ZV)\right)\frac{Z}{Z+Y^{2}}\right]-2A\sum_{n=1}^{\infty}2^{n}c_{n} \mathbb{E}\left[h_{n-1}\left(x(1-Y-ZV)\right)\frac{Z}{Z+Y^{2}}\right].\]
By choosing \(c_{n}=\frac{1}{(n-1)!}\), for \(n\in\mathbb{N}\), we obtain
\[\mathbb{E}\left[\frac{p^{\prime}(x(1-W)+W)-p^{\prime}(x(1-W))}{W} \frac{Y^{2}}{Z+Y^{2}}-2p^{\prime}(x(1-Y-ZV))\frac{Z}{Z+Y^{2}}\right]\] \[=A\sum_{n=2}^{\infty}\frac{2^{n}}{(n-2)!}\mathbb{E}\left[h_{n-2} \left(x(1-Y-ZV)\right)\frac{Z}{Z+Y^{2}}\right]-A\sum_{n=1}^{\infty}\frac{2^{n+ 1}}{(n-1)!}\mathbb{E}\left[h_{n-1}\left(x(1-Y-ZV)\right)\frac{Z}{Z+Y^{2}} \right]=0.\]
Hence, \(p\) is a solution to (33). Now, by integrating (39) we obtain that
\[p(x)=A\sum_{n=1}^{\infty}\int_{0}^{x}\frac{2^{n}}{(n-1)!}h_{n-1}(u)du.\]
So if we choose \(\{h_{n}(0)\}_{n\geq 1}\) such that
\[\int_{0}^{1}nh_{n-1}(u)du=1,\]
then by the fact that \(p(1)=1\) and \(p(0)=0\) we obtain that
\[1=p(1)-p(0)=A\sum_{n=1}^{\infty}\frac{2^{n}}{n!}\int_{0}^{1}nh_{n-1}(u)du=A(e^{2 }-1),\]
and hence \(A=(e^{2}-1)^{-1}\).
So putting the pieces together we have that
\[p(x)=(e^{2}-1)^{-1}\sum_{n=1}^{\infty}\frac{2^{n}}{n!}H_{n}(x),\]
where \(H_{n}(x):=\int_{0}^{x}nh_{n-1}(u)du\), and \(\{h_{n}\}_{n\geq 1}\) satisfies (36).
The previous discussion leads to the following main result of this section.
**Proposition 7.2**.: _The fixation probability of type \(-\) individuals is given by_
\[p(x)=(e^{2}-1)^{-1}\sum_{n=1}^{\infty}\frac{2^{n}}{n!}H_{n}(x),\qquad x\in[0,1],\]
_where the polynomials \(\{H_{n}\}_{n=0}^{\infty}\) are given by_
\[H_{n}(x)=\int_{0}^{x}nh_{n-1}(u)du,\qquad x\in[0,1]\]
_with \(\{h_{n}\}_{n=0}^{\infty}\) given in (35), (37), and (38) and the constants \(\{h_{n}(0)\}_{n=0}^{\infty}\) chosen so that_
\[\int_{0}^{1}nh_{n-1}(u)\,du=1.\]
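The recursion (35)–(38), together with the normalisation of the constants \(h_{n}(0)\), translates directly into a small numerical procedure. The Python sketch below is a rough illustration added for concreteness: the toy reproduction measure (again the uniform probability measure on \(\Delta\)), the Monte Carlo estimation of the moments of \((W,Y,Z,V)\), and the truncation of the series at a small \(n_{\max}\) are all assumptions, and Monte Carlo error propagates through the recursion, so its output should only be read as indicative.

```python
import numpy as np
from math import comb, factorial, exp

rng = np.random.default_rng(5)

def sample_tilde(n):
    """(Y, Z) ~ Lambda~/||Lambda~|| for the toy Lambda = uniform probability measure on Delta,
    via rejection with acceptance probability y^2 + z (which is <= 1 on Delta)."""
    out = np.empty((0, 2))
    while out.shape[0] < n:
        cand = rng.random((2 * n, 2))
        keep = (cand.sum(axis=1) <= 1.0) & (rng.random(2 * n) < cand[:, 0] ** 2 + cand[:, 1])
        out = np.vstack([out, cand[keep]])
    return out[:n, 0], out[:n, 1]

Y, Z = sample_tilde(200_000)
U, V = np.sqrt(rng.random(Y.size)), rng.random(Y.size)     # U has density 2u on [0,1]
W = U * Y
wy, wz = Y ** 2 / (Z + Y ** 2), Z / (Z + Y ** 2)

EW = lambda j, m: float(np.mean((1 - W) ** j * W ** m * wy))   # E[(1-W)^j W^m Y^2/(Z+Y^2)]
EZ = lambda j: float(np.mean((1 - Y - Z * V) ** j * wz))       # E[(1-Y-ZV)^j Z/(Z+Y^2)]

def build_h(n_max):
    """Coefficients of h_0, ..., h_{n_max}, normalised so that int_0^1 (m+1) h_m(u) du = 1."""
    hs = [np.array([1.0])]                                     # h_0(x) = 1
    for n in range(1, n_max + 1):
        prev, a = hs[-1], np.zeros(n + 1)
        for j in range(n - 1, -1, -1):                         # solve (38) for a_{n, j+1}
            tail = sum(comb(r, j) * EW(j, r - j - 1) * a[r] for r in range(j + 2, n + 1))
            a[j + 1] = (n * EZ(j) * prev[j] - tail) / ((j + 1) * EW(j, 0))
        a[0] = 1 / (n + 1) - sum(a[r] / (r + 1) for r in range(1, n + 1))  # free constant h_n(0)
        hs.append(a)
    return hs

def p_fix(x, hs, n_max):
    total = 0.0
    for n in range(1, n_max + 1):
        H_n = sum(n * hs[n - 1][r] * x ** (r + 1) / (r + 1) for r in range(n))   # H_n(x)
        total += 2 ** n / factorial(n) * H_n
    return total / (exp(2) - 1)

n_max = 8
hs = build_h(n_max)
print([round(p_fix(x, hs, n_max), 3) for x in (0.0, 0.25, 0.5, 0.75, 1.0)])
```

By construction the truncated series gives \(p(0)=0\) and \(p(1)\approx 1\) (up to the tail of \(\sum_{n>n_{\max}}2^{n}/n!\)); intermediate values carry the Monte Carlo error of the estimated moments.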
**Acknowledgements.** AGC was supported by the grant PAPIIT UNAM IN101722 "Nuevas aplicaciones de la dualidad de momentos y de la construccion Lookdown", and acknowledges support from the Hausdorff Research Institute for Mathematics in Bonn, where he made a 3-month research visit in the summer of 2022. The authors would like to thank Fernanda Lopez, who wrote her Master's thesis at UNAM on a simplified version of the model discussed in this manuscript.
|
2309.16869 | Vidaptive: Efficient and Responsive Rate Control for Real-Time Video on
Variable Networks | Real-time video streaming relies on rate control mechanisms to adapt video
bitrate to network capacity while maintaining high utilization and low delay.
However, the current video rate controllers, such as Google Congestion Control
(GCC), are very slow to respond to network changes, leading to link
under-utilization and latency spikes. While recent delay-based congestion
control algorithms promise high efficiency and rapid adaptation to variable
conditions, low-latency video applications have been unable to adopt these
schemes due to the intertwined relationship between video encoders and rate
control in current systems.
This paper introduces Vidaptive, a new rate control mechanism designed for
low-latency video applications. Vidaptive decouples packet transmission
decisions from encoder output, injecting ``dummy'' padding traffic as needed to
treat video streams akin to backlogged flows controlled by a delay-based
congestion controller. Vidaptive then adapts the target bitrate of the encoder
based on delay measurements to align the video bitrate with the congestion
controller's sending rate. Our evaluations atop Google's implementation of
WebRTC show that, across a set of cellular traces, Vidaptive achieves ~1.5x
higher video bitrate and 1.4 dB higher SSIM, 1.3 dB higher PSNR, and 40% higher
VMAF, and it reduces 95th-percentile frame latency by 2.2 s with a slight 17 ms
increase in median frame latency. | Pantea Karimi, Sadjad Fouladi, Vibhaalakshmi Sivaraman, Mohammad Alizadeh | 2023-09-28T21:56:14Z | http://arxiv.org/abs/2309.16869v2 | # Vidaptive: Efficient and Responsive Rate Control for Real-Time Video
###### Abstract
Real-time video streaming relies on rate control mechanisms to adapt video bitrate to network capacity while maintaining high utilization and low delay. However, the current video rate controllers, such as Google Congestion Control (GCC) in WebRTC, are very slow to respond to network changes, leading to link under-utilization and latency spikes. While recent delay-based congestion control algorithms promise high efficiency and rapid adaptation to variable conditions, low-latency video applications have been unable to adopt these schemes due to the intertwined relationship between video encoders and rate control in current systems.
This paper introduces Vidaptive, a new rate control mechanism designed for low-latency video applications. Vidaptive decouples packet transmission decisions from encoder output, injecting "dummy" padding traffic as needed to treat video streams akin to backlogged flows controlled by a delay-based congestion controller. Vidaptive then adapts the frame rate, resolution, and target bitrate of the encoder to align the video bitrate with the congestion controller's sending rate. Our evaluations atop WebRTC show that, across a set of cellular traces, Vidaptive achieves \(\sim\)2x higher video bitrate and 1.6 dB higher PSNR, and it reduces 95th-percentile frame latency by 2.7s with a slight increase in median frame latency.
Massachusetts Institute of Technology, \(\boxplus\) Microsoft Research
## 1 Introduction
Real-time video streaming has become an integral part of modern communication systems, enabling a wide range of applications from video conferencing to cloud gaming, live video, and teleoperation. A critical component of these systems is the rate control mechanism, which adapts the video bitrate to the available network capacity. State-of-the-art rate controllers, however, such as the Google Congestion Control (GCC) [1] algorithm used in WebRTC [2] have significant shortcomings. Specifically, GCC is slow to adapt to changes in network conditions, leading to both link under-utilization and latency spikes.
In recent years, numerous congestion control algorithms (CCA) have been proposed that achieve high utilization, low delay, and fast convergence [3, 4, 5, 6]. These algorithms are highly responsive to network variations, adapting within a few round-trip times (RTTs) while maintaining high utilization and low delay. In contrast, GCC and similar video rate control algorithms lag considerably. When network bandwidth opens up, GCC can take an order of magnitude longer than state-of-the-art congestion controllers to increase the video bitrate. This conservative approach can significantly hurt GCC's utilization and the video quality in variable networks. In our experiments using cellular network traces, GCC under-utilizes the network by 2-3x compared to Copa [3].
This sluggishness is not merely a limitation of GCC but a symptom of a broader issue: the inherent coupling between video encoders and rate control algorithms [7]. Current systems use an encoder-driven rate controller that adapts the video bitrate by controlling the encoder's target bitrate. In these systems, the instantaneous data transmission rate is dictated by the size of the video frames produced by the encoder. However, most video encoders are not designed to adjust to rapid fluctuations in network conditions. It generally takes several frames to adapt frame sizes to new target bitrates [7]. Moreover, the frame sizes are variable and only meet the target bitrate on average in a best-effort manner. To maintain low latency despite the vagaries of the encoder output, GCC sets the target bitrate conservatively and increases it slowly. Nevertheless, during times of significant congestion (e.g., due to link outages), the encoder cannot immediately adapt to capacity drops, and GCC experiences significant latency spikes.
Recently, Salsify [7] addressed this challenge by modifying the encoder to be more adaptive to network variations. Such approaches, however, are challenging to deploy in practice. Changing the encoder usually requires changes to both the sender and receiver sides of the application. With the prevalence of hardware codecs across billions of devices, such drastic changes have become virtually infeasible.
We present Vidaptive, a new rate control mechanism for low-latency video applications that significantly improves efficiency and responsiveness to network variability without modifying the encoder. Vidaptive's design is based on two key concepts.
The first is to decouple instantaneous packet transmission decisions from the encoder's output. Specifically, Vidaptive treats video streams as if they were backlogged flows for the purpose of rate control. It uses an existing delay-based CCA like Copa. If the encoder produces more packets than the
CCA is willing to send, it buffers them at the sender. On the other hand, Vidaptive sends "dummy packets" to fill the gap if the encoder does not produce enough packets to sustain the CCA's rate. This approach ensures that "on the wire", Vidaptive behaves identically to its adopted CCA running with a backlogged flow. The CCA's feedback loop can operate without disruption and track available bandwidth quickly.
Next, Vidaptive matches the video bitrate to the CCA's sending rate using mechanisms that adapt the frame rate, the encoder's target bitrate, and the video resolution. To keep frame latency within acceptable bounds when the encoder overshoots the CCA's rate, Vidaptive skips a frame if the delay at the sender exceeds a threshold (effectively reducing the frame rate to handle sudden latency spikes). Further, Vidaptive uses a novel online optimization algorithm to determine the encoder's target bitrate based on the CCA's sending rate and recent frame delay measurements. The optimization procedure provides a principled approach to navigate the tradeoff between video bitrate and frame rate, and importantly, it adapts automatically to the variability in the encoder's output and the network rate.
We implement Vidaptive atop WebRTC and test it using sixteen cellular network traces with significant variability on an emulated network link [8]. Compared to GCC, our key findings are:
1. Across all traces, Vidaptive improves link utilization by \(\sim\)2.5x and video bitrate by \(\sim\)2x on average, resulting in a 1.6 dB improvement in Peak Signal-to-Noise Ratio (PSNR) on average.
2. Across all frames in all traces, Vidaptive improves average PSNR by 1.9 dB and P95 PSNR by 2.2 dB.
3. Across all frames in all traces, Vidaptive increases median latency by 47 ms (148 ms \(\rightarrow\) 195 ms), but it reduces 95\({}^{th}\) percentile frame latency by 2687 ms (4120 ms \(\rightarrow\) 1433 ms). On a per-trace basis, Vidaptive improves P95 frame latency by \(\sim\) 2.7 seconds on average but is 45-384 ms worse on some traces.
4. Vidaptive reduces frame rate by \(\sim\) 10% on average per trace.
## 2 Motivation and Key Ideas
### The Problem
**Status Quo for Video Rate Control.** To understand how rate control for real-time video works today, we run Google Congestion Control (GCC) [1], the rate control mechanism inside WebRTC, on an emulated link that alternates between 2 Mbps and 500 Kbps every 40 seconds. The minimum network round-trip time (RTT) is 50 ms, and the buffer size at the bottleneck is large enough that there are no packet drops. In Fig. 1(a), GCC is sluggish to increase its rate when the stream begins and when the link capacity rises back to 2 Mbps at \(t=80\)s. Specifically, GCC takes _18 seconds_ to go from 500 Kbps to 2 Mbps, resulting in lower visual quality during that time (Fig. 1(b)). GCC's conservative nature also results in link under-utilization (85%) in the steady state. GCC is also slow to react to capacity drops: in Fig. 1(c), when the link rate drops to 500 Kbps at \(t=40\)s, GCC's frame latency spikes to over a second and settles only after 12 _seconds_. This is because GCC continues to send at a higher rate even after the drop, causing queue buildup, added delay, and frame loss.
Contrast this behavior with traditional congestion control algorithms [3, 4, 9, 10] operating on _backlogged_ flows: they respond to such network events much faster, typically over a few RTTs. For instance, the "Backlogged Copa" lines in Fig. 1(a) and Fig. 1(c) show that Copa [3], running on a backlogged flow on the same time-varying link, is much more responsive to the network conditions. This wide disparity between GCC on real-time video traffic and Copa on backlogged traffic begs the question: Why does the state-of-the-art video rate control lag so far behind the state-of-the-art congestion control?
**Encoder-driven Rate Control Loop.** GCC has been carefully designed to work within the tight latency bounds of interactive video applications. Its rate control responds to increases or decreases in delay gradients over RTT timescales. It is also conservative in its link utilization to not overwhelm the network and cause delays or packet loss.
However, the real limiting factor is that, in current video congestion controllers like GCC, the _instantaneous rate_ at which data is transmitted on the wire is dictated by the size of the video frames produced by the encoder. GCC controls the video bitrate by adapting the encoder's target rate, but the encoded frame sizes can be highly variable. The encoder achieves its target bitrate only on average--usually throughout several frames [7]. Moreover, the encoder's bitrate cannot immediately adapt to changes in the target bitrate. We illustrate this behavior in Fig. 2 where we supply the VP8 encoder with a target bitrate that switches between 2 Mbps and 500 Kbps every 5s and observe its achieved bitrate. Every time the bitrate goes up from 500 Kbps to 2 Mbps, the encoder takes nearly 2 seconds to catch up. On the way down from 2 Mbps to 500 Kbps, it takes about a second to lower the bitrate.
The reason for this lag is that the size of an encoded frame is dependent on several factors, including quantization parameters, the encoder's internal state, and the motion, and is only known accurately after encoding. The encoder tries to rectify its over- and under-shootings by adjusting the quality of subsequent frames. Even once the encoder matches the target bitrate, it exhibits considerable variance around the average on a per-frame basis. Salsify [7] deals with this unpredictability by encoding multiple versions of the same frame and picking the best match _after the fact_. However, putting aside the extra computational cost, this approach requires radical changes to the codec--at both the sender and the receiver--which hinders its real-world deployability.
The unpredictable nature of the encoder leads to two main issues: (1) Since the encoder cannot match the target bitrate on per-frame timescales, GCC cannot immediately reduce the
bitrate if the capacity suddenly drops. Instead, GCC has to be conservative and leave abundant bitrate headroom at all times (including in the steady state) so that it reduces the risk of congestion during fluctuations. Despite this, GCC still experiences occasional latency spikes (Fig. 0(c)). (2) GCC is very slow to grab available bandwidth. Whenever GCC increases its target bitrate, the encoder matches it over a few seconds, meaning it also takes several seconds for GCC to get feedback at the higher rate and increase its target bitrate again. This cycle ends up taking 15-20 seconds end-to-end (Fig. 0(a)).
**What about Probing Mechanisms?** A natural question at this point is if probing mechanisms, specifically those already supported within GCC [11], improve GCC's convergence in such scenarios. While periodic bandwidth probing has proved effective for some CCAs [4], the GCC mechanism is relatively ad-hoc. It fires a periodic timer and sends some bounded extra padding traffic (the frequency can vary but is often in the range of seconds (e.g., every 5 seconds)). Such an infrequent timer does not help on the finer RTT-level timescales required for precise rate control. A five-second timer is, in practice, very similar to a sluggish encoder that responds to the target bitrate over a few seconds. Fig. 3 shows how GCC with probing enabled reacts on a lossless periodic link with an RTT of 50 ms that alternates between 2 Mbps and 500 Kbps every 40 seconds. The padding traffic is only sent when GCC reduces the video bitrate significantly (t=40s) but does not help at all with bandwidth discovery (t=80s) when the link opens back up.
### Our Solution
**Decoupling the Encoder from the Rate on the Wire.** As illustrated in Fig. 1(a), a backlogged flow using a state-of-the-art CCA like Copa can adapt to time-varying network capacity on RTT timescales while also controlling network queueing delay. A key reason is that such CCAs have fine-grained control over when to send each packet, e.g., driven via the "ACK clock" [12]. Vidaptive makes video streams appear like a backlogged flow to the congestion controller. This allows Vidaptive to leverage existing CCAs optimized for high throughput, low delay, and fast convergence. In Fig. 1, Vidaptive using Copa for congestion control achieves nearly identical throughput and latency as a backlogged Copa flow. Vidaptive quickly increases its bitrate when bandwidth opens up, leading to higher image quality than GCC following each such event (Fig. 1(b)).
Vidaptive sends packets on the wire as dictated by the congestion controller. Specifically, when the encoder overshoots the available capacity, Vidaptive queues excess video packets
Figure 1: Utilization, frame quality, and latencies of Copa on a backlogged flow, GCC on a video flow, and Copa + Vidaptive on a video flow. GCC is very slow to match the available capacity and under-utilizes the link in the steady state. Copa + Vidaptive responds much faster to link variations and is similar to Copa’s performance on a backlogged flow.
Figure 3: Ad-hoc Probing Behavior in WebRTC on a periodic link that alternates between 2 Mbps and 500 Kbps every 40s. Padding traffic is used when GCC’s estimate severely drops (t=40s) but not used when needed during the capacity increase (t=80s).
Figure 2: Video encoder’s response to a time-varying “Target” input bitrate. “Achieved” reflects the encoder’s output rate. The encoder is slow to increase its output rate and exhibits a lot of variation around the average output rate in the steady state.
in a buffer and only sends them out when congestion control allows (e.g., according to the congestion window and in-flight packets for window-based CCAs). Conversely, when the encoder undershoots the available capacity, Vidaptive sends "dummy packets" to match the rate requested by the congestion control by padding the encoder output with additional traffic.1 This allows the CCA to operate without disruption (as in a backlogged flow) despite the encoder's varying frame sizes.
Footnote 1: This dummy traffic could also be repurposed for helpful information such as forward error correction (FEC) packets [13, 14] or keyframes for faster recovery from loss. We leave such enhancements to future work and focus solely on the impact of dummy traffic on video congestion control.
By decoupling the congestion control's decisions from the encoder output, Vidaptive can accurately track time-varying bottleneck rates. However, it is still important to match the actual video bitrate produced by the encoder to the congestion controller's sending rate. In particular, although buffering packets and sending dummy traffic can handle brief variations in the encoder output bitrate, the quality of experience will suffer if the encoder's output is persistently higher or lower than the congestion control's rate. In the former case, end-to-end frame latency would grow uncontrollably, and in the latter scenario, dummy packets would waste significant bandwidth.
Vidaptive includes two mechanisms that control the frame rate and encoder's target bitrate to match the video bitrate to the congestion controller's sending rate while meeting a delay constraint. First, it uses simple safeguards to ensure that end-to-end frame latency is not significantly affected by delay at the sender. If the delay at the sender exceeds a threshold, Vidaptive skips a frame. During severe latency spikes (e.g., caused by a network outage), Vidaptive drops buffered packets and resets the encoder using a keyframe.
Second, Vidaptive selects the encoder's target bitrate by solving an online optimization that decides how much _headroom_ to leave between the target bitrate and the CCA's sending rate (CC-Rate). Increasing the target bitrate to near the CC-Rate (lowering headroom) provides a higher video bitrate (and better quality). However, it risks latency increases due to the variability of the encoder's output frame sizes and future sending rate fluctuations. If these latency increases exceed the video's delay tolerance, Vidaptive has no choice but to reduce the frame rate. Thus, the choice of the target bitrate (headroom) is effectively about navigating a tradeoff between video bitrate and frame rate. This tradeoff depends on the inherent variability of the system. If the encoder's output and the bottleneck link rate (and hence CC-Rate) are stable and have low variance, then a small headroom can suffice to ensure low, consistent frame latency and, therefore, a high frame rate. However, the headroom must increase with more variability. Vidaptive's online optimization uses recent frame delay measurements to adapt to such variability automatically.
## 3 Vidaptive Design
### Overview
Our goal is to design a system for real-time video applications that responds quickly to any changes in network conditions, and maintains high utilization of available capacity _without altering the encoder_. Vidaptive achieves this by decoupling the behavior of the transport layer from the unreliable video encoder, and by closely matching the encoder bitrate to the CCA's sending rate (CC-Rate).
Fig. 4 shows Vidaptive's overall design. The video encoder encodes frames and sends them to an application-level media queue before sending the packets out into the network. At the transport layer, we add a modified window-based "Congestion Controller", a "Pacer" and a new "Dummy Generator" to decouple the rate at which traffic is sent on the wire from the encoder, as described in §3.2. We introduce a new "Encoder Rate Controller" that monitors the delay frames are experiencing to trigger the latency safeguards described in §3.3. The rate controller also uses the discrepancy between the CC-Rate and the video encoder's current bitrate, along with frame delays, to adapt the target bitrate and resolution to efficiently trade off frame rate and frame quality (§3.4).
### Transport Layer
**Congestion Controller.** To build a more responsive transport for real-time video, we start with a congestion controller that reacts quickly to changes in the network. A window-based algorithm keeps the number of outstanding video packets in check without allowing them to grow uncontrollably and cause high latency, packet loss, and glitches. Specifically, the congestion window (_cwnd_) in Vidaptive tracks the maximum number of in-flight bytes between the sender and the receiver. _cwnd_ increases when the queueing delay is lower than what the CCA hopes to impose and decreases otherwise. The sender only sends out new packets when the amount in-flight is less than _cwnd_. Using delay as the congestion signal prioritizes the end-to-end frame latency and adjusts the window such that most frames are delivered in real time. Vidaptive can be used with any delay-controlling congestion control algorithm. We evaluate Vidaptive with two recently proposed such algorithms, Copa [3] and RoCC [9].
We compute the system's sending rate (CC-Rate) as the _cwnd_ divided by the smoothed RTT and use it to configure the Pacer and the encoder target bitrate (§3.4).
**Pacer.** The Pacer receives _cwnd_ and the CC-Rate from the congestion controller and paces out the video packets at CC-Rate. Since the encoder exhibits variance and its output bitrate may instantaneously overshoot the available capacity (SS2), the pacer is responsible for avoiding a sudden burst of packets. In Vidaptive, the pacing rate and the congestion window together determine when to send the next packet.
**Dummy Packets.** While the window-based CCA ensures ACK-clocked behavior, and the pacer prevents sudden bursts of video packets, neither ensures fast feedback between video
frame boundaries. The lack of feedback prevents us from quickly growing our window when bandwidth opens up (SS2). To emulate the behavior of a backlogged flow, we place "dummy packets" into the pacer's queue if the CCA is ready to send a packet but has no available video packets. Note that the dummy packets never stay in the pacer queue as they are only generated when the CCA wants to send a packet, but the pacer queue is empty.
To avoid network delay spikes caused by spurious dummy packets when the link rate suddenly drops, we do not send any dummy packets within a few milliseconds (5 ms in our implementation) of reading frames from the camera. The intuition behind this mechanism is that if the network is soon to deteriorate, the dummy packets sent a few milliseconds prior to a frame will induce a higher queuing delay in the network and thus increase the frame latency. On the other hand, if the network rate opens up, not sending packets for a few milliseconds will not slow down the congestion controller's convergence by much.
Lastly, we stop sending dummy packets if the video has reached a maximum bitrate (12 Mbps in our experiments). Since the video bitrate cannot increase further, sending extra traffic to discover more available bandwidth is not useful.
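The transport-layer behaviour described in this section (window gating, pacing at CC-Rate, and dummy generation with the 5 ms guard and the 12 Mbps cutoff) can be summarised in the following simplified sketch. It is not WebRTC or Vidaptive source code; the class structure, attribute names (`cwnd`, `srtt`, `inflight`), and the MTU value are illustrative assumptions.

```python
from collections import deque

# Simplified sketch of the transport-layer send decision (Section 3.2).
MTU = 1200                      # bytes per packet (assumption)
DUMMY_GUARD_S = 0.005           # no dummy packets within 5 ms of a frame read
MAX_VIDEO_BITRATE = 12e6        # stop sending dummies once the video hits 12 Mbps

class Pacer:
    def __init__(self, cc):
        self.cc = cc                      # congestion controller exposing cwnd, srtt, inflight
        self.queue = deque()              # encoded video packets waiting to be sent
        self.next_send_time = 0.0
        self.last_frame_read = float("-inf")

    def cc_rate(self):
        return 8.0 * self.cc.cwnd / self.cc.srtt        # bits/s: cwnd divided by smoothed RTT

    def next_packet(self, now, current_video_bitrate):
        # 1. window gate: keep bytes in flight below cwnd
        if self.cc.inflight + MTU > self.cc.cwnd:
            return None
        # 2. pacing gate: space packets out at CC-Rate
        if now < self.next_send_time:
            return None
        self.next_send_time = now + 8.0 * MTU / self.cc_rate()
        # 3. prefer real video packets; otherwise consider a dummy (padding) packet
        if self.queue:
            return self.queue.popleft()
        if now - self.last_frame_read < DUMMY_GUARD_S:
            return None                                  # a frame was just read; hold off
        if current_video_bitrate >= MAX_VIDEO_BITRATE:
            return None                                  # no point probing for more bandwidth
        return b"\x00" * MTU                             # dummy payload to keep the CCA clocked
```

A real implementation would additionally handle packet headers, feedback processing, and burst limits; the sketch only captures the ordering of the three checks.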
### Safeguarding against Latency Spikes
Transport design for any real-time video system must ensure low-latency frame delivery. As a result, we place two safeguards within Vidaptive to avoid transmitting frames that are unlikely to be successfully received on time. These safeguards essentially reduce the frame rate during highly congested periods to mitigate latency spikes; we discuss our principled strategy for trading off frame rate with quality in §3.4.
**Encoder Pause.** Vidaptive monitors the time packets spend in the pacer queue before they are sent out. If the time spent by the oldest packet exceeds a pacer queue pause threshold (\(\tau\)), we _pause_ encoding and buffer the latest un-encoded frame. If the CC-Rate increases and the pacer queue is drained, we resume encoding and send video packets from the latest buffered frame if it is within \(\sim\) 17 ms (33 ms/2) of that frame being read. Otherwise, we skip this frame altogether and encode the next frame since we are closer in time to reading the next frame. We set \(\tau\) = 33 ms by default in our implementation, thereby pausing encoding if packets from the previous frame are yet to be sent out. The intuition here is that there is no point in encoding a frame that would have to sit in the Pacer queue, waiting for a previous frame to finish transmission.2 Instead, we always encode and transmit fresh frames when they have a high chance of reaching the receiver with acceptable latency.
Footnote 2: A high delay through the Pacer queue reflects congestion at the bottleneck link. If we ignore CC-Rate and transmit the packets stuck in the Pacer queue (as currently implemented in WebRTC), they would still have to wait at the bottleneck link.
**Encoder Reset.** If video packets have been stuck in the pacer for extended periods (> 1s), the network is likely experiencing an outage or extreme congestion. Packets already sent out will likely be lost, making their corresponding frames not decodable. Sending more video packets dependent on those un-decodable frames is wasteful and makes application-level recovery harder. Furthermore, packets from these frames have already incurred a huge latency in the pacer queue, and sending them out would mean very high end-to-end frame latency. Instead, we drain the pacer entirely and _reset_ the encoder by forcing it to send a keyframe. Since video packets received after the congestion event belong to a keyframe, the receiver's decoder has no errors when decoding them. This reset, similar to pausing, has the effect of controlling worst-case frame latency. It also allows Vidaptive to choose very conservative target bitrates and resolutions (§3.4) in the aftermath of a congestion event, ensuring that video packets get through to the receiver and provide fast feedback to help reset the system.
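The two safeguards reduce to a small per-frame decision. The sketch below mirrors the defaults stated above (τ = 33 ms, a 1 s reset threshold, and the ~17 ms freshness check), but the function and argument names are illustrative rather than taken from the implementation.

```python
TAU_MS = 33        # pacer queue pause threshold
RESET_MS = 1000    # outage threshold that triggers an encoder reset
FRAME_GAP_MS = 33  # 30 FPS camera

def safeguard_action(oldest_packet_age_ms, buffered_frame_age_ms):
    """Decide what the encoder should do next (simplified)."""
    if oldest_packet_age_ms is not None:
        if oldest_packet_age_ms > RESET_MS:
            return "reset"            # drain the pacer and force a keyframe
        if oldest_packet_age_ms > TAU_MS:
            return "pause"            # buffer the latest un-encoded frame
    # Pacer queue has drained: resume encoding.
    if buffered_frame_age_ms is not None and buffered_frame_age_ms <= FRAME_GAP_MS / 2:
        return "encode_buffered"      # the paused frame is still fresh (~17 ms)
    return "encode_next"              # otherwise skip it and encode the next frame
```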
### Trading off Frame Rate and Quality
Vidaptive skips encoding some frames to reduce latency as described in §3.3. Since this reduces the frame rate and affects
Figure 4: Vidaptive Design. Vidaptive uses a window-based Congestion Controller, Pacer, and a new Dummy Generator to decouple the rate at which traffic is sent on the wire from the encoder. The Encoder Rate Controller monitors frame delays to trigger latency safeguards and picks a new target bitrate and resolution based on the discrepancy between the CC-Rate and the video encoder’s current bitrate.
the smoothness of the video, Vidaptive is set up to reduce the frame bitrate proactively and, consequently, the frame quality in favor of letting more frames get through.
We formalize the tradeoff between frame rate and frame quality as a decision problem that picks a target bitrate for the encoder based on how much we prioritize achieving a high frame rate over high video quality. Specifically, we pick \(\alpha\), the fraction of the CC-Rate to supply as the target bitrate to the encoder. When the frame rate is low, we choose a smaller \(\alpha\) to create smaller frames but let more of them get through. When the frame rate is high, we choose a higher \(\alpha\) to obtain higher quality frames while sacrificing a little on the achieved frame rate. To effect significant and sudden changes in the video bitrate based on network conditions, we update the resolution in addition to setting the target bitrate.
**Preliminaries.** Vidaptive encodes each frame if the frame queueing delay (delay through the pacer queue) for the oldest unsent frame is not more than the pacer queue pause threshold \(\tau\). We define Vidaptive's _frame rate score_\(\mathcal{F}\) to capture how many frames it successfully delivers over a time interval \(T\). If there are \(N\) frames over a time interval \(T\) that experience delays through the pacer queue denoted by \(d_{i}\) for \(i\in\{1,2,..,N\}\), we define \(\mathcal{F}\) as the ratio of the number of frames successfully sent (those whose queuing delays do not exceed \(\tau\)) to \(N\), the total number of frames. In other words,
\[\mathcal{F}=\frac{\sum_{i=1}^{N}\mathbb{1}[d_{i}\leq\tau]}{N}, \tag{1}\]
where \(\mathbb{1}[d_{i}\leq\tau]=1\) if \(d_{i}\leq\tau\) and \(0\) otherwise. At higher \(\mathcal{F}\), most frames have \(d_{i}\leq\tau\) and do not pause the encoder which results in a higher frame rate. \(T=\)1s by default in our implementation.
If the camera's frame rate is \(f_{max}\) (typically 30 FPS), the gap between frames is \(\Delta=\frac{1}{f_{max}}\) (typically 33 ms). For maximum efficiency, the frame queueing delay should be close to \(\Delta\), such that the last packet of a frame is transmitted just as the next frame is encoded. Thus, to measure Vidaptive's efficiency and its impact on frame quality, we define its _bitrate score_\(B\) as,
\[B=\min\Big{(}\frac{\sum_{i=1}^{N}d_{i}}{N\Delta},1\Big{)} \tag{2}\]
Note that \(\frac{\sum_{i=1}^{N}d_{i}}{N}\) is the average frame queueing delay over the \(N\) samples in the last time interval \(T\), and its ratio relative to \(\Delta\) can be viewed as a proxy for utilization. For example, if \(B=0.2\), the system is sending 20% of the video traffic it can send to the link without causing additional delays.
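For concreteness, Eq. (1) and Eq. (2) translate directly into the following Python sketch; the delay samples at the end are made-up values used only to show the calling convention.

```python
def frame_rate_score(delays_ms, tau_ms=33):
    # F = (1/N) * sum of 1[d_i <= tau]
    return sum(1 for d in delays_ms if d <= tau_ms) / len(delays_ms)

def bitrate_score(delays_ms, frame_gap_ms=33):
    # B = min(avg(d_i) / Delta, 1): average frame queueing delay relative to the
    # inter-frame gap, a proxy for how much of the link the video is using.
    return min(sum(delays_ms) / (len(delays_ms) * frame_gap_ms), 1.0)

# Example: mostly well-paced frames with two outliers above tau.
delays = [20, 25, 30, 28, 60, 22, 19, 90, 31, 27]
print(frame_rate_score(delays), bitrate_score(delays))
```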
**Choosing a Target Bitrate.** The frame queueing delay is a function of the link rate estimate (CC-Rate) and the frame sizes. As a result, it is impacted by fluctuations in both the encoder's output and in the CC-Rate. These fluctuations are out of our control and can be viewed as a form of exogenous "noise" impacting frame delays. However, we can influence the _expected_ frame sizes by controlling the encoder's _target bitrate_. The crux of our method is to pick the target bitrate in a way that maximizes a weighted linear combination of \(\mathcal{F}\) and \(B\) based on recent per-frame queueing delay measurements. Assume a target bitrate \(\alpha\cdot\)CC-Rate is given to the encoder where \(0<\alpha<1\). Increasing \(\alpha\) increases each frame's size and its \(d_{i}\) (frame size divided by CC-Rate). Since \(d_{i}\) depends on \(\alpha\), we rename it \(d_{i}(\alpha)\). Increasing \(d_{i}(\alpha)\) increases \(B\) but reduces \(\mathcal{F}\), i.e. \(\alpha\) induces a tradeoff between the frame rate and the frame quality. Our goal is to find \(\alpha^{*}\) such that:
\[\text{maximize} \frac{\lambda}{1-\lambda}\mathcal{F}+B \tag{3}\] \[\text{s.t.} 0<\alpha<1\]
where \(\lambda\in(0,1)\) is a parameter that reflects how much the application favors higher frame rate over better frame quality. When \(\lambda\sim 1\), the application favors a high frame rate; when \(\lambda\sim 0\), the application favors larger frames and higher quality.
**Solving the Optimization.** To choose \(\alpha\), one would ideally want to solve the above optimization problem over _future_ frames. However, it is hard to model \(d_{i}(\alpha)\) for future frames since these can depend on future video content (e.g., the extent of motion) and how CC-Rate changes in the future. Instead, we use hindsight optimization [15] to solve for the best \(\alpha\) we could have picked in hindsight for recent _past_ frames. Estimating the effect \(\alpha\) would have had on the delays of previous frames is simple. Assume we have frame queueing delay measurements \(d_{i}\) for \(i\in\{1,2,..,N\}\) over a time interval \(T\), and we encoded these frames with a target bitrate \(\alpha_{i}\cdot\)CC-Rate. Had all these frames been encoded by \(\alpha\) instead, the counterfactual frame queueing delay would have been \(\tilde{d}_{i}(\alpha)=d_{i}\frac{\alpha}{\alpha_{i}}\). This estimate assumes that frame size is proportional to the target bitrate (and hence proportional to \(\alpha\)), and that changing the target bitrate would not have changed CC-Rate. Using these counterfactual delay estimates, we can now solve the optimization problem in Eq. (3).
Fig. 5 shows an example of this counterfactual optimization problem. Fig. 5(a) shows frame queueing delay samples and
Figure 5: Counterfactual optimization flow. Given a set of frame queueing delay samples (left), whose average is shown in green and outliers higher than \(\tau\) are shown in red, we evaluate Objective(\(\alpha\)) for discrete \(\alpha\) and find \(\alpha^{*}\) that maximizes it (middle). We update the counterfactual values of frame queueing delay with \(\alpha^{*}\) to have fewer outliers above \(\tau\) but a lower average (right).
their average (green line). The samples that are less than \(\tau\) are colored in blue, and those larger than \(\tau\) (which would cause a frame to be skipped) are colored in red. Fig. 5(b) shows Objective(\(\alpha\)) versus \(\alpha\), and its maximizer \(\alpha^{*}\). Fig. 5(c) shows the counterfactual frame queueing delay values, \(\tilde{d}_{i}\), had \(\alpha^{*}\) been used to encode them. The scaled-down \(\tilde{d}_{i}\) values reduce the number of outliers above \(\tau\) (increasing frame rate), but their average is smaller (decreasing frame sizes). As we increase \(\lambda\) (giving more emphasis to frame rate), the optimal solution will select smaller and smaller values of \(\alpha\), reducing the number of outliers further. Computing \(\alpha^{*}\) can be done efficiently by evaluating the objective at only a finite set of candidates \(\alpha=\min(\tau\cdot\frac{\alpha_{i}}{d_{i}},1)\) for \(i\in\{1,2,...,N\}\) (see A.1 for details).
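The hindsight step can be sketched in a few lines: rescale the recorded delays to their counterfactual values, evaluate the objective of Eq. (3) at the candidate breakpoints (plus the top of the range), and keep the maximizer. This is an illustrative Python sketch, not the WebRTC implementation; the example inputs are made up.

```python
def choose_alpha(delays_ms, alphas_used, lam=0.5, tau_ms=33, frame_gap_ms=33):
    n = len(delays_ms)

    def objective(alpha):
        # Counterfactual delays d~_i(alpha) = d_i * alpha / alpha_i.
        d_cf = [d * alpha / a for d, a in zip(delays_ms, alphas_used)]
        f_score = sum(1 for d in d_cf if d <= tau_ms) / n
        b_score = min(sum(d_cf) / (n * frame_gap_ms), 1.0)
        return (lam / (1.0 - lam)) * f_score + b_score

    # The frame rate score only changes where some counterfactual delay crosses
    # tau, so it suffices to check those breakpoints (capped at 1) plus 1 itself.
    candidates = {min(tau_ms * a / d, 1.0) for d, a in zip(delays_ms, alphas_used) if d > 0}
    candidates.add(1.0)
    return max(candidates, key=objective)

# Example: a burst of large frames (high delays) pulls alpha below its previous value.
print(choose_alpha([20, 28, 70, 90, 40], [0.9] * 5, lam=0.5))
```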
**How does \(\alpha\) work?** To demonstrate how \(\alpha\) reacts to link capacity variations, we run Vidaptive on a \(1.5\,\mathrm{Mbps}\) link that experiences 10 s of high variability. We repeatedly feed the encoder with a fixed \(1280\times 720\) frame to remove encoder variance. Fig. 6(a) shows the values of \(\alpha\) and the normalized frame rate, the ratio of frame rate and \(f_{max}\). Before the fluctuations start at 10 s, Vidaptive operates at \(f_{max}\) with a very high \(\alpha\). During the noisy period (10 s-20 s), when the frame rate drops and the frame queueing delay increases, \(\alpha\) decreases to improve frame rate and reduce video bitrate. When the link steadies after 20 s, \(\alpha\) resets to its high value. To demonstrate how \(\alpha\) reacts to encoder variations, we tested Vidaptive on a fixed \(1.5\,\mathrm{Mbps}\) link with a dynamic video [16]. The encoded frame sizes increase during high-motion periods due to large differences from previous frames. Fig. 6(b) illustrates how \(\alpha\) adapts to the variable output of the encoder, decreasing in the aftermath of a large frame to improve frame rate before increasing again.
**Resolution Selection.** While tuning the target bitrate effectively adapts the video bitrate over smaller ranges, we change the resolution when more drastic changes are needed. Specifically, we make one of three decisions on every frame: maintain, increase, or decrease the current resolution. We decrease the resolution by one level (_e.g.,_ from 1080p to 720p) if the number of frames delivered in the last time interval \(T\) is below the minimum acceptable frame rate (\(5\,\mathrm{FPS}\) in Vidaptive) because this suggests that frames are too large for the current link capacity. In contrast, if \(\alpha\) is high and the measured video bitrate is far lower than the encoder's supplied target bitrate, the encoder is having trouble meeting its target bitrate and maintaining high utilization at the current resolution because the frames are too small. So, we increase the resolution by one level (_e.g.,_ 720p to 1080p). We simply maintain the resolution if none of the above cases are met. To avoid changing the resolution too frequently and ensure we have sufficient data points to make further changes, we only update the resolution if more than \(T\) seconds have passed since the latest resolution change. The details of the mechanism are in A.2.
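The per-frame resolution decision can be summarized as follows. The resolution ladder and the thresholds for "α is high" and "far lower than the target" (0.9 and 0.5 here) are illustrative placeholders rather than the values in A.2, and the consecutive-frame hysteresis of §4 is omitted.

```python
RESOLUTIONS = ["360p", "540p", "720p", "1080p"]  # illustrative ladder
MIN_FPS = 5

def resolution_decision(level, frames_delivered_last_T, alpha,
                        measured_bitrate_bps, target_bitrate_bps,
                        secs_since_last_change, T=1.0):
    if secs_since_last_change <= T:
        return level                                 # changed too recently: hold
    if frames_delivered_last_T < MIN_FPS and level > 0:
        return level - 1                             # frames too large for the link
    undershooting = measured_bitrate_bps < 0.5 * target_bitrate_bps
    if alpha > 0.9 and undershooting and level < len(RESOLUTIONS) - 1:
        return level + 1                             # frames too small: raise resolution
    return level                                     # otherwise maintain
```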
## 4 Implementation
We implemented our system on top of Google's implementation of WebRTC [2].
**Congestion Controller.** We replace GCC within WebRTC with two window-based delay-sensitive algorithms, Copa [3] and RoCC [9]. We reused the logic from the original implementation of Copa [17]. Given \(rtt_{min}\), the minimum observed RTT, RoCC sets the congestion window (_cwnd_) to a small constant more than the number of bytes received in the last \((1+\gamma)rtt_{min}\) interval. To achieve high utilization with controlled delays, RoCC aims to maintain a network queueing delay of \(\gamma\cdot rtt_{min}\), where \(\gamma\) is the delay-sensitivity parameter.
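The RoCC window rule above can be written down directly; the slack constant and the delivery-log representation are illustrative.

```python
def rocc_cwnd(delivery_log, now_ms, rtt_min_ms, gamma=0.5, slack_bytes=2 * 1448):
    """cwnd = bytes delivered over the last (1 + gamma) * rtt_min, plus a small slack.

    delivery_log: list of (ack_time_ms, acked_bytes) records at the sender.
    """
    window_ms = (1 + gamma) * rtt_min_ms
    recent_bytes = sum(b for t, b in delivery_log if now_ms - t <= window_ms)
    return recent_bytes + slack_bytes
```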
**Dummy Generator.** We repurpose the padding generator in WebRTC to generate dummy packets that are within the _cwnd_ and no more than 200 bytes each. Dummy packets are ACKed by the receiver but carry a special _padding_ marking to ensure that the payload is ignored. We have implemented safeguards to limit the maximum rate of the dummy traffic to the maximum possible video bitrate (set as \(12\,\mathrm{Mbps}\)).
**Latency Safeguards.** The transport layer sets the encoder target bitrate to zero to signal a _pause_ if the oldest packet's age in the pacer queue exceeds the pacer queue pause threshold (\(\tau\)). We reuse WebRTC's support for buffering the latest unencoded camera frame. We force an _Encoder Reset_ if the
Figure 6: \(\alpha\)’s response to link and video encoder variations. \(\alpha\) picks lower values (more headroom) when the link capacity or encoder output varies significantly to maintain a good frame rate. The normalized frame rate is the ratio of achieved frame rate to maximum frame rate (\(30\,\mathrm{FPS}\)).
oldest packet age exceeds 1 second in Vidaptive by draining all the video packets in the pacer queue and signaling the video encoder to send a keyframe via an existing API call in WebRTC.
**Encoder Rate Controller.** Vidaptive has two modules to adapt the encoder to the network: encoder bitrate and resolution selection. We disabled the resolution logic in WebRTC [18] and moved the adaptation logic to occur prior to frame encoding. These modules record \(\alpha\) values and frame queueing delay samples received from the transport layer whenever a frame is sent out from the pacer. Vidaptive picks the next \(\alpha^{*}\) and frame resolution on a frame-by-frame basis by optimizing over the \(\alpha\) and frame queueing delay values over a sliding window of the last \(T\) seconds (Algorithm 1). We use \(T=1\)s by default. The sliding window ensures gradual changes in \(\alpha\) over time.
After picking \(\alpha^{*}\), the resolution module chooses whether to decrease, increase, or hold the current resolution on a per-frame basis as described in §3.4. The resolution module tracks how many consecutive frames have signaled "increase" or "decrease" and changes the resolution if the number exceeds the threshold for that signal (15 frames for "decrease" and 30 for "increase").3 It also waits at least \(T\) seconds before changing the resolution again.
Footnote 3: Vidaptive prioritizes responding to drops in capacity faster than increases.
## 5 Evaluation
We evaluate Vidaptive atop a WebRTC-based implementation on Mahimahi links. We describe our setup in §5.1 and use it to compare against existing baselines in §5.2. In §5.3, we delve deeper into Vidaptive's design components. Trace-level breakdowns of all results can be found in App. C.
### Setup
**Testbed.** Inspired by OpenNetLab [19], we built a testbed, implemented in C++, on top of WebRTC [20] that enables a headless peer-to-peer video call between two endpoints. The sender reads video frames from an input file and the receiver records the received frames to an output file. To match video frames between the sender and the receiver for visual quality and latency measurements, a unique 2D barcode is placed on each frame [7]. We emulate different network conditions between the sender and receiver by placing the receiver behind a Mahimahi [8] link shell. Vidaptive uses Copa [3] as the default CCA. All experiments are run for 2 min on a lossless link with a one-way delay of 25 ms.
**Metrics.** Two primary metrics are used to quantify the performance improvements of Vidaptive: frame quality and frame latency. Frame quality is measured by the Peak Signal-to-Noise Ratio (PSNR [21]) between received frames and the corresponding source frames. Vidaptive reports the time between _frame read_ at the sender and _frame display_ at the receiver as the frame latency; for frames that are never received, the display time is taken to be the presentation time of the next displayed frame [7]. We also report the network utilization and frame rate at the receiver.
**Network Traces.** We evaluate each scheme on a set of 16 cellular traces bundled with Mahimahi [8], and also use synthetic traces to illustrate the convergence behavior in §5.3.
**Videos.** We use a dataset of 1080p (i.e., 1920\(\times\)1080) YUV videos with a frame rate of 30 FPS, curated from YouTube; Tab. 1 in the appendix describes the details. All the experiments are on the first video of the dataset (Tab. 1) unless stated otherwise. Audio is disabled throughout the experiments.
**Baselines.** We evaluate the Google Congestion Control algorithm (GCC), WebRTC's default transport mechanism. We also evaluate Vidaptive with Copa and RoCC. We use \(\gamma=0.5\) (§4) for RoCC and \(\delta=0.9\) (see [3]) for Copa to maintain low network delay. We choose \(\lambda=0.5\) to weigh frame rate and utilization equally when optimizing the target bitrate in Vidaptive (§3.4). The pacer queue pause threshold is set to \(\tau=33ms\), and the frame queueing delay measurement interval is set to \(T=1s\) for online optimization of the target bitrate.
### Overall Comparison
We summarize Vidaptive's performance improvements over WebRTC atop GCC on all Mahimahi cellular traces in Fig. 7. The X axis (symlog [22] format) shows the P95 latency improvement, and the Y axes show the average PSNR and video bitrate improvements of Vidaptive over GCC. On nearly all traces, Vidaptive improves PSNR (1.6 dB on average). Vidaptive also improves P95 latency on 10 out of 16 traces, achieving over 2.7 seconds improvement in P95 frame latency on average across all the traces. Vidaptive has 45-380 ms higher P95 latency on 5 of the traces, although it improves average PSNR by 0.8-3.4 dB on these traces.
To better understand per-frame behavior, Fig. 8 shows the CDFs of PSNR and frame latency of all the frames across all the traces. Vidaptive achieves a better PSNR at all the percentiles by sending larger frames when possible. Overall, it improves average PSNR by 1.9 dB and P95 PSNR by 2.2 dB.
Figure 7: Average PSNR improvement vs. P95 latency improvement of Vidaptive over GCC. Vidaptive improves both P95 latency and PSNR for almost half of the traces while improving one of the two on the rest.
Since Vidaptive generally sends larger frames, its minimum latency is higher than GCC, and it slightly increases the median latency (148 ms for GCC versus 195 ms for Vidaptive). However, GCC's frame latency becomes much worse beyond the 75\({}^{\text{th}}\) percentile. The high percentiles correspond to scenarios with high link rate variability and outages, where Vidaptive's CCA (Copa) responds faster than GCC. For example, Vidaptive reduces P95 frame latency by 2687 ms compared to GCC (4120 ms \(\rightarrow\) 1433 ms).
Fig. 9 shows the distribution of the normalized improvement of the Vidaptive's metrics compared to GCC per trace. The whiskers denote P5 and P95 values, the interquartile range shows P25-P75, the horizontal line shows P50 and the dot shows the average. Because Vidaptive's CCA is more responsive, Vidaptive, on average, achieves more than 2.5\(\times\) of GCC's link utilization. Vidaptive also achieves a higher video bitrate than GCC on nearly all traces, yielding an average of \(\sim\)2\(\times\) and up to 2.7\(\times\) improvement. Vidaptive improves the P95 latency by up to \(\sim\)2x. In Fig. 7, whenever Vidaptive has a higher P95 latency, it has higher video bitrate and quality. Vidaptive has \(\sim\)10% and 30% lower frame rate on average and in the worst case compared to GCC, resulting in frame rates of 27 FPS and 21 FPS respectively. This reduction in frame rate happens during outages when, unlike GCC, Vidaptive's CCA chooses not to send any frames and avoids further congestion. This caps Vidaptive's frame rate but achieves better frame latency.
### Understanding Vidaptive's Design
**Effect of Dummy Traffic.** To quantify the effect of dummy traffic, we disable the changes we made to the target bitrate selection logic and focus on transport layer changes (§3.2). In Fig. 10, we emulate a link that starts with 5 Mbps of bandwidth for 40 s, drops to 2 Mbps for the next 40 s before jumping back to 5 Mbps. We compare the video and padding bitrate for "Copa" to Copa with dummy traffic ("Copa+Dummy"). Copa takes 6 s to match the network capacity, while Copa+Dummy takes 2 s. The dummy traffic is sent only when the video traffic cannot match the link capacity when it suddenly opens up (around 0s and 80s). Copa does not match capacity as fast because its rate on the wire is determined by the slow-reacting encoder (§2). Further, "Copa+Dummy" has a more stable steady-state bitrate than "Copa" because the dummy traffic decouples the CCA's feedback from the video encoder's variable output, enabling more accurate link capacity estimation.
**Ablation Study.** To understand the impact of different components in Vidaptive's design, we incrementally evaluate the benefits of changing the congestion control and adding dummy traffic at the transport layer (§3.2), enabling the latency safeguards (§3.3), and running the encoder bitrate and resolution selection approach described in §3.4. Fig. 11 shows the distribution of the normalized performance improvement compared to GCC on all the traces for different system variations.
In "Copa," we replace GCC with a window-based congestion control algorithm but keep the rest of the modules unchanged. Copa is more aggressive than GCC in bandwidth allocation, improving the average link utilization and video
Figure 8: CDF of frame PSNR and latency across all frames and all the traces. Vidaptive achieves higher PSNR on all percentiles while getting lower latency on higher percentiles. Vidaptive has higher latency in lower percentiles due to larger frame sizes.
Figure 10: Copa with dummy traffic exhibits faster convergence of the video bitrate to the available network capacity and maintains a smoother steady-state video bitrate.
Figure 9: Performance benefits for Vidaptive over GCC. Vidaptive achieves higher utilization and video bitrate compared to GCC. Vidaptive improves P95 latency on half the traces. Vidaptive reduces the frame rate because its CCA stops sending frames during outages to maintain low latency. The whiskers are P5 and P95, the interquartile range shows P25, P50, and P75.
bitrate by over 2\(\times\). However, the aggressiveness causes an average increase of _3.1 seconds_ in the P95 latency. The frame rate also reduces because Copa's window-based mechanism, unlike GCC, simply stops sending when it detects outages.
In "Copa+Dummy," as the name suggests, we add dummy traffic (§3.2) on top of Copa. Since dummy traffic speeds up bandwidth discovery, the video bitrate and link utilization improve over "Copa." However, the P95 frame latency is still very high compared to GCC.
In "Copa+Dummy+Latency," we enable the latency safeguards on top of "Copa+Dummy" but keep the encoder bitrate selection logic unchanged. This reduces the P95 latency (Fig. 11(c)) compared to GCC, "Copa," and "Copa+Dummy," yielding an average reduction of over _2.2 seconds_ in P95 latency compared to GCC. Since the safeguards pause encoding of frames that increase the latency, the overall frame rate, video bitrate, and utilization decrease compared to "Copa" and "Copa+Dummy."
Finally, in "Vidaptive", the system aims to find the right target video bitrate for the encoder by running the optimization described in §3.4. Because this system is trying to balance the frame rate and frame quality, the video bitrate reduces, and the frame rate increases compared to "Copa+Dummy+Latency". Moreover, the latency further decreases because of the reduction of the video bitrate. The utilization is comparable across all schemes with dummy traffic since the dummy traffic pads any encoder output to match the link rate.
**Resolution Distribution.** Vidaptive uses a different resolution scheme than WebRTC. Fig. 12 shows the CDF of all the selected resolutions during the experiment across all the traces. More than 80% of the time, Vidaptive chooses a higher resolution than WebRTC, which often translates to higher video quality. When the link capacity is very low or highly variable, Vidaptive chooses to send the lowest resolution, manifesting itself in lower resolution values in low percentiles. In contrast, WebRTC's resolution mechanism [18] reacts slowly and causes huge latency spikes. Vidaptive currently supports the resolutions shown in Fig. 12.
**Using a Different Congestion Controller.** To show that Vidaptive can work with any delay-sensitive window-based CCA, we replaced Copa with RoCC [9]. Fig. 13(a) shows the PSNR and P95 latency improvements of Vidaptive (RoCC) compared to GCC. Vidaptive (RoCC) follows similar trends as Vidaptive and improves the average video bitrate on almost all traces while improving the P95 latency for half of them. Fig. 13(b) shows the distribution of the normalized performance improvements of Vidaptive (RoCC) over GCC on all traces. Like Vidaptive, Vidaptive (RoCC) achieves a higher link utilization and video bitrate on average (more than 3\(\times\) and 2\(\times\) respectively), while getting an improvement of up to \(\sim\)2\(\times\) in P95 latency and an increase of at most 360 ms. Vidaptive (RoCC)'s frame rate is \(\sim\)16% lower on average and 30% lower in the worst case than GCC, resulting in frame rates of 25 FPS and 21 FPS, respectively.
**Evaluation on More Videos.** We evaluated Vidaptive on all the videos described in SS5.1. Fig. 14 shows the average PSNR improvement against the P95 latency improvement over GCC. Vidaptive improves the average PSNR for \(\sim\) 90% of the settings while increasing the P95 latency by at most 455 ms. Since Vidaptive shows similar trends for different videos, we focus on one video and Copa for the remaining experiments.
Figure 11: Performance benefits over GCC with different Vidaptive components. “Copa” improves video bitrate and utilization but hurts frame latency. Dummy traffic improves video bitrate and utilization. Latency knobs in “Copa+Dummy+Latency” reduce the latency by _seconds_. With the encoder bitrate and resolution selection, Vidaptive has higher frame rate than previous versions. Since schemes with Copa do not send frames in outages, they have lower frame rate than GCC. The whiskers are P5 and P95, the interquartile range shows P25, P50, and P75.
Figure 12: CDF of frame resolutions across all the traces. Vidaptive selects higher resolutions but is conservative during outages by reducing the resolution quickly in lower percentiles.
### Effect of Parameter Choices
**Effect of \(\lambda\).** We evaluate the impact of the parameter \(\lambda\), which trades off video bitrate against frame rate (§3.4). Fig. 15 shows the distribution of the normalized improvement of the metrics relative to GCC on all of the traces with \(\lambda=0.2,0.5,0.7,0.99\). When \(\lambda\) increases, the optimization framework in §3.4 favors a higher frame rate over the video bitrate, hence video bitrate decreases (Fig. 15(b)), and average frame rate increases (Fig. 15(d)). Since Vidaptive uses dummy traffic, changes in the video bitrate do not affect CCA estimations and consequently do not change the overall link utilization. As a result, the overall link utilization (sum of the video and padding bitrates), shown in Fig. 15(a), does not change by selecting a different \(\lambda\). Vidaptive has safeguards to control the maximum latency; hence, changing \(\lambda\) does not significantly affect the P95 frame latency, as seen in Fig. 15(c). Note that during any outages, Vidaptive does not send any frames, which caps Vidaptive's frame rate. We chose \(\lambda=0.5\) as the default because it maintains a good video bitrate while keeping the P95 latency low with minimal reduction in frame rate (\(\sim\)10%).
**Pacer Queue Pause Threshold (\(\tau\)).** Fig. 16 shows how the pacer queue pause threshold \(\tau\) (§3.3) affects Vidaptive. We tested Vidaptive with \(\tau=33\), 500, 1000 ms. Changing \(\tau\) does not change the network utilization (Fig. 16(a)) because dummy traffic decouples congestion control from the encoder, padding any encoder output to match the link rate. As \(\tau\) increases, the frame rate score increases (Eq. 1), and the encoder bitrate selection logic enforces a higher video bitrate (Fig. 16(b)). However, these higher-quality frames spend more time in the pacer queue and experience higher P95 latencies (Fig. 16(c)). At higher \(\tau\), the _Encoder Pause_ threshold is higher, so more frames are encoded, resulting in a higher frame rate (Fig. 16(d)). Vidaptive selects \(\tau=33ms\) as it has low P95 latency, relatively high frame rate and video bitrates when compared to GCC.
**Optimization Time Interval (\(T\)).** We show the impact of \(T\), the interval over which the frame rate and bitrate scores are calculated to strike a balance between them (§3.4). Fig. 17
Figure 14: Average PSNR improvement vs. P95 latency improvement of Vidaptive over GCC for all the videos in the dataset. Each color denotes one trace. Vidaptive improves both P95 latency and PSNR for about half of the traces and videos while improving one of the two metrics on the rest.
Figure 13: Performance of Vidaptive using a different CCA. Vidaptive (RoCC) has a similar performance to Vidaptive (Copa).
Figure 15: Effect of \(\lambda\) on Vidaptive’s performance. Increasing the value of \(\lambda\) increases the frame rate and decreases the video bitrate and quality. The whiskers are P5 and P95, the interquartile range shows P25, P50, and P75.
shows performance improvements of Vidaptive compared to GCC for \(T=100,1000,10000\) ms. Again, Vidaptive's link utilization (Fig. 17a) is comparable across all variants because of dummy traffic. A smaller \(T\) means that Vidaptive reacts to any sudden and local changes in recent frame queueing delay data. \(T=100\) ms means that the encoder bitrate selection looks at at most three measurements for a camera with \(30\,\mathrm{FPS}\) to optimize \(\alpha\). Any temporary decrease in the few frame queueing delay samples results in a higher encoder target bitrate that affects the slow encoder for a long period of time, resulting in a higher video bitrate, a lower frame rate, and consequently high latency. On the other hand, a large \(T\) makes the system insensitive to recent changes in frame queueing delay, and a few large frame queueing delay measurements will result in lower values of \(\alpha\), which reduces the video bitrate. Further, because the resolution changes at most every \(T\), a 10-second \(T\) does not lower the resolution in time during outages, causing a reduction in the frame rate and severely affecting the latency. We picked \(T=1000ms\) for Vidaptive to ensure the bitrate selection is relatively stable while maintaining sensitivity to the recent frame queueing delay samples.
## 6 Related Work
**Congestion Control**. End-to-end congestion control approaches can be broadly categorized into delay-based [1, 23, 24, 6, 10, 25, 4, 26] or buffer-filling schemes [27, 26]. Delay-based protocols aim to minimize queuing by adjusting their sending rate based on queuing delay [28, 10, 25], or delay-gradients [1, 6, 24]. Buffer-filling algorithms [29, 30, 26] send as much traffic as possible until loss or congestion is detected. Some approaches like Nimbus [31] switch between delay-based and buffer-filling modes to improve fairness against competing traffic while maintaining high utilization. However, limited attention has been paid to congestion control for application-limited flows [32, 33] like video traffic that is generated at fixed intervals determined by the frame rate.
**WebRTC Systems.** Many video applications use Web Real-time Communication (WebRTC) [2] to deliver real-time video. GCC [34], WebRTC's rate control, uses delay gradients to adjust the sending rate. However, GCC's conservative behavior coupled with the variance in encoder output results in either under-utilization or latency spikes.
Salsify [7] previously observed a mismatch between video encoder output and available capacity, and rectified it by encoding multiple versions of the same frame and picking the better match. This requires changing the video codec at the sender and the receiver, making it hard to deploy. Vidaptive instead matches encoder output to network capacity without changes to the encoder. Adaptive bitrate algorithms [35, 36, 37, 38, 39] solve a similar problem for on-demand video using information about available bandwidth, buffer size, and current bitrate to determine the encoder's target bitrate. A recent proposal called SQP [40] achieves low end-to-end frame delay for interactive video streaming applications but operates at much higher bitrates than Vidaptive is designed for.
Figure 16: Effect of pacer queue pause threshold (\(\tau\)) on Vidaptive. As \(\tau\) increases, the P95 latency increases as frames spend a longer wait time in the pacer queue but results in higher video bitrates. Increasing \(\tau\) first decreases the received frame rate as frames spend a long time in the pacer queue, but then it increases the received frame rate because _Encoder Reset_ is triggered and new frames are encoded at lower resolution. The whiskers are P5 and P95, the interquartile range shows P25, P50, and P75.
Figure 17: Performance comparison of Vidaptive using different intervals \(T\), for encoder bitrate selection. The duration of \(T\) affects the sensitivity to recent frame queueing delay measurements, and consequently the frame rate, latency and video bitrate of the system. The whiskers are P5 and P95, the interquartile range shows P25, P50, and P75.
## 7 Conclusion
This paper proposes Vidaptive, a new rate control mechanism for low-latency video applications that is highly efficient and adapts rapidly to changing network conditions without modifications to the video encoder. Vidaptive injects "dummy" traffic to make video traffic appear like a backlogged flow running a delay-based congestion controller. Vidaptive also continuously adapts the frame rate, encoder's target bitrate, and video resolution to reduce discrepancies between the encoder output bitrate and link rate. We leave to future work an exploration of leveraging dummy traffic for purposes like FEC or keyframes, and the benefits from functional encoders like Salsify [7] in Vidaptive for improved real-time experience.
|
2309.07964 | Improved Shortest Path Restoration Lemmas for Multiple Edge Failures:
Trade-offs Between Fault-tolerance and Subpaths | The restoration lemma is a classic result by Afek, Bremler-Barr, Kaplan,
Cohen, and Merritt [PODC '01], which relates the structure of shortest paths in
a graph $G$ before and after some edges in the graph fail. Their work shows
that, after one edge failure, any replacement shortest path avoiding this
failing edge can be partitioned into two pre-failure shortest paths. More
generally, this implies an additive tradeoff between fault tolerance and
subpath count: for any $f, k$, we can partition any $f$-edge-failure
replacement shortest path into $k+1$ subpaths which are each an
$(f-k)$-edge-failure replacement shortest path. This generalized result has
found applications in routing, graph algorithms, fault tolerant network design,
and more.
Our main result improves this to a multiplicative tradeoff between fault
tolerance and subpath count. We show that for all $f, k$, any $f$-edge-failure
replacement path can be partitioned into $O(k)$ subpaths that are each an
$(f/k)$-edge-failure replacement path. We also show an asymptotically matching
lower bound. In particular, our results imply that the original restoration
lemma is exactly tight in the case $k=1$, but can be significantly improved for
larger $k$. We also show an extension of this result to weighted input graphs,
and we give efficient algorithms that compute path decompositions satisfying
our improved restoration lemmas. | Greg Bodwin, Lily Wang | 2023-09-14T18:01:00Z | http://arxiv.org/abs/2309.07964v2 | Improved Shortest Path Restoration Lemmas for Multiple Edge Failures: Trade-offs Between Fault-tolerance and Subpaths\({}^{*}\)
###### Abstract
The _restoration lemma_ is a classic result by Afek, Bremler-Barr, Kaplan, Cohen, and Merritt [PODC '01], which relates the structure of shortest paths in a graph \(G\) before and after some edges in the graph fail. Their work shows that, after one edge failure, any replacement shortest path avoiding this failing edge can be partitioned into two pre-failure shortest paths. More generally, this implies an _additive_ tradeoff between fault tolerance and subpath count: for any \(f,k\), we can partition any \(f\)-edge-failure replacement shortest path into \(k+1\) subpaths which are each an \((f-k)\)-edge-failure replacement shortest path. This generalized result has found applications in routing, graph algorithms, fault tolerant network design, and more.
Our main result improves this to a _multiplicative_ tradeoff between fault tolerance and subpath count. We show that for all \(f,k\), any \(f\)-edge-failure replacement path can be partitioned into \(O(k)\) subpaths that are each an \((f/k)\)-edge-failure replacement path. We also show an asymptotically matching lower bound. In particular, our results imply that the original restoration lemma is exactly tight in the case \(k=1\), but can be significantly improved for larger \(k\). We also show an extension of this result to weighted input graphs, and we give efficient algorithms that compute path decompositions satisfying our improved restoration lemmas.
## 1 Introduction
Suppose we want to route information, traffic, goods, or anything else along shortest paths in a distributed network. In practice, network edges can be prone to _failures_, in which a link is temporarily unusable as it awaits repair. It is therefore desirable for a system to be able to adapt to these failures, efficiently rerouting paths on the fly into new replacement shortest paths that avoid the currently-failing edges.
An algorithm that repairs a shortest path routing table following one or more edge failures is called a _restoration algorithm_ [BBAK\({}^{+}\)01]. An ideal restoration algorithm will avoid recomputing shortest paths from scratch after each new failure event, instead leveraging its knowledge of the pre-failure shortest paths to speed up the computation of the post-failure replacement shortest paths. Therefore, when designing restoration algorithms, it is often helpful to understand exactly how shortest paths in a graph can evolve following edge failures. A _restoration lemma_ is the general name for a structural result relating the form of pre-failure shortest paths to post-failure shortest paths in a graph, named for its applications in restoration algorithms.
The original restoration lemma was pioneered in a classic paper by Afek, Bremler-Barr, Kaplan, Cohen, and Merritt [BBAK\({}^{+}\)01]. All graphs in this discussion are undirected and unweighted, until otherwise indicated.
**Definition 1** (Replacement Paths).: A path \(\pi\) in a graph \(G=(V,E)\) is an \(f\)_-fault replacement path_ if there exists a set of edges \(F\subseteq E,|F|\leq f\) such that \(\pi\) is a shortest path in the graph \(G\setminus F\).
**Theorem 2** (Original Restoration Lemma [BBAK\({}^{+}\)01]).: _In any graph \(G\), every \(f\)-fault replacement path can be partitioned into \(f+1\) subpaths that are each a shortest path in \(G\)._
This restoration lemma suggests a natural approach for restoration algorithms: when \(f\) edges fail and an \(s\rightsquigarrow t\) shortest path is no longer usable, we can find a replacement \(s\rightsquigarrow t\) shortest path by searching only over \(s\rightsquigarrow t\) paths that can be formed by concatenating \(f+1\) shortest paths that we have already computed in the current routing table. Up to some subtleties involving shortest path tiebreaking [1], this approach works, and has been experimentally validated as an efficient restoration strategy [1]. It has also found widespread theoretical application, e.g. in pricing algorithms [11], replacement path algorithms [1, 1,
* _(Lower Bound) There are graphs \(G\) and \(f\)-fault replacement paths \(\pi\) that cannot be partitioned into \(2k\) subpaths that are each an \((\lfloor f/k\rfloor-2)\)-fault replacement path in \(G\)._
In our view, Theorem 4 contains both good news and bad news for the area. The good news is that the restoration lemma tradeoff is in fact multiplicative in nature, and so it can be substantially improved for most choices of \(k\). This potentially opens up new avenues for improved restoration algorithms for routing table recovery, as explored by Afek et al. [1]. The bad news is that, in the case \(k=1\), our new lower bound shows that the previous restoration lemma was tight: there are examples in which one cannot decompose an \(f\)-fault replacement path into two replacement paths avoiding \(f-2\) faults each. This case \(k=1\) is particularly important in applications, especially to spanner and preserver problems [1, 1], and so this lower bound may close a promising avenue for progress on these applications.
### Weighted Restoration Lemmas and Second Main Result
The original paper by Afek et al. [1] also proved a _weighted_ restoration lemma, which gives a weaker decomposition, but which holds also for weighted input graphs:
**Theorem 5** (Weighted Restoration Lemma [1]).: _For any **weighted** graph \(G\) and any \(1\leq k\leq f\), every \(f\)-fault replacement path \(\pi\) can be partitioned into \(k+1\) subpaths and \(k\) individual edges, where each subpath in the partition is an \((f-k)\)-fault replacement path in \(G\)._
More specifically, this theorem promises that the subpaths and individual edges occur in an alternating pattern (although some of these subpaths in this pattern may be empty). One can again ask whether this additive tradeoff between subpath count and fault tolerance per subpath is optimal. We show that it is not, and that it can be improved to a multiplicative tradeoff, similar to Theorem 4.
**Theorem 6** (Main Result, Weighted Setting).: _For any **weighted** graph \(G\) and any \(1\leq k\leq f\), every \(f\)-fault replacement path \(\pi\) can be partitioned into \(O(k)\) subpaths and \(O(k)\) individual edges, where each subpath in the partition is an \((f/k)\)-fault replacement path in \(G\)._
For most graphs of interest, this theorem can be simplified. For example, suppose we consider the setting of _metric_ input graphs, in which every edge must be a shortest path between its endpoints. Then we can consider the \(O(k)\) individual edges in the decomposition to be \(0\)-fault replacement paths, and so we could correctly state that \(\pi\) can be partitioned into \(O(k)\) subpaths that are each at most \((f/k)\)-fault replacement paths. Every unweighted graph is a metric graph, and so in this sense our weighted main result generalizes our unweighted upper bound. However, we also note that our weighted main result cannot be simplified _in general_: one can easily construct weighted graphs containing edges \((u,v)\) that are \(f\)-fault replacement paths between their endpoints, but not \((f-1)\)-fault replacement paths, and therefore any weighted restoration lemma will need to include some exceptional edges, as in [1] and Theorem 6.
### Algorithmic Considerations
In the main body of the paper, our restoration lemmas (both weighted and unweighted) are proved using a simple but slow greedy decomposition strategy to determine the subpaths; essentially, we repeatedly peel off the longest possible prefix from the input path \(\pi\) that is an \(f/k\)-fault replacement path. All of our technical work is in proving a bound of \(O(k)\) on the number of subpaths that arise from this decomposition. However, we note that this process requires exponential time in the number of faults \(f\). That is, given a subpath \(\pi_{i}\subseteq\pi\), we can straightforwardly test whether \(\pi_{i}\) is an \(f/k\)-fault replacement path via brute force search over every subset of faults \(F^{\prime}\subseteq F,|F^{\prime}|\leq f/k\). This requires \(\operatorname{poly}(n)\cdot\exp(f)\) time, and it is not clear if this \(\exp(f)\) factor can be improved.
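For concreteness, the following Python sketch (using networkx) implements this greedy decomposition with the naive, exponential-time feasibility test in the unweighted setting; it is not the polynomial-time algorithm of Section 6, and it assumes every edge of the input path exists in \(G\), so a single edge is always a \(0\)-fault replacement path.

```python
from itertools import combinations
import networkx as nx

def is_replacement_subpath(G, sub, F, budget):
    """Is `sub` (a node list) a shortest path in G \\ F' for some F' ⊆ F with |F'| <= budget?"""
    length = len(sub) - 1
    for r in range(budget + 1):
        for Fp in combinations(F, r):
            H = G.copy()
            H.remove_edges_from(Fp)
            if all(H.has_edge(u, v) for u, v in zip(sub, sub[1:])) and \
               nx.shortest_path_length(H, sub[0], sub[-1]) == length:
                return True
    return False

def greedy_decompose(G, pi, F, budget):
    """Greedily peel off the longest prefix of `pi` that is a budget-fault replacement path."""
    pieces, start = [], 0
    while start < len(pi) - 1:
        end = start + 1
        while end + 1 < len(pi) and is_replacement_subpath(G, pi[start:end + 2], F, budget):
            end += 1
        pieces.append(pi[start:end + 1])
        start = end
    return pieces
```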
We thus revisit the decomposition strategy in Section 6, and show a more involved algorithm that implements our restoration lemmas in \(\operatorname{poly}(n,f)\) time. That is:
**Theorem 7** (Unweighted Algorithmic Restoration Lemma).: _There is an algorithm that takes as input a graph \(G\), a set \(F\) of \(|F|=f\) edge faults, a shortest path \(\pi\) in \(G\setminus F\), and a parameter \(k\), and which returns:_
* _A partition_ \(\pi=\pi_{0}\circ e_{0}\circ\pi_{1}\circ\cdots\circ\pi_{q}\) _into_ \(q=O(k)\) _subpaths, and_
* _Fault sets_ \(F_{0},\ldots,F_{q}\subseteq F\) _with each_ \(|F_{i}|\leq f/k\)_, such that each path_ \(\pi_{i}\) _in the decomposition is a shortest path in_ \(G\setminus F_{i}\)__
_(hence the algorithm implements Theorem 4). This algorithm runs in polynomial time in both the number of nodes \(n\) and the number of faults \(f\)._
The core of our new decomposition approach is a reduction to the algorithmic version of Hall's theorem; this is somewhat involved, and so we overview it in more depth in the next part of this introduction. Using roughly the same algorithm, we also show the algorithmic restoration lemma in the weighted setting.
**Theorem 8** (Weighted Algorithmic Restoration Lemma).: _There is an algorithm that takes as input a weighted graph \(G\), a set \(F\) of \(|F|=f\) edge faults, a shortest path \(\pi\) in \(G\setminus F\), and a parameter \(k\), and which returns:_
* _A partition_ \(\pi=\pi_{0}\circ e_{0}\circ\cdots\circ\pi_{q-1}\circ e_{q-1}\circ\pi_{q},\) _where each_ \(\pi_{i}\) _is a (possibly empty) subpath, each_ \(e_{i}\) _is a single edge, and_ \(q=O(k)\)_, and_
* _Fault sets_ \(F_{0},\ldots,F_{q}\subseteq F\) _with each_ \(|F_{i}|\leq f/k\)_, such that each path_ \(\pi_{i}\) _in the decomposition is a shortest path in_ \(G\setminus F_{i}\)__
_(hence the algorithm implements Theorem 6). This algorithm runs in polynomial time in both the number of nodes \(n\) and the number of faults \(f\)._
### Technical Overview of Upper Bounds
The more involved parts of the paper are the upper bound in Theorem 4, and Theorem 6. We will overview the proof in the unweighted setting (Theorem 4) here. The weighted setting carries a few additional details, but more or less follows the same proof strategy.
Let \(\pi\) be an \(f\)-fault replacement path in an input graph \(G\) with endpoints \((s,t)\); in particular, let \(F\) be a set of \(|F|\leq f\) edge faults, and suppose that \(\pi\) is a shortest \(s\rightsquigarrow t\) path in the graph \(G\setminus F\). We are also given a parameter \(f^{\prime}<f\), and our goal is to partition \(\pi\) into as few subpaths as possible, subject to the constraint that each subpath is a replacement path avoiding at most \(f^{\prime}\) faults.
The Partition of \(\pi\). We use a simple greedy process to determine the partition of \(\pi\). We will determine a sequence of nodes \((s=x_{0},x_{1},\ldots,x_{k},x_{k+1}=t)\) along \(\pi\), which form the boundaries between subpaths in the decomposition. Start with \(s=:x_{0}\), and given node \(x_{i}\), define \(x_{i+1}\) to be the furthest node following \(x_{i}\) such that the subpath \(\pi[x_{i},x_{i+1}]\) is an \(f^{\prime}\)-fault replacement path. We will denote the subpath \(\pi[x_{i},x_{i+1}]\) as \(\pi_{i}\), and so the decomposition is
\[\pi=\pi_{0}\circ\cdots\circ\pi_{k}.\]
We will let \(F_{i}\subseteq F,|F_{i}|\leq f^{\prime}\) be an edge set such that \(\pi_{i}\) is a shortest \(x_{i}\rightsquigarrow x_{i+1}\) path in the graph \(G\setminus F_{i}\). Each subpath \(i\) might allow several valid choices of fault set \(F_{i}\); it will be important for our argument to define \(F_{i}\) to be a fault set of minimum size \(|F_{i}|\).
Our goal is now to show that the parameter \(k\), defined as (one fewer than) the number of subpaths that arise from the greedy decomposition, satisfies \(kf^{\prime}\leq O(f)\).
Argument Under Simplifying Assumptions. Our proof strategy will be to prove that an arbitrary faulty edge \(e\in F\) can appear in only a constant number of subpath fault sets \(F_{i}\), which implies that \(kf^{\prime}\leq O(f)\) by straightforward counting. To build intuition, let us see how the proof works under two rather strong simplifying assumptions:
* **(Equal Subpath Assumption)** We will assume that all subpaths in the decomposition have equal length: \(|\pi_{0}|=\cdots=|\pi_{k}|\).
* **(First Fault Assumption)** Let us say that a _shortcut_ for a subpath \(\pi_{i}\) is an alternate \(x_{i}\rightsquigarrow x_{i+1}\) path in the original graph \(G\) that is strictly shorter than \(\pi_{i}\). Every shortcut must contain at least one fault in \(F_{i}\), and conversely, every fault in \(F_{i}\) lies on at least one shortcut (or else it may be dropped from \(F_{i}\)). Our second simplifying assumption is that, more specifically, for each \(e\in F_{i}\) there exists a shortcut \(\sigma\) for \(\pi_{i}\) such that \(e\) is the _first_ fault in \(F_{i}\) on \(\sigma\).
With these two assumptions in hand, we are ready to prove that each faulty edge \(e\) appears in only \(O(1)\) many fault sets \(F_{i}\). Suppose for contradiction that there are three separate subpaths that all have shortcuts that use \(e\) as their first edge, and moreover that these shortcuts use \(e\) with the same orientation. Consider the first and last of these shortcut prefixes, which we will denote as \(q(x_{1},u)\) and \(q(x_{3},u)\). In Figure 2, the shortcuts are represented as dotted paths, and \(q(x_{1},u),q(x_{3},u)\) are colored red. Notice that \(q(x_{1},u)\cup q(x_{3},u)\) form an alternate \(x_{1}\leadsto x_{3}\) path. Since \(e\) is assumed to be the first fault on these shortcuts, this alternate \(x_{1}\leadsto x_{3}\) path avoids all faults in \(F\). Additionally, by definition of shortcuts we have
\[|q(x_{1},u)|+|q(x_{3},u)|<|\pi_{1}|+|\pi_{3}|\,.\]
Since we have assumed that all subpaths have the same length, we can amend this to
\[|q(x_{1},u)|+|q(x_{3},u)|<|\pi_{1}|+|\pi_{2}|\,.\]
But this implies that \(q(x_{1},u)\cup q(x_{3},u)\) forms an \(x_{1}\leadsto x_{3}\) path that is strictly shorter than the one used by \(\pi\), which contradicts that \(\pi\) is a shortest path in \(G\setminus F\). This completes the simplified proof, but the challenge is now to relax our two simplifying assumptions, which are currently doing a lot of work in the argument.
Relaxing the Equal-Subpath-Length Assumption. The equal-subpath-length assumption is the easier of the two to relax. It is only used in one place in the previous proof: to replace \(|\pi_{3}|\) with \(|\pi_{2}|\) in the inequality. When we drop the assumption, if we get lucky and have \(|\pi_{2}|\geq|\pi_{3}|\), then the previous proof still works. The bad case is when \(|\pi_{2}|<|\pi_{3}|\).
To handle this bad case, we follow a proof strategy from [1]. Let us say that a subpath is _pre-light_ if it is no longer than the preceding subpath, or _post-light_ if it is no longer than the following subpath. In the above example, \(\pi_{2}\) is pre-light if we have \(|\pi_{2}|\leq|\pi_{1}|\), and it is post-light if \(|\pi_{2}|\leq|\pi_{3}|\). It is possible for a particular subpath to be both pre- and post-light, or for a particular subpath to be neither. A simple counting argument shows that either a constant fraction (nearly half) of the subpaths are pre-light, or a constant fraction are post-light. We will specifically assume in the following discussion that a constant fraction of the subpaths are post-light; the other case is symmetric.
Post-light subpaths are exactly those that avoid the previous bad case, and so now we can simply restrict the previous counting argument to the post-light subpaths only. That is, we can argue that for each fault \(e=(u,v)\) considered with orientation, there are only constantly many post-light subpaths for which it appears as the first fault of a shortcut. The same counting argument then implies an upper bound of \(|F_{i}|\leq O(f/k)\) for the fault sets \(F_{i}\) associated to post-light subpaths \(\pi_{i}\), which completes the proof.
This still uses the first-fault assumption, and we next explain how this can be relaxed, which we regard as the main technical part of the paper.
Relaxing the First-Fault Assumption. Let us now consider the case where there is a fault \(e\in F_{i}\) that is _not_ the first fault of any shortcut for \(\pi_{i}\). We can still assume that there exists at least one shortcut \(\sigma\) for \(\pi_{i}\)
Figure 2: Under the equal subpath and first fault assumptions, we can reach contradiction if we assume that there are three different subpaths that all have shortcuts that use \(e\) as their first fault.
with \(e\in\sigma\) (otherwise, we can safely drop \(e\) from \(F_{i}\)). Let \(e^{*}\) be the first fault along that shortcut \(\sigma\). We will shift the focus of our counting argument. Previously, we considered each \((e\in F_{i},\pi_{i})\) as a pair, and our goal was to argue that faults \(e\) can only be paired with a constant number of subpaths \(\pi_{i}\). Now, our strategy is to map the pair \((e\in F_{i},\pi_{i})\) to the different pair \((e^{*},\pi_{i})\), and our goal is to argue that each fault \(e^{*}\) can only be paired with a constant number of subpaths \(\pi_{i}\). We call these new pairs \((e^{*},\pi_{i})\)_Fault-Subpath (FS) Pairs_, and we formally describe their generation in Section 4.2. (We note that, for a technical reason, we actually generate FS pairs using _augmented subpaths_ that attach one additional node to \(\pi_{i}\) - but to communicate intuition about our proof, we will ignore this detail for now.)
Although we can bound the number of FS pairs \((e^{*},\pi_{i})\) as before, this only implies our desired bound on the size of the fault sets \(|F_{i}|\) if we can _injectively_ map each pair \((e\in F_{i},\pi_{i})\) to a _distinct_ FS pair \((e^{*},\pi_{i})\). The main technical step in this part of the proof is to show that this injective mapping is possible. Let \(\Gamma_{i}\) be a bipartite graph between vertex sets \(F_{i}\) and \(F\). Put an edge between nodes \(e\in F_{i},e^{*}\in F\) iff there exists a shortcut \(\sigma\) for \(\pi_{i}\), in which \(e\in\sigma\) and \(e^{*}\) is the first fault in \(\sigma\). An injective mapping to FS pairs corresponds to a matching in \(\Gamma_{i}\) of size \(|F_{i}|\), i.e., a matching of maximum possible size (given that one of the sides of the bipartition has only \(|F_{i}|\) nodes). The purpose of this graph construction is to enable the following new connection to Hall's theorem:
**Lemma 9** (Hall's Theorem).: _The following are equivalent:_
* _The graph_ \(\Gamma_{i}\) _has a matching of size_ \(|F_{i}|\)_. (Equivalently, one can associate each pair_ \((e\in F_{i},\pi_{i})\) _to a_ _unique_ _FS pair.)_
* _There does not exist a subset of faults_ \(F^{\prime}_{i}\subseteq F_{i}\) _whose neighborhood in_ \(\Gamma_{i}\) _is strictly smaller than_ \(F^{\prime}_{i}\) _itself (that is,_ \(|N(F^{\prime}_{i})|<|F^{\prime}_{i}|\)_)._
In fact, we show that the latter property is implied by minimality of \(F_{i}\). If there is a violating subset \(F^{\prime}_{i}\subseteq F_{i}\) with \(|N(F^{\prime}_{i})|<|F^{\prime}_{i}|\), then we can replace \(F^{\prime}_{i}\) with \(N(F^{\prime}_{i})\), and argue that \(\pi_{i}\) is still a replacement shortest path under this smaller set of edge failures. See Lemma 18 and surrounding discussion for details.
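This matching step can be sketched directly; the helper `first_fault_options(e)` is a hypothetical stand-in for enumerating, over the shortcuts for \(\pi_{i}\) that contain \(e\), the first fault on each shortcut, and the maximum matching is computed with networkx.

```python
import networkx as nx
from networkx.algorithms import bipartite

def saturating_matching(F_i, first_fault_options):
    """Match each e in F_i to a distinct first fault e*, when Hall's condition holds."""
    gamma = nx.Graph()
    gamma.add_nodes_from((("L", e) for e in F_i), bipartite=0)
    for e in F_i:
        for e_star in first_fault_options(e):
            gamma.add_node(("R", e_star), bipartite=1)
            gamma.add_edge(("L", e), ("R", e_star))
    left = {("L", e) for e in F_i}
    matching = bipartite.maximum_matching(gamma, top_nodes=left)
    # By Hall's theorem (using minimality of F_i), every e in F_i is matched,
    # which yields the injective map from pairs (e, pi_i) to FS pairs (e*, pi_i).
    return {e: matching[("L", e)][1] for e in F_i if ("L", e) in matching}
```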
## 2 Preliminaries
**Definition 10**.: Relative to a value of \(f\), we call the pair \((q,r)\)**restorable** if in every graph \(G\), any \(f\)-fault replacement path can be partitioned into \(q\) subpaths which are each \(r\)-fault replacement subpaths in \(G\).
Throughout this paper, we'll use the following notation in discussing restoration. We'll denote the set of \(f\) faults as \(F\). We'll assume that our fault-avoiding replacement path connects vertex \(s\) to vertex \(t\), and denote it as \(\pi(s,t\mid F)\). Additionally, \(\pi(s,t\mid F)[u,v]\) will denote the subpath of \(\pi(s,t\mid F)\) between vertices \(u\) and \(v\). We will denote the \(q\) \(r\)-fault replacement subpaths as \(\pi(x_{i},x_{i+1}\mid F_{i+1})\) with \(x_{0}:=s\) and \(x_{q}:=t\), and each fault (sub)set \(|F_{i}|\leq r\). Then \(\pi(s,t\mid F)\) can be represented as
\[\pi(s,t\mid F)=\pi(x_{0},x_{1}\mid F_{1})\circ\pi(x_{1},x_{2}\mid F_{2})\circ \ldots\circ\pi(x_{q-1},x_{q}\mid F_{q}).\]
Equivalently, for each \(i\),
\[\pi(s,t\mid F)[x_{i},x_{i+1}]=\pi(x_{i},x_{i+1}\mid F_{i+1}).\]
Figure 3: In order to relax the first fault assumption, instead of counting \((e_{1},\pi_{i})\) and \((e_{2},\pi_{i})\) as pairs, we can map these to distinct FS pairs \((e^{*}_{1},\pi_{i}),(e^{*}_{2},\pi_{i})\). Our main technical step is to show that this distinct mapping is always possible.
**Remark 11**.: (Monotonicity of Restorability) If \((q,r)\) is restorable, then both \((q+1,r)\) and \((q,r+1)\) are restorable. Equivalently, if \((q,r)\) is not restorable, neither \((q-1,r)\) nor \((q,r-1)\) are restorable.
## 3 Lower Bounds
**Proposition 12**.: _For all \(f\geq 2\), \((2,f-2)\) is not restorable._
Proof.: We will first assume for convenience that \(f\) is even, and return to the case where \(f\) is odd at the end. Let \(g=f/2\) and let \(G_{f}\) be the graph as illustrated in Figure 4. Formally: the vertices of \(G_{f}\) are \(1,2\ldots N:=2^{g+1}-1\) (labeled clockwise in Figure 4), and its edge set is \(E_{1}\cup E_{2}\cup E_{3}\), where
\[E_{1}:=\{(2^{k},2^{g+1}-2^{k+2}),0\leq k\leq g-3\}\]
\[E_{2}:=\{(2^{k+2},2^{g+1}-2^{k}),0\leq k\leq g-3\}\]
\[E_{3}:=\{(i,i+1),1\leq i\leq N-1\}\]
In the diagram, \(E_{1}\) is indicated in blue and slopes upwards to the right, and \(E_{2}\) is indicated in yellow and slopes upwards to the left. \(E_{3}\) is in black and forms the outer curve. Let \(F:=E_{1}\cup E_{2}\), and notice there is a unique replacement path \(\pi(1,N\mid F)\) which consists of \(E_{3}\), the outer curve. Note that \(G_{f}\) is symmetric about the vertex \(m=2^{g}\), which is also the midpoint of \(\pi(1,N\mid F)\).
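To make the construction concrete, the following is a minimal Python sketch of \(G_{f}\) for even \(f\); the function name and return format are illustrative choices, and for very small \(f\) the sets \(E_{1}\) and \(E_{2}\) are empty.

```python
def build_gadget(f):
    """Build the lower-bound gadget G_f for even f, following the construction above."""
    g = f // 2
    N = 2 ** (g + 1) - 1                                                  # vertices are 1..N
    E1 = {(2 ** k, 2 ** (g + 1) - 2 ** (k + 2)) for k in range(g - 2)}    # 0 <= k <= g - 3
    E2 = {(2 ** (k + 2), 2 ** (g + 1) - 2 ** k) for k in range(g - 2)}
    E3 = {(i, i + 1) for i in range(1, N)}                                # the outer curve
    faults = E1 | E2                                                      # F := E1 union E2
    return N, E1, E2, E3, faults
```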
Consider any partition of the path \(\pi(1,N\mid F)\) into two \((f-1)\)-fault replacement paths with fault sets \(F_{1}\) and \(F_{2}\), and let \(x\) be the vertex on which we concatenate them. Label the two subpaths as \(\pi(1,x\mid F_{1})\) and \(\pi(x,N\mid F_{2})\). Define the "half-arcs" of this graph as \(\pi(1,m\mid F)\) and \(\pi(m,N\mid F)\), the two subpaths
Figure 4: If the blue and yellow edges fail (i.e. all straight-line edges on the inside of the outer semicircle), then we can’t partition the remaining shortest path (black edges along the outer semicircle) into two subpaths that are both \((f-2)\)-fault replacement paths.
partitioning \(\pi(1,N\mid F)\) into equal parts divided at midpoint \(m\). (We note that this partitioning will be used again in Lemma 14 as well.)
Any choice of \(x\) which divides \(\pi(1,N\mid F)\) into two \((f-1)\)-fault replacement subpaths will have at least one of the two subpaths entirely containing one of the half-arcs. Since the construction is symmetric around the vertex \(m\), we may assume without loss of generality that \(x\geq m\), and \(\pi(1,x)\) contains \(\pi(1,m\mid F)\).
With this assumption, we will proceed to show that \(F_{1}\) must contain every edge in \(E_{1}\) and can exclude at most one edge of \(E_{2}\), and hence \(|F_{1}|\geq f-1\).
First Part (\(E_{1}\subseteq F_{1}\)).Consider \(\pi(1,x\mid F_{1})\); suppose for a contradiction that there is an edge \((2^{c},2^{g+1}-2^{c+2})\in E_{1}\setminus F_{1}\). Then we can construct a shortcut to \(x\) from \(1\) by traversing through \((2^{c},2^{g+1}-2^{c+2})\) and then using edges in \(E_{3}\) to get to \(x\). Explicitly, this path is:
\[\left\{\begin{array}{ll}(1,2,\ldots 2^{c})\circ(2^{c},2^{g+1}-2^{c+2})\circ(2 ^{g+1}-2^{c+2},\ldots x+1,x)&\text{ if }x\leq 2^{g+1}-2^{c+2}\\ (1,2,\ldots 2^{c})\circ(2^{c},2^{g+1}-2^{c+2})\circ(2^{g+1}-2^{c+2},\ldots x-1,x)& \text{ if }x>2^{g+1}-2^{c+2},\end{array}\right.\]
which has length \(2^{c}+2^{g+1}-2^{c+2}-x\leq 2^{c}-2^{c+2}+x\) in the first case, or length \(2^{c}+2^{c+2}-2^{g+1}+x\) in the second. In either case, the length is strictly less than \(x-1\), the length of \(\pi(1,x\mid F)\). We must therefore have \(E_{1}\subseteq F_{1}\).
Second Part (\(|E_{2}\setminus F_{1}|\leq 1\)):For \(E_{2}\), suppose for a contradiction that there are two edges of \(E_{2}\) which \(F_{1}\) does not contain: \((2^{a+2},2^{g+1}-2^{a})\) and \((2^{b+2},2^{g+1}-2^{b})\) with \(a<b\). Then in \(G\setminus F_{1}\) we have a walk
\[(1,2,\ldots 2^{a+2}) \circ(2^{a+2},2^{g+1}-2^{a})\circ(2^{g+1}-2^{a},2^{g+1}-2^{a}-1, \ldots 2^{g+1}-2^{b})\] \[\circ(2^{g+1}-2^{b},2^{b+2})\circ(2^{b+2},2^{b+2}+1,\ldots,x-1,x),\]
of length
\[x-3(2^{b}-2^{a})+1<x-1=|\pi(1,x\mid F)|.\]
Thus we must include all of \(F\) in \(F_{1}\) except at most one edge from \(E_{2}\).
Finally, in the case that \(f\) is odd, we instead construct \(G_{f}\) with \(g=\lceil f/2\rceil\), and take any edge out of \(E_{1}\) or \(E_{2}\), which does not change the analysis.
Our lower bound with two subpaths generalises to our main lower bound result, which we rewrite below:
**Proposition 13**.: _For any \(k\in\mathbb{N}\), \((2k,\lfloor f/k\rfloor-2)\) is not restorable._
Proof.: Assume for convenience that \(k\) divides \(f\). We will glue \(k\) copies of the graph with \(f/k\) faults in the previous proposition together, and then show that for any division of a particular \(f\)-fault replacement path into subpaths, one subpath must contain one of the half-arcs as defined before, and its fault set will have to include \(f/k-1\) faults.
We take \(k\) copies of \(G_{f/k}\) from before, denoted by \(G_{1,f/k},G_{2,f/k},\ldots G_{k,f/k}\), labeling the vertices of \(G_{i,f/k}\) as \((i,j)\) where \(j\) is the label of the corresponding vertex in \(G_{f/k}\). We identify each \((i,2^{g+1}-1)\) with \((i+1,1)\). The edges in this graph are the union of all edges of the \(G_{i,f/k}\) (see Figure 5), and we define \(F\) as the union of the fault sets of each \(G_{f/k}\) as defined in the proof of Proposition 12. Let \(E_{j,i}\) denote the \(E_{j}\) for \(G_{i,f/k}\), so that formally
\[F:=\bigcup_{i=1}^{k}\big{(}E_{1,i}\cup E_{2,i}\big{)}.\]
Let \(s:=(1,1)\), \(t:=(k,2^{g+1}-1)\). Consider \(\pi(s,t\mid F)\). This \(f\)-fault replacement path is precisely the non-fault edges in \(G\), or the union of \(E_{3,i}\) over each of the \(G_{i,f/k}\).
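The gluing step can be sketched in the same style, reusing build_gadget from the sketch above; vertex \((i,N)\) of copy \(i\) is identified with vertex \((i+1,1)\) of the next copy, and the labels are illustrative.

```python
def glue_copies(f, k):
    """Glue k copies of the gadget on f/k faults each (assumes k divides f and f/k is even)."""
    fk = f // k
    N, E1, E2, E3, _ = build_gadget(fk)            # N plays the role of 2^(g+1) - 1

    def label(i, j):
        # vertex (i, N) is the same node as vertex (i + 1, 1) after gluing
        return (i + 1, 1) if (j == N and i < k) else (i, j)

    edges, faults = set(), set()
    for i in range(1, k + 1):
        for (u, v) in E1 | E2 | E3:
            e = (label(i, u), label(i, v))
            edges.add(e)
            if (u, v) in E1 or (u, v) in E2:
                faults.add(e)
    s, t = (1, 1), (k, N)
    return edges, faults, s, t
```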
We now bring in the previous half-arc structure from the case with two subpaths. This graph contains all the half-arcs of each \(G_{i,f/k}\), and the half-arcs can be expressed either as \(\pi(s,t)[(i,1),(i,m)]\) or \(\pi(s,t)[(i,m),(i,2^{g+1}-1)]\). From Proposition 12, we have the following:
**Lemma 14**.: _A path containing a half-arc cannot be a \((f/k-2)\)-fault replacement path._
Proof.: Following the argument of Proposition 12, a fault replacement path containing a half-arc of \(G_{i,f/k}\) must have its fault set contain at least every edge in \(E_{1,i}\cup E_{2,i}\) except possibly one. Thus any fault set of that path has size at least \(f/k-1\).
We will show that any division of \(\pi(s,t\mid F)\) into \(2k\) subpaths will result in one subpath containing a half-arc, and thus failing to be a \((f/k-2)\)-fault replacement path. Suppose we have some choice of boundary vertices \(x_{1},x_{2},\ldots x_{2k-1}\) and corresponding fault subsets \(F_{1},F_{2},\ldots F_{2k}\), so that each \(\pi(s,t)[x_{i-1},x_{i}]\) is a shortest path in \(G\setminus F_{i}\).
Let the _interior vertices_ of a path denote all its vertices except its first and last. Note that \(\pi(s,t\mid F)\) contains \(2k\) half-arcs, and any half-arc which does not have any \(x_{i}\) in its interior vertices will be completely contained in some \(\pi(s,t)[x_{j-1},x_{j}]\). The interior vertices of all \(2k\) half-arcs are disjoint, and we only have \(2k-1\)\(x_{i}\) which can be in the interior of half arcs. Therefore some subpath \(\pi(s,t)[x_{i-1},x_{i}]\) must contain a half-arc, and its fault set \(|F_{i}|\) must have size at least \(f/k-1\). Thus we will always get that one of the subpaths cannot be a \((f/k-2)\)-fault replacement subpath, proving the lower bound.
In the case when \(k\) does not divide \(f\), we choose graphs which are as even as possible to combine; let \(a\) be the remainder of \(f\) divided by \(k\). We glue \(a\) copies of \(G_{\lfloor f/k\rfloor+1}\) to \((k-a)\) copies of \(G_{\lfloor f/k\rfloor}\). In this case the subpath which contains a half-arc might contain a half-arc of \(G_{\lfloor f/k\rfloor}\), and will enforce a fault set of size only \(\lfloor f/k\rfloor-1\).
If we want a similar result using this method for the case for an odd number of subpaths, say \(2k-1\), we still need to construct \(k\) copies of \(G_{f/k}\), since half-arcs come in pairs, and we get the same bound on fault sets. Alternatively, we can also use monotonicity to directly get:
**Corollary 15**.: _For any \(k\in\mathbb{N}\), \((2k-1,\lfloor f/k\rfloor-2)\) is not restorable._
## 4 Upper Bound
We now prove the main result of Theorem 4. Fix any \(s,t\) and replacement path \(\pi(s,t\mid F)\), where \(|F|=:f\). Recall that, to prove Theorem 4, our goal is to show that we can partition \(\pi(s,t\mid F)\) into \(O(k)\) many \((f/k)\)-fault replacement subpaths.
Figure 5: The top figure depicts one copy of \(G_{f/k}\), and the bottom depicts all the copies combined together.
### Subpath Generation
We generate \(x_{i}\), the vertices where we split up \(\pi(s,t\mid F)\), by traversing along \(\pi(s,t\mid F)\) and adding vertices greedily to the current subpath until adding one more vertex would make that subpath no longer an \((f/k)\)-fault replacement path. More precisely, we set \(x_{0}:=s\), and then we pick each \(x_{i+1}\) to maximize \(|\pi(s,t\mid F)[x_{i},x_{i+1}]|\) under the constraint that \(\pi(s,t\mid F)[x_{i},x_{i+1}]\) is an \((f/k)\)-fault replacement subpath. Suppose this produces \(q\) such vertices, \(x_{1}\) up to \(x_{q}\). We will denote these \(q\) subpaths of \(\pi(s,t\mid F)\) by
\[\pi_{i}=\pi(s,t\mid F)[x_{i},x_{i+1}]\quad\forall 0\leq i\leq q-1.\]
Here we remark that the last subpath \(\pi_{q-1}\) differs from the others as it is bounded by the end of \(\pi(s,t\mid F)\) and is not generated greedily. We will use \(\pi^{\prime}_{i}\) to denote the subpaths with one additional vertex included, going along the \(\pi(s,t\mid F)\) path from \(s\) to \(t\), for \(0\leq i\leq q-2\). We call \(\pi^{\prime}_{i}\) the **augmented subpaths**. Note that no augmented subpath \(\pi^{\prime}_{i}\) can be an \((f/k)\)-fault replacement subpath in \(G\), by the greedy choice of \(x_{i}\).
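The greedy generation of the split points can be summarised by the following sketch; here is_r_fault_subpath stands in for an oracle (e.g. a shortest-path check against a candidate fault set) and is an assumption of the sketch rather than part of the argument.

```python
def greedy_split(path_vertices, r, is_r_fault_subpath):
    """path_vertices: the vertices of pi(s, t | F) in order; returns the boundary vertices x_i."""
    xs = [0]                                    # positions of the x_i along the path; x_0 = s
    while xs[-1] < len(path_vertices) - 1:
        start, end = xs[-1], xs[-1] + 1         # a single edge of the path is always a valid subpath
        # greedily extend while the subpath remains an r-fault replacement subpath
        while (end < len(path_vertices) - 1
               and is_r_fault_subpath(path_vertices[start:end + 2], r)):
            end += 1
        xs.append(end)
    return [path_vertices[j] for j in xs]
```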
For each \(i\), fix \(F_{i}\) to be any minimum size fault set such that \(\pi^{\prime}_{i}\) is a shortest path in \(G\setminus F_{i}\). That is, \(\pi^{\prime}_{i}\) is a \(|F_{i}|\)-fault replacement path but not a \(c\)-fault replacement path for any \(c<|F_{i}|\). By our choice of \(x_{i}\), we must have \(|F_{i}|\geq f/k+1\). Finally, let \(x^{\prime}_{i}\) denote the vertex on \(\pi(s,t\mid F)\) immediately after \(x_{i}\), so that
\[\pi^{\prime}_{i}=\pi(s,t\mid F)[x_{i},x^{\prime}_{i+1}].\]
Following the notation of [1], we will denote \(\pi^{\prime}_{i}\) as **pre-light** if its length is less than or equal to the length of \(\pi^{\prime}_{i-1}\), and **post-light** if its length is less than or equal to the length of \(\pi^{\prime}_{i+1}\). Also from [1], at least half of the \(\pi^{\prime}_{i}\) are pre-light, or at least half are post-light. We will assume without loss of generality that \(\pi^{\prime}_{i}\) is post-light for at least \(\frac{q-1}{2}\) values of \(i\). The other case, where at least half of the \(\pi^{\prime}_{i}\) are pre-light, follows from a symmetric argument.1
Footnote 1: In particular, in the case where at least half of the \(\pi^{\prime}_{i}\) are pre-light, one can use the following argument but substitute “left ends” for “right ends”, and “(left) FS-pairs” for “(right) FS-pairs”.
### FS-pair Generation
Next, we generate a set of **FS-pairs**. The following process specifically generates **right FS-pairs**; this is because we assume above that most subpaths are post-light. If in the other case, where most subpaths are pre-light, we would instead generate **left FS-pairs** through a symmetric process.
FS-pairs are denoted by \((e^{*},\pi^{\prime}_{i})\), where \(e^{*}\) is a fault in \(F\), and \(\pi^{\prime}_{i}\) is a post-light augmented subpath. Every FS-pair has the property that there exists a fault-free path from the right end \(x^{\prime}_{i+1}\) of \(\pi^{\prime}_{i}\) to \(e^{*}\). Before we generate these, we will set up some notation.
**Definition 16**.: For any \(u,v\)-path \(p^{\prime}\), we say a \(u,v\)-path \(p\) is a **shortcut** of \(p^{\prime}\) if \(\operatorname{len}(p)<\operatorname{len}(p^{\prime})\).2
Footnote 2: We include the possibility of non-simple shortcuts, which may repeat nodes. Our existential upper bound proof would work equally well if we restricted attention to _simple_ shortcuts, but this expanded definition will be more convenient for algorithmic reasons outlined in Section 6.
Fix any post-light augmented subpath \(\pi^{\prime}_{i}\). For a given \(e\in F_{i}\) which we refer to as the **generating fault**, let \(S_{e}\) be the set of \(x^{\prime}_{i+1},x_{i}\) shortcuts for \(\pi^{\prime}_{i}\) which contain \(e\).3 For a shortcut \(p\in S_{e}\), we define the **base fault**\(b(p)\in F\) of \(p\) as the first fault in \(p\) from \(x^{\prime}_{i+1}\) to \(x_{i}\). More precisely, we define
Footnote 3: Note that \(S_{e}\) depends on the choice of subpath \(\pi^{\prime}_{i}\), although we do not include this parameter in the notation.
\[b(p):=e_{\min\{j:e_{j}\in F\}}\text{ where }p=x^{\prime}_{i+1}e_{1}v_{1}e_{2} \ldots e_{m}x_{i}.\]
For each \(e\in F_{i}\), we define its set of base faults for \(\pi^{\prime}_{i}\) as
\[B(e):=\{b(p):p\in S_{e}\}\]
Finally, we define the family of base faults for \(\pi^{\prime}_{i}\) as
\[\mathcal{B}(\pi^{\prime}_{i})=\{B(e):e\in F_{i}\}\]
Our next goal will be to choose a distinct base fault from each base fault set \(B(e)\in\mathcal{B}(\pi^{\prime}_{i})\) in order to define the FS-pairs.
Explicitly, we define an auxiliary bipartite graph \(\Gamma_{i}\) where one side of the bipartition is \(F_{i}\) and the other is the set of faults \(F\), and where \((e,e^{*})\in E(\Gamma_{i})\) iff \(e^{*}\in B(e)\). In this set up, choosing a distinct base fault for each \(B(e)\) is equivalent to finding a matching of \(\Gamma_{i}\) which saturates \(F_{i}\). Hall's Theorem gives us a condition for this:
**Lemma 17** (Hall's Condition).: _If for every \(A\subseteq F_{i}\) we have \(|N(A)|\geq|A|\), where \(N(A)\) is the neighborhood of \(A\) in \(\Gamma_{i}\), then \(\Gamma_{i}\) contains a matching that saturates \(F_{i}\) (and therefore it is possible to choose a distinct base fault for each \(B(e)\))._
We therefore only need to verify the premises of Hall's Condition. The following lemma will be helpful.
**Lemma 18**.: _For any \(A\subseteq F_{i}\), the fault set \(F^{\prime}_{i}:=(F_{i}\setminus A)\cup N(A)\) is also a valid fault set for \(\pi^{\prime}_{i}\). (That is, \(\pi^{\prime}_{i}\) is a shortest path in \(G\setminus F^{\prime}_{i}\).)_
Proof.: First, we observe that
\[N(A)=\bigcup_{e\in A}B(e).\]
This holds because the neighbourhood of each generating fault \(e\) in \(\Gamma_{i}\) is exactly its set of base faults \(B(e)\).
We now need to prove that no shortcuts for \(\pi^{\prime}_{i}\) survive in \(G\setminus F^{\prime}_{i}\). Let \(p\) be an arbitrary shortcut for \(\pi^{\prime}_{i}\) in \(G\). Then it must contain some fault \(e^{\prime}\in F_{i}\), since \(F_{i}\) is a valid fault set. There are two cases:
* If \(e^{\prime}\in F^{\prime}_{i}\), then the shortcut \(p\) does not survive in \(G\setminus F^{\prime}_{i}\).
* Otherwise, suppose that \(e^{\prime}\notin F^{\prime}_{i}\), and so in particular \(e^{\prime}\in A\). In this case, \(p\)'s base fault \(b(p)\) is in \(B(e^{\prime})\subseteq F^{\prime}_{i}\), and thus not in \(G\setminus F^{\prime}_{i}\).
Therefore there are no surviving shortcuts for \(\pi^{\prime}_{i}\) in \(G\setminus F^{\prime}_{i}\).
Notice that Lemma 17 follows from Lemma 18: since we assume that \(F_{i}\) is a _minimal_ fault set, we must have that \(|N(A)|\geq|A|\) for all \(A\subseteq F_{i}\), since otherwise we would have \(|F^{\prime}_{i}|<|F_{i}|\). Since Hall's condition holds, over any augmented subpath \(\pi^{\prime}_{i}\), we can assign a unique base fault to every generating fault. Accordingly, we can define an injective function \(\phi_{i}:F_{i}\to F\) where \(\phi_{i}(e)\in B(e,\pi^{\prime}_{i})\).
We will construct our FS-pairs for \(\pi^{\prime}_{i}\) as \(\{(\phi_{i}(e),\pi^{\prime}_{i})\mid e\in F_{i}\}\), and repeat this process for every post-light augmented subpath. It follows that we will generate at least \((q-1)f/(2k)\) FS-pairs, since we have \((q-1)/2\) post-light augmented subpaths which have corresponding fault sets \(F_{i}\) with at least \(f/k\) faults, each of which can be assigned to one unique base fault in context of the augmented subpath.
### Analysis of FS-pairs
**Lemma 19**.: _Each fault in \(F\) will be in at most 4 FS-pairs._
Proof.: Recall that only post-light \(\pi^{\prime}_{i}\) will be in FS-pairs. Suppose, for a contradiction, that there is some base fault \(e=(u,v)\) associated with five subpaths \(\pi^{\prime}_{i}\) through FS-pairs. Then, without loss of generality, at least three of the \(\pi^{\prime}_{i}\) have fault-free paths, contained in some shortcut, from their right ends \(x^{\prime}_{i+1}\) to \(u\). Let these subpaths be \(\pi^{\prime}_{a}\), \(\pi^{\prime}_{b}\), and \(\pi^{\prime}_{c}\), with \(a<b<c\). We will also label the fault-free paths as \(p_{a}\), \(p_{b}\), and \(p_{c}\). We have
\[|p_{a}|\leq|\pi^{\prime}_{a}|-2\quad\text{and}\quad|p_{c}|\leq|\pi^{\prime}_{ c}|-2\]
since the shortcut of \(\pi^{\prime}_{a}\) which \(p_{a}\) is on has length at least \(|p_{a}|+1\) when we include \(e\), and same with \(p_{c}\).
Since \(\pi^{\prime}_{a}\) is post-light, we have
\[|\pi^{\prime}_{a+1}|\geq|\pi^{\prime}_{a}|.\]
With each \(\pi^{\prime}_{i}\) being extended from \(\pi_{i}\) by one vertex, we have also
\[|\pi_{a+1}|\geq|\pi_{a}|.\]
Moreover, since \(a<b<c\), \(a+1\neq c\). Note that the distance from \(x^{\prime}_{a+1}\) to \(x^{\prime}_{c+1}\) in \(G\setminus F\) is their distance along any shortest path they're both on, which gives us a lower bound of
\[d_{G\setminus F}(x^{\prime}_{a+1},x^{\prime}_{c+1}) =|\pi(s,t\mid F)[x^{\prime}_{a+1},x^{\prime}_{c+1}]|\] \[=\sum_{i=a+1}^{c}|\pi_{i}|\] \[\geq|\pi_{a+1}|+|\pi_{c}|\] \[\geq|\pi_{a}|+|\pi_{c}|.\]
However, \(p_{a}\) and \(p_{c}\) give a fault-free path from \(x^{\prime}_{a+1}\) to \(x^{\prime}_{c+1}\) also, which upper bounds their distance as
\[d_{G\setminus F}(x^{\prime}_{a+1},x^{\prime}_{c+1}) \leq d_{G\setminus F}(x^{\prime}_{a+1},u)+d_{G\setminus F}(u,x^ {\prime}_{c+1})\] \[\leq|p_{a}|+|p_{c}|\] \[\leq|\pi^{\prime}_{a}|+|\pi^{\prime}_{c}|-4\] \[=|\pi_{a}|+|\pi_{c}|-2.\]
This contradicts \(\pi(s,t\mid F)\) being a shortest path. Therefore, each base fault in \(F\) is associated with at most \(4\) subpaths \(\pi^{\prime}_{i}\) over all FS-pairs.
We are now ready to finish the proof of Theorem 4. Since we can generate at least \(\frac{(q-1)f}{2k}\) FS-pairs, each base fault can only be in \(4\) FS-pairs, and there are at most \(f\) possible choices of base faults, we have
\[4f\geq\frac{(q-1)f}{2k}\implies q\leq 8k+1.\]
**Corollary 20**.: _For any partition of \(\pi(s,t\mid F)\) into subpaths \(\pi_{i}\), there are at most \(4f\) right FS-pairs containing post-light augmented subpaths \(\{\pi^{\prime}_{i}\}\)._
Again, in the other case where most subpaths are pre-light, the relevant corollary is that there are at most \(4f\) left FS-pairs containing pre-light augmented subpaths \(\{\pi^{\prime}_{i}\}\). The proof is essentially identical.
## 5 Weighted Upper Bound
We next prove Theorem 6. Recall that the goal is to prove that in any weighted graph \(G\), every \(f\)-fault replacement path \(\pi\) can be partitioned into
\[\pi=\pi_{0}\circ e_{0}\circ\pi_{1}\circ e_{1}\circ\cdots\circ e_{q-2}\circ\pi _{q-1}\]
where each \(e_{i}\) is an edge and each \(\pi_{i}\) is a (possibly empty) subpath of \(\pi\) that is an \((f/k)\)-fault replacement path in \(G\), with \(q=O(k)\).
Our proof strategy will be similar to the previous argument with some minor changes: we still choose \(\pi_{i}\) greedily as the longest subpath which is an \((f/k)\)-fault replacement path, and we take the next edge along \(\pi\) as the edge \(e_{i}\) to interweave. Let \(q\) be the number of subpaths resulting from this decomposition; our goal is to upper bound \(q\) to be linear in \(k\).
We will define \(\pi^{\prime}_{i}\) as \(\pi_{i}\) augmented with \(e_{i}\) (again, \(\pi^{\prime}_{q-1}\) is undefined). We define \(x_{i}\) as the vertex at the end of \(\pi^{\prime}_{i-1}\) and at the beginning of \(\pi_{i}\), so that for any \(i\),
\[\pi(s,t\mid F)[x_{i},x_{i+1}]=\pi^{\prime}_{i}=\pi_{i}\circ e_{i}.\]
Unlike in the unweighted setting, we no longer have overlaps in the \(\pi^{\prime}_{i}\). We will assess whether subpaths \(\pi^{\prime}_{i}\) are pre-light or post-light based on their weighted length, and proceed supposing that at least half of the subpaths are post-light. We generate FS-pairs with post-light subpaths as before, using the property that by maximality of \(\pi_{i}\), each \(\pi^{\prime}_{i}\) necessarily fails to be an \((f/k)\)-fault replacement path. Using the same argument based on Hall's Theorem as before, this guarantees that we get at least \(\frac{(q-1)f}{2k}\) distinct FS-pairs. Now we can complete the proof of Theorem 6 by the following lemma, which is analogous to Lemma 19, and will be proved similarly.
**Lemma 21**.: _Each (weighted) fault in \(F\) will be in at most 4 FS-pairs._
Proof.: Similarly to Lemma 19 we will prove the lemma by showing that no fault can be in 5 FS-pairs. Suppose, for a contradiction, that we have fault \(e=(u,v)\) in 5 FS-pairs. Without loss of generality at least 3 subpaths \(\pi^{\prime}_{a}\), \(\pi^{\prime}_{b}\), and \(\pi^{\prime}_{c}\) have fault-free paths which are contained in shortcuts from their right ends \(x_{a+1}\), \(x_{b+1}\), and \(x_{c+1}\) to \(u\). Let these paths be \(p_{a}\), \(p_{b}\), and \(p_{c}\). Since each path is contained in a shortcut using \(e\), we have
\[w(p_{a})<w(\pi^{\prime}_{a})-w(e)\quad\text{and}\quad w(p_{c})<w(\pi^{\prime}_{ c})-w(e).\]
Since \(\pi^{\prime}_{a}\) is post-light, we have
\[w(\pi^{\prime}_{a+1})\geq w(\pi^{\prime}_{a}).\]
Again we can use that \(a+1<c\) and that \(\pi(s,t\mid F)\) is a shortest path to lower bound the weighted distance of \(x_{a+1}\) to \(x_{c+1}\) in \(G\setminus F\) as
\[d_{G\setminus F}(x_{a+1},x_{c+1}) =w(\pi(s,t\mid F)[x_{a+1},x_{c+1}])\] \[=\sum_{i=a+1}^{c}w(\pi^{\prime}_{i})\] \[\geq w(\pi^{\prime}_{a+1})+w(\pi^{\prime}_{c})\] \[\geq w(\pi^{\prime}_{a})+w(\pi^{\prime}_{c}).\]
However we can use the fault free paths of \(p_{a}\) and \(p_{c}\) to upper bound the distance from \(x_{a+1}\) to \(x_{c+1}\) in \(G\setminus F\) to get a contradiction with the previous lower bound:
\[d_{G\setminus F}(x_{a+1},x_{c+1}) \leq d_{G\setminus F}(x_{a+1},u)+d_{G\setminus F}(u,x_{c+1})\] \[\leq w(p_{a})+w(p_{c})\] \[<w(\pi^{\prime}_{a})+w(\pi^{\prime}_{c})-2w(e).\qed\]
In the case that at least half of the subpaths are pre-light, we will generate FS-pairs with pre-light subpaths by defining base faults relative to the left ends \(x_{i}\) of subpaths \(\pi^{\prime}_{i}\). In the analysis, we replace \(x_{a+1}\), \(x_{b+1}\), and \(x_{c+1}\) with \(x_{a}\), \(x_{b}\) and \(x_{c}\). Our analysis of \(p_{a}\), \(p_{b}\), and \(p_{c}\) are unchanged. Comparing subpaths, we instead use the pre-light property of \(\pi^{\prime}_{c}\) to get
\[w(\pi^{\prime}_{c-1})\geq w(\pi^{\prime}_{c}).\]
Then the analysis on the distance is a lower bound of
\[d_{G\setminus F}(x_{a},x_{c}) =w(\pi(s,t\mid F)[x_{a},x_{c}])\] \[=\sum_{i=a}^{c-1}w(\pi^{\prime}_{i})\] \[\geq w(\pi^{\prime}_{a})+w(\pi^{\prime}_{c-1})\] \[\geq w(\pi^{\prime}_{a})+w(\pi^{\prime}_{c}),\]
and an upper bound of
\[d_{G\setminus F}(x_{a},x_{c}) \leq d_{G\setminus F}(x_{a},u)+d_{G\setminus F}(u,x_{c})\] \[\leq w(p_{a})+w(p_{c})\] \[<w(\pi^{\prime}_{a})+w(\pi^{\prime}_{c})-2w(e).\]
## 6 Algorithmic Path Decomposition
We will next prove Theorem 7, which holds for unweighted input graphs, and then afterwards describe the (minor) changes needed to adapt the algorithm to the weighted setting. As a reminder of our goal: we are given a graph \(G\), a fault set \(F\), a replacement path \(\pi(s,t\mid F)\), and a parameter \(k\) on input. Our goal is to find nodes \(\{x_{i}\}\) and fault sets \(F_{i}\), which partitions \(\pi(s,t\mid F)\) into \(q=O(k)\) replacement paths avoiding \(f/k\) faults each, as
\[\pi(s,t\mid F)=\pi(x_{0},x_{1}\mid F_{1})\circ\pi(x_{1},x_{2}\mid F_{2})\circ \ldots\circ\pi(x_{q-1},x_{q}\mid F_{q}).\]
### Fault Set Reducing Subroutine
Before describing our main algorithm, we will start with a useful subroutine, driven by an observation about the matching step in FS-pair generation. In our upper bound proof, we used a process for generating FS-pairs to bound the number of subpaths in the decomposition. We used the _minimum size_ of the fault set \(F_{i}\) associated to each augmented subpath \(\pi^{\prime}_{i}\) to argue that we could generate \(|F_{i}|\) distinct FS-pairs.
The observation is that, letting \(F_{i}\) be _any_ (not necessarily minimum) valid fault set for \(\pi^{\prime}_{i}\) (that is, \(\pi^{\prime}_{i}\) is a shortest path in \(G\setminus F_{i}\)), if we can produce an FS-pair for every fault in \(F_{i}\) then our previous argument works. On the other hand, if we cannot produce an FS-pair for every fault in \(F_{i}\), then our previous argument gives us a process by which we can find a strictly smaller fault set \(F^{\prime}_{i}\) that is also valid for \(\pi^{\prime}_{i}\), by replacing a subset of \(F_{i}\) with the smaller set of its base faults.
The subroutine FaultReduce runs this process iteratively, in order to find a fault set \(F_{i}\) for the input subpath \(\pi_{i}\) that can be used to generate \(|F_{i}|\) FS-pairs (from both the left and right). We note the subtlety that \(F_{i}\) is not necessarily a minimum valid fault set for \(\pi_{i}\): as in Figure 6, there may exist a smaller valid fault set, but the algorithm will halt nonetheless if it can certify that the appropriate number of FS-pairs can be generated.
The essential properties of Algorithm 1 are captured by the following lemma.
**Lemma 22**.: _Relative to a graph \(G\) and fault set \(F\), there is a subroutine (Algorithm 1 - FaultReduce) that runs in polynomial time with the following behavior:_
* _The input is a path_ \(\pi_{i}\) _that is a shortest path in_ \(G\setminus F\)_._
* _The output is a fault set_ \(F_{i}\subseteq F\)_, such that:_
* \(\pi_{i}\) _is a shortest path in_ \(G\setminus F_{i}\)_, and_
* _one can generate_ \(|F_{i}|\) _left- and_ \(|F_{i}|\) _right-FS-pairs of_ \(\pi_{i}\) _from_ \(F_{i}\)_._
We will next provide additional details on some of the steps in Algorithm 1, and then prove Lemma 22.
Figure 6: This subpath and fault set \(F_{i}\) produces multiple FS-pairs via a saturated matching using the faults on the left as base faults, but its minimum fault set is only one edge \(\{e\}\).
Construction of \(\Gamma_{L}\) and \(\Gamma_{R}\).As in our previous proof, the graphs \(\Gamma_{L},\Gamma_{R}\) are the graphs representing the association between faults in \(F_{i}\) and left or right (respectively) base faults in \(F\). More specifically:
* Both \(\Gamma_{L}\) and \(\Gamma_{R}\) are bipartite graphs with vertex set \(F_{i}\cup F\), where \(F_{i}\) is the current fault set, and \(F\) is all initial faults. Thus faults in \(F_{i}\) are represented by two vertices, one on each side of the bipartition.
* In \(\Gamma_{L}\), we place an edge from \(e\in F_{i}\) to \(e_{b}\in F\) iff \(e_{b}\) is a left base fault for \(e\). The edges of \(\Gamma_{R}\) are defined similarly, with respect to right base faults.
These graph constructions require us to efficiently check whether or not a particular fault \(e_{b}\in F\) acts as a (left or right) base fault for some \(e\in F_{i}\). We next describe this process:
**Lemma 23**.: _Given a subpath \(\pi_{i}\), a valid fault set \(F_{i}\), and faults \(e\in F_{i},e_{b}\in F\), we can check whether or not \(e_{b}\) is a left and/or right base fault of \(e\) in polynomial time._
Proof.: First, the following notation will be helpful. Let \(x_{i},x_{i+1}\) be the endpoints of the input subpath \(\pi_{i}\). We will write \(d(x_{i},x_{i+1}\mid e_{b}\rightsquigarrow e)\) for the length of the shortest (possibly non-simple) \((x_{i},x_{i+1})\)-path that contains both \(e_{b}\) and \(e\), and which specifically uses \(e_{b}\) as the first fault in \(F\) along the path. We define \(d(x_{i+1},x_{i}\mid e_{b}\rightsquigarrow e)\) similarly. Note that \(e_{b}\) is a left base fault for \(e\) iff
\[d(x_{i},x_{i+1}\mid e_{b}\rightsquigarrow e)<|\pi_{i}|\]
and that \(e_{b}\) is a right base fault for \(e\) iff
\[d(x_{i+1},x_{i}\mid e_{b}\rightsquigarrow e)<|\pi_{i}|.\]
Thus, it suffices to compute the values of the left-hand side of these two inequalities. We will next describe computation of \(d(x_{i},x_{i+1}\mid e_{b}\rightsquigarrow e)\); the other computation is symmetric. There are two cases, depending on whether or not \(e_{b}=e\). Let \(e=(u,v)\), \(e_{b}=(u_{b},v_{b})\). When \(e_{b}\neq e\), the formula is:
\[d(x_{i},x_{i+1}\mid e_{b}\rightsquigarrow e)=\min \{d_{G\setminus F}(x_{i},u_{b})+d_{G}(v_{b},u)+d_{G}(v,x_{i+1})+2,\] \[d_{G\setminus F}(x_{i},u_{b})+d_{G}(v_{b},v)+d_{G}(u,x_{i+1})+2,\] \[d_{G\setminus F}(x_{i},v_{b})+d_{G}(u_{b},u)+d_{G}(v,x_{i+1})+2,\] \[d_{G\setminus F}(x_{i},v_{b})+d_{G}(u_{b},v)+d_{G}(u,x_{i+1})+2\}.\]
The four parts are needed since we consider paths that use \(e,e_{b}\) with either orientation, and the \(+2\) term arises to count the contribution of the edges \(e,e_{b}\) themselves. In the case where \(e_{b}=e\), the formula is
\[d(x_{i},x_{i+1}\mid e\rightsquigarrow e)=\min\{d_{G\setminus F}(x_{i},u)+d_{ G}(v,x_{i+1})+1,d_{G\setminus F}(x_{i},v)+d_{G}(u,x_{i+1})+1\}.\qed\]
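The two distance computations translate directly into code; in the sketch below, d_G and d_GF are assumed to be precomputed all-pairs distance lookups in \(G\) and \(G\setminus F\) respectively (e.g. from breadth-first searches), and the function names are illustrative.

```python
def left_base_fault_dist(x_i, x_next, e_b, e, d_G, d_GF):
    """Length of the shortest (x_i, x_next)-path through e_b and e, with e_b as its first fault."""
    (ub, vb), (u, v) = e_b, e
    if e_b == e:
        return min(d_GF(x_i, u) + d_G(v, x_next) + 1,
                   d_GF(x_i, v) + d_G(u, x_next) + 1)
    return min(d_GF(x_i, ub) + d_G(vb, u) + d_G(v, x_next) + 2,
               d_GF(x_i, ub) + d_G(vb, v) + d_G(u, x_next) + 2,
               d_GF(x_i, vb) + d_G(ub, u) + d_G(v, x_next) + 2,
               d_GF(x_i, vb) + d_G(ub, v) + d_G(u, x_next) + 2)

def is_left_base_fault(x_i, x_next, e_b, e, d_G, d_GF, subpath_len):
    """e_b is a left base fault of e iff some such path is shorter than the subpath itself."""
    return left_base_fault_dist(x_i, x_next, e_b, e, d_G, d_GF) < subpath_len
```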
Reducing \(F_{i}\). Next, we provide more detail on the step of reducing the fault set \(F_{i}\). This uses Hall's condition, in an analogous way to our previous proof. When we compute max matchings \(M_{L},M_{R}\) for \(\Gamma_{L},\Gamma_{R}\), if we successfully find matchings of size \(|M_{L}|\geq|F_{i}|\) or \(|M_{R}|\geq|F_{i}|\), then we have certified the ability to generate \(|F_{i}|\) left and right FS-pairs as in Section 4.2, and so the algorithm can return \(F_{i}\) and halt. Otherwise, suppose without loss of generality that \(|M_{L}|<|F_{i}|\). By Hall's condition, that means there exists a fault subset \(A\subseteq F_{i}\) such that the set of base faults \(B\subseteq F\) used by faults in \(A\) is strictly smaller than \(A\) itself. For the reduction step, we set \(F_{i}\gets(F_{i}\setminus A)\cup B\), which reduces the size of \(|F_{i}|\). By Lemma 18, this maintains the invariant that \(F_{i}\) is a valid fault set for the input path \(\pi_{i}\).
In order to efficiently find the non-expanding fault subset \(A\subseteq F_{i}\), we may compute the max matching in \(\Gamma_{L}\) (or \(\Gamma_{R}\)) using a primal-dual algorithm that returns both a max matching and a certificate of maximality of this form. For example, the Hungarian algorithm will do [2].
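One reduction step of FaultReduce can be sketched as follows. Here adj maps each fault in \(F_{i}\) to its set of base faults (the edges of \(\Gamma_{L}\) or \(\Gamma_{R}\)); a simple augmenting-path matching stands in for the primal-dual solver mentioned above, and when the matching does not saturate \(F_{i}\), alternating reachability from an unmatched fault yields the non-expanding set \(A\). The caller is assumed to recompute adj for the returned set and repeat until None is returned.

```python
def reduce_step(F_i, adj):
    # F_i: set of faults; adj: dict mapping each fault in F_i to a set of its base faults in F
    match = {}                                     # base fault in F -> generating fault in F_i

    def augment(e, seen):                          # Kuhn's augmenting-path step
        for b in adj[e]:
            if b in seen:
                continue
            seen.add(b)
            if b not in match or augment(match[b], seen):
                match[b] = e
                return True
        return False

    unmatched = [e for e in F_i if not augment(e, set())]
    if not unmatched:
        return None                                # F_i already yields |F_i| FS-pairs
    # Hall violator: faults reachable from an unmatched fault by alternating paths
    A, NA = {unmatched[0]}, set()
    grew = True
    while grew:
        grew = False
        for e in list(A):
            for b in adj[e]:
                if b not in NA:
                    NA.add(b)
                    grew = True
                    if b in match:
                        A.add(match[b])
    return (F_i - A) | NA                          # strictly smaller valid fault set (Lemma 18)
```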
### Main Algorithm
ComputeSubpaths, described in Algorithm 2, performs a greedy search for subpath boundaries. In each round, we set the next subpath boundary node \(x_{i+1}\) to be the furthest node from the previous subpath boundary node \(x_{i}\) such that the corresponding subpath is certified by the algorithm FaultReduce to have a fault set of size \(\leq f/k\). Thus, considering the augmented subpath that we get by adding an additional node to \(\pi_{i}\), we can generate \(>f/k\) left and right FS-pairs from this subpath.
We next state the algorithm; for ease of notation we label the vertices of the input path \(\pi(s,t\mid F)\) as \(v_{0},v_{1},\ldots,v_{\ell}\).
```
\(x_{0}\gets s\)
\(i\gets 0\)
while \(x_{i}\neq v_{\ell}\) do
    Binary search for the largest \(y\) such that the fault set returned by
    FaultReduce\((\pi(s,t)[x_{i},v_{y}])\) has size \(\leq f/k\)
    \(i\gets i+1\)
    \(x_{i}\gets v_{y}\)
return \(\{x_{j}\}_{j=0}^{i}\)
```
**Algorithm 2** ComputeSubpaths \((\pi(s,t\mid F),F,k)\)
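A sketch of this outer loop is given below, with fault_reduce standing in for Algorithm 1; the binary search follows the description above and assumes, as the algorithm does, that feasibility is monotone in the position of the boundary and that a single edge always yields a fault set of size \(\leq f/k\).

```python
def compute_subpaths(path_vertices, f, k, fault_reduce):
    """path_vertices: v_0, ..., v_l of pi(s, t | F); returns the boundary vertices x_j."""
    xs = [0]                                      # x_0 = s
    last = len(path_vertices) - 1
    while xs[-1] != last:
        lo, hi = xs[-1] + 1, last
        best = lo                                 # a single edge needs no faults, so it is feasible
        while lo <= hi:                           # binary search for the largest feasible y
            mid = (lo + hi) // 2
            F_i = fault_reduce(path_vertices[xs[-1]:mid + 1])
            if len(F_i) <= f // k:
                best, lo = mid, mid + 1
            else:
                hi = mid - 1
        xs.append(best)
    return [path_vertices[j] for j in xs]
```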
**Theorem 24**.: _Algorithm 2 is correct and runs in polynomial time._
Proof.: In Corollary 20 from our upper bound section, we showed that there exist only \(O(f)\) total right FS-pairs using post-light subpaths (and, symmetrically, there exist only \(O(f)\) left FS-pairs using pre-light subpaths). Since at least half of the augmented subpaths are pre-light or half are post-light, and by Lemma 22 every augmented subpath can generate at least \(f/k\) left and right FS-pairs, altogether we will have at most \(O(k)\) subpaths.
For runtime, we always generate a linear number of subpaths, and locating the endpoint of each requires calling the subroutine \(\log n\) times. Thus the entire algorithm runs in polynomial time.
A similar approach works in the weighted setting, since the method of counting FS-pairs extends to the structure in Theorem 6 and upper bounds the number of interweaved subpaths and edges. The construction of the auxiliary graphs \(\Gamma_{L}\) and \(\Gamma_{R}\) requires checking weighted distances, but the matching and FS-pair generation are the same. We change the algorithm to add the next edge into the decomposition of \(\pi(s,t\mid F)\) after finding a maximal subpath with a fault set of size at most \(f/k\). The upper bound on the number of subpaths, based on enough FS-pairs being generated, follows from the analysis in Theorem 6.
|
2310.20260 | Learning to Play Chess from Textbooks (LEAP): a Corpus for Evaluating
Chess Moves based on Sentiment Analysis | Learning chess strategies has been investigated widely, with most studies
focussing on learning from previous games using search algorithms. Chess
textbooks encapsulate grandmaster knowledge, explain playing strategies and
require a smaller search space compared to traditional chess agents. This paper
examines chess textbooks as a new knowledge source for enabling machines to
learn how to play chess -- a resource that has not been explored previously. We
developed the LEAP corpus, a first and new heterogeneous dataset with
structured (chess move notations and board states) and unstructured data
(textual descriptions) collected from a chess textbook containing 1164
sentences discussing strategic moves from 91 games. We firstly labelled the
sentences based on their relevance, i.e., whether they are discussing a move.
Each relevant sentence was then labelled according to its sentiment towards the
described move. We performed empirical experiments that assess the performance
of various transformer-based baseline models for sentiment analysis. Our
results demonstrate the feasibility of employing transformer-based sentiment
analysis models for evaluating chess moves, with the best performing model
obtaining a weighted micro F_1 score of 68%. Finally, we synthesised the LEAP
corpus to create a larger dataset, which can be used as a solution to the
limited textual resource in the chess domain. | Haifa Alrdahi, Riza Batista-Navarro | 2023-10-31T08:26:02Z | http://arxiv.org/abs/2310.20260v1 | # Learning to Play Chess from Textbooks (LEAP):
###### Abstract
Learning chess strategies has been investigated widely, with most studies focussing on learning from previous games using search algorithms. Chess textbooks encapsulate grandmaster knowledge, explain playing strategies and require a smaller search space compared to traditional chess agents. This paper examines chess textbooks as a new knowledge source for enabling machines to learn how to play chess--a resource that has not been explored previously. We developed the LEAP corpus, a first and new heterogeneous dataset with structured (chess move notations and board states) and unstructured data (textual descriptions) collected from a chess textbook containing 1164 sentences discussing strategic moves from 91 games. We firstly labelled the sentences based on their relevance, i.e., whether they are discussing a move. Each relevant sentence was then labelled according to its sentiment towards the described move. We performed empirical experiments that assess the performance of various transformer-based baseline models for sentiment analysis. Our results demonstrate the feasibility of employing transformer-based sentiment analysis models for evaluating chess moves, with the best performing model obtaining a weighted micro \(F_{1}\) score of 68%. Finally, we synthesised the LEAP corpus to create a larger dataset, which can be used as a solution to the limited textual resource in the chess domain.
## Background & Summary
Chess is a stochastic environment controlled by the rules of playing, which are simple to comprehend, yet it is challenging to make accurate decisions in the game. Hence, chess lends itself well to the development of artificial intelligence (AI) systems that simulate real-life problems, such as decision-making [1]. Meanwhile, chess grandmasters have produced, and continue to produce, chess-teaching textbooks, in unstructured data format, for sharing their practical knowledge of strategies. Chess-teaching textbooks are one substantial knowledge source, among many others (e.g., game commentaries), that chess players continuously use to grasp strategies and tactics and improve their skills [2]. Over the years, little effort has been put into exploring chess knowledge from unstructured data sources. Many machine learning algorithms for playing chess have thus far overlooked the potential of obtaining knowledge from chess-teaching textbooks. Instead, knowledge is typically obtained from databases of chess moves, such as _DeepChess_[3]. Such an approach is reliant on large curated structured datasets of chess moves capturing strategies of expert players, such as _Chess Database_[4]. The production of such datasets is often laborious and time-consuming [5] and requires intensive work to make the decision-making process behind each move interpretable and explainable [6, 7]. Additionally, using brute-force approaches, such as AlphaZero [8], to obtain high performance requires expensive computational resources, including advanced hardware that might not be accessible or obtainable for research [5, 9].
Processing knowledge from unstructured data is intricate due to knowledge accessibility, the complex nature of language [10, 11], and the need to understand the domain environment expressed in natural language. However, various studies in different domains have shown that unstructured data can serve as an alternative approach to overcome some limitations of purely brute-force approaches. Previous work focused mainly on extracting actions from short, direct instruction sentences, and this alternative approach has led to improvements in AI system performance. Information extraction (IE) methods for identifying entities and relations from the _Civilization II_ instructional manual were integrated into a Monte Carlo Tree Search (MCTS) model to learn how to play the game [12]. The natural language-based model demonstrated improved performance by 33.9% of the time compared to an AI agent without knowledge of the instructional manuals. In the video-games domain, Steam platform reviews were analysed to identify game features, sentiment towards players and spam reviews [13]. Similarly, event extraction (EE) from unstructured data was used to continuously update a stochastic model that performs operational processes in uncertain and changing environments, such as evacuation routes recommended by humans on the Twitter platform [14]. An LSTM-RNN model was developed to translate instruction sentences about the current state into action sequences for autonomous agents [15]. Recent advances in context representation and language analysis have led to the development of state-of-the-art pre-trained language models. Such models were pre-trained on a large corpus and then fine-tuned for downstream tasks in different domains. For example, sentiment analysis of chess commentaries using an LSTM model with BERT embeddings [9] improved the alpha-beta chess evaluation function, which won 81% of 100 games against random and DeepChess [3] systems. Recently, a GPT-2 model was trained on 2.8 million chess games in Portable Game Notation (PGN) to predict the next move [16]. A different approach, analysing how GPT-2 learns chess playing rules, stores chess knowledge and generates a move, was studied in [17], taking into consideration different model sizes, the size of the training data and the number of correctly generated moves. An LSTM model was used to generate a benchmark of chess move commentaries that are comparable to human commentaries in terms of grammar and language [18]. Furthermore, DistilBERT [19] was used to encode state and action representations of a text game, which were then fed into a Reinforcement Learning (RL) agent, achieving 44.7, a new state-of-the-art result, in the interactive textual game \(\mathit{Zork1}\).
Nonetheless, the usability and benefit of freely available unstructured data, such as textbooks, have not been explored before, and there is almost no previous research in the literature exploring this approach in the board games domain, specifically in chess. In this paper, we introduce a new task in the game playing domain that is underpinned by Natural Language Processing (NLP): sentiment analysis for unlocking the otherwise hidden knowledge of chess master players from unstructured data. The contributions of this paper are fourfold. Firstly, we introduce LEAP (LEArning to Play chess), a new heterogeneous corpus collected from chess-teaching textbooks, in the belief that it can be used to aid natural language-based chess agents in evaluating moves from heterogeneous chess knowledge sources. The corpus contains two data types: labelled sentences that discuss the games' moves ("unstructured data"), and the full games' moves with their board states ("structured data"). We believe that the latter data format is necessary to link the chess context, expressed in sentences, to its equivalent environment represented in board states. Secondly, we introduce two types of data annotations (in the above-mentioned corpus): (1) _relevancy_ labels indicating whether a sentence is discussing a move, and (2) _sentiment_ labels, i.e., whether a move is considered good or bad, where the labelling is cast as a task of evaluating the move. Thirdly, we demonstrate through empirical evaluations the usability and characteristics of the corpus using state-of-the-art transformer-based models for the two classification tasks. We report the performance of various transformer models as baselines, discuss further improvements and propose approaches for move evaluation. Finally, we contribute to the problem of limited unstructured resources in the chess domain by synthesising the LEAP corpus to create a larger dataset, and show by empirical experiments the quality and usability of the dataset for training language models.
## Methods
### Challenges
A textbook is a knowledge acquisition and learning source, but several challenges come with mining chess-teaching textbooks. The first is _Illustration_: chess playing is not limited to text, but also involves the board state; an example of a textbook paragraph is shown in Figure 1. It is difficult to deduce why a move described in text is necessary for a player without a visualisation of the board state. Equally, showing a board state diagram and a list of moves alone is not useful without explaining why these moves were chosen and played in this order. Thus, the board state diagrams help the reader visualise the environment and understand the connection with the moves described in the text. Chess-teaching textbooks provide access to both data formats: the textual description of the states and the legal move(s) to be considered in these states, and they also refer to diagrams of the board state (environment).
The second challenge in mining such a knowledge source is _Dependency_. A chess game consists of a sequence of moves, where each played move creates a new board state, changes the set of plausible, legal moves, and affects the board evaluation for the players. An example sentence is shown in Figure 1: _White's further attack on the Knight by Qf3 forces the Rook to defend on K3_. This means that Black's move "_Rook to defend on K3_" is favoured only if White played "_Qf3_". Hence, the decision-making process with respect to moves requires recognising a move's effect on the board state after the move has been made, which creates a dependency between the current board state and the moves being considered.
_Incompleteness or Implicitness_ of information required for decision-making is a challenge presented in textbooks. For example, the move to be played in the following sentence, taken from Chess Strategy by Edward Lasker (1918) 1, is missing and not described explicitly in text: _"Black would appear to have sufficient protection available, with his Knight and Bishop."_. The move with the Knight or the Bishop is implicitly favoured. However, it is not straightforward for a chess agent to identify this move, typically requiring the use of search algorithms.
The fourth challenge is related to the _Recommendation of Conflicting Moves_: some sentences describe a move, or a sequence of moves, using counterfactual statements, which could confuse a chess agent, leading to the extraction of inaccurate moves and hence incorrect actions. An example of a counterfactual statement can be seen in the following sentences: _"If White had only a Bishop or a Knight Additionally to the King he could never mate Black, for neither Bishop nor Knight can attack the King and at the same time control a square adjacent to the King. This, however, is at least necessary to force the mate, even in the most unfavourable position of the King, that is, in the corner."_. At first, the author explains that the move _"mate Black King"_ by the White Bishop or a Knight is discouraged in this game state, but then suggests in the second sentence that this move nevertheless needs to be played.
The last challenge we observed is _Poor Formatting_: Information is not always organised or presented in a consistent manner. Textbooks can describe special moves in natural language, such as "_pawn is promoted to a queen_", or in chess notation such as Standard Algebraic Notation (SAN) "_Pe8=Q_", or in both "_Qc8, white was just promoted to a queen, giving mate_".
Some of the challenges involved in mining chess-teaching textbooks are also pertinent in other problems that require decision-making, such as in computer vision [20], science literature understanding [21], medical domains [22] and interacting with robotics and human-machine interaction using natural language [23].
### Case study context
The goal of evaluating chess moves is to find an optimal move \(Move_{o}\) for a state \(State_{s}\). In 1949, Claude Shannon designed an evaluation formula with a heuristic structure to determine the relative value of a move \(Move_{m}\) by measuring the score of the board state \(State_{s}\) after playing \(Move_{m}\). There are different features to consider while evaluating a move \(Move_{m}\) heuristically, such as the game status (e.g., middle-game or end-game), the values of pieces and their positions, and king safety based on the king's position. Currently, these heuristics are embedded in chess engines together with search algorithms to evaluate chess moves. In general, the search algorithms are based on a tree structure, where the nodes of the tree represent all possible future states of each legal move for the player, the depth level represents the resulting state of playing a legal move, e.g. \(Move_{m}\), and the edges represent the transition from a state \(State_{s}\) to a state \(State_{s+1}\) by playing \(Move_{m}\). The alpha-beta pruning search algorithm, which operates on this game tree and is based on the Min-Max algorithm, has been widely used to evaluate chess moves. To determine which move should be played at \(State_{s}\), the alpha-beta algorithm prunes the branches that cannot affect the decision, visits the remaining nodes (moves) and evaluates the board state score after playing each move. Finally, alpha-beta backtracks and selects the optimal move \(Move_{o}\) for \(State_{s}\). The second search algorithm is Monte Carlo Tree Search, which is based on simulation games, where each simulation game is the result of randomly selected moves. Finally, the optimal move \(Move_{o}\) is selected based on the simulated games that achieved the highest results.
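For illustration, a minimal alpha-beta search is sketched below; legal_moves, play and heuristic_score are placeholders for engine primitives (move generation, state transition and the heuristic board score) and are assumptions of the sketch rather than components of any particular engine.

```python
def alpha_beta(state, depth, alpha, beta, maximising, legal_moves, play, heuristic_score):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return heuristic_score(state)             # e.g. material plus piece-position values
    if maximising:
        value = float("-inf")
        for move in moves:
            value = max(value, alpha_beta(play(state, move), depth - 1, alpha, beta,
                                          False, legal_moves, play, heuristic_score))
            alpha = max(alpha, value)
            if alpha >= beta:                     # prune: the opponent will never allow this branch
                break
        return value
    value = float("inf")
    for move in moves:
        value = min(value, alpha_beta(play(state, move), depth - 1, alpha, beta,
                                      True, legal_moves, play, heuristic_score))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value
```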
The scope of this work is to extract an evaluation function for moves from the unstructured, natural-language descriptions in chess-teaching textbooks that evaluate a move. Our scope differs from previous work in the literature in that we mine this rich content instead of itemised instructions, such as the _Civilization II_ instructional manual [12]. In this on-going work, our first step aims to bridge the gap in a chess agent's understanding of such descriptions during the decision-making process. Figure 2 shows an example sentence and how it can be analysed and understood by a chess agent. A textbook sentence is usually descriptive rather than a direct instruction of an action or move. The example sentence explains _why_ Black needs to play the move "_the exchange on the seventh move_", rather than directly instructing the chess agent to play the move. To train a natural language-based chess agent to understand such an action, we need information extraction methods, possibly Named-Entity Recognition (NER), to identify the player and the possible moves among the discussed moves \(Move=(Move_{1},...,Move_{n})\), where \(n\) is the number of moves discussed in a sentence. In addition, it is necessary to integrate the current board state \(State_{s}\) with the moves extracted from the sentences to identify which of the discussed moves is a possible one. Hence, we formulate the information as a tuple for each extracted move:
\[Board\ Evaluation(Player,Move_{i},State_{s})\]
the tuple is sent to a chess engine to validate that the move is legal at this specific board state \(State_{s}\), and then to evaluate it using search algorithms (e.g., alpha-beta pruning). However, following related work on analysing chess commentaries, we hypothesise that it is possible to infer the move evaluation by analysing the textual description of the move's effect using sentiment analysis. The outcome of a move is usually described in the textbook as having a negative, positive or neutral effect on the player, or sometimes on the opponent. The example sentence in Figure 2 highlights that the outcome of playing the move "the exchange, exd4" is positive for Black in this turn ("is compulsory"), by explaining that a Black Pawn will be lost later. In other words, if the move "the exchange, exd4" is played, it will increase the Black score and reduce the White score through the loss of his pawn. The sentence then explains the necessity of the move ("is compulsory"): the black pawn will be lost in the next move by Nxd4, which would in turn lower the Black player's score and increase the White player's score. Hence, the move "the exchange, exd4" helps the Black player maintain his score for another turn, and without the move his score would decrease in the second turn. A human player can easily interpret the discussed moves and their effects, and filter and choose the move with a positive impact while considering the current board at the same time. For a natural language-based chess agent, we can cast the move evaluation process as the analysis of sentiment towards the player. A positive sentiment indicates that playing the move would likely have a positive effect on the player. This can be explicitly stated, such as "it is best to play move X", or implicitly derived as in the example in Figure 2. A negative sentiment indicates a negative effect on the player: either explicitly, such as "it is best to avoid playing move X", or implicitly, as in "playing move X will help the opponent player to progress".
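As a minimal sketch of the validation step, the python-chess library (one possible choice, not prescribed by this work) can check the legality of an extracted SAN move against a board state given in FEN; the FEN string below is an illustrative position in which the exchange exd4 is available.

```python
import chess

def validate_move(fen, san_move):
    board = chess.Board(fen)                  # State_s
    try:
        return board.parse_san(san_move)      # raises ValueError if the move is illegal or ambiguous
    except ValueError:
        return None

state_s = "rnbqkbnr/pppp1ppp/8/4p3/3PP3/8/PPP2PPP/RNBQKBNR b KQkq - 0 2"
move = validate_move(state_s, "exd4")         # a legal move here, ready for engine evaluation
print(move)
```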
### Dataset sources
Figure 3 summarises the steps taken in constructing the corpus. We searched Project Gutenberg ([https://www.gutenberg.org](https://www.gutenberg.org)), a free electronic textbooks library, using the search term "chess" and 25 e-books were retrieved and manually scanned according to the following selection criteria:
* E-book must be in the English language.
* E-book must be aimed at teaching how to play strategic chess moves.
* E-book must not be about the history of chess.
* E-book must contain textual descriptions of moves, and not only listing moves from played games.
* E-book must be aimed at teaching humans, and not for designing chess systems.
* The author must be a chess master-player with an Elo rating above 2400.
ELO is a rating system that relatively measures the skill levels of chess players. A rating of 2400 is that of most international master players and some Grandmaster players. Therefore, we can obtain a level of knowledge close to the rating of a strong chess engine, such as Stockfish ([https://stockfishchess.org/](https://stockfishchess.org/)), which can reach a superhuman level with an ELO rating above 3000. The textbooks that met the initial search criteria are listed in Table 1. After manual checking, we selected the "Chess Strategy" textbook, E-book id (5614), by Edward Lasker, an international master chess player with an ELO rating of 2489. The textbook explains strategies played in popular tournament games, and discusses moves that should, would, or should not, have been considered. Also, this textbook speaks to both beginners and advanced players.
Language grounding is required for a model to learn the representation of the knowledge in both the textual descriptions and the environment. We used regular expression-based rules to parse diagrams of the board states and convert them into the Forsyth-Edwards Notation (FEN) format. This is a chess notation that describes any board state of a game, including piece positions, the player's turn and the move number. Chess engines use this format to initialise the game at any state. Finally, we retrieved the tournament games described in the textbook from the chess database ([https://www.chessgames.com/index.html](https://www.chessgames.com/index.html)) in Portable Game Notation (PGN), a format that records moves, players' names, time/date and the game result. Also, we manually created PGN files for games that were not retrievable from the database or from other sources.
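A minimal sketch of the final step of such a conversion is shown below: given an already-parsed 8x8 diagram (ranks from 8 down to 1, upper case for White, lower case for Black, "." for empty squares), it emits the FEN piece-placement field, with the remaining FEN fields simplified to placeholder defaults; the diagram format is an assumption of the sketch.

```python
def rows_to_fen(rows, side_to_move="w"):
    ranks = []
    for row in rows:                      # one string of 8 characters per rank
        fen_rank, empty = "", 0
        for square in row:
            if square == ".":
                empty += 1
            else:
                if empty:
                    fen_rank += str(empty)
                    empty = 0
                fen_rank += square
        if empty:
            fen_rank += str(empty)
        ranks.append(fen_rank)
    # castling, en passant and move counters are simplified to placeholder values here
    return "/".join(ranks) + f" {side_to_move} - - 0 1"
```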
### Data cleaning
Descriptive notation is a move-recording style that was used until 1980. Since then, a new notation style, Standard Algebraic Notation (SAN), has been used, which machines can parse to read chess moves and games. To follow standardised chess semantics, we applied the following cleaning and preprocessing steps to convert descriptive notation to its corresponding SAN (a minimal sketch of such renaming rules is shown after this list):
* Renaming of positions of pieces, such as columns names, e.g., from "QR" file to "a" file.
* Changing descriptive notation of piece names, such as "QR" to "Rook".
* Changing descriptive notation of movements to standard algebraic notation, such as "QR2" to "Ra2", for chess engine readability purposes.
* Manual correction of incorrect mentions of a move or a piece in board diagrams, arising from optical character recognition (OCR) conversion errors.
* Removal of diagrams and text sections that are not pertinent to a particular game, e.g., diagrams that were included in the textbook to illustrate piece movements without the other pieces presented in the board.
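The square-renaming rules referred to above can be illustrated by the following sketch; a faithful conversion of descriptive notation also depends on the side to move and the full board state, so this is a simplification of the actual preprocessing.

```python
import re

FILES = {"QR": "a", "QN": "b", "QB": "c", "Q": "d", "K": "e", "KB": "f", "KN": "g", "KR": "h"}

def descriptive_square_to_san(square, white_to_move=True):
    m = re.fullmatch(r"(QR|QN|QB|KB|KN|KR|Q|K)([1-8])", square)
    if not m:
        return square
    file_, rank = FILES[m.group(1)], int(m.group(2))
    # descriptive ranks are counted from the moving player's own side of the board
    return f"{file_}{rank if white_to_move else 9 - rank}"

print(descriptive_square_to_san("QR2"))   # a2, cf. the "QR2" -> "Ra2" example above
```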
### Data processing
#### Annotation process for evaluating chess move
Textbooks contain different types of sentences, such as introductory sentences which are not relevant to our task. Thus, we followed a similar approach for sentiment annotation as in [24, 25] to annotate the corpus with move relevancy labels on a sentence level. A sentence is considered relevant only if it discusses a move, or a sequence of moves, as the topic of the sentence in an evaluative form. An example of a relevant sentence is "_To convert it into a win by queening the extra pawn is only a matter of time._", because it discusses the move "_queening the extra pawn_" as a positive move that will lead to "_winning_". An example of a non-relevant sentence is "_We have now seen how the possession of open files reacts on the mobility of the opposing forces, forever increasing their difficulties until the positional advantage is converted into material gain._".
Afterwards, each sentence labelled as relevant is annotated with one sentiment label, with the aim of evaluating the move it discusses. For this task we followed the "simple sentiment annotation schema" described in [26], applying it at the level of sentences. We define the sentiment labels as follows:
* _Positive_[label:2]: expresses a good outcome of playing a move for the player. An example sentence, "But White can, by a simple sacrifice, bring the slumbering R at a1 into sudden action: 1.... Nxe4 2. Re1 Bf5 3. Nc3 Nd6 4. Rxe4 Nxe4 5. Re1 and White wins two pieces for his Rook."
* _Negative_[label:0]: expresses a negative outcome of playing a move for the player. An example sentence, "An example of this is found in Diagram 6; Nxe4 fails on account of Rxc6; this leaves the Knight unprotected, and White wins two pieces for his Rook."
* _Neutral_[label:1]: a sentence does not express any explicit outcome of a move. An example sentence, "It is Black's move, and we will suppose he wishes to play e5."
* _Uncertain (not sure)_[label:3]: when the outcome of a move is difficult to identify, or it is difficult to identify an explicit move. An example sentence, "In both cases White has an easy development, whilst Black has no convenient square for his Queen's Bishop."
We consider negation in a sentence if it has a direct effect on its sentiment. For example, "_Black cannot very well exchange the pawns, leaving the King's file quite exposed, and must submit to White playing cxd5 maintaining the pawn at e4 and preventing Black's d5 for some time to come._". The negative polarity here implies negative sentiment toward the move "_exchange the pawns_". The dataset was fully labelled by the first author, and a second annotator with chess domain background was recruited for measuring inter-annotator agreement.
### Synthetic data generation
The size of the LEAP corpus is small compared to corpus sizes in other domains, and manually labelling more data is labor-intensive. Alternatively, data augmentation methods offer an option for increasing the corpus size, enabling language models to generalise and comprehend the contextual and linguistic characteristics of a specific domain; this, in turn, enhances the language model's performance in classification tasks. In this study, we adopted the data augmentation approach reported in similar research to generate synthetic data from the LEAP corpus [27]. We employed the recently introduced DINO method for this purpose [28]. A Generative Pre-trained Transformer (GPT-2-xl) [29] is the core component of the method, which follows an unsupervised approach to generate synthetic data from scratch based on three prompt instructions:
* Write two sentences that mean the same thing.
* Write two sentences that are somewhat similar.
* Write two sentences that are on completely different topics.
The three prompt instructions act as a self-debiasing mechanism: each prompt group should produce a sentence with a different meaning, and each generated sentence should fall into only one prompt group, which controls the quality of the generated sentences. However, we acknowledge that the text generation model's output may differ from the original sentences, potentially resulting in the creation of illegal moves, altering the likelihood of making certain moves, or shifting the sentiment associated with those moves.
We generated 30 synthetic sentences (candidates) per original sentence (reference) per prompt instruction, resulting in a total of 99,529 generated synthetic data points. After removing duplicates where the reference and candidate are the same, and eliminating sentences generated by the third prompt instruction on the grounds of irrelevance, the final count of synthetic data was reduced to 82,145.
To evaluate the quality of the synthetic data, we employed the BertScore metric [30]. This metric represents each token using contextual embeddings and measures the cosine similarity between the reference and candidate sentences; the results are reported as F1-scores. Additionally, we utilised the BLEURT score [31], a slightly modified version of the original BERT model that was fine-tuned for language robustness and domain generalisation through an unsupervised approach, using millions of synthetic (reference, candidate) pairs. These pairs were generated using three techniques: different methods of mask filling, back-translation from English to another language and back to English, and random word dropping from the reference sentence. The second modification leverages the special token [CLS], which represents a vector of the contextual representation of the reference and candidate sentences; a linear layer is added on top of this representation to predict the similarity rating.
During evaluation and manual inspection of a subset of sentence pairs (reference, candidate), we observed that a BLEURT score above 80 for the candidate sentence mostly indicated a duplicate of the reference sentence with limited changes, such as altering a number or removing a comma. To prevent the model from overfitting on duplicate sentences during training, we kept synthetic data with BLEURT scores between 30 and 80. This resulted in 69,049 synthetic sentences for the relevance classification task and 39,449 synthetic sentences for the sentiment classification task. The threshold range was selected after multiple experiments with different ranges and manual inspection of the data. Furthermore, we observed a relatively similar scoring range between both metrics, where high or low scores were assigned to the same instances by both metrics. To illustrate this observation, we visualised the scores from both metrics for a sample of 5,000 synthetic sentences in Figure 4.
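A minimal sketch of this scoring-and-filtering step is given below; it assumes the `bert-score` and `bleurt` Python packages, and the BLEURT checkpoint path and the rescaling of scores to a 0-100 range are assumptions made purely for illustration.

```python
from bert_score import score as bertscore
from bleurt import score as bleurt_scoring

references = ["Black cannot very well exchange the pawns."]
candidates = ["Black can hardly afford to exchange the pawns."]

# BERTScore: cosine similarity of contextual token embeddings, reported as F1.
_, _, f1 = bertscore(candidates, references, lang="en")
bertscore_f1 = [100 * v for v in f1.tolist()]       # rescaled to 0-100

# BLEURT: learned similarity rating based on the [CLS] representation.
scorer = bleurt_scoring.BleurtScorer("BLEURT-20")   # checkpoint path is a placeholder
bleurt_scores = [100 * v for v in scorer.score(references=references,
                                               candidates=candidates)]

# Keep candidates whose BLEURT score lies in (30, 80): close enough to stay
# on topic, but not a near-duplicate of the reference.
kept = [c for c, s in zip(candidates, bleurt_scores) if 30 < s < 80]
```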
### Classification models
Recently, transformer architectures based on the attention mechanism [32] have achieved state-of-the-art (SOTA) results, outperforming previous text classification models. Such models are pre-trained on large general-domain datasets to acquire broad knowledge and transfer it to specific tasks. One advantage of adopting the transfer learning technique is pushing SOTA results for specific tasks with labelled data in limited-resource domains, where there is not enough data to pre-train a model from scratch. We therefore selected four transformer-based pre-trained models, based on various architectures, as baselines for both the topic relevance and sentiment analysis classification tasks. (1) BERT [33] was the first deep Bidirectional Encoder Representations from Transformers model, developed based on the masked language model (MLM) technique, which randomly masks words and trains the model to predict them from the context embeddings. It also introduced the Next Sentence Prediction (NSP) objective to improve performance on natural language understanding tasks, such as Natural Language Inference (NLI). BERT obtained new state-of-the-art results on 11 different tasks, including the General Language Understanding Evaluation (GLUE) benchmark [34], SQuAD 1.1 [35], SQuAD 2.0, and Situations With Adversarial Generations (SWAG) [36]. (2) XLNET [37] is an auto-regressive transformer model that uses a permutation language modelling approach over all factorisation orders of the embeddings in order to overcome the word-dependency limitations of the MLM approach. It outperformed BERT models on reading comprehension tasks, including question answering, text classification datasets, and GLUE tasks. (3) RoBERTa [38] is a modified BERT architecture that pre-trains BERT for a longer time, on longer sequences, and on a larger corpus. The architecture achieved higher performance on some GLUE tasks compared to both BERT and XLNET. (4) ALBERT [39] is a lighter BERT version that uses two parameter-reduction techniques: factorized embedding parameterization and cross-layer parameter sharing. The techniques are designed to reduce the problem of large model size, which leads to memory limitations and long training times. It also uses a self-supervised sentence-order prediction (SOP) loss instead of NSP to improve inter-sentence coherence for multi-sentence encoding tasks. ALBERT outperformed BERT over GLUE, the SQuAD datasets, and the RACE benchmark for reading comprehension [40] with a 1.4%-8.4% improvement. ALBERT also outperformed RoBERTa and XLNET on some of the GLUE tasks, and on the SQuAD and RACE benchmarks. Finally, we included the distilled versions of both RoBERTa and BERT, which are reduced-size models of the originals that are faster while performing at a comparable level [19].
Each model was developed with different settings, including the size and type of the training dataset (e.g., books, news articles), the length of embeddings, the type of tokens used (cased or uncased), and the number of parameters, which results in various model sizes. To thoroughly examine the impact of these settings, we explored both model sizes and token types for each architecture. Specifically, we considered (BERT-base-uncased, BERT-base-cased, BERT-large-uncased, BERT-large-cased, BERT-large-uncased-whole-word, BERT-large-cased-whole-word, distil-BERT-cased, distil-BERT-uncased) for BERT, (XLNET-base-cased, XLNET-large-cased) for XLNET, (RoBERTa-base, RoBERTa-large, distil-RoBERTa-base) for RoBERTa, and (ALBERT-base, ALBERT-large) for ALBERT. Finally, to demonstrate the Transformer models' proficiency in comprehending chess context and to highlight the effectiveness of transfer learning, we employed several classical machine learning baseline models: Random Forest (RF), Support Vector Machines (SVM), and a Multi-layer Perceptron (MLP) neural network. For each baseline model, we utilised three types of pre-trained embeddings to represent the corpus semantically and hence allow a fair comparison with the Transformer models: pre-trained GLOVE embeddings [41], BERT embeddings, and Sentence-BERT (all-MiniLM-L6) embeddings [42].
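A minimal fine-tuning sketch with the HuggingFace Transformers library used in this study is shown below; the toy sentences, the choice of RoBERTa-base, and the dataset wrapper are simplifying assumptions, not the exact training script.

```python
import torch
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

class SentenceDataset(torch.utils.data.Dataset):
    """Wraps tokenised sentences and labels for the Trainer."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    return {"weighted_f1": f1_score(labels, preds, average="weighted"),
            "micro_f1": f1_score(labels, preds, average="micro")}

# Toy examples standing in for the LEAP sentiment splits.
train_texts = ["White wins two pieces for his Rook.", "It is Black's move."]
train_labels = [2, 1]
val_texts, val_labels = train_texts, train_labels

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=4)

# Hyper-parameters follow the baseline setting reported in the Empirical evaluation section.
args = TrainingArguments(output_dir="leap-sentiment", learning_rate=4e-5,
                         per_device_train_batch_size=8, per_device_eval_batch_size=8,
                         num_train_epochs=10, evaluation_strategy="epoch", seed=42)

trainer = Trainer(model=model, args=args,
                  train_dataset=SentenceDataset(train_texts, train_labels, tokenizer),
                  eval_dataset=SentenceDataset(val_texts, val_labels, tokenizer),
                  compute_metrics=compute_metrics)
trainer.train()
```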
### Data Records
Table 2 summarises the characteristics of our corpus: the number of sentences, tokens, unique tokens, discussed board states, and labelled sentences, together with the synthetic data size. With regard to organising the corpus, the raw text of the textbook was first split into paragraphs. However, a paragraph is segmented every time a sentence referring to a board state diagram is encountered, to allow the description to be stored separately together with the corresponding board state in FEN format. Nonetheless, every full game is also provided in PGN format. All sentences annotated with both relevance and sentiment labels were saved separately and provided in JavaScript Object Notation (JSON), a data representation format, which contains the sentences and the classification label.
We split the dataset into training, validation and test subsets using the bootstrap sampling technique [43] to overcome the data imbalance issue, and set the seed to 50 for reproducibility. For relevance classification we divided the corpus into training, validation and test subsets following an 80-10-10% split [25]; for sentiment classification the proportion is 70-10-20% [44]. The resulting number of sentences in each subset is described in Table 3. We evaluated the suitability of the synthetic dataset for fine-tuning models by assessing the performance of the models on an unseen testing set, which consisted of the original sentences from the LEAP corpus. To ensure a proper evaluation, we split the synthetic sentences into a training set (70% of the data) and a validation set (30% of the data). Notably, we excluded the candidate sentences generated from the testing-set sentences to prevent any data leakage during the evaluation process.
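A simplified sketch of a bootstrap-style split is shown below, assuming scikit-learn's `resample`; the exact procedure of [43], and the further division of the held-out part into validation and test sets, are not reproduced here.

```python
from sklearn.utils import resample

def bootstrap_split(sentences, labels, train_frac=0.8, seed=50):
    """Draw a stratified bootstrap training sample; out-of-bag items are held out."""
    idx = list(range(len(sentences)))
    train_idx = resample(idx, replace=True,
                         n_samples=int(train_frac * len(idx)),
                         stratify=labels, random_state=seed)
    heldout_idx = sorted(set(idx) - set(train_idx))
    train = [(sentences[i], labels[i]) for i in train_idx]
    heldout = [(sentences[i], labels[i]) for i in heldout_idx]
    return train, heldout
```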
## Technical Validation
### Annotation analysis
To validate the labels described in Table 2, the dataset underwent a comprehensive annotation process. Initially, the first author fully labelled the dataset. Subsequently, subsets of the dataset were double-annotated by two annotators to measure inter-annotator agreement. This process was carried out in two rounds. The level of agreement was quantified using Cohen's Kappa coefficient [45] (\(\kappa\)) for both annotation tasks and both rounds, and the results are presented in Table 4.
In the first round, 10% of the dataset was used for inter-annotator agreement assessment. The results indicated a moderate to substantial agreement strength [46]. This initial round of agreement evaluation helped identify and address any issues with the annotation guidelines, ensuring their robustness. Subsequently, in the second round, 30% of the dataset was double-annotated, resulting in an "almost perfect" level of agreement for both annotation tasks [46].
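The agreement figures can be reproduced as follows, assuming scikit-learn; the two label arrays are illustrative, not the actual annotations.

```python
from sklearn.metrics import cohen_kappa_score

# Sentiment labels assigned by the two annotators to the same (illustrative) subset.
annotator_1 = [2, 0, 1, 2, 3, 2, 1, 0]
annotator_2 = [2, 0, 1, 2, 2, 2, 1, 0]

kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa: {kappa:.2f}")
```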
The adopted annotation schema ensures the identification of domain-specific knowledge, in the form of sentences discussing and evaluating chess moves and strategies, which helps prevent noisy sentences from being propagated into the move decision-making process. However, as in many other domains, there is some data imbalance in our corpus, which can be seen in the number of relevant and non-relevant sentences and between the sentiment labels in the LEAP corpus. Moreover, textbook sentences tend to describe moves and strategies that lead to positive outcomes; thus, there are almost twice as many positive sentences as negative ones. Finally, we compiled the most common comments provided by the annotators during the manual annotation process. These comments align with the challenges identified in the previous section and may have an impact on the results of automatic classification. The comments are as follows:
* The sentence discusses moves for both players.
* Difficulty arises in assigning a sentiment when multiple moves are presented in a sentence.
* Difficulty arises in selecting a sentiment for implicit moves in a sentence.
* Difficulty arises in interpreting a sentence that discusses a move without access to the board state.
* Sentences exist with contradictory sentiments regarding the same move at a specific board state.
These comments provide valuable insights into the complexities and nuances of the annotation process, highlighting the potential challenges that could affect the accuracy of automatic classification.
### Empirical evaluation
In this section, we describe the steps for evaluating our new Learning to Play Chess from Textbooks (LEAP) corpus and the performance of state-of-the-art models on the two classification tasks. The models' hyper-parameter settings were the same for both classification tasks, so as to understand the effect of the task and context on the models' performance. We used the following baseline hyper-parameters for the Transformer models: learning rate 4e-05, training and evaluation batch size 8, dropout 0.1. We randomly selected two weight-initialisation seeds (0, 42) to understand whether the \(F_{1}\) scores are related to model sensitivity to the randomisation of weight-initialisation seeds or are affected by the classification task [47], and set the number of epochs to 10 for both classification tasks. During the training phase, we evaluate the models at the end of each epoch using the validation set. The model's "best epoch" is determined by convergence: the epoch at which the model achieves the lowest evaluation loss on the validation set, which normally differs from model to model. Figure 5 summarises the evaluation loss over all epochs for both weight-initialisation seeds and both classification tasks. The models obtained at the best epoch for both classification tasks were evaluated against the corresponding testing set using the weighted macro \(F_{1}\) score, which takes class imbalance into account, and we also report the micro \(F_{1}\), which pools all instances to measure overall performance [44, 25]. The results of the Transformer models are summarised in Tables 5 and 6, and the \(F_{1}\) scores of the machine learning baseline models are reported in Figure 6.
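The best-epoch selection described above (lowest validation loss, evaluated at the end of every epoch) can be expressed directly in the Trainer configuration; a minimal sketch, assuming the same Transformers version, is:

```python
from transformers import TrainingArguments

# Evaluate and checkpoint every epoch, then reload the checkpoint with the
# lowest validation loss as the "best epoch" model.
args = TrainingArguments(
    output_dir="leap-checkpoints",
    num_train_epochs=10,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    seed=42,
)
```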
Most Transformer models achieved between 88-97% \(F_{1}\) on the topic relevance classification task and between 27-68% \(F_{1}\) on the sentiment classification task. Using BERT or Sentence-BERT embeddings slightly improved the performance of the machine learning baseline models; however, almost all Transformer models achieved equal or higher \(F_{1}\) scores, by 2-4%. This clearly shows the power of the Transformer architecture and the transfer learning approach in classification tasks with a limited corpus size. Nonetheless, it is assumed that larger Transformer models usually lead to an improvement in performance, regardless of the size of the dataset used for fine-tuning [33, 39]. Yet, such an improvement was not always present in both classification tasks: the ALBERT-large models achieved \(F_{1}\) scores below the baseline machine learning models in the topic relevance task and partially in the sentiment analysis task, and the XLNet-large model also achieved the lowest \(F_{1}\) score compared to the machine learning and Transformer models; this observation was also reported in a recent study [48]. One justification could be that the corpus size affects the large models' performance. Also, a different weight-initialisation seed can negatively impact model performance [47], due to the random assignment of weights in deep learning models compared to the more stable initialisation of the machine learning models. Some large models were the most sensitive to weight-initialisation seeds, with \(F_{1}\) scores changing by more than ±10%, e.g. the ALBERT-large model and the BERT-large-cased-whole-word-mask model in the sentiment analysis task, and BERT-large-uncased in the topic relevance task. However, such randomisation can also be beneficial for achieving higher performance: the RoBERTa-base model achieved the highest \(F_{1}\) score of 68% using weight-initialisation seed 42, compared to a 64% \(F_{1}\) score by DistilBERT-base-uncased using seed 0, in the sentiment analysis classification task. We did not observe a direct effect of the token type (cased, uncased) on the \(F_{1}\) scores; they were mostly affected by the weight-initialisation seed. This is an indication not to follow the standard weight-initialisation seed, which is usually 42, but to experiment with various models and carefully select hyper-parameters, such as the weight-initialisation seed, before choosing.
We noticed that most models converged early in the topic relevance classification task, at epoch 1 or 2, while convergence in the sentiment classification task mostly occurred at or after epoch 3. Also, the average evaluation loss at the best epoch in the topic relevance classification task was 0.24 for seed = 0 and 0.22 for seed = 42. In the sentiment classification task, the average evaluation loss was 1.026 for seed = 0 and 1.041 for seed = 42. This shows that the models struggled to learn the sentiment analysis task, which also explains the difference in \(F_{1}\) scores achieved by the models between the two classification tasks.
To understand the effect of the context, we first measured the readability of the chess-teaching sentences with the Flesch-Kincaid reading-ease formula [49]. The sentences' reading-ease score was 93, with an average of 24 words per sentence, which indicates that humans can process and understand the text easily. Secondly, we analysed the models' ability to understand the context through the distribution of label predictions in both classification tasks. We also show a sample from the testing dataset of each task in Table 7 and Table 8: 10 sentences for each correctly and incorrectly predicted class label, taken from the models that achieved the highest \(F_{1}\) score per seed. Figure 7 shows that the models understood the context for labelling in the topic relevance classification task, while the data imbalance issue might be a reason for the false positives between both classes [50]. The same issue mostly contributed to lowering the \(F_{1}\) scores in the sentiment analysis classification task for all models, as shown in Figure 8, due to the small size of the 'not sure [3]' class. In the same classification task, most false positives were between the 'neutral [1]' and 'positive [2]' classes. Given that chess is a closed domain, many domain-specific terms are repeated, and sentence structure and semantics are relatively similar regardless of the class label, which might have confused the models.
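A sketch of the readability check, assuming the `textstat` package (the paper does not name the tool used):

```python
import textstat

sample = "It is Black's move, and we will suppose he wishes to play e5."
print(textstat.flesch_reading_ease(sample))  # higher scores mean easier to read
print(len(sample.split()))                   # rough words-per-sentence count
```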
Also, many sentences discuss multiple moves or the effect of a move on both players, which sometimes confused the annotators and the models. On the other hand, sentences that discuss a single move and its effect on a single player are more likely to be classified correctly, as seen in the examples reported in Table 8. Hence, a sentiment analysis schema at a finer-grained level, such as Aspect-Based Sentiment Analysis (ABSA) [51] or the semantic role labelling (SRL) level [26, 52], might improve the analysis by focusing on the move as the primary target of the sentiment. Finally, humans understand the semantics of language through its environment; hence, depending only on words for analysing moves, without access to the environment "board" that the text describes, hindered the evaluation of move quality in the sentiment classification task [53].
Finally, to address the effect of class imbalance and corpus size on the Transformer models' performance, we created three sub-datasets using the synthetic data generated from the LEAP corpus to fine-tune the Transformer models: (1) "balanced": the original LEAP corpus with synthetic sentences added to balance the number of classes, (2) "oversampled": the original LEAP corpus with the minority classes over-sampled using synthetic sentences, to increase the size of the dataset and balance the classes as well, and (3) "synthetic": the models are fine-tuned using only the synthetic dataset. Furthermore, to analyse whether chess terminology has an impact on the models' performance, we masked the chess entities of moves and players in the original LEAP corpus and replaced them with "MOV" and "PLY" terms. Each dataset was split into a 70% training and 30% validation set, and we evaluated the fine-tuned models using the original LEAP testing set with the same hyper-parameters.
Figure 9 depicts the weighted macro \(F1\) scores for both classification tasks using the five datasets. Firstly, the \(F1\) scores of the Masked dataset indicate that the removal of chess entities does not necessarily affect or improve performance, especially in the sentiment analysis classification task. Secondly, as expected, balancing the classes and sometimes increasing the size of the
dataset can enhance the \(F1\) scores for both tasks. However, such improvement was limited, and the utilization of synthetic data to fine-tune the models resulted in a reduction of the \(F1\) scores.
We manually analyzed and labeled 200 sentences randomly selected from the synthetic data for the sentiment analysis task. We found that it is not always possible to automatically transfer the original sentence label to the synthetically generated sentence. Although the generated sentence preserves a high level of chess context, the underlying meaning of the original sentence that determines the classification label is not always preserved in the synthetically generated sentence. We suspect that this confuses the models, and we illustrate this confusion using a confusion matrix between the manual labels and the original sentence labels in Figure 10, along with a sample of the sentences in Table 9. It is evident that some sentences originally labeled with 'negative [0]' sentiment were changed to 'neutral [1]' sentiment in the synthetic sentence, and some original sentences labeled as 'not sure [3]' were transformed into 'positive [2]' sentiment in the generated sentence. Thus, the models are likely to be fine-tuned using incorrect labels that do not correspond to the context of the sentence, resulting in incorrect predictions during testing.
Furthermore, the Cohen's Kappa coefficient [45] between the labels was (\(\kappa\)) = 0.45, with 4 synthetic sentences marked as not topic-relevant, and 115 sentences having matching labels. This indicates a low coefficient between the labels, and therefore, we cannot rely on automatically transferring the original sentence label to the synthetic one. However, synthetic data can be employed to enrich the models' understanding of chess context, as demonstrated by the \(F_{1}\) scores obtained using the "balance" and "oversampled" datasets.
In conclusion, considering the inherent difficulty of the tasks, synthetic data did not significantly improve performance in the classification tasks. However, it proved valuable in enriching the models' chess knowledge and served as a cost-effective alternative to processing additional chess textbooks in the traditional manner, avoiding the overheads of manual processing and Optical Character Recognition (OCR) correction.
## Usage Notes
### Code availability
All provided code was tested and run on a CPU with Python 3 (version 3.6 and above). The pre-trained models were fine-tuned using the Python-based HuggingFace Transformers library [54] (version 4.10). We used 4 Nvidia Volta V100 GPUs for fine-tuning the pre-trained models. For reproducibility, we provide the code for the classification tasks, and all datasets (raw, split, synthetic), annotation guidelines and the model evaluations are free to use and available in the repository ([https://github.com/resrepos/LEAF](https://github.com/resrepos/LEAF)).
|
2309.13911 | Exactly solvable subspaces of non-integrable spin chains with boundaries
and quasiparticle interactions | We propose two new strategies to construct a family of non-integrable spin
chains with exactly solvable subspace based on the idea of quasiparticle
excitations from the matrix product vacuum state. The first one allows the
boundary generalization, while the second one makes it possible to construct
the solvable subspace with interacting quasiparticles. Each generalization is
realized by removing the assumption made in the conventional method, which is
the frustration-free condition or the local orthogonality, respectively. We
found that the structure of embedded equally-spaced energy spectrum is not
violated by the diagonal boundaries, as long as quasiparticles are
non-interacting in the invariant subspace. On the other hand, we show that
there exists a one-parameter family of non-integrable Hamiltonians which show
perfectly embedded energy spectrum of the integrable spin chain. Surprisingly,
the embedded energy spectrum does not change by varying the free parameter of the
Hamiltonian. The constructed eigenstates in the solvable subspace are the
candidates of quantum many-body scar states, as they show up in the middle of
the energy spectrum and have entanglement entropies expected to obey the
sub-volume law. | Chihiro Matsui | 2023-09-25T07:21:03Z | http://arxiv.org/abs/2309.13911v2 | Exactly solvable subspaces of non-integrable spin chains with boundaries and quasiparticle interactions
###### Abstract
We propose two new strategies to construct a family of non-integrable spin chains with exactly solvable subspace based on the idea of quasiparticle excitations from the matrix product vacuum state [1]. The first one allows the boundary generalization, while the second one makes it possible to construct the solvable subspace with interacting quasiparticles. Each generalization is realized by removing the assumption made in the conventional method [2], which is the frustration-free condition or the local orthogonality, respectively. We found that the structure of embedded equally-spaced energy spectrum is not violated by the diagonal boundaries, as long as quasiparticles are non-interacting in the invariant subspace. On the other hand, we show that there exists a one-parameter family of non-integrable Hamiltonians which show perfectly embedded energy spectrum of the integrable spin chain. Surprisingly, the embedded energy spectrum does not change by varying the free parameter of the Hamiltonian. The constructed eigenstates in the solvable subspace are the candidates of quantum many-body scar states, as they show up in the middle of the energy spectrum and have entanglement entropies expected to obey the sub-volume law.
## I Introduction
Understanding the thermalization mechanism of isolated quantum systems is one of the most well-developed topics in recent statistical mechanics. After the eigenstate thermalization hypothesis (ETH) was recast as the most powerful candidate to explain thermalization phenomena, plenty of related works have appeared, including ones which test the validity or violation of the ETH. Although generic isolated quantum systems are believed to obey the strong ETH [3; 4; 5], which requires that all energy eigenstates are macroscopically indistinguishable from thermal states, it has been found that some energy eigenstates differ from thermal states, violating the statement of the strong ETH. These non-thermal states often show up in systems which do not thermalize, including systems with integrability [6; 7] or many-body localization [7; 8; 9; 10; 11], while it has been found that such non-thermal states also show up in systems which do thermalize [12; 13; 14; 15; 16]. These non-thermal energy eigenstates are called _the quantum many-body scars_, named after the single-body quantum scar state [17].
The first example of quantum many-body scars was found experimentally in Rydberg-atom quantum simulators [18], which show an embedded equally-spaced energy spectrum. The system shows strong revivals and very slow thermalization when the initial state has non-negligible overlap with the eigenstates of the equally-spaced eigenenergies. This unforeseen behavior was expected to be caused by a violation of the ETH due to the eigenstates of the equally-spaced eigenenergies contained in the prepared initial state. Later, the emergence of such non-thermal energy eigenstates was theoretically explained by employing the \(PXP\) model [19], the effective model of the Rydberg atom chain, which admits exactly solvable energy eigenstates with an equally-spaced energy spectrum [20; 21]. Surprisingly, the known quantum many-body scars are often exactly solvable states of non-integrable systems. Besides the \(PXP\) model, there exists a variety of models, including the AKLT model [12; 13; 22] and Hubbard-type models [23; 24], which are non-integrable but have exactly solvable energy eigenstates. All those exactly solvable energy eigenstates are macroscopically distinguished from the thermal state. Therefore, we expect that exactly solvable states of non-integrable models are candidates of quantum many-body scars.
It is believed that the models which admit emergence of quantum many-body scars have the almost block-diagonal Hamiltonians [25; 26]:
\[\mathcal{H}\simeq W\oplus\mathcal{H}_{\text{thermal}}, \tag{1}\]
consisting of the large thermal subspace \(\mathcal{H}_{\text{thermal}}\) and the relatively small subspace \(W\) which becomes negligible in the thermodynamic limit. The states in the small subspace \(W\) break the quantum version of ergodicity, as they cannot move out from \(W\) during time evolution. Thus, the block-diagonal Hamiltonian prevents full thermalization by keeping energy eigenvectors within each diagonal block. Recently, various methods to construct Hamiltonians with a small invariant subspace have been proposed. The methods are mainly classified into three types, each of which is called the projector embedding [27; 28; 29], the spectrum generating algebra [13; 30; 31; 32; 33; 34; 35; 36], or the Krylov restricted thermalization [37]. These are not always independent methods, but sometimes grasp different aspects of the same mathematical structure behind the Hamiltonians. Indeed, it can happen that a certain model is constructed by one method, and later, the same model is constructed again by another method. For instance, the emergence of quantum many-body scars in the \(PXP\) and AKLT models was first explained by the spectrum generating algebra [12], and then, a projector-embedding-type construction has been proposed for
both models recently in [38; 39].
In this paper, we propose a new method to construct a Hamiltonian with a small invariant subspace based on the Bethe-ansatz method. The method is similar to the spectrum generating algebra,
\[\left([H,\,Q]-\mathcal{E}Q\right)\Big{|}_{W}=0, \tag{2}\]
in the sense that both methods provide the Hamiltonian and the energy eigenstates in the subspace \(W\) at the same time, although the partial solvability of our method does not originate from the spectrum generating algebra (2). The spectrum generating algebra also tells that the Hamiltonian has an equally-spaced energy spectrum in the solvable subspace \(W\), which perfectly explains the strong revival obtained in the Rydberg atom experiment. The equally-spaced energy spectrum indicates that the quasiparticles living in the subspace \(W\) are identical particles, while our method based on the Bethe ansatz breaks the equally-spaced energy spectrum in the subspace, showing instead the same energy spectrum as the spin-\(1/2\) \(XXX\) model. This implies that no revival phenomena will be obtained in the Bethe-ansatz solvable subspace spanned by non-identical quasiparticle excitation states.
It should also be noted that most known examples of quantum many-body scars are written in the language of non-interacting quasiparticles, while the candidates of quantum many-body scars constructed in this paper are expressed in terms of interacting quasiparticles. Only a few examples are known to have QMBS consisting of interacting quasiparticles. One is the deformation of the integrable Hamiltonian [40], in which the exactly solvable energy eigenstates are constructed via fully antisymmetrized bases. The other example is a Hamiltonian consisting of two parts [41], one of which annihilates the scar states while the other admits some solvable energy eigenstates. The partial solvability of our model is completely independent from these two, since its solvability comes from conventional integrability, but its mathematical structure is highly non-trivial, as we impose the integrability conditions on _the pseudo basis_ constituted by the matrix-valued vectors. However, we would say our model has an advantage for practical uses, since the Hamiltonian quite simply consists of spin-\(1\) nearest-neighbor interactions. Besides, the method used for the construction can be applied to models associated with any other integrable models as well.
This paper is organized as follows. In the next section, we define the model to be studied in this paper. We focus on the spin-\(1\) chain which often shows up in the discussion of quantum many-body scars, including the AKLT model. We also provide the basic notion of the matrix product state and the quasiparticle excitation states first introduced in the discussion of the tangent space of the (nonlinear) manifold defined by the elements of the matrix product states [1; 2]. We are especially interested in the small subspace spanned by these states, as they are expected to have relatively small entanglement entropies compared to thermal states, which have volume-law entanglement entropies. For instance, the matrix product state is known to have an area-law entanglement entropy [42; 43; 44; 45], if its bond dimension is small enough, and the quasiparticle excitation states are also expected to have sub-volume-law entanglement entropy [26]. Thus the matrix product state and the quasiparticle excitation states are candidates of quantum many-body scars, since low entanglement entropy is one of the characteristic features of non-thermal states. In Section III, we provide the Hamiltonian and its invariant subspace spanned by non-interacting quasiparticle excitation states. The first half of the section is devoted to a review of the known results for the periodic boundary models, whose partial solvability comes from the hidden spectrum generating algebra. In the last half, we discuss the generalization to the non-trivial boundary case. We show that the structure of the spectrum generating algebra is not violated by the diagonal boundary deformation. In Section IV, we discuss the construction of the Hamiltonian with a Bethe-ansatz solvable subspace. We show that the energy spectrum in the Bethe-ansatz solvable subspace coincides with the energy spectrum of the integrable system, without exhibiting the equally-spaced structure any more. The model which admits the Bethe-ansatz solvable subspace possesses a free parameter, which does not show up in the energy spectrum of the solvable subspace, implying robustness under a certain kind of perturbations. We also remark that the energy spectrum in the Bethe-ansatz solvable subspace becomes continuous, ranging to infinity in the thermodynamic limit, which is never obtained for the scar subspace resulting from the spectrum generating algebra.
## II The model
Let us consider the spin-\(1\) chain with translationally invariant nearest neighbor interactions. By writing the elementary matrix whose \((t,s)\)-element is \(1\) and the others are \(0\) by \(E^{t,s}\), the local bulk Hamiltonian is written as
\[h=\sum_{s,s^{\prime},t,t^{\prime}=0}^{2}h_{t,t^{\prime}}^{s,s^{\prime}}E^{t,s }\otimes E^{t^{\prime},s^{\prime}}. \tag{3}\]
The whole Hamiltonian consists of the summation of the local Hamiltonian over all the sites. In this paper, we consider the periodic boundary:
\[H=\sum_{j=1}^{N}h_{j,j+1} \tag{4}\]
and the open boundaries:
\[H_{\rm B}=\sum_{j=1}^{N-1}h_{j,j+1}+h_{\rm L}+h_{\rm R}, \tag{5}\]
where \(h_{j,j+1}\) acts non-trivially on the \(j\)th and \((j+1)\)th sites:
\[h_{j,j+1}=\mathbf{1}\otimes\cdots\otimes\underset{j,j+1}{h}\otimes\cdots\otimes \mathbf{1}. \tag{6}\]
Besides the locality and translation invariance, we assume the spin-flip invariance
\[h_{t,t^{\prime}}^{s,s^{\prime}}=h_{2-t,2-t^{\prime}}^{2-s,2-s^{\prime}} \tag{7}\]
and conserved magnetization
\[h_{t,t^{\prime}}^{s,s^{\prime}}=h_{t,t^{\prime}}^{s,s^{\prime}}\delta_{s+s^{ \prime},t+t^{\prime}}, \tag{8}\]
besides Hermiticity
\[h_{t,t^{\prime}}^{s,s^{\prime}}=(h_{s,s^{\prime}}^{t,t^{\prime}})^{*} \tag{9}\]
for the local bulk Hamiltonian. These are natural assumptions realized by many models.
Some spin-1 chains equipped with the above properties are known to be integrable, including the Fateev-Zamolodchikov spin chain:
\[h_{j,j+1}=\vec{S}_{j}\cdot\vec{S}_{j+1}-(\vec{S}_{j}\cdot\vec{S}_{j+1})^{2}, \tag{10}\]
while some other spin-1 chains are known to have exactly solvable energy eigenstates, although they are non-integrable. The most famous example of the latter case is the AKLT model:
\[h_{j,j+1}=\vec{S}_{j}\cdot\vec{S}_{j+1}+\frac{1}{3}(\vec{S}_{j}\cdot\vec{S}_{j +1})^{2}, \tag{11}\]
which admits not only an exactly solvable ground state but also exactly solvable excitation states [2; 12; 13; 26; 31].
Most of the known solvable energy eigenstates of non-integrable models are written in homogeneous matrix product forms or as quasiparticle excitations from matrix product states [1; 2]. The homogeneous matrix product state is written in the form
\[|\psi_{A}\rangle =\operatorname{tr}_{a}(K_{a}\vec{A}\otimes_{p}\cdots\otimes_{p} \vec{A}) \tag{12}\] \[=\sum_{(m_{1},\ldots,m_{N})\in\{0,\ldots,d-1\}^{N}}\operatorname{ tr}_{a}(K_{a}A_{m_{1}}A_{m_{2}}\cdots A_{m_{N}})|m_{1},m_{2},\ldots,m_{N}\rangle, \tag{13}\]
where \(A_{m_{n}}\in\operatorname{End}(\mathbb{C}^{\chi})\) (\(n=1,\ldots,N\)) are the matrices which act on the auxiliary space. Another index \(d\) denotes the dimension of the local physical space. For the spin-1 chain, the local physical space must be three-dimensional, _i.e._ \(d=3\). Note that the tensor product \(\otimes_{p}\) is taken over the physical spaces. The matrix \(K_{a}\) is the boundary matrix acting in the auxiliary space \(\mathbb{C}^{\chi}\) determined by the boundary conditions. For instance, \(K_{a}\) is the identity matrix for the periodic boundary, while \(K_{a}\) is a certain matrix with \(\operatorname{rank}K_{a}=1\) for open boundaries. Throughout this paper, we focus on the matrix product states given by
\[\vec{A}=\begin{pmatrix}a_{0}\sigma^{+}\\ a_{1}\sigma^{z}\\ a_{2}\sigma^{-}\end{pmatrix},\qquad a_{0},a_{1},a_{2}\in\mathbb{C}, \tag{14}\]
which has the smallest non-trivial bond dimension \(\chi=2\). This class of the matrix product states includes the exactly solvable ground state of the AKLT model [22].
On the other hand, we consider the one-quasiparticle excitation state expressed by
\[|\psi_{A,B}(k)\rangle=\sum_{x=1}^{N}e^{ikx}\operatorname{tr}_{a}(K_{a}\vec{A} \otimes_{p}\cdots\otimes_{p}\vec{B}\otimes_{p}\cdots\otimes_{p}\vec{A}), \tag{15}\]
where \(\vec{B}\) is again the matrix-valued vector acting in the auxiliary space \(\mathbb{C}^{\chi}\), which locates at the position of the quasiparticle. In the above expression, no quasiparticle creation or annihilation is assumed, which is true for the periodic or diagonal boundaries. Indeed, the magnetization conservation property of the model, which we imposed in (8), guarantees that the number of quasiparticles does not change in the bulk. The quasiparticle excitation states of the form (21) have been first proposed in the discussion of the generalized tangent space of the manifold formed by the matrix product tensors \(\{A_{m_{1}},\ldots,A_{m_{N}}\}\)[1].
The nature of quasiparticles depends on the choice of _the local quasiparticle creation operator_\(O\in\operatorname{End}(\mathbb{C}^{3})\) defined by
\[\vec{B}=O\vec{A}. \tag{16}\]
For instance, quasiparticles show the non-interacting property under the nearest-neighbor Hamiltonian (3) if the quasiparticle is chosen as the spin-2 magnon created by \(O=(S^{+})^{2}\)[2]. The spin-2 magnon is known to constitute the solvable invariant subspace of the models belonging to the AKLT type [2]. For the other choices, quasiparticles may interact with one another. The example of interacting quasiparticles is obtained in the Bethe-ansatz solvable subspace, as we will show in Section IV.
## III Exactly solvable subspace without quasiparticle interaction
In this section, we construct the solvable subspace \(W\) spanned by non-interacting quasiparticle excitation states. The non-interacting property of quasiparticles is realized, for instance, by choosing the local quasiparticle operator \(O\) as the spin-2 magnon creation operator:
\[O=(S^{+})^{2}. \tag{17}\]
The other examples which produce non-interacting quasiparticles can be found in [2]. The local spin-2 magnon creation operator satisfies the repulsive relations [2]:
\[O^{2}\vec{A}=O\vec{B}=0, \tag{18}\] \[\vec{B}\otimes_{p}\vec{B}=0, \tag{19}\]
which forbid quasiparticles from occupying the same site or adjacent sites. Thus, the spin-2 magnons do not interact with each other in the subspace \(W\), since the Hamiltonian consists only of nearest-neighbor interactions (3). This repulsive nature of the quasiparticles determines the dimension of the subspace \(W\). When \(W\) consists of spin-2 magnons, we thus obtain \(\dim W=\lfloor N/2\rfloor\), as maximally \(\lfloor N/2\rfloor\) quasiparticles are allowed to exist.
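For concreteness, these relations can be checked explicitly in the basis where \(|0\rangle,|1\rangle,|2\rangle\) denote the \(S^{z}=+1,0,-1\) states (this labeling is assumed here only for illustration). Then \((S^{+})^{2}=2|0\rangle\langle 2|\), and the matrix-valued vector (14) gives

\[\vec{B}=O\vec{A}=\begin{pmatrix}2a_{2}\sigma^{-}\\ 0\\ 0\end{pmatrix},\qquad O\vec{B}=0,\qquad\vec{B}\otimes_{p}\vec{B}=(2a_{2})^{2}(\sigma^{-})^{2}\,|0\rangle\otimes_{p}|0\rangle=0,\]

since \((\sigma^{-})^{2}=0\): a second magnon can be created neither on the same site nor on an adjacent site.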
Throughout this section, we impose _the local orthogonality_:
\[(^{t}\vec{A}^{*}\otimes_{p}{}^{t}\vec{A}^{*})\cdot(\vec{B}\otimes_{p}\vec{A}+ e^{ik}\vec{A}\otimes_{p}\vec{B})=0, \tag{20}\]
which is a sufficient condition for the quasiparticle excitation states (21) with different numbers of quasiparticles to be orthogonal. In the recent work of constructing a family of Hamiltonians with an exactly solvable subspace [2], the local orthogonality is always imposed. However, it is a rather strong condition, since the local orthogonality allows only _identical_ quasiparticles with momentum \(k=\pi\) to exist. This also means that a hidden spectrum generating algebra exists behind the model.
With these properties, the multiple spin-2 magnon excitation states are represented as
\[|\psi_{A,B^{n}}\rangle=Q^{n}|\psi_{A}\rangle, \tag{21}\]
in which the index \(n\) represents the number of quasiparticles running over \(n=1,\ldots,\lfloor N/2\rfloor\), due to the repulsive properties of quasiparticles. \(Q\) is the quasiparticle creation operator given by the summation of the local creation operator at each site:
\[Q=\sum_{x=1}^{N}e^{ikx}O_{x},\qquad O_{x}=\mathbf{1}\otimes\cdots\otimes \underset{x}{O}\otimes\cdots\otimes\mathbf{1}, \tag{22}\]
which is interpreted as the creation operator of the spin-2 magnon carrying the momentum \(k=\pi\).
### Periodic boundary case
In this subsection, we discuss the periodic boundary case. The first part of this section is devoted to a review of the known models, which are the frustration-free models [2]. In the latter part of this section, we give a generalization of the known results by removing the frustration-free condition, which turns out to be important for the boundary generalization, as we will see in the next subsection.
In [2], it has been found that the sufficient conditions for the subspace \(W\) to be the solvable subspace of the Hamiltonian are given by _the frustration-free condition_:
\[h\vec{A}\otimes_{p}\vec{A}=0 \tag{23}\]
and _the eigenvalue condition_:
\[h(\vec{B}\otimes_{p}\vec{A}+e^{ik}\vec{A}\otimes_{p}\vec{B})=\mathcal{E}(\vec {B}\otimes_{p}\vec{A}+e^{ik}\vec{A}\otimes_{p}\vec{B}). \tag{24}\]
The first condition makes the vacuum state (12) be the zero-energy state, although it is not necessarily the ground state. The conditions (23), (24) are equivalent to the spectrum generating algebra in the subspace \(W\):
\[\Big{(}[H,\,Q]-2\mathcal{E}Q\Big{)}|\psi_{A,B^{n}}\rangle=0,\quad n=0,1,\ldots,\left\lfloor\frac{N}{2}\right\rfloor. \tag{25}\]
Therefore, the energy spectrum of the Hamiltonian in \(W\) shows the equally-spaced structure:
\[H|\psi_{A,B^{n}}\rangle=2n\mathcal{E}|\psi_{A,B^{n}}\rangle,\quad n=0,\ldots, \left\lfloor\frac{N}{2}\right\rfloor, \tag{26}\]
which is understood also as a consequence of the identical-particle nature of the spin-2 magnons. One thing which was not noted in [2] is that the quasiparticle excitation states (21) under the periodic boundary provide energy eigenstates of the Hamiltonian only when the system consists of an even number \(N\) of sites.
The solution to the frustration-free condition and the eigenvalue condition can be found as the local Hamiltonian (27),
which contains essentially three free parameters, up to an overall factor of the Hamiltonian, if one normalizes the quasiparticle excitation states (21). This class of models includes the AKLT model, realized by choosing \(h_{11}^{11}/h_{00}^{00}=2/3\) and \(a_{0}=-\sqrt{2}a_{1}=-a_{2}=\sqrt{2/3}\), which perfectly explains the emergence of the embedded equally-spaced energy spectrum obtained by the numerical test [12].
Now the question is how much we can generalize a model in such a way that does not destroy the block diagonal structure (1), _i.e._ that keeps \(W\) as its invariant subspace. One possibility is to generalize the sufficient conditions (23) and (24) for \(W\) to be the invariant subspace of the Hamiltonian. First, we replace the frustration-free condition with _the generalized frustration-free condition_
\[h\vec{A}\otimes_{p}\vec{A}=\vec{A}\otimes_{p}\vec{A}^{\prime}-\vec{A}^{\prime} \otimes_{p}\vec{A}. \tag{28}\]
Here \(\vec{A}^{\prime}\) is another matrix-valued vector with two-by-two matrix elements. This generalization (28) reminds us of the idea of constructing the steady states of classically solvable stochastic processes such as the asymmetric simple exclusion process [46; 47; 48; 49]. Accordingly, we modify the eigenvalue condition (24) as
\[h(\vec{B}\otimes_{p}\vec{A}+e^{ik}\vec{A}\otimes_{p}\vec{B})=\vec{B}\otimes_{ p}\vec{Z}+e^{ik}\vec{X}\otimes_{p}\vec{B}, \tag{29}\]
where the matrix-valued vectors \(\vec{X}\) and \(\vec{Z}\) are set as
\[\vec{Z}-\vec{A}^{\prime}=\mathcal{E}^{\prime}(k)\vec{A} \tag{30}\] \[\vec{X}+\vec{A}^{\prime}=\mathcal{E}(k)\vec{A}. \tag{31}\]
Besides these relations, we keep the local orthogonality (20), which allows only \(k=\pi\) quasiparticles to exist.
The first condition (28) again makes the vacuum state (12) the zero-energy (but not necessarily the lowest-energy) eigenstate under the periodic boundary condition. It also requires the newly introduced matrix-valued vector \(\vec{A}^{\prime}\) to be
\[\vec{A}^{\prime}=\begin{pmatrix}b_{0}\sigma^{+}\\ b_{1}\sigma^{z}\\ b_{2}\sigma^{-}\end{pmatrix},\quad b_{0},b_{1},b_{2}\in\mathbb{C}, \tag{32}\]
where \(b_{2}\) is restricted by the condition \(b_{0}/a_{0}=b_{2}/a_{2}\). The generalized frustration-free condition (28), together with the generalized eigenvalue condition (29), produces the hidden spectrum generating algebra:
\[\Big{(}[H,\,Q]-(\mathcal{E}+\mathcal{E}^{\prime})Q\Big{)}|\psi_{A,B^{n}} \rangle=0,\quad n=1,\ldots,\left\lfloor\frac{N}{2}\right\rfloor, \tag{33}\]
which implies that the embedded equally-spaced energy spectrum
\[H|\psi_{A,B^{n}}\rangle=n(\mathcal{E}+\mathcal{E}^{\prime})|\psi_{A,B^{n}} \rangle,\quad n=1,\ldots,\left\lfloor\frac{N}{2}\right\rfloor \tag{34}\]
is not violated by generalizing the frustration-free condition.
The generalized conditions (28) and (29) are solved by the local Hamiltonian given by replacing the \((2,2)\) and \((8,8)\)-elements of (27) as \(h_{00}^{00}/2\to h_{00}^{00}/2+b_{0}/a_{0}-b_{1}/a_{1}\), and the \((4,4)\) and \((6,6)\)-elements as \(h_{00}^{00}/2\to h_{00}^{00}/2-b_{0}/a_{0}+b_{1}/a_{1}\). Thus, the local bulk Hamiltonian under the generalized frustration-free condition contains two more free parameters besides the three parameters of the frustration-free case, if one fixes the normalization of the quasiparticle excitation states (21). However, this increased freedom disappears in the presence of diagonal boundaries, when the four linearly independent vacua degenerate. We will see this point in the next subsection.
### Diagonal boundary case
When the open boundary condition is imposed, the boundary matrix in the matrix product state must be set as the rank 1 matrix. Here we write the boundary matrix in the most general expression:
\[K_{a}=|v_{\mathrm{R}}\rangle\langle v_{\mathrm{L}}|, \tag{35}\]
where the boundary vectors \(|v_{\mathrm{R}}\rangle\) and \(|v_{\mathrm{L}}\rangle\) are the vectors in \(\mathbb{C}^{2}\). Since the matrix product state takes different expressions depending on the choice of the boundaries, we explicitly denote the boundary vectors:
\[|\psi_{A}^{(v_{\rm L},v_{\rm R})}\rangle={}_{a}\langle v_{\mathrm{L}}|\vec{A}\otimes_{p}\vec{A}\otimes_{p}\cdots\otimes_{p}\vec{A}|v_{\mathrm{R}}\rangle_{a}. \tag{36}\]
Throughout this subsection, we only consider diagonal boundaries:
\[h_{\rm L}=\begin{pmatrix}\ell_{0}&0&0\\ 0&\ell_{1}&0\\ 0&0&\ell_{2}\end{pmatrix},\quad h_{\rm R}=\begin{pmatrix}r_{0}&0&0\\ 0&r_{1}&0\\ 0&0&r_{2}\end{pmatrix}. \tag{37}\]
Since the diagonal boundaries do not produce the quasiparticles, the expression for quasiparticle excitation state (21) is still valid.
Now we look for the Hamiltonians which have the invariant subspace \(W\) spanned by the matrix product state (12) and the quasiparticle excitations (21). For the bulk solvability in the subspace \(W\), the generalized frustration-free condition (28) and the generalized eigenvalue condition (29) must be satisfied. Besides, the boundary solvability requires the consistency conditions at the left and right boundaries for the vacuum state:
\[{}_{a}\langle v_{\rm L}|(h_{\rm L}\vec{A}-\vec{A}^{\prime})={\cal E }_{\rm L}\cdot{}_{a}\langle v_{\rm L}|\vec{A},\] \[(h_{\rm R}\vec{A}+\vec{A}^{\prime})|v_{\rm R}\rangle_{a}={\cal E }_{\rm R}\cdot\vec{A}|v_{\rm R}\rangle_{a}, \tag{38}\]
and for the one-quasiparticle excitation:
\[{}_{a}\langle v_{\rm L}|h_{\rm L}\vec{B}=({\cal E}+{\cal E}_{\rm L })\cdot{}_{a}\langle v_{\rm L}|\vec{B},\] \[h_{\rm R}\vec{B}|v_{\rm R}\rangle_{a}=({\cal E}^{\prime}+{\cal E }_{\rm R})\cdot\vec{B}|v_{\rm R}\rangle_{a}, \tag{39}\]
respectively.
The vacuum energy takes different values for the different choice of the boundary conditions. For instance, if we choose the diagonal boundaries which satisfy (38) and (39), the vacuum energy is given by
\[H_{\rm B}|\psi^{(v_{\rm L},v_{\rm R})}_{A}\rangle=({\cal E}_{\rm L }+{\cal E}_{\rm R})|\psi^{(v_{\rm L},v_{\rm R})}_{A}\rangle. \tag{40}\]
For this reason, we call \({\cal E}_{\rm L}\) and \({\cal E}_{\rm R}\) the left and right boundary energies, respectively. From the boundary solvability conditions (38) and (39), we find that the generalization of the frustration-free condition is important to obtain non-trivial boundary solutions, since the frustration free condition only allows the boundary interactions proportional to the identity matrix.
The solutions to (38) are classified into two types, each for the left and right boundaries. The first type of solutions does not restrict the boundary vectors:
\[{\cal E}_{\rm L}=\ell_{0}-\frac{b_{0}}{a_{0}}=\ell_{1}-\frac{b_{ 1}}{a_{1}}=\ell_{2}-\frac{b_{2}}{a_{2}},\quad\forall\,|v_{\rm L}\rangle_{a}, \tag{41}\] \[{\rm resp.}\quad{\cal E}_{\rm R}=r_{0}+\frac{b_{0}}{a_{0}}=r_{1} +\frac{b_{1}}{a_{1}}=r_{2}+\frac{b_{2}}{a_{2}},\quad\forall\,|v_{\rm R}\rangle _{a}, \tag{42}\]
and thus leads to degenerate vacua with degeneracy four. Indeed, the same degenerate structure is obtained in the ground state of the AKLT model in the presence of diagonal boundaries, since it is a special case of our model, as was mentioned in the previous subsection. The second type of solutions determines the boundary vectors uniquely:
\[{\cal E}_{\rm L}=\ell_{1}+\frac{b_{1}}{a_{1}}=\ell_{2}+\frac{b_{ 2}}{a_{2}}\neq\ell_{0}+\frac{b_{0}}{a_{0}},\quad|v_{\rm L}\rangle_{a}=|1 \rangle_{a} \tag{43}\]
or
\[{\cal E}_{\rm L}=\ell_{0}+\frac{b_{0}}{a_{0}}=\ell_{1}+\frac{b_{ 1}}{a_{1}}\neq\ell_{2}+\frac{b_{2}}{a_{2}},\quad|v_{\rm L}\rangle_{a}=|0 \rangle_{a}, \tag{44}\]
_resp._
\[{\cal E}_{\rm R}=r_{1}-\frac{b_{1}}{a_{1}}=r_{2}+\frac{b_{2}}{a_{ 2}}\neq r_{0}+\frac{b_{0}}{a_{0}},\quad|v_{\rm R}\rangle_{a}=|0\rangle_{a} \tag{45}\]
or
\[{\cal E}_{\rm R}=r_{0}-\frac{b_{0}}{a_{0}}=r_{1}+\frac{b_{1}}{a_{ 1}}\neq r_{2}+\frac{b_{2}}{a_{2}},\quad|v_{\rm R}\rangle_{a}=|1\rangle_{a}, \tag{46}\]
and therefore, does not produce degeneracy for the vacuum states. In any case, we observe that the total boundary energy is determined by the elements of the boundary Hamiltonians as
\[{\cal E}_{\rm L}+{\cal E}_{\rm R}=\ell_{1}+r_{1}. \tag{47}\]
In general, the degeneracy structure of the vacuum states does not survive for the quasiparticle excitation states. Only when we restrict the quasiparticle excitation energy as
\[{\cal E}+{\cal E}_{\rm L}=\ell_{0},\qquad resp.\quad{\cal E}^{ \prime}+{\cal E}_{\rm R}=r_{0}, \tag{48}\]
which is one of the solutions to (39), the quasiparticle excitation states with arbitrary boundary vectors can be the energy eigenstates, although the quasiparticles under this restriction carry zero energy. For the other solutions given by
\[|v_{\rm L}\rangle_{a}=|0\rangle_{a},\qquad resp.\quad|v_{\rm R} \rangle_{a}=|1\rangle_{a}, \tag{49}\]
neither the quasiparticle excitation states nor the vacuum states are degenerate, as was obtained above.
The hidden spectrum generating algebra of this model is produced by the bulk and boundary partial solvability conditions (28), (29), (38), and (39):
\[\Big{(}[H_{\rm B},\,Q]-({\cal E}+{\cal E}^{\prime})Q\Big{)}|\psi^{(v_{\rm L}, v_{\rm R})}_{A,B^{n}}\rangle=0. \tag{50}\]
Since the vacuum state \(|\psi^{(v_{\rm L},v_{\rm R})}_{A}\rangle\) has the eigenenergy given by \({\cal E}_{\rm L}+{\cal E}_{\rm R}\), the eigenenergy of the quasiparticle excitation states are obtained as
\[H_{\rm B}|\psi^{(v_{\rm L},v_{\rm R})}_{A,B^{n}}\rangle=\Big{(}n({\cal E}+{ \cal E}^{\prime})+{\cal E}_{\rm L}+{\cal E}_{\rm R}\Big{)}|\psi^{(v_{\rm L},v_ {\rm R})}_{A,B^{n}}\rangle. \tag{51}\]
Here, the number of quasiparticles \(n\) runs over \(n=1,\ldots,\left\lfloor\frac{N}{2}\right\rfloor\). In this way, the embedded equally-spaced
energy spectrum structure is not violated by the non-periodic boundaries.
As was noted in the previous subsection, the boundary solvability conditions reduce the degrees of freedom of the desired Hamiltonian. For instance, when the four linearly independent vacua have the same energy, _i.e._ the conditions (41) and (42) are satisfied by the boundary Hamiltonians, the local Hamiltonian is just given by the frustration-free local Hamiltonian (27) up to the constant \(\ell_{0}+r_{0}\) determined by the choice of the boundaries. That is, the energy spectrum in the solvable subspace matches that of the frustration-free case, shifted by \(\ell_{0}+r_{0}\).
### Off-diagonal boundary case
Unlike the periodic or diagonal boundary cases, the off-diagonal boundaries create and annihilate quasiparticles. Therefore, states with a fixed number of quasiparticles (21), including the vacuum state (12), are no longer eigenstates of the Hamiltonian. Instead, we assume a superposition of \(n\)-quasiparticle states as the eigenvector of the Hamiltonian:
\[|\psi^{(v_{\rm L},v_{\rm R})}_{A,B^{n}}\rangle=\sum_{n=0}^{[N/2]}c_{n}Q^{n} \,{}_{a}\langle v_{\rm L}|\vec{A}\otimes_{p}\cdots\otimes_{p}\vec{A}|v_{\rm R }\rangle_{a}. \tag{52}\]
The operator \(O\) is again the local spin-2 magnon creation operator \(O=(S^{+})^{2}\), which satisfies the repulsive properties (18) and (19). We also impose the local orthogonality (20), which allows only \(k=\pi\) identical quasiparticles to exist. We immediately notice that the superposition state (52) becomes an energy eigenstate only when its bulk energy vanishes:
\[\mathcal{E}+\mathcal{E}^{\prime}=0. \tag{53}\]
Besides, the boundary solvability conditions:
\[{}_{a}\langle v_{\rm L}|(-c_{n}h_{\rm L}\vec{B}+c_{n-1}\vec{A}^{ \prime})=\mathcal{E}_{\rm L}\cdot{}_{a}\langle v_{\rm L}|(c_{n-1}\vec{A}), \tag{54}\] \[{}_{a}\langle v_{\rm L}|(c_{n}h_{\rm L}\vec{A})=\mathcal{E}_{\rm L }\cdot{}_{a}\langle v_{\rm L}|(-c_{n+1}\vec{B}), \tag{55}\]
and
\[((-1)^{N}c_{n}h_{\rm R}\vec{B}-c_{n-1}\vec{A}^{\prime})|v_{\rm R }\rangle_{a}=\mathcal{E}_{\rm R}\cdot(c_{n-1}\vec{A})|v_{\rm R}\rangle_{a}, \tag{56}\] \[c_{n}h_{\rm R}\vec{A}|v_{\rm R}\rangle_{a}=\mathcal{E}_{\rm R} \cdot((-1)^{N}c_{n+1}\vec{B})|v_{\rm R}\rangle_{a}, \tag{57}\]
are required in order for (52) to be the energy eigenstate.
We found that the only non-trivial solutions to the bulk solvability condition (28), (29) and boundary solvability (38), (39) are given by the boundary vectors \(|v_{\rm L}\rangle=|1\rangle\), \(|v_{\rm R}\rangle=|0\rangle\) and the boundary interactions
\[h_{\rm L}=\begin{pmatrix}0&0&\ell_{02}\\ 0&0&0\\ \ell_{02}^{*}&0&0\end{pmatrix},\qquad h_{\rm R}=\begin{pmatrix}0&0&r_{02}\\ 0&0&0\\ r_{02}^{*}&0&0\end{pmatrix} \tag{58}\]
under the restrictions on the boundary energies:
\[\mathcal{E}_{\rm L}=-\mathcal{E}_{\rm R}=\frac{b_{1}}{a_{1}} \tag{59}\]
and the ratios of the amplitudes:
\[\ell_{02}=-\mathcal{E}_{\rm L}\frac{c_{n+1}}{c_{n}},\qquad r_{02}= (-1)^{N}\mathcal{E}_{\rm R}\frac{c_{n+1}}{c_{n}}, \tag{60}\] \[\mathcal{E}_{\rm L}-\frac{\ell_{02}\ell_{02}^{*}}{\mathcal{E}_{\rm L }}\frac{c_{n}}{c_{n}^{*}}=-\mathcal{E}_{\rm R}+\frac{r_{02}r_{02}^{*}}{ \mathcal{E}_{\rm R}}\frac{c_{n}}{c_{n}^{*}}=\frac{b_{0}}{a_{0}}. \tag{61}\]
That is, the only eigenvector consisting of the spin-2 magnon excitations (17) is the zero-energy eigenstate:
\[H_{\rm B}^{(1,0)}|\psi_{A,B^{n}}\rangle=0. \tag{62}\]
Therefore, the solvable subspace of the off-diagonal boundary model is one-dimensional. Interestingly, the solvable state (52) shows up in the middle of the energy spectrum for a generic choice of the off-diagonal boundary conditions (Appendix A). This implies that the superposition of quasiparticle excitations is again a candidate for a quantum many-body scar.
## IV Exactly solvable subspace with quasiparticle interactions
In the previous section, we discussed models whose solvable subspace comes from a hidden spectrum generating algebra. The energy spectrum in the solvable subspace then shows an equally-spaced structure, and the subspace is spanned by identical quasiparticle excitation states with \(k=\pi\). In this section, we propose a new construction of a solvable subspace based on Bethe-ansatz solvability. The idea is to remove the local orthogonality (20), which is a sufficient condition for quasiparticle excitation states with different numbers of quasiparticles to be orthogonal. In fact, the orthogonality of the energy eigenstates is already guaranteed by their different eigenenergies, since we have chosen the Hermitian Hamiltonian (9). Instead, we impose on the Hamiltonian an algebraic structure that produces integrability in the subspace \(W\). Then the matrix product state (12) and the quasiparticle excitation states
\[|\psi_{A,B^{n}}(\{k_{j}\})\rangle=\sum_{1\leq x_{1}<x_{2}<\cdots<x_{n} \leq N}f(x_{1},x_{2},\ldots,x_{n})\,\mathrm{tr}_{a}(\vec{A}\otimes_{p}\cdots \otimes_{p}\vec{B}_{x_{1}}\otimes_{p}\cdots\otimes_{p}\vec{B}_{x_{n}}\otimes_{p} \cdots\otimes_{p}\vec{A}), \tag{63}\] \[f(x_{1},x_{2},\ldots,x_{n})=\sum_{P\in\mathfrak{S}_{n}}A_{n}(P)\, e^{i\sum_{j=1}^{n}k_{P(j)}x_{j}}, \tag{64}\]
which are a generalization of (21), become energy eigenstates in the subspace \(W\) if the set of quasiparticle momenta \(\{k_{j}\}\) satisfies the Bethe equations. Here \(\mathfrak{S}_{n}\) denotes the symmetric group of degree \(n\). The amplitude \(A_{n}(P)\) is determined by the boundary condition. For instance, the periodic boundary requires \(A_{n}(P)\) to satisfy
\[\frac{A_{n}(P\tau_{j,j+1})}{A_{n}(P)}=-\frac{1+e^{i(k_{P(j)}+k_{P(j+1)})}-2e^{ ik_{P(j+1)}}}{1+e^{i(k_{P(j)}+k_{P(j+1)})}-2e^{ik_{P(j)}}}, \tag{65}\]
in which \(\tau_{j,j+1}\) represents the transposition between the labels \(j\) and \(j+1\). The quasiparticle excitation states (63) look similar to the Bethe states, but of course, they are not the Bethe states in the normal sense.
The explicit forms of the Bethe equations depend on the model. Here we impose spin-1/2 isotropic Heisenberg (\(XXX\))-like relations on the Hamiltonian in \(W\):
\[h\vec{A}\otimes_{p}\vec{A}=0 \tag{66}\] \[h\vec{A}\otimes_{p}\vec{B}=-\vec{A}\otimes_{p}\vec{B}+\vec{B} \otimes_{p}\vec{A}\] (67) \[h\vec{B}\otimes_{p}\vec{A}=\vec{A}\otimes_{p}\vec{B}-\vec{B} \otimes_{p}\vec{A}\] (68) \[h\vec{B}\otimes_{p}\vec{B}=0, \tag{69}\]
although the Hamiltonian consists of \(s=1\) spins. The first relation is nothing but the frustration-free condition (23) imposed also in the previous subsection, and the last relation represents the repulsive property (19), which forbids quasiparticles from occupying adjacent sites. The \(XXX\)-like relations (66)-(69) simultaneously determine the Hamiltonian and the local quasiparticle creation operator. The local quasiparticle creation operator that solves (66)-(69) is given by the diagonal matrix:
\[O=\begin{pmatrix}\frac{b_{0}}{a_{0}}&0&0\\ 0&\frac{b_{1}}{a_{1}}&0\\ 0&0&\frac{b_{0}}{a_{0}}\end{pmatrix}, \tag{70}\]
which apparently allows double occupation of quasiparticles, since the repulsive relation (18) does not hold for the above choice of \(O\). This also indicates that the quasiparticles in the subspace \(W\) interact with each other. The relations (66)-(69) are satisfied if the local Hamiltonian is given by
\[h=\begin{pmatrix}h_{00}^{00}&0&0&0&0&0&0&0&0\\ 0&-1&0&-1&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&-1&0&-1&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&-1&0&-1&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&-1&0&-1&0\\ 0&0&0&0&0&0&0&0&h_{00}^{00}\end{pmatrix} \tag{71}\]
which leaves the single parameter \(h_{00}^{00}\) free, and therefore this local Hamiltonian is not in the class of known integrable models [50; 51; 52].
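A minimal numerical sketch of this construction is given below; it is an illustration rather than part of the original derivation, and it assumes the basis ordering \(|s_{1}s_{2}\rangle\) with \(s=0,1,2\) and two-site index \(3s_{1}+s_{2}\), as well as a periodic chain.

```python
import numpy as np

def local_h(h00: float) -> np.ndarray:
    """Two-site spin-1 local Hamiltonian of Eq. (71); h00 = h_{00}^{00} is the free parameter."""
    h = np.zeros((9, 9))
    h[0, 0] = h[8, 8] = h00  # the E^{0,0} x E^{0,0} and E^{2,2} x E^{2,2} entries
    # the two (-1) blocks act on {|0,1>, |1,0>} and {|1,2>, |2,1>} (index = 3*s1 + s2)
    for i, j in [(1, 1), (1, 3), (3, 1), (3, 3), (5, 5), (5, 7), (7, 5), (7, 7)]:
        h[i, j] = -1.0
    return h

def two_site_term(h: np.ndarray, x: int, y: int, N: int, d: int = 3) -> np.ndarray:
    """Embed a two-site operator h acting on sites (x, y) of an N-site chain."""
    op = h.reshape(d, d, d, d)                      # (out_x, out_y, in_x, in_y)
    full = np.zeros((d**N, d**N))
    for col in range(d**N):
        s = list(np.unravel_index(col, [d] * N))
        for ox in range(d):
            for oy in range(d):
                amp = op[ox, oy, s[x], s[y]]
                if amp != 0.0:
                    t = list(s)
                    t[x], t[y] = ox, oy
                    full[np.ravel_multi_index(t, [d] * N), col] += amp
    return full

def chain_hamiltonian(N: int, h00: float, periodic: bool = True) -> np.ndarray:
    """H = sum_x h_{x,x+1} on N spin-1 sites (with the wrap-around bond if periodic)."""
    h = local_h(h00)
    H = sum(two_site_term(h, x, x + 1, N) for x in range(N - 1))
    if periodic:
        H = H + two_site_term(h, N - 1, 0, N)
    return H
```

For \(N=4\) this is an \(81\times 81\) matrix that can be diagonalized directly, which is the kind of exact-diagonalization check reported in Appendix B.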
Then the Hamiltonian (71) has an invariant subspace spanned by the Bethe-like states (63) if the set of quasiparticle momenta satisfies the Bethe equations for the spin-1/2 \(XXX\) model:
\[e^{ik_{j}N}=\prod_{\begin{subarray}{c}\ell=1\\ \ell\neq j\end{subarray}}^{n}\frac{e^{i(k_{j}+k_{\ell})}+1-2e^{ik_{j}}}{e^{i(k_{j}+k_{\ell})}+1-2e^{ik_{\ell}}},\qquad j=1,\ldots,n. \tag{72}\]
Of course, the energy spectrum of the Hamiltonian matches the energy spectrum of the spin-1/2 \(XXX\) model:
\[H|\psi_{A,B^{n}}(\{k_{j}\})\rangle=\Big{(}2\sum_{j=1}^{n}\cos k_{j}-\frac{n}{2}\Big{)}|\psi_{A,B^{n}}(\{k_{j}\})\rangle \tag{73}\]
in the subspace \(W\), which means that the equally-spaced energy spectrum structure is broken in \(W\). This also implies that there is no hidden spectrum generating algebra behind the model, and therefore no revival phenomena are expected for this model. The embedded spin-1/2 energy spectrum also indicates that the energy spectrum becomes a gapless continuum ranging to infinity in the thermodynamic limit, as in the case of the spin-1/2 \(XXX\) model, although the dimension of the subspace \(\dim W<2^{N}\) becomes negligibly small in the thermodynamic limit \(N\rightarrow\infty\), compared to the dimension of its complement \(\dim W^{c}>3^{N}-2^{N}\). This is a very different structure from most of the known energy spectra of quantum many-body scars, which, by construction, often stay discrete even in the thermodynamic limit.
We have numerically checked the energy spectrum of the Hamiltonian (71) and indeed obtained the embedded energy spectrum of the spin-1/2 \(XXX\) model within the full energy spectrum (Appendix B). Remarkably, the embedded \(XXX\) energy spectrum is not violated by varying \(h_{00}^{00}\), the free parameter in the local Hamiltonian. That is, this scar subspace is robust against the perturbation \(E^{0,0}\otimes E^{0,0}+E^{2,2}\otimes E^{2,2}\).
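This robustness can be checked by exact diagonalization in the spirit of Appendix B. The sketch below reuses the helpers defined in the previous snippet; it is our own illustrative check, not the original numerics. Eigenvalues that appear in both spectra are insensitive to the perturbation and should include the embedded \(XXX\) levels of Eq. (73).

```python
import numpy as np

# assumes local_h / two_site_term / chain_hamiltonian from the previous sketch
N = 4
spec_a = np.linalg.eigvalsh(chain_hamiltonian(N, h00=0.0))
spec_b = np.linalg.eigvalsh(chain_hamiltonian(N, h00=1.2))

# levels common to both spectra are unaffected by varying h_{00}^{00},
# i.e. robust against the perturbation E^{0,0} x E^{0,0} + E^{2,2} x E^{2,2}
common = [e for e in spec_a if np.any(np.isclose(spec_b, e, atol=1e-8))]
print(f"{len(common)} of {len(spec_a)} levels are insensitive to h00")
print(np.round(np.sort(common), 6))
```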
It should be noted that the model (71) does not show the same degeneracy as that of the spin-\(1/2\) \(XXX\) model, since the subspace \(W\) includes only the Bethe-like states, which correspond to the highest weight states of \(\mathfrak{sl}_{2}\) for the \(XXX\) model. Unfortunately, we have not yet succeeded in constructing the operator for the model (71) that corresponds to the \(S_{\text{tot}}^{-}=\sum_{x=1}^{N}S_{x}^{-}\) operator of the spin-\(1/2\) \(XXX\) model.
## V Conclusion and discussion
We have proposed a new construction of non-integrable spin chains with an exactly solvable subspace. The construction is based on the Bethe-ansatz method, which produces an invariant subspace without relying on a spectrum generating algebra, and therefore the energy spectrum of the Hamiltonian in the solvable subspace is not equally spaced. As an example, we have constructed a spin-\(1\) chain with an \(XXX\)-type solvable subspace, whose Hamiltonian shows the embedded energy spectrum of the \(XXX\) model within its full energy spectrum. The subspace spanned by known quantum many-body scars often shows a discrete energy spectrum consisting of a finite number of eigenenergies [27] or infinitely many but equally-spaced eigenenergies [2]. On the other hand, the model proposed in this paper shows a continuous energy spectrum in the solvable subspace in the thermodynamic limit, as it coincides with the energy spectrum of the \(XXX\) model, although the dimension of the subspace is negligibly small in the thermodynamic limit. This is the first difference that distinguishes our model from scar subspaces emerging from a spectrum generating algebra. Consequently, the absence of a hidden spectrum generating algebra in the subspace results in the absence of the revival phenomena that are often referred to as a common signature of models with quantum many-body scars. The second difference concerns the nature of the quasiparticles in the solvable subspace. The known solvable subspaces produced by a spectrum generating algebra are spanned by non-interacting quasiparticle excitation states, while the Bethe-ansatz solvable subspace constructed in this paper is spanned by interacting quasiparticle excitation states. These uncommon properties of the solvable subspace suggest that our model is a genuinely new candidate for a model hosting quantum many-body scars.
We have also constructed partially solvable spin chains with boundary magnetic fields. The partial solvability of this class comes from the hidden spectrum generating algebra, provided that the boundary Hamiltonians are diagonal. That is, diagonal boundaries do not destroy the structure of the spectrum generating algebra behind the model. Consequently, the solvable subspace consists of non-interacting quasiparticles, and on it the Hamiltonian shows an equally-spaced energy spectrum. The situation is slightly different for the off-diagonal boundary case, since the solvable subspace of the off-diagonal boundary model is one-dimensional. However, the solvable state lies in the middle of the spectrum and can still be a candidate for a quantum many-body scar.
Although we have provided a completely new construction of partially solvable models based on the algebraic structure of conventional integrable systems, we do not yet know whether the exactly solvable energy eigenstates found in this paper are genuine quantum many-body scars. Therefore, the first goal of future work is to prove that our exactly solvable energy eigenstates are macroscopically distinguished from the thermal state, or that they show entanglement entropies smaller than those of thermal states, which obey the volume law. In fact, we are already close to a proof of the second statement, since our solvable subspace is spanned by the matrix product state and the quasiparticle excitation states [42; 43; 44; 26; 45], both of which are expected to have relatively small entanglement entropy as long as the bond dimension is small enough.
The second thing we need to verify is the non-integrability of the model constructed in this paper. This is a rather abstract problem that is difficult to solve. Of course, we have checked that our model is not included in the class of known integrable spin chains [50; 51; 52], but this does not establish non-integrability unless all possibilities of unknown integrable spin chains are ruled out. One possible approach to this hard-looking problem is to check the energy level statistics, since the energy levels obey different statistics, _i.e._ the Poisson distribution or the Wigner-Dyson distribution, depending on whether the model is integrable or non-integrable, respectively.
From the mathematical point of view, it is a mystery where the partial integrability of our model comes from. We imposed the \(XXX\)-like relations on the matrix-valued vectors in the quasiparticle excitation states and, somewhat accidentally, found a solution, but of course this does not mean that we can always find a solution to the analogous algebraic relations associated with other integrable models such as the \(XXZ\) model, the supersymmetric \(t\)-\(J\) model, the Hubbard model, and so on. It would be nice to explain the existence of these solutions from the Yang-Baxter equation, which is sometimes used as the definition of quantum integrability.
###### Acknowledgements.
C. M. is supported by JSPS KAKENHI Grant Numbers JP18K13465 and JP23K03244.
## Appendix A Energy spectrum of the \(N=4\) Hamiltonian under off-diagonal boundaries
Here we give an example in which the solvable zero-energy eigenstate (highlighted in bold) shows up in the middle of the spectrum in the presence of the off-diagonal boundaries. The boundary Hamiltonians are chosen as \(\ell_{02}=3,r_{02}=-3\). The bulk parameters are chosen as \(h_{00}^{00}=5\) with \(h_{11}^{11}/h_{00}^{00}=2/3\), which is the AKLT point. Accordingly, the bulk Hamiltonian is set to satisfy the frustration-free condition, with the choice \(a_{0}=-\sqrt{2}a_{1}=-a_{2}=\sqrt{2/3}\).
## Appendix B Energy spectrum of the \(N=4\) Hamiltonian with the Bethe-ansatz solvable subspace
Here we give the energy spectrum of the Hamiltonian with the Bethe-ansatz solvable subspace. The parameter \(h_{00}^{00}\) is chosen as \(h_{00}^{00}=0,0.3,\) and \(1.2\), respectively. The embedded spin-\(1/2\) \(XXX\) energy spectrum (highlighted in bold) is obtained, and it is not affected by varying \(h_{00}^{00}\).
|
2309.07794 | Improving Multimodal Classification of Social Media Posts by Leveraging
Image-Text Auxiliary Tasks | Effectively leveraging multimodal information from social media posts is
essential to various downstream tasks such as sentiment analysis, sarcasm
detection or hate speech classification. Jointly modeling text and images is
challenging because cross-modal semantics might be hidden or the relation
between image and text is weak. However, prior work on multimodal
classification of social media posts has not yet addressed these challenges. In
this work, we present an extensive study on the effectiveness of using two
auxiliary losses jointly with the main task during fine-tuning multimodal
models. First, Image-Text Contrastive (ITC) is designed to minimize the
distance between image-text representations within a post, thereby effectively
bridging the gap between posts where the image plays an important role in
conveying the post's meaning. Second, Image-Text Matching (ITM) enhances the
model's ability to understand the semantic relationship between images and
text, thus improving its capacity to handle ambiguous or loosely related
modalities. We combine these objectives with five multimodal models across five
diverse social media datasets, demonstrating consistent improvements of up to
2.6 points F1. Our comprehensive analysis shows the specific scenarios where
each auxiliary task is most effective. | Danae Sánchez Villegas, Daniel Preoţiuc-Pietro, Nikolaos Aletras | 2023-09-14T15:30:59Z | http://arxiv.org/abs/2309.07794v2 | # Improving Multimodal Classification of Social Media Posts by Leveraging Image-Text Auxiliary tasks
###### Abstract
Effectively leveraging multimodal information from social media posts is essential to various downstream tasks such as sentiment analysis, sarcasm detection and hate speech classification. However, combining text and image information is challenging because of the idiosyncratic cross-modal semantics with hidden or complementary information present in matching image-text pairs. In this work, we aim to directly model this by proposing the use of two auxiliary losses jointly with the main task when fine-tuning any pre-trained multimodal model. Image-Text Contrastive (ITC) brings image-text representations of a post closer together and separates them from different posts, capturing underlying dependencies. Image-Text Matching (ITM) facilitates the understanding of semantic correspondence between images and text by penalizing unrelated pairs. We combine these objectives with five multimodal models, demonstrating consistent improvements across four popular social media datasets. Furthermore, through detailed analysis, we shed light on the specific scenarios and cases where each auxiliary task proves to be most effective.
## 1 Introduction
Multimodal content including text and images is prevalent in social media platforms (Vempala and Preotiuc-Pietro, 2019). Content of both text and images has been widely used to improve upon single modality results in various downstream tasks such as sentiment analysis (Niu et al., 2016; Ju et al., 2021), hate speech detection (Botelho et al., 2021; Hossain et al., 2022; Cao et al., 2022), sarcasm detection (Cai et al., 2019; Xu et al., 2020; Liang et al., 2022), and named entity recognition (Moon et al., 2018; Sun et al., 2020).
Existing multimodal classification methods for social media tasks often combine text and image representations obtained from pre-trained encoders. Generally, they can be divided into: (1) _single-stream_ models where image and text representations are concatenated initially and used as input into the encoder such as Unicoder (Li et al., 2020), VisualBERT (Li et al., 2019) and ViLT (Kim et al., 2021); and (2) _dual-stream_ approaches where image and text features are encoded separately and then combined via a fusing mechanism such as concatenation or attention, for example ViLBert (Lu et al., 2019) and PaLI (Chen et al., 2022). These models are usually pre-trained on standard vision-language data such as image captions where strong image-text connections are assumed, i.e., captions that explicitly describe a corresponding image (Hessel and Lee, 2020; Xu and Li, 2022).
Modeling text-image pairs from social media posts presents additional challenges. For instance, capturing cross-modal semantics that are not immediately apparent is challenging. Figure 1 (top) shows an example where the text refers specifically to the mood of the person in the photo (i.e., "unhappy feeling" _when @USER gets more followers..._). Moreover, cases where the visuals are weakly related to the text are also prevalent (Sanchez Villegas and Aletras, 2021; Xu et al., 2022). For instance, Figure 1 (bottom) shows an
image of a hen accompanied by the text _My baby approves_. It is difficult to draw a direct relationship between the two without any additional context.

Figure 1: Examples of image-text relations in social media posts from Vempala and Preotiuc-Pietro (2019).
In this work, we propose using two tasks - Image-Text Contrastive (ITC) and Image-Text Matching (ITM) - as auxiliary losses during fine-tuning for social media post classification. ITC and ITM have so far only been used as pre-training objectives for multimodal models (Radford et al., 2021; Wang et al., 2021; Chen et al., 2022). ITC uses a contrastive loss (He et al., 2020; Li et al., 2021; Yu et al., 2022), while ITM involves a binary classification loss for image-text alignment (Chen et al., 2019; Tan and Bansal, 2019).
Our main contributions are as follows: (1) we present an extensive study on comparing multimodal models jointly fine-tuned with ITC and ITM covering both _single_- and _dual-stream_ approaches; (2) we show that models using ITC and ITM as auxiliary losses consistently improve their performance on four popular multimodal social media classification datasets; (3) we provide a comprehensive analysis that sheds light on the effectiveness of each auxiliary task and their combination.
## 2 Multimodal Auxiliary Tasks
### Image-Text Contrastive (ITC)
Modeling text-image pairs in social media posts involves capturing hidden cross-modal semantics (Vempala and Preotiuc-Pietro, 2019; Kruk et al., 2019). For instance, in Figure 1 (top) the visible mood of the person on the photo is related to the text of the post. Instead of directly matching images with textual descriptions (e.g., _a man wearing a helmet_), we aim to encourage the model to capture the underlying dependencies between the image and text within the posts.
For this purpose, we propose using the ITC objective (He et al., 2020; Li et al., 2021; Yu et al., 2022), which pushes towards a feature space in which the image and text representations of a post are brought closer together, while image and text representations that appear in different posts are pushed further apart. This is done by minimizing the distance between the image and text embeddings of the same post while maximizing the distance between image and text embeddings that correspond to different posts. Let \(L_{n}\) and \(I_{n}\) be the n-th (normalized) representations of the text and accompanying image of a post in a training batch. The cosine similarity of the pair \(L_{n}\) and \(I_{n}\) is maximized, while the cosine similarity of all other pairs (e.g., \(L_{n}\) and \(I_{m}\), where \(I_{m}\) is an image from a different post in the current batch) is minimized. Given \(N\) posts within a training batch, the ITC loss is defined as follows:
\[l_{ITC}=\frac{1}{2}(l_{1}+l_{2}) \tag{1}\] \[l_{1}=-\frac{1}{N}\sum_{n=1}^{N}\log\frac{\exp(L_{n}I_{n}^{T}/e^{\tau})}{\sum_{j=1}^{N}\exp(L_{n}I_{j}^{T}/e^{\tau})} \tag{2}\] \[l_{2}=-\frac{1}{N}\sum_{n=1}^{N}\log\frac{\exp(I_{n}L_{n}^{T}/e^{\tau})}{\sum_{j=1}^{N}\exp(I_{n}L_{j}^{T}/e^{\tau})} \tag{3}\]
\(\tau\) is a learnable temperature parameter to scale the logits (Jia et al., 2021).
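As an illustration, Eqs. (1)-(3) can be written in a few lines of PyTorch. This is a generic sketch rather than the authors' implementation; the scaling of the logits by \(1/e^{\tau}\) with a learnable \(\tau\) follows the description above, and the variable names are ours.

```python
import torch
import torch.nn.functional as F

def itc_loss(L: torch.Tensor, I: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    """Image-Text Contrastive loss for a batch of N posts.

    L, I: (N, d) text and image embeddings of the same N posts;
    tau: learnable scalar temperature, logits are scaled by 1 / exp(tau).
    """
    L = F.normalize(L, dim=-1)
    I = F.normalize(I, dim=-1)
    logits = L @ I.t() / torch.exp(tau)          # (N, N) pairwise cosine similarities
    targets = torch.arange(L.size(0), device=L.device)
    l1 = F.cross_entropy(logits, targets)        # text -> image direction, Eq. (2)
    l2 = F.cross_entropy(logits.t(), targets)    # image -> text direction, Eq. (3)
    return 0.5 * (l1 + l2)                       # Eq. (1)
```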
### Image-Text Matching (ITM)
In social media posts, unrelated or weakly related text-image pairs are also common (Hessel and Lee, 2020; Xu et al., 2022). For example, in Figure 1 (bottom), an image of a hen is accompanied by the text _My baby approved_ instead of a more descriptive text (e.g., _a close-up of a bird standing next to a fence_), as it would be expected in standard image captioning datasets. To address this, we propose using the ITM objective (Chen et al., 2019; Wang et al., 2021) during fine-tuning to understand the semantic correspondence between images and text. ITM involves a binary classification loss that penalizes the model when a given text and image do not appear together in a post. Let \(I_{n}\) and \(L_{n}\) be the image and text representation of the n-th post in a training batch, we randomly replace \(I_{n}\) with an image of another post from the current batch with a probability of \(0.5\) following (Wang et al., 2021; Kim et al., 2021). If \(I_{n}\) is replaced, then the image and text do not match, otherwise \(I_{n}\) and \(L_{n}\) match. Thus, the ITM loss corresponds to the cross-entropy loss for penalizing incorrect predictions, \(l_{ITM}=-\Sigma_{i=1}^{2}t_{i}log(p_{i})\) where \(t_{i}\) is the gold label (matched or mismatched) and \(p_{i}\) is the softmax probability for each label.
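A corresponding sketch of the ITM objective is given below. The in-batch image replacement with probability 0.5 follows the description above; `fuse` and `itm_head` stand for the model-specific fusion module and binary classification head, and are placeholders rather than a fixed API.

```python
import torch
import torch.nn.functional as F

def itm_loss(L: torch.Tensor, I: torch.Tensor, fuse, itm_head) -> torch.Tensor:
    """Image-Text Matching loss for a batch of N posts with (N, d) features."""
    N = L.size(0)
    swap = torch.rand(N, device=I.device) < 0.5          # replace image with prob. 0.5
    perm = torch.randperm(N, device=I.device)
    I_used = torch.where(swap.unsqueeze(-1), I[perm], I)
    same = perm == torch.arange(N, device=I.device)      # a "swap" may pick the same post
    labels = (~swap | same).long()                       # 1 = matched pair, 0 = mismatched
    logits = itm_head(fuse(L, I_used))                   # (N, 2) classification logits
    return F.cross_entropy(logits, labels)
```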
### Joint Fine-tuning Objectives
The loss function used during fine-tuning is a combination of the losses from the downstream classification task (cross-entropy loss or \(l_{CE}\)) and the two auxiliary training objectives (Section 3.3 presents all multimodal classifiers used in our experiments). Thus, the joint objective can be defined as: \(l_{C+M}=\lambda_{1}l_{CE}+\lambda_{2}l_{ITC}+\lambda_{3}l_{ITM}\), where \(\lambda_{1},\lambda_{2},\lambda_{3}\) are hyperparameters used to control the influence of each loss.
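Putting the pieces together, one joint fine-tuning step could look as follows. The model interface (`encode_text`, `encode_image`, `fuse`, `task_head`, `itm_head`, `log_tau`) and the λ values are illustrative placeholders, and the sketch reuses `itc_loss` and `itm_loss` from the snippets above.

```python
import torch.nn.functional as F

def training_step(model, batch, lambdas=(1.0, 0.1, 0.1)):
    """One step of joint fine-tuning: l_{C+M} = λ1*l_CE + λ2*l_ITC + λ3*l_ITM."""
    L = model.encode_text(batch["text"])        # (N, d) text features
    I = model.encode_image(batch["image"])      # (N, d) image features
    h = model.fuse(L, I)                        # multimodal representation h^{LI}
    l_ce = F.cross_entropy(model.task_head(h), batch["label"])
    l_itc = itc_loss(L, I, model.log_tau)
    l_itm = itm_loss(L, I, model.fuse, model.itm_head)
    lam1, lam2, lam3 = lambdas
    return lam1 * l_ce + lam2 * l_itc + lam3 * l_itm
```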
## 3 Experimental Setup
### Datasets
We experiment with four standard Twitter classification datasets in English: (1) **TIR** - text-image relationship categorization (Vempala and Preotiuc-Pietro, 2019); (2) **MVSA** - multi-view sentiment analysis (Niu et al., 2016); (3) **MHP** - multimodal hate speech detection (Gomez et al., 2020; Botelho et al., 2021); and (4) **MSD** - multimodal sarcasm detection (Cai et al., 2019).
We use the same data splits for MVSA, MHP and MSD as in the original papers. For TIR, instead of a 10-fold cross-validation, we randomly split the data into \(80\)%, \(10\)%, and \(10\)% for training, validation, and testing for consistency with the other tasks. Table 4 in the Appx. presents dataset statistics.
### Single Modality Methods
**Text-only.** We fine-tune two pre-trained models on each classification task: **BERT**(Devlin et al., 2019) and **Bernice**(DeLucia et al., 2022). Bernice is a BERT-based model pre-trained on a large-scale corpus of multilingual tweets. Separately, we experiment with few-shot (FS) prompting using **Flan-T5**(Chung et al., 2022) and **GPT-3**(Brown et al., 2020). For each dataset, we construct a few-shot prompt to include two randomly selected training examples for each class.1
Footnote 1: Appx. B includes the prompt templates.
**Image-only.** We fine-tune two pre-trained models: (1) **ResNet152**(He et al., 2016) and (2) **ViT**(Dosovitskiy et al., 2020), both pre-trained on the ImageNet dataset (Russakovsky et al., 2015).
### Multimodal Predictive Models
**Ber-ViT.** We use Bernice and ViT to obtain representations of the text (\(L\)) and image (\(I\)). We combine \(L\) and \(I\) in two ways: **Ber-ViT-Conc** appends the text and image vectors from the corresponding \(L\) and \(I\) [CLS] tokens to obtain the multimodal representation vector \(h^{LI}\); **Ber-ViT-Att** computes scaled dot-product attention with \(L\) as queries, and \(I\) as keys and values. \(h^{LI}\) is obtained by appending the [CLS] token from the text representation (\(L\)) and the [CLS] token from the attention layer. We fine-tune each model on each task by adding a classification layer.
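The two fusion strategies can be sketched as follows; the shapes and the use of position 0 as the [CLS] slot are our assumptions about the token layout, not a verbatim description of the authors' code.

```python
import torch

def fuse_conc(L_cls: torch.Tensor, I_cls: torch.Tensor) -> torch.Tensor:
    """Ber-ViT-Conc: concatenate the text and image [CLS] vectors, giving (B, 2d)."""
    return torch.cat([L_cls, I_cls], dim=-1)

def fuse_att(L_tok: torch.Tensor, I_tok: torch.Tensor) -> torch.Tensor:
    """Ber-ViT-Att: scaled dot-product attention with text tokens as queries and image
    tokens as keys/values; h^{LI} = [text CLS ; attended CLS]. Inputs: (B, seq, d)."""
    d = L_tok.size(-1)
    attn = torch.softmax(L_tok @ I_tok.transpose(1, 2) / d**0.5, dim=-1)
    attended = attn @ I_tok                      # (B, seq_L, d)
    return torch.cat([L_tok[:, 0], attended[:, 0]], dim=-1)
```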
**MMBT.** MMBT is a model that jointly fine-tunes pre-trained text and image encoders (Kiela et al., 2019). Image embeddings obtained from ResNet152 are concatenated with the word token embeddings and passed to a BERT-like transformer. The [CLS] token is used as the multimodal representation (\(h^{LI}\)) for classification.
**LXMERT.** LXMERT (Tan and Bansal, 2019) consists of three encoders and their corresponding outputs for vision \(I\in R^{m_{I}\times d_{k}}\), language \(L\in R^{m_{L}\times d_{k}}\) (where \(m_{L}\) is the sequence length and \(d_{k}\) the hidden size), and a multimodal vector \(h^{LI}\in R^{d_{k}}\). We max-pool the visual and text outputs to obtain the image and text vectors.
**ViLT.** We fine-tune ViLT (Kim et al., 2021) and extract the image and text vectors from the last hidden state of ViLT. The multimodal embedding \(h^{LI}\) corresponds to the first token from the last hidden state.
**ITC and ITM Inputs.** The ITC auxiliary task inputs are the corresponding text and image vectors for each model. The ITM auxiliary task input is the respective multimodal representation \(h^{LI}\).
### Evaluation
Results are obtained over three runs using different random seeds, reporting the average and standard deviation.2 We use weighted F1 for model evaluation, following the standard practice on the TIR and MHP datasets to manage class imbalance.3
Footnote 2: Table 6 in the Appx. includes the standard deviation.
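For completeness, this evaluation protocol (weighted F1 averaged over the three seeds) can be computed as in the short sketch below using scikit-learn; the variable names are ours.

```python
import numpy as np
from sklearn.metrics import f1_score

def evaluate_runs(gold, runs):
    """Weighted F1 averaged over several runs (one prediction list per random seed)."""
    scores = [f1_score(gold, preds, average="weighted") for preds in runs]
    return float(np.mean(scores)), float(np.std(scores))

# example: mean, std = evaluate_runs(test_labels, [preds_seed1, preds_seed2, preds_seed3])
```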
## 4 Results
### Performance Comparison
Table 1 presents the results for all classification models across datasets. Overall, we observe consistent performance improvements across all models when incorporating either the ITC, ITM or both auxiliary losses during fine-tuning. Across the MVSA, MHP and MSD datasets, the Ber-ViT-Att\({}_{C+M}\) model consistently achieves the best performance, with F1 scores of \(74.6\), \(78.0\), and \(89.7\) respectively. Generally, we observe that both the ITC and ITM objectives contribute to the performance improvements of Ber-ViT-Att. For instance, the average improvements of Ber-ViT-Att\({}_{\textit{ITC}}\) and Ber-ViT-Att\({}_{\textit{ITM}}\) are 0.8 and 1.2 respectively, while the average improvement of Ber-ViT-Att\({}_{C+M}\) is 1.4. These findings indicate that _dual-stream_ approaches are effective in leveraging information from image-text auxiliary tasks. This aligns with prior research by
Kiela et al. (2019) which finds that unimodally pre-trained models can adapt to new multimodal tasks with less labeled data compared to _single-stream_ pre-trained models (e.g., ViLT). The performance gap between _dual-stream_ and _single-stream_ approaches is narrower on the TIR dataset, where ViLT\({}_{ITM}\) achieves \(55.7\) F1 score and Ber-ViT-Att\({}_{ITM}\) obtains \(55.9\). We believe this is likely due to the importance of visual information for this task (i.e., predicting the semiotic relationship between images and text), which is better aligned with ViLT as a visual-based model. This observation is reinforced by the fact that TIR is the only dataset where image-only models outperform text-only models, with ViT achieving \(51.4\) F1 score, while Bernice achieves only \(38.9\).
### Analysis
We analyze the predictions of Ber-ViT-Att in TIR to provide insights on when each auxiliary task is more useful (Table 2). We find that when the text is represented on the image, Ber-ViT-Att\({}_{C+M}\) obtains the best performance, especially when the visual content does not contribute to the meaning of the post. We observe that \(80.2\)% of the tweets are correctly classified, representing a substantial improvement over the baseline model, Ber-ViT-Att, where only \(59.3\)% of the tweets are correctly classified. When text is not represented on the image, we find that Ber-ViT-Att\({}_{ITC}\) performs best when the visual content is relevant, with \(59.3\)% of the tweets correctly classified compared to \(49.2\)% with Ber-ViT-Att. Finally, in cases where the image does not enhance the semantic meaning, Ber-ViT-Att\({}_{ITM}\) exhibits the highest performance, correctly classifying \(65\)% of the tweets.
## 5 Conclusion
We proposed two auxiliary losses to be used when fine-tuning multimodal models for social media classification. Image-Text Contrastive (ITC) encourages the model to capture the underlying dependencies in image-text posts while Image-Text Matching (ITM) ensures image-text alignment. Our results show consistent improvement in predictive performance upon the inclusion of these objectives. Our approach can easily be applied to any existing architectures.
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline **Model** & **TIR** & **MVSA** & **MHP** & **MSD** & \(\Delta\) \\ \hline Majority Class & 16.0 & 59.8 & 53.4 & 45.2 & - \\ \hline \hline **Text-only** & & & & & \\ \hline BERT & 37.2 & 70.1 & 73.3 & 83.9 & - \\ Bernice & 38.9 & 71.6 & 73.6 & 84.5 & - \\ Flan-T5 (FS prompt) & 3.8 & 58.9 & 46.5 & 59.6 & - \\ GPT-3 (FS prompt) & 16.3 & 55.9 & 58.2 & 69.6 & - \\ \hline \hline **Image-only** & & & & & \\ \hline ResNet152 & 48.2 & 63.8 & 51.8 & 46.9 & - \\ ViT & 51.4 & 68.2 & 57.2 & 71.5 & - \\ \hline \hline **Multimodal (base models)** & & & & & \\ \hline Ber-ViT-Conc & 43.6 & 70.4 & 76.6 & 88.8 & - \\ Ber-ViT-Att & 53.7 & 72.1 & 76.8 & 88.8 & - \\ MMBT & 53.2 & 72.4 & 74.5 & 83.2 & - \\ LXMERT & 51.3 & 68.2 & 70.7 & 81.9 & - \\ \hline \end{tabular}
\end{table}
Table 1: Weighted F1 scores of the single-modality baselines and the multimodal base models across the four datasets. The auxiliary-loss variants (ITC, ITM, and their combination C+M) and their average improvements (\(\Delta\)) over each base model are discussed in Section 4.1.
### Limitations
In this section, we list the limitations of our work. First, the datasets used in our experiments are solely in English. This choice allows for consistency and comparability across the datasets, but it does not test the generalizability of our findings to other languages. In future work, we plan to extend our research to a multilingual setting to address this limitation. Second, three of the datasets employed in our experiments are relatively small, containing fewer than 5,000 examples each. This relatively small scale is typical for social media datasets as data for specific tasks can be difficult to collect (e.g., data must follow specific criteria and may require domain experts to assign labels). We include a larger dataset, MSD, which comprises 24,635 examples, allowing for comparison and evaluation on a larger data scale. The effectiveness of the models incorporating auxiliary tasks depends on the underlying base model, although our approach can easily be adapted to new models. Finally, the inclusion of auxiliary tasks in our models introduces an increase in training time. For instance, the training time for Ber-ViT-Att on the TIR dataset is approximately 1.5 hours on an Nvidia A100 GPU. However, when incorporating the auxiliary tasks (Ber-ViT-Att\({}_{C+M}\)), the training time extends to around 2.5 hours, a 66% relative increase in training time.
|
2309.13939 | The Time Traveler's Guide to Semantic Web Research: Analyzing Fictitious
Research Themes in the ESWC "Next 20 Years" Track | What will Semantic Web research focus on in 20 years from now? We asked this
question to the community and collected their visions in the "Next 20 years"
track of ESWC 2023. We challenged the participants to submit "future" research
papers, as if they were submitting to the 2043 edition of the conference. The
submissions - entirely fictitious - were expected to be full scientific papers,
with research questions, state of the art references, experimental results and
future work, with the goal to get an idea of the research agenda for the late
2040s and early 2050s. We received ten submissions, eight of which were
accepted for presentation at the conference, that mixed serious ideas of
potential future research themes and discussion topics with some fun and irony.
In this paper, we intend to provide a survey of those "science fiction"
papers, considering the emerging research themes and topics, analysing the
research methods applied by the authors in these very special submissions, and
investigating also the most fictitious parts (e.g., neologisms, fabricated
references). Our goal is twofold: on the one hand, we investigate what this
special track tells us about the Semantic Web community and, on the other hand,
we aim at getting some insights on future research practices and directions. | Irene Celino, Heiko Paulheim | 2023-09-25T08:20:06Z | http://arxiv.org/abs/2309.13939v1 | The Time Traveler's Guide to Semantic Web Research: Analyzing Fictitious Research Themes in the ESWC "Next 20 Years" Track
###### Abstract
What will Semantic Web research focus on in 20 years from now? We asked this question to the community and collected their visions in the "Next 20 years" track of ESWC 2023. We challenged the participants to submit "future" research papers, as if they were submitting to the 2043 edition of the conference. The submissions - entirely fictitious - were expected to be full scientific papers, with research questions, state of the art references, experimental results and future work, with the goal to get an idea of the research agenda for the late 2040s and early 2050s. We received ten submissions, eight of which were accepted for presentation at the conference, that mixed serious ideas of potential future research themes and discussion topics with some fun and irony.
In this paper, we intend to provide a survey of those "science fiction" papers, considering the emerging research themes and topics, analysing the research methods applied by the authors in these very special submissions, and investigating also the most fictitious parts (e.g., neologisms, fabricated references). Our goal is twofold: on the one hand, we investigate what this special track tells us about the Semantic Web community and, on the other hand, we aim at getting some insights on future research practices and directions.
Semantic Web, Future Directions, Design Fiction
## 1 Introduction
The original paper by Tim Berners-Lee envisioning the Semantic Web in 2001 [2] included a future scenario in which structured knowledge and web technologies provided effective solutions to everyday problems. More than 20 years later, a lot changed and evolved in the Semantic Web community and, even if that _exact_ scenario did not become true as it was originally conceived, that vision has been the basic inspiration for our entire research field and the starting point for all the achievements in the Semantic Web realm and in related areas as well.
In its 2023 edition, the Extended Semantic Web Conference (ESWC) celebrated its 20th anniversary, and during the conference there was the chance to reflect on what happened during the previous 20 years with a dedicated panel.1 The ESWC 2023 general chair - Catia Pesquita - decided to provide room to discuss not only the past, but also the future of research in our community. For this reason, the "Next 20 years" Special Track was introduced. When appointed as chairs for this track, we first thought about inviting vision papers, but then decided to follow a more experimental route, inviting _future research papers_ instead.
Footnote 1: Cf. [https://2023.esvc-conferences.org/panel/](https://2023.esvc-conferences.org/panel/)
However, 20 years is quite a long period of time, and it may be hard to predict what the actual trends will be within such a temporal span; asking authors to envision the change over a period as long as 20 years is quite challenging. In a tongue-in-cheek statement in 1999, science fiction author Douglas Adams said [1]:
1. _Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works._
2. _Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it._
3. _Anything invented after you're thirty-five is against the natural order of things._
Essentially, all attendees of ESWC 2023 will be over thirty-five in 20432, so this task boils down to predicting very radical new inventions and changes. This is the reason why, in preparing the call for papers of the "Next 20 years" track, we decided to take a step forward into the future.
Footnote 2: To the best of the authors’ knowledge, there were no attendees aged 15 or below at ESWC 2023.
We were inspired by Design Fiction [3, 4], a practice that mixes design, fiction, narratives, and speculations to create evocative "artifacts" that are aimed to represent the contexts and possible outcomes of change. As Bruce Sterling effectively explained [17], Design Fiction deliberately uses a set of prototypes (stories and artifacts that may support a story - in our case scientific papers) in order to force an audience to "suspend their disbelief" about the future, thus being temporarily put in a different conceptual space (in our case, the future status of Semantic Web research).
Therefore, in the Call for Papers of the "Next 20 years" Special Track of ESWC, we invited the community to submit fictitious research papers, as if they were actually prepared for ESWC 2043. The papers were expected to look and feel like a real paper, including research questions, references, and state of the art (as of 2043), experimental results and possibly newer evaluation metrics, and of course future work (i.e., future future work), with the goal to get an idea of the research agenda for the late 2040s and early 2050s. We also encouraged the prospective authors to have their papers co-authored by an AI, thus imagining not only future research topics, but also future research practices.
To solicit an "out of the box" reflection, in the call for papers we also decided to illustrate the changes in the field over 20 years, by looking back at the past 20 years and at the key innovations brought forward by the Semantic Web research community. Some of those include:
* OWL (2004) and SPARQL (2008)
* FOAF (2004) and schema.org (2011)
* DBpedia (2007), Freebase (2007), Wikidata (2012)
* LOD Cloud (2007)
* R2RML (2012) and SHACL (2015)
* RESCAL (2011), TransE (2013), and RDF2vec (2016)
Considering how much we are used to these concepts, vocabularies, datasets, and techniques, some of them being referenced by hundreds and thousands of papers in the field, we asked the prospective authors to imagine a research landscape where upcoming inventions of similar impact have become standard textbook knowledge.
The changes over a couple of decades can also be visualized by having a look at the published research papers. For the field of Semantic Web research, Figure 1 depicts word clouds compiled from the titles of accepted papers at the very first and the most recent edition of ESWC.
We can easily spot that 20 years resulted in a shift of focus of the community: some words may have changed relevance (like ontologies, which are less represented), some others may have changed naming (less semantics and more knowledge), some expressions emerged (knowledge graphs is clearly a new popular buzzword), some others have disappeared (like "web", maybe because of its ubiquity).
Figure 1: Word clouds compiled from accepted papers of the first edition of ESWC (left) and ESWC 2023 (right)
In this paper, we aim at summarising the results of this sort of "social experiment" that we conducted with the Semantic Web community, in an attempt to better understand the present and to draw some lines of the future to come. The remainder of the paper is structured as follows: in Section 2 we summarise the papers received and accepted to the track; in Section 3 we provide some insights on how those papers were created, based on the interviews we conducted with the respective authors, while some analysis of the papers' content is offered in Section 4; the reaction and feedback of the ESWC 2023 audience to the papers presentation is illustrated in Section 5, also with the results of the survey that we conducted; the challenges we encountered to publish this track's proceedings are explained in Section 6; finally, we present our final considerations in Section 7.
## 2 The "Future" Papers
The call for papers of this very special track3 was published in the second half of February 2023, with the submission deadline extended until early April (less than 2 months overall). We also directly contacted some senior researchers of the community to solicit their visionary/futuristic submissions. In the end, we received 10 paper submissions and the two of us, as track chairs, peer-reviewed them. After our discussion, we decided to accept 8 papers and to reject the other 2 papers, which were not in line with the call topic and spirit; among the accepted submissions, we distinguished 5 papers with a broader contribution for full presentation and 3 papers with a more limited scope for short presentation. All authors of accepted papers had thus the chance to present in a plenary session at the conference (cf. also Section 5).
Footnote 3: Cf. [https://2023.eswc-conferences.org/call-for-papers-the-next-20-years/](https://2023.eswc-conferences.org/call-for-papers-the-next-20-years/).
The accepted papers were quite varied in terms of proposition: there were both papers with a single author and with multiple authors, from a single institution (group-level contribution) or different organizations (cooperative contribution), from junior, senior or mixed groups of contributors, as summarised in Table 1. We therefore think that the accepted papers are also representative of different possible research efforts, as in other more traditional conference tracks.
The papers were also quite varied in terms of contents, even if all of them had some connection with relevant topics for the Semantic Web community: the fictitious journey towards a wisdom web with a universal language to ensure interoperability also at human level [19], peer-to-peer knowledge sharing and its coherent evolution management in open data spaces opposed to large industry-closed efforts [6], agreement and matching between hyper-graph in a dystopian government-controlled ontology-mediated world [8], the evaluation of the contribution of ontologies to large language models that become "standard" [20], the emergence of a new common sense and social norms in robot communities [15], different semantics emerging from brain waves measured through brain implants [14], a very futuristic blending of the physical and the digital world with self-configuring materials and surfaces [10], and the successful application of knowledge graphs for user profiling in the education domain [13].
While offering different views and inventing different possible worlds, those papers indeed address several broad topics, like the evolution of language (both for humans and machines), the dichotomy between centralized and decentralized data management, including the role of monopolist or quasi-monopolist organizations, the importance but also the vulnerability of personal data and knowledge, the opportunities of the convergence between symbolic and non-symbolic AI approaches, the relationship between humans and the rest of the world, with special emphasis on machines or AI-equipped objects. It is also worth noting that some papers clearly and explicitly provided specific reflections to their potential audience, either in the form of "questions that people should have asked 20 years ago" [19] or "check-list with the predictions" to be evaluated in 2043 [13].
Finally, the papers are also quite diverse in terms of the degree of "fictitiousness": some authors really stretched their storytelling to include fantasy and unreal details, in order to push the boundaries of their design fiction exercise or
\begin{table}
\begin{tabular}{l|c|c|c|c} & Authors & Seniority & Institution & Scope \\ \hline Anjomshoaa et al. [19] & group & mixed & single & broad \\ Corcho et al. [6] & group & mixed & single & broad \\ d’Aquin [8] & individual & senior & single & broad \\ Wang [20] & individual & junior & single & broad \\ Motta et al. [15] & group & mixed & cooperative & broad \\ van Erp [10] & individual & senior & single & specific \\ Martorana et al. [14] & group & junior & single & specific \\ Ilkou et al. [13] & group & junior & cooperative & specific \\ \end{tabular}
\end{table}
Table 1: Summary of the eight accepted papers in terms of the involved authors, their seniority, their institutions and the paper scope.
simply to have fun, while others preferred to moderate their invented research to focus on real/realistic problems. It is worth noting that most papers included utopian, positive or neutral "predictions" about future events and scientific progress, in contrast to dystopian, negative or apocalyptic forecasts, which are also quite common in science fiction (see also Section 5).
## 3 The Future Created by the Authors
In order to better understand how those papers were created, we conducted a series of interviews with the authors to collect some more information on the process they followed, the challenges they encountered and their overall experience. In this section, we provide an overview of the main qualitative aspects that emerged in response to our questions.
### Motivation to Participate
We asked the authors why they decided to participate in this track. Most of them really liked the fun aspect of the call and are also science fiction lovers, which made a good match. Many authors also underlined that they found the task challenging and were attracted by the novelty of the track: in this sense, they took the occasion to discuss at group level, brainstorming about current trends and challenges, collaboratively drawing interesting topics for their potential future agenda, to stretch the limits of usual discussions, to challenge themselves in a different intellectual exercise, to imagine their ideal future, to ask themselves serious questions about the future of the community, or simply just for fun.
### Process to Write the Paper
We asked the authors what process they followed to write the paper and what similarities and differences they found with respect to their usual research methodology. It is worth noting that they provided different hints and reflections even when reporting similar practices, which is a sign that most of them did not consciously reflect on the method/process, which would therefore be interesting to further investigate and to compare both to traditional scientific paper writing and to science fiction production.
All authors reported having followed a two-phase process, beginning with a brainstorming to find the focus/angle for their submission, followed by the actual paper writing. Those who organized a group brainstorming were also interested in collecting potential ideas from a large number of potential contributors and in identifying the most willing and suitable co-authors. Those who wrote single-author papers motivated the choice mainly by time constraints, but in some cases also by the wish to be the unique creators of new worlds. In one single case, the authors, who are all PhD students, decided to "go on their own" without their respective supervisors, also to challenge themselves to write a full paper without guidance.
The main difference that most authors reported with respect to their usual practices is that the brainstorming somehow continued also during the paper writing, in that some ideas were generated only during the manuscript creation, both because the process was unusual and because they found the need to fill some gaps in their storytelling only during composition. The result was that the final papers were quite different from the initial ideas (as opposed to what happens when writing scientific papers, where radical changes rarely emerge during the writing phase). One author commented that they started "really punky", by inventing very futuristic and unrealistic scenarios, but closer to the submission deadline they decided to converge towards something more realistic. Some authors also used generative AI to support an iterative writing and refinement of the paper, either to entirely create the scenarios, contents and topics or to support the addition of specific parts or elements (see below).
The challenge of coming up with entirely new or invented ideas was approached in different ways: some found it difficult, especially because of the potential emergence of a large number of ideas that could have been difficult to reconcile; some others liked it exactly for the freedom and opportunity to go beyond the usual research conversations; some commented that the discussion within the group was more relaxed than conventional interactions between researchers. One author commented that inventing a future scenario is not that different from the usual creativity required in research to imagine the future, and that the discussion he had with his co-authors on this special occasion was very rewarding.
### Invented Details
We also asked the authors to explain how they came up with a number of different details in their submission and if they employed generative AI.
All authors added to the papers their actual and current _affiliation_; only a few of them additionally inserted some invented ones (e.g. a potential future NGO) and/or added AI co-authors and/or created an hypothetical co-author with a hypothetical future affiliation. Most of them said that they simply didn't think about changing their affiliation or that they would have found it weird to indicate a different institution, as the papers were going to be really published, even if they commented that it is not necessarily credible that they will still be working for the same institution 20 years from now. One author commented that he thought about changing his affiliation, but decided to leave his current contact information, to allow readers to contact him in case his submission would have generated further discussion or curiosity. Another author (a first year PhD student at a prestigious university) commented that leaving his current affiliation could have been also a wish for his own future career.
In some cases, the authors inserted _invented words or names_ (see also section 4): most authors simply created those by themselves, in some cases combining existing words in new expressions (e.g., _wisdom web_, _sensolens_, _pervasive self-organization_, _standard language models_); a few authors used ChatGPT to support the paper writing, mostly for editing, but in a few cases, this also supported the "continuous brainstorming", thus helping in coming up with new ideas, in creating neologisms (e.g. _cognigram_) or in proposing the paper title. One author chose to make a strong use of Generative AI for the paper pictures (and actually she started her process from the images), also because she wanted to experiment with it, and only after that she started writing the paper, highlighting again the creativity-support role that this kind of technology can have on research paper production [9, 16]. One paper also inserted a novelty in the paper title (as well in the titles of some of the citations, see below) by adding emoticons: the interviewed author motivated this as a reflection on how we will communicate research in the future. For another paper, the authors used ChatGPT to generate descriptions for scenarios they had sketched during brainstorming, with the idea to create a pipeline to generate/write multiple papers for different future scenarios; however, in the end they proceeded with a more traditional authoring process, because ChatGPT results were not truly original or coherent.
The papers that included a _fictitious experimental evaluation_ also provided figures and graphs to support their storytelling. The authors reported different perspectives on the task of inventing evaluation results: some found it easy ("we thought about what results we wanted to get, and then built the experiments and figures to support that"), some found it difficult ("at the beginning I thought it was an impossible task, but then one of my co-authors did it and came up with very effective ideas, demonstrating that it was not that difficult after all"); some didn't like the experience ("I felt guilty generating those numbers"), while others enjoyed it and had fun ("I thought that it may be the only occasion in which I'm actually _allowed_ to invent perfect experimental results").
One broad topic was the invention of _citations of imaginary (future) related work_. Two completely different approaches were followed: manual crafting and automatic generation. The majority of authors created the citations manually, inserting self-references and invented future publications by their research groups or by other known researchers from the community (including, in one case, a paper by us, the track chairs), or creating papers referring to future events in line with their storytelling. Some had fun inventing authors (e.g. a misspelled Chomsky) or inserting out-of-context names (e.g. cartoon characters, football players of the past); in one paper, a reference was inserted to satisfy a colleague's request to have a paper "published" in a prestigious journal; in another case, the references were ordered from A to Z, with emoticons inserted in papers closer to 2043, to underline a possible change in scientific communication style. Some authors used generative AI to create fictional references, but reported that they then had to fix them, because, for example, "they were illogical with wrong years, so I had to change them manually to make the timeline consistent". Straightening out a plausible timeline of developments to put together a meaningful story of the future was a challenge reported by several authors.
Finally, we also observed different approaches to _acknowledgements_: some authors mentioned current and existing institutions and funding, also because of their actual support of the paper writing; other authors preferred not to insert any real acknowledgment, to avoid potentially disappointing supporters unhappy to be associated with a fake research work; in one case, the authors invented a future European-funded project on the topics covered by their paper; in another case, the author invented a fictional sponsor that supposedly provided the devices and supported the research, with a disclaimer saying that "by contractual obligation, the author can only make positive statements", with the intention of soliciting reflection on sponsored research, a topic highly discussed in domains like health but mostly disregarded in computer science.
### Plausibility of the Paper Content
Finally, we asked the authors to estimate what part of their papers could be considered plausible in terms of content and future predictions, or what part they would keep if they were asked to write a "serious" vision paper on the future of the Semantic Web. We also asked about the kind of reflections they aimed to solicit in the audience. In this respect, the answers were really diverse, reflecting the different characteristics of the papers, as summarised in Section 2.
Some authors were mostly interested in discussing real topics and issues, affirming that more than 50% of their papers could be considered realistic; for example, they focused on the shift from data spaces to knowledge spaces, on the evolution of education, or on the need for new query languages, and they claimed that those research trends are going to happen and that it will be important to focus on them (even if the actual realization may differ from their storytelling). Those authors believe that at least some of the vague ideas that they put in their papers are bound to happen in the future, and indeed they discovered that a few of the ideas came closer to reality after the paper submission, like communication between LLMs4.
Footnote 4: Cf. [https://github.com/chatarena/chatarena](https://github.com/chatarena/chatarena)
Some other authors declared that they decided to give up on the feasibility/plausibility side at the beginning of their discussion, in order to feel free to focus on the fun side. In one case, the authors discovered after their submission that an idea they had included in the paper, thinking of it as crazy or unrealistic because it involved the interpretation of brain waves, was actually not that far from reality, as they found similar concepts in papers [7] and news5. In another case, the author affirmed: "I think that my paper is 10% plausible, but I cannot tell which part".
Footnote 5: Cf. [https://reut.rs/3s/09VH](https://reut.rs/3s/09VH).
Most authors declared their paper mostly or entirely non-plausible, but when asked what they would "save" of their predictions, all of them were able to identify an aspect, a detail or a topic that they found more realistic. Examples include the risk of centralized knowledge controlled by companies, the evolution/standardization of language models, the stronger relationship between the physical and the digital worlds, knowledge representation at the individual level and the rise of data-intensive personal devices, the "unintended consequences" of evolutionary approaches, and the issues of heterogeneity and the risk of under-representation of niche perspectives.
In most cases, as already mentioned in Section 2, the papers touch upon clear and well-known topics of the Semantic Web community, because the authors asked themselves what our research field will actually become and what is needed to keep it relevant in 20 years. One author reported that he started by reflecting on what is actually going to change (like the scalability of technology, data and devices) and then built the paper, inventing a lot of fictional details around this idea. Another author said that they tried to exaggerate some realistic trends, also to counterbalance the current sensationalist debate on super-robots and AI becoming an existential threat to humanity. Other authors explained that their intention was also to solicit a reflection on community building, stimulating a discussion on what the Semantic Web wants to become, especially to prevent the risk of being "just a small AI community, which follows hypes and repeats old mistakes, instead of recognising and focusing on the central role of data and knowledge".
One author reported that he had his paper reviewed by a friend who initially didn't understand the fictional aspect: the friend was excited by the ideas, tried to look up some of the mentioned research, was disappointed at failing to find it and complained about it; the author commented that this proved he had achieved his goal of soliciting the reaction "I would like this research to actually happen".
## 4 The Future Emerging from the Papers
Aside from asking the authors, we also conducted a more systematic analysis of the papers. While a total of eight papers does not allow for statistically significant conclusions, we were still able to identify a few interesting findings.
Figure 2 shows a word cloud created from the papers' abstracts. We can see that _semantics_ and _language_ are commonly occurring words, the latter most often due to its use in _language models_.6
Footnote 6: The strong appearance of the word “snow” is an artifact coming from only one paper mentioning this word rather often.
To create a map of the envisioned research landscape, we scanned the papers for abbreviations and constructed names7. Those were double-checked against existing terms in order to identify the ones describing future inventions. Table 2 shows the glossary of future terms extracted using that method.
Footnote 7: The papers were processed with regular expressions looking for words which contain at least two capitals (like _SLM_ or _WisdomWeb_), or a capital not in the first position (like _oMatch_).
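For illustration, the extraction step described in the footnote can be reproduced with a short script; the following sketch uses our own regular expression and example terms, which may differ from the exact pattern used for the analysis.

```python
import re

# Candidate "constructed names": words with at least two capital letters
# (e.g. SLM, WisdomWeb) or with a capital letter not in the first position
# (e.g. oMatch).
CONSTRUCTED_NAME = re.compile(r"\b(?:\w*[A-Z]\w*[A-Z]\w*|[a-z0-9]+[A-Z]\w*)\b")

def extract_candidate_terms(text):
    """Return the set of candidate abbreviations / constructed names in text."""
    return set(CONSTRUCTED_NAME.findall(text))

if __name__ == "__main__":
    sample = "The WisdomWeb protocol relies on SLMs and on the oMatch service."
    print(sorted(extract_candidate_terms(sample)))  # ['SLMs', 'WisdomWeb', 'oMatch']
```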
Common themes that can be observed from that glossary are: standards and platforms for knowledge exchange (beyond RDF, also including integration across modalities such as knowledge graphs and LLMs), larger (and even ultra-large) language models and models incorporating structured knowledge, quantum computers, and the extension of the graph model used in today's standards like RDF to hypergraphs.
In order to analyze more deeply how the authors perceive and envision the current and upcoming trends in the research field, we also analyzed the references of the papers. In total, the eight papers contain 165 references, 28% of which are from years up to 2023, the remaining 72% from years 2024 onwards.
We extracted the key phrases and concepts from the references which (1) appear at least 5 times and (2) appear in the references of at least two submissions. Figure 3 shows an analysis of these key phrases over time. They show a mix of concepts which are already there and will remain (ontologies, Semantic Web) and concepts which will gain importance (knowledge graphs, robots, large language models, artificial intelligence). Interestingly, none of the authors expected Linked (Open) Data to play a crucial role in the next 20 years.
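The filtering criterion can be expressed in a few lines of code; the sketch below assumes the key phrases have already been extracted per submission and is our own simplification of the analysis.

```python
from collections import Counter

def recurring_keyphrases(phrases_per_submission, min_count=5, min_submissions=2):
    """Keep phrases occurring at least `min_count` times overall and appearing
    in the references of at least `min_submissions` distinct submissions.

    phrases_per_submission: one list of extracted phrases per paper.
    """
    total = Counter(p for phrases in phrases_per_submission for p in phrases)
    spread = Counter(p for phrases in phrases_per_submission for p in set(phrases))
    return {p for p, c in total.items()
            if c >= min_count and spread[p] >= min_submissions}
```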
## 5 The Reception from the Conference Audience
On May 31st 2023, in a morning plenary session of ESWC 2023, the eight accepted papers were briefly presented by their respective authors. All presenters made an effort to make their storytelling entertaining, while making sure to convey the main ideas included in their paper. One of the authors also explained how he wrote the paper, through an iterative process that involved ChatGPT to "fill some empty spaces" or for editing.
After the presentations, we had all speakers lined up in a panel setting to take questions from the audience. We did not have a precise plan for the Q&A session and did not give specific instructions on how and what to ask the authors. To our (partial) surprise, almost all questions were asked in the same spirit as the call, as if the audience, too, were in 2043, asking for clarifications or proposing alternative ideas. Some questions were asked just for fun, but others - even if posed in an ironic way - aimed to tackle specific relevant issues. Only one member of the community (Paul Groth) decided to ask a more "serious" question, inquiring about the motivations and goals the authors had in participating in the session, whether they aimed to raise open issues and research directions or just wanted to have fun; this received quite different answers, as already explored in Section 3.
At the end of the session, we asked the audience to complete a short survey to give us feedback about the track, the papers, the discussion and their expectations for the future. In total, 63 distinct users started the survey and 51 of them completed it (a completion rate of over 80%).
Figure 3: Key phrases from the references over time
Figure 2: Word cloud from the abstracts
We first asked some questions about the track and the call for papers, and whether respondents had considered submitting: the results are summarized in Figure 4. Almost 55% of respondents had already heard about the track before the conference, but 30% discovered it only in the program (top-left); almost all of those who spotted the CfP thought it was an interesting or great idea (93%, top-right); a large majority considered submitting or actually submitted (74%, bottom-left), while those who did not submit were mainly discouraged by the lack of time (67%, bottom-right).
Then, we investigated the audience's opinion about the papers and their contents, as displayed in Figure 5. An 88% majority was satisfied by how the papers matched the visionary spirit of the track, with 27% thinking that the presented work was beyond their expectations (top-left); the relevance of the paper topics to the community was more contested, but still 80% of respondents gave a medium-high or high score (top-right); the perceived credibility of the topics in relation to the next 20 years was much lower, with 47% giving a medium score (bottom-left), but a large share of respondents was stimulated to reflect on the future of research (64% with a medium-high or high score, bottom-right). We also asked if there was any research topic that was partially or completely missing in the presented work, and almost 50% gave some suggestion; the mentioned topics include: graph federation, data capture, ontologies vs. embeddings, reasoning and stream reasoning, dynamic knowledge construction, query processing, constraint validation, access control, future formats, metaverse and virtual worlds, computation at the hardware level, misinformation, material science, climate change, health assistants, and decision-support systems.
We also asked the audience for feedback on the session overall. Regarding the presentations, 50% liked them a lot and 44% thought they were great, confirming the general perception that the session was appreciated. When asked to rank different aspects of the presentations, the audience clearly appreciated the fun and entertainment angle most, followed by reflections on the future (cf. left side of Figure 6); the answers were quite varied in terms of the stimuli received for their future research (cf. right side of Figure 6).
Towards the end of the questionnaire, we asked participants a free-text question to write three words that spontaneously came to mind to characterize the session. The results are depicted in Figure 7. A lot of the answers emphasized the fun aspect of the session, while some research topics were also named (AI, language, robots, ...), which shows that the audience perceived the session from both a research and an entertainment angle.
Figure 4: Survey to attendees: questions about the track and its CfP.
Finally, we checked the interest of the audience with respect to new editions of a similar track. Figure 8 shows on the left that 96% of respondents would welcome this idea, while on the right we see that a large majority suggests repeating the track every 5 years.
During the rest of the conference, we also collected other comments and considerations from the conference participants. In general, our impression was that attendees either loved the fun and entertaining experience or were partially disappointed that the balance of the session was tilted too much towards fiction and too little towards research: a few people commented that they would have preferred a "serious" reflection on future Semantic Web visions after the presentations, even if everybody enjoyed the relaxed and creative presentations. One attendee also commented that, compared to a Black Mirror-inspired scientific event he had attended in the past, the presentations and discussions in the session were oriented towards more positive future visions and constructive forecasts, rather than towards dystopian or gloomy predictions of risks and problems.
Figure 5: Survey to attendees: questions about the accepted papers and their content.
Figure 6: Survey to attendees: questions about the overall session.
In this respect, one of the authors commented that he would not have liked a "serious" discussion following the presentations, but he envisioned the possibility of organising follow-up workshops to take those fictional visions and discuss the concrete future of Semantic Web research. Indeed, our intention as track chairs was to provide a fun and interesting space for discussions about (potentially crazy) ideas for the future, in order to stimulate reflection and to solicit emerging concepts and breakthroughs after the session (rather than during it).
Regarding the potential repetition of this kind of track, we would also like to report the feedback received from the PhD student authors of one of the papers: they underlined the importance of this "exercise" for stimulating creativity and developing important research skills, which is especially useful for students. As a matter of fact, this has already been considered as part of computer science and AI curricula [11, 12], exactly with the goal of exploiting fiction as an introduction to research. This was also mentioned by another author, who cited a popular book published in 1997 [18] that explored how 2001: A Space Odyssey had influenced the research and design of intelligent machines.
## 6 Publication of the Proceedings
Our initial plan was to publish the proceedings of the track with CEUR-WS8. This, however, led to a few conflicts with the organization's guidelines for publication. First, CEUR-WS asked for AI co-authors to be removed, as they are currently not allowed for papers published on CEUR-WS.9
Footnote 8: [https://ceur-ws.org/](https://ceur-ws.org/)
Footnote 9: [https://ceur-ws.org/ACADEMIC-ETHICS.html](https://ceur-ws.org/ACADEMIC-ETHICS.html)
At a later stage, CEUR-WS pointed to further issues. Essentially, they did not want to publish papers with "false" or "invented" contents (where the latter, by design, was the spirit of the track). This held both for the actual papers' contents and for the reference sections. Another concern was the potential indexing of the future papers by engines such as Google Scholar, in which case CEUR-WS would have been involved in injecting false information into those engines.
Figure 7: Answers from the free text question
Figure 8: Survey to attendees: questions about potential repetitions of the track.
Ultimately, it was mutually agreed that we would withdraw the volume from CEUR-WS. The proceedings will now be published on Zenodo [5]. Time will tell whether the future references will be picked up by Google Scholar or not.
## 7 In Lieu of Conclusions
In this paper, we presented an overview and analysis of the outcomes of the "ESWC 2043 - Next 20 Years" track of the Extended Semantic Web Conference 2023. As track chairs, we had the opportunity to run this sort of social experiment involving the Semantic Web community. Our goals were to solicit an innovative form of reflection on the future of research in this area, to collect visions and "bets" from the community, and of course also to have fun. The results went beyond our expectations, in terms of participation of researchers, content of papers and session reception by the audience. We believe that this is a clear sign of the vitality of the Semantic Web community.
Our analysis revealed not only some potential "sneak peeks" into the future, but also confirmed that the research process itself may change over time, as the authors (whether or not they consciously reflected on it) applied a mix of traditional methods and innovative practices. The design fiction character of the call forced them to focus on the (future) storytelling; they employed technologies in various ways to assist their research process, experimenting with recent advancements in generative AI tools as part of the paper writing; and they were able to test the possibilities and limitations of such technologies in support of brainstorming, exploration and composition.
Topic-wise, the accepted papers confirmed the main themes and trends of the Semantic Web community, adding some additional flavours: decentralization and data spaces, personal data management and privacy/control, interplay with other Artificial Intelligence sub-fields, common sense and interaction with natural and synthetic entities, new devices and applications, and technology impact on various domains. Most authors indeed made an effort to reflect upon what would make the Semantic Web still relevant in 20 years, whether or not we will still use that name (as opposed, for example, to the "Wisdom Web" name proposed in a couple of papers).
The response of the conference audience was also very interesting, in that attendees actively accepted the challenge of the fictional discussion, still conveying relevant content and actual challenges in their questions to the authors' panel. An open challenge, in case of future editions of this kind of track, would be to find a proper balance between the fun/imaginary content and the meaningful debate, to make room both for the design fiction aspect and for a serious conversation on the open issues and future visions.
We still believe that this very special track was totally worth the effort, not only because it led to very interesting results, but also because the design fiction approach proved effective in encouraging researchers to "think outside the box". We agree with those authors who stated that this method could be very valuable for PhD students to develop their research skills. We also think that this approach can be beneficial for a scientific community at large, to kick-start a broader discussion on the future trends and challenges of a discipline. In (lieu of) conclusion, we offer this time traveler's guide to Semantic Web research as an initial seed to promote the growth of a fun yet relevant dialogue on future research in our community.
## Acknowledgements
First of all, we want to thank Catia Pesquita - General Chair of ESWC 2023 - for assigning us the role of chairs of the "Next 20 years" special track and for allowing us to "stretch" the scope of the track and experiment with the design fiction approach. We also want to thank all the authors who submitted their papers to this special track and contributed with their presentations at the conference, as well as all ESWC 2023 attendees who participated in this special session and responded to our survey.
|
2309.05183 | Data Summarization beyond Monotonicity: Non-monotone Two-Stage
Submodular Maximization | The objective of a two-stage submodular maximization problem is to reduce the
ground set using provided training functions that are submodular, with the aim
of ensuring that optimizing new objective functions over the reduced ground set
yields results comparable to those obtained over the original ground set. This
problem has applications in various domains including data summarization.
Existing studies often assume the monotonicity of the objective function,
whereas our work pioneers the extension of this research to accommodate
non-monotone submodular functions. We have introduced the first constant-factor
approximation algorithms for this more general case. | Shaojie Tang | 2023-09-11T01:00:10Z | http://arxiv.org/abs/2309.05183v2 | # Data Summarization beyond Monotonicity: Non-monotone Two-Stage Submodular Maximization
###### Abstract
The objective of a two-stage submodular maximization problem is to reduce the ground set using provided training functions that are submodular, with the aim of ensuring that optimizing new objective functions over the reduced ground set yields results comparable to those obtained over the original ground set. This problem has applications in various domains including data summarization. Existing studies often assume the monotonicity of the objective function, whereas our work pioneers the extension of this research to accommodate non-monotone submodular functions. We have introduced the first constant-factor approximation algorithms for this more general case.
## 1 Introduction
In this paper, we are motivated by the application of data summarization (Wei et al., 2013; Mirzasoleiman et al., 2016; Wei et al., 2015; Lin and Bilmes, 2011) and tackle the two-stage submodular optimization problem. In these applications, we are often faced with multiple user-specific submodular functions, which are used to evaluate the value of a set of items. A typical objective is to select a set of \(k\) items to maximize each submodular function (Krause and Golovin, 2014). While maximizing a single submodular function has been widely explored in the literature, the feasibility of existing solutions diminishes when confronted with a substantial number of submodular functions and items. Consequently, our objective is to reduce the size of the ground set in a manner that minimizes the loss when optimizing a new submodular function over the reduced ground set, as compared to the original ground set.
The problem at hand can be framed as a two-stage submodular maximization problem, as initially introduced in (Balkanski et al., 2016). While the majority of prior studies in this domain presume that each submodular function exhibits monotone non-decreasing behavior, real-world scenarios often involve objective functions that are non-monotone. These instances include feature selection (Das and Kempe, 2008), profit maximization
(Tang and Yuan, 2021), maximum cut (Gotovos et al., 2015), and data summarization (Mirzasoleiman et al., 2016). A significant contribution presented in our work is the development of the first constant-factor approximation algorithm for the non-monotone two-stage submodular maximization problem, with an approximation ratio of \(1/2e\). Remarkably, when the objective function is monotone, our algorithm achieves an improved approximation ratio of \((1-1/e^{2})/2\), thereby recovering the result presented in (Stan et al., 2017).
### Related Work
The problem of non-monotone submodular maximization has garnered substantial attention in the literature (Gharan and Vondrak, 2011; Buchbinder et al., 2014; Tang, 2021; Tang and Yuan, 2022; Tang and Yuan, 2022). The current state-of-the-art solution for this problem, especially when accounting for a cardinality constraint, is a 0.385-approximation algorithm (Buchbinder and Feldman, 2019). However, it is noteworthy that even though each individual objective function considered in our problem exhibits submodularity, the overall objective function is not submodular in general. As a result, the existing findings on non-monotone submodular maximization do not directly apply to our specific setting.
The most closely related work to our research is the study by (Balkanski et al., 2016; Mitrovic et al., 2018) and (Stan et al., 2017). They have developed constant-factor approximation algorithms, primarily tailored for the monotone case. Our work builds upon and extends their results to address the more general and challenging non-monotone scenario. To achieve this goal, we have integrated the "local-search" approach (Stan et al., 2017) with "sampling" technique (Tang, 2021) in a non-trivial way, resulting in the creation of a novel sampling-based algorithm. Furthermore, we have incorporated a trimming phase into our algorithm, enabling us to attain the first constant-factor approximation ratio for the non-monotone case.
## 2 Problem Formulation
The input of our problem is a set of \(n\) items \(\Omega\). There is a group of \(m\) non-monotone submodular functions \(f_{1},\cdots,f_{m}:2^{\Omega}\rightarrow\mathbb{R}_{\geq 0}\). Let \(\Delta_{i}(x,A)=f_{i}(\{x\}\cup A)-f_{i}(A)\) denote the marginal gain of adding \(x\) to the set \(A\) when considering the function \(f_{i}\). Here we say \(f_{i}\) is submodular if and only if \(\Delta_{i}(x,A)\geq\Delta_{i}(x,A^{\prime})\) for any two sets \(A\) and \(A^{\prime}\) such that \(A\subseteq A^{\prime}\subseteq\Omega\), and any item \(x\in\Omega\) such that \(x\notin A^{\prime}\).
Our objective is to compute a reduced ground set \(S\) of size \(l\), where \(l\!\ll\!n\), such that it yields good performance across all \(m\) functions when the choice is limited to items in \(S\). Formally, let
\[F(S)\!=\!\sum_{i\in[m]}\max_{A\subseteq S:|A|\leq k}f_{i}(A) \tag{1}\]
where \(k\) is the size constraint of a feasible solution. Our goal is to find an optimal solution \(O\!\subseteq\!\Omega\) that maximizes \(F\), i.e.,
\[O\!\in\!\operatorname*{arg\,max}_{S\subseteq\Omega:|S|\leq l}F(S). \tag{2}\]
It is worth mentioning that the objective function \(F(\cdot)\) is typically non-submodular, as observed in (Balkanski et al., 2016). Consequently, classical algorithms designed for submodular optimization may not provide any approximation guarantees.
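To make the objective concrete, the following sketch evaluates \(F(S)\) by brute force, enumerating every subset of \(S\) of size at most \(k\) for each \(f_{i}\). It is only practical for tiny instances and is our own illustration of the formulation, not part of the algorithm proposed below.

```python
from itertools import combinations

def evaluate_F(S, functions, k):
    """F(S) = sum_i max_{A subseteq S, |A| <= k} f_i(A), by exhaustive search.

    S         : the reduced ground set (any iterable of items)
    functions : list of set functions f_i, each mapping a Python set to a value
    k         : cardinality constraint of the inner maximization
    """
    S = list(S)
    total = 0.0
    for f in functions:
        best = f(set())  # the empty set is always feasible
        for r in range(1, min(k, len(S)) + 1):
            for A in combinations(S, r):
                best = max(best, f(set(A)))
        total += best
    return total

# Toy example: a coverage function and a simple non-monotone function.
if __name__ == "__main__":
    cover = {1: {"a", "b"}, 2: {"b", "c"}}
    f1 = lambda A: len(set().union(*(cover[x] for x in A))) if A else 0
    f2 = lambda A: len(A) - 0.3 * len(A) ** 2
    print(evaluate_F({1, 2}, [f1, f2], k=1))  # 2 + 0.7 = 2.7
```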
## 3 Algorithm Design and Analysis
Before presenting our algorithm, we need some additional notation. For each \(i\!\in\![m]\), we define the gain associated with removing an item \(y\) and replacing it with \(x\) as \(\nabla_{i}(x,y,A)\!=\!f_{i}(\{x\}\cup A\setminus\{y\})-f_{i}(A)\). Then, for each \(i\!\in\![m]\), we define the largest possible gain brought by \(x\) through local search with respect to an existing set \(A\) as \(\nabla_{i}(x,A)\). Here the local search can be realized either by directly adding \(x\) to \(A\) (while maintaining the cardinality constraint) or by substituting it for an item from \(A\). Formally,
\[\nabla_{i}(x,A)\!=\!\begin{cases}0&\text{if }x\!\in\!A\\ \max\{0,\max_{y\in A}\nabla_{i}(x,y,A),\Delta_{i}(x,A)\}&\text{if }x\!\notin\!A \text{ and }|A|\!<\!k\\ \max\{0,\max_{y\in A}\nabla_{i}(x,y,A)\}&\text{if }x\!\notin\!A\text{ and }|A|\!=\!k\end{cases} \tag{3}\]
Let \(\mathsf{Rep}_{i}(x,A)\) represent the item in \(A\) that, when replaced by \(x\), maximizes the incremental gain while maintaining feasibility. Formally,
\[\mathsf{Rep}_{i}(x,A)\!=\!\begin{cases}\emptyset&\text{if }\nabla_{i}(x,A)\!=\!0\\ \emptyset&\text{if }\nabla_{i}(x,A)\!>\!0\text{ and }|A|\!<\!k\\ &\text{and }\max_{y\in A}\nabla_{i}(x,y,A)\!<\!\Delta_{i}(x,A)\\ \arg\max_{y\in A}\nabla_{i}(x,y,A)&\text{if }\nabla_{i}(x,A)\!>\!0\text{ and }|A|\!<\!k\\ &\text{and }\max_{y\in A}\nabla_{i}(x,y,A)\!\geq\!\Delta_{i}(x,A)\\ \arg\max_{y\in A}\nabla_{i}(x,y,A)&\text{if }\nabla_{i}(x,A)\!>\!0\text{ and }|A|\!=\!k\end{cases} \tag{4}\]
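The case analysis in Equations (3) and (4) may be easier to follow in code. The sketch below is our own illustrative translation; it assumes each \(f_{i}\) is available as a Python set function and returns both the local-search gain and the item to be replaced (None when \(x\) is simply inserted or when no improving move exists).

```python
def local_search_gain(f, x, A, k):
    """Return (gain, rep) mirroring Eqs. (3) and (4) for a set function f.

    gain : the value nabla(x, A), i.e. the best local-search improvement
    rep  : the item of A to be replaced by x, or None if x is inserted
           directly or no improving move exists
    """
    A = set(A)
    if x in A:
        return 0.0, None
    fA = f(A)
    # Best swap: remove some y from A and insert x instead.
    swap_gain, swap_item = float("-inf"), None
    for y in A:
        g = f((A - {y}) | {x}) - fA
        if g > swap_gain:
            swap_gain, swap_item = g, y
    # Direct insertion is only allowed while |A| < k.
    add_gain = f(A | {x}) - fA if len(A) < k else float("-inf")
    best = max(0.0, swap_gain, add_gain)
    if best <= 0.0:
        return 0.0, None
    if len(A) < k and add_gain > swap_gain:
        return add_gain, None       # insert x directly
    return swap_gain, swap_item     # replace swap_item with x
```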
Now we are ready to present the design of our algorithm Sampling-Greedy (Algorithm 1). Throughout the process, Sampling-Greedy maintains a solution set denoted as \(S\), along with a feasible solution \(T_{i}\subseteq S\) for each function \(f_{i}\) (all of which are initially empty). In each iteration, it first computes the set \(M\) of top \(l\) items from the ground set \(\Omega\) based on their combined contributions to the functions \(f_{i}\), as measured by \(\sum_{i=1}^{m}\nabla_{i}(x,T_{i})\). That is,
\[M=\underset{A\subseteq\Omega:|A|=l}{\arg\max}\sum_{x\in A}\sum_{i=1}^{m} \nabla_{i}(x,T_{i}). \tag{5}\]
Then it randomly selects one item, say \(x^{*}\), from \(M\) and adds \(x^{*}\) to \(S\). Sampling-Greedy then verifies if any of the sets \(T_{i}\) can be improved. This can be achieved by either directly adding \(x^{*}\) (while adhering to the cardinality constraint) or substituting it with an item from \(T_{i}\). For each \(i\in[m]\), we update \(T_{i}\) if and only if \(\nabla_{i}(x^{*},T_{i})>0\).
Note that there might exist some \(i\in[m]\) and \(x\in T_{i}\) such that \(f_{i}(T_{i})-f_{i}(T_{i}\setminus\{x\})<0\). In other words, certain subsets \(T_{i}\) could contain items that provide negative marginal utility to the set \(T_{i}\). Consequently, we introduce a "trimming" phase (Algorithm 2) to refine each \(T_{i}\) and ensure that no item contributes negative utility to it. This is achieved through an iterative process of evaluating the marginal utility of each item within \(T_{i}\) and removing any items with negative marginal utility. By the submodularity of \(f_{i}\), we can show that after this trimming phase, \(T_{i}\) does not contain any item whose marginal utility is negative. It is also easy to verify that the trimming phase does not decrease the utility of our solution. A formal description of these properties is presented in the following lemma.
```
1: \(S\leftarrow\emptyset\), \(T_{i}\leftarrow\emptyset\) \((\forall i\in[m])\)
2: for \(j\in[l]\) do
3:  \(M=\arg\max_{A\subseteq\Omega:|A|=l}\sum_{x\in A}\sum_{i=1}^{m}\nabla_{i}(x,T_{i})\)
4:  randomly pick one item \(x^{*}\) from \(M\), \(S\leftarrow S\cup\{x^{*}\}\)
5:  for \(i\in[m]\) do
6:   if \(\nabla_{i}(x^{*},T_{i})>0\) then
7:    \(T_{i}\leftarrow T_{i}\setminus\textsf{Rep}_{i}(x^{*},T_{i})\cup\{x^{*}\}\)
8:    \(T_{i}\leftarrow\textsf{Trim}(T_{i},f_{i})\)
9: return \(S,T_{1},T_{2},\cdots,T_{m}\)
```
**Algorithm 1** Sampling-Greedy
```
1: \(A\leftarrow B\)
2: for \(x\in A\) do
3:  if \(f_{i}(A)-f_{i}(A\setminus\{x\})<0\) then
4:   \(A\leftarrow A\setminus\{x\}\)
5: return \(A\)
```
**Algorithm 2**\(\mathsf{Trim}(B,\,f_{i})\)
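For completeness, a compact Python sketch of Algorithms 1 and 2 is given below. It reuses the `local_search_gain` helper from the previous sketch, treats each \(f_{i}\) as a Python set function, and is meant as an illustration of the pseudocode rather than an optimized implementation.

```python
import random

def trim(A, f):
    """Algorithm 2: drop items whose marginal utility to the current set is negative."""
    A = set(A)
    for x in list(A):
        if f(A) - f(A - {x}) < 0:
            A.remove(x)
    return A

def sampling_greedy(ground_set, functions, l, k, seed=0):
    """Algorithm 1: return the reduced ground set S and the per-function sets T_i."""
    rng = random.Random(seed)
    S = set()
    T = [set() for _ in functions]
    for _ in range(l):
        # Rank candidates by their combined local-search gain (cf. Eq. (5)).
        gains = {x: sum(local_search_gain(f, x, Ti, k)[0]
                        for f, Ti in zip(functions, T))
                 for x in ground_set}
        M = sorted(gains, key=gains.get, reverse=True)[:l]
        x_star = rng.choice(M)                      # sampling step
        S.add(x_star)
        for i, f in enumerate(functions):
            gain, rep = local_search_gain(f, x_star, T[i], k)
            if gain > 0:
                if rep is not None:
                    T[i].discard(rep)
                T[i].add(x_star)
                T[i] = trim(T[i], f)                # trimming phase
    return S, T
```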
**Lemma 1**: _Consider any set of items \(B\subseteq\Omega\) and a function \(f_{i}\). If \(A\) is returned by \(\mathsf{Trim}(B,\,f_{i})\), then \(f_{i}(A)\geq f_{i}(B)\) and, for all \(x\in A\), \(f_{i}(A)-f_{i}(A\setminus\{x\})\geq 0\)._
The proof that \(f_{i}(A)\geq f_{i}(B)\) is straightforward, as it follows from the fact that the trimming phase only eliminates items with a negative marginal contribution. We next prove that for all \(x\in A\), we have \(f_{i}(A)-f_{i}(A\setminus\{x\})\geq 0\). We prove this by contradiction. Suppose there exists an item \(y\in A\) such that \(f_{i}(A)-f_{i}(A\setminus\{y\})<0\). Let \(A^{\prime}\) denote the solution right before \(y\) was considered by the trimming phase. In this case, it must hold that \(f_{i}(A^{\prime})-f_{i}(A^{\prime}\setminus\{y\})\geq 0\), as otherwise the trimming phase would eliminate \(y\) from the solution. Furthermore, it is straightforward to confirm that \(A\subseteq A^{\prime}\). As a consequence, based on the assumption that \(f_{i}\) is a submodular function, we have \(f_{i}(A)-f_{i}(A\setminus\{y\})\geq f_{i}(A^{\prime})-f_{i}(A^{\prime}\setminus\{y\})\). This, together with \(f_{i}(A^{\prime})-f_{i}(A^{\prime}\setminus\{y\})\geq 0\), implies that \(f_{i}(A)-f_{i}(A\setminus\{y\})\geq f_{i}(A^{\prime})-f_{i}(A^{\prime}\setminus\{y\})\geq 0\). This contradicts the assumption that \(f_{i}(A)-f_{i}(A\setminus\{y\})<0\). \(\Box\)
### Performance analysis
First, it is easy to verify that \(\mathsf{Sampling}\)-\(\mathsf{Greedy}\) requires \(O(l(mkl+mn))\) function evaluations. This is because \(\mathsf{Sampling}\)-\(\mathsf{Greedy}\) comprises \(l\) iterations, where each iteration involves \(mkl\) function evaluations in Line 3 of Algorithm 1, along with an additional \(mn\) function evaluations in Algorithm 2. In the following theorem, we show that the expected utility of our solution is at least a constant-factor approximation of the optimal solution.
**Theorem 1**: _Sampling-Greedy returns a random set \(S\) of size at most \(l\) such that_
\[\mathbb{E}_{S}[F(S)]\geq\frac{1}{2e}F(O) \tag{6}\]
_where \(O\) represents the optimal solution._
The rest of this section is devoted to proving this theorem. The basic idea behind the proof is to establish a lower bound on the expected marginal utility achieved by adding \(x^{*}\) to set \(S\) after each iteration. We demonstrate that this utility increment is substantial enough to guarantee a \(1/2e\) approximation ratio. Consider an arbitrary round \(t\in[l]\) of Sampling-Greedy, let \(S\) and \(T_{1},\cdots,T_{m}\) denote the solution obtained at the end of round \(t\). By the design of Sampling-Greedy, we randomly pick an item \(x^{*}\) from \(M\) and add it to \(S\), hence, by the definition of \(M\), the expected marginal utility of adding \(x^{*}\) to \(S\) before the "trimming phase" is
\[\mathbb{E}_{x^{*}}[\sum_{i=1}^{m}\nabla_{i}(x^{*},T_{i})]=\frac{1}{l}\max_{A \subseteq\Omega:|A|=l}\sum_{x\in A}\sum_{i=1}^{m}\nabla_{i}(x,T_{i}). \tag{7}\]
Recall that the trimming phase does not decrease utility. Therefore, the ultimate expected utility increment after each iteration is at least \(\mathbb{E}_{x^{*}}[\sum_{i=1}^{m}\nabla_{i}(x^{*},T_{i})]\). Moreover, because \(F\) is a monotone function, it is safe to assume that the size of the optimal solution is \(l\), i.e., \(|O|=l\). We next provide a lower bound on \(\mathbb{E}_{x^{*}}[\sum_{i=1}^{m}\nabla_{i}(x^{*},T_{i})]\).
Observe that
\[\mathbb{E}_{x^{*}}[\sum_{i=1}^{m}\nabla_{i}(x^{*},T_{i})]=\frac{1} {l}\max_{A\subseteq\Omega:|A|=l}\sum_{x\in A}\sum_{i=1}^{m}\nabla_{i}(x,T_{i})\] \[\geq\frac{1}{|O|}\sum_{x\in O}\sum_{i=1}^{m}\nabla_{i}(x,T_{i})= \frac{1}{l}\sum_{x\in O}\sum_{i=1}^{m}\nabla_{i}(x,T_{i}) \tag{8}\]
Let \(O_{i}\subseteq O\) represent a subset with a maximum size of \(k\) items, chosen to maximize \(f_{i}\), i.e., \(O_{i}=\arg\max_{A\subseteq O:|A|\leq k}f_{i}(A)\). Inequality (8) implies that
\[\mathbb{E}_{x^{*}}[\sum_{i=1}^{m}\nabla_{i}(x^{*},T_{i})]\geq\frac{1}{l}\sum_{ x\in O}\sum_{i=1}^{m}\nabla_{i}(x,T_{i})\geq\frac{1}{l}\sum_{i=1}^{m}\sum_{x \in O_{i}}\nabla_{i}(x,T_{i}). \tag{9}\]
It is easy to verify that there is a mapping \(\pi\) between \(O_{i}\) and \(T_{i}\) such that every item of \(O_{i}\cap T_{i}\) is mapped to itself, and every item of \(O_{i}\setminus T_{i}\) is mapped to either the empty set or an item in \(T_{i}\setminus O_{i}\), with no two items of \(O_{i}\) mapped to the same item of \(T_{i}\). We next give a lower bound on \(\nabla_{i}(x,T_{i})\).
**Lemma 2**: _For all \(i\in[m]\) and \(x\in O_{i}\), we have_
\[\nabla_{i}(x,T_{i})\geq\Delta_{i}(x,T_{i})-\Delta_{i}(\pi(x),T_{i}\setminus \{\pi(x)\}). \tag{10}\]
_Proof:_ We prove this lemma in three cases. We first consider the case when \(x\notin T_{i}\) and \(\pi(x)\neq\emptyset\). In this case, the following chain proves this lemma.
\[\nabla_{i}(x,T_{i}) \geq f_{i}(\{x\}\cup T_{i}\setminus\{\pi(x)\})-f_{i}(T_{i}) \tag{11}\] \[= \Delta_{i}(x,T_{i})-\Delta_{i}(\pi(x),T_{i}\cup\{x\}\setminus\{\pi (x)\})\] (12) \[\geq \Delta_{i}(x,T_{i})-\Delta_{i}(\pi(x),T_{i}\setminus\{\pi(x)\}) \tag{13}\]
where the first inequality is by the definition of \(\nabla_{i}(x,T_{i})\) and the second inequality is by the assumption that \(f_{i}\) is a submodular function.
We next consider the case when \(x\notin T_{i}\) and \(\pi(x)=\emptyset\). In this case, because \(\pi(x)=\emptyset\), i.e., \(x\) is not mapped to any item from \(T_{i}\), we have \(|T_{i}|<k\). Hence,
\[\nabla_{i}(x,T_{i})=\max\{0,\max_{y\in T_{i}}\nabla_{i}(x,y,T_{i}),\Delta_{i} (x,T_{i})\}\geq\Delta_{i}(x,T_{i}). \tag{14}\]
Moreover, \(\pi(x)=\emptyset\) implies that
\[\Delta_{i}(\pi(x),T_{i}\setminus\{\pi(x)\})=0. \tag{15}\]
It follows that
\[\nabla_{i}(x,T_{i})\geq\Delta_{i}(x,T_{i})-0=\Delta_{i}(x,T_{i})-\Delta_{i}( \pi(x),T_{i}\setminus\{\pi(x)\}), \tag{16}\]
where the inequality is by inequality (14) and the equality is by equality (15).
At last, we consider the case when \(x\in T_{i}\). In this case, we have \(\Delta_{i}(x,T_{i})=0\), and \(\Delta_{i}(\pi(x),T_{i}\setminus\{\pi(x)\})\geq 0\), a consequence of the trimming phase (Lemma 1). Hence, \(\Delta_{i}(x,T_{i})-\Delta_{i}(\pi(x),T_{i}\setminus\{\pi(x)\})\leq 0\). It follows that
\[\nabla_{i}(x,T_{i})\geq 0\geq\Delta_{i}(x,T_{i})-\Delta_{i}(\pi(x),T_{i} \setminus\{\pi(x)\}). \tag{17}\]
\(\Box\)
Inequality (9) and Lemma 2 imply that
\[\mathbb{E}_{x^{*}}[\sum_{i=1}^{m}\nabla_{i}(x^{*},T_{i})] \geq \frac{1}{l}\sum_{i=1}^{m}\sum_{x\in O_{i}}\nabla_{i}(x,T_{i}) \tag{18}\] \[\geq \frac{1}{l}\sum_{i=1}^{m}\sum_{x\in O_{i}}(\Delta_{i}(x,T_{i})- \Delta_{i}(\pi(x),T_{i}\setminus\{\pi(x)\})). \tag{19}\]
Because \(f_{i}\) is submodular, we have
\[\sum_{x\in O_{i}}\Delta_{i}(x,T_{i})\geq f_{i}(O_{i}\cup T_{i})-f_{i}(T_{i}). \tag{20}\]
Moreover, because no two items from \(O_{i}\) are mapped to the same item from \(T_{i}\), we have
\[\sum_{x\in O_{i}}\Delta_{i}(\pi(x),T_{i}\setminus\{\pi(x)\})\leq \sum_{y\in T_{i}}\Delta_{i}(y,T_{i}\setminus\{y\})\leq f_{i}(T_{i}) \tag{21}\]
where the first inequality is by the observation that \(\Delta_{i}(y,T_{i}\setminus\{y\})\geq 0\) for all \(y\in T_{i}\) and the second inequality is by the assumption that \(f_{i}\) is submodular.
Inequalities (19), (20) and (21) together imply that
\[\mathbb{E}_{x^{*}}[\sum_{i=1}^{m}\nabla_{i}(x^{*},T_{i})] \geq \frac{1}{l}\sum_{i=1}^{m}\sum_{x\in O_{i}}(\Delta_{i}(x,T_{i})- \Delta_{i}(\pi(x),T_{i}\setminus\{\pi(x)\})) \tag{22}\] \[\geq \frac{1}{l}\sum_{i=1}^{m}(f_{i}(O_{i}\cup T_{i})-f_{i}(T_{i})-f_ {i}(T_{i}))\] (23) \[= \frac{1}{l}\sum_{i=1}^{m}(f_{i}(O_{i}\cup T_{i})-2f_{i}(T_{i})). \tag{24}\]
Taking the expectation over \(T_{1},\cdots,T_{m}\) for both the left and right hand sides of (24), we have
\[\mathbb{E}_{T_{1},\cdots,T_{m}}\big{[}\mathbb{E}_{x^{*}}[\sum_{i= 1}^{m}\nabla_{i}(x^{*},T_{i})]\big{]} \tag{25}\] \[\geq \mathbb{E}_{T_{1},\cdots,T_{m}}[\frac{1}{l}\sum_{i=1}^{m}(f_{i}(O _{i}\cup T_{i})-2f_{i}(T_{i}))]\] (26) \[= \mathbb{E}_{T_{1},\cdots,T_{m}}[\frac{1}{l}\sum_{i=1}^{m}(f_{i}(O _{i}\cup T_{i}))]-\mathbb{E}_{T_{1},\cdots,T_{m}}[\sum_{i=1}^{m}\frac{2}{l}f_ {i}(T_{i}))]\] (27) \[= \frac{1}{l}\mathbb{E}_{T_{1},\cdots,T_{m}}[\sum_{i=1}^{m}(f_{i}(O _{i}\cup T_{i}))]-\frac{2}{l}\mathbb{E}_{T_{1},\cdots,T_{m}}[\sum_{i=1}^{m}f_ {i}(T_{i}))]\] (28) \[\geq \frac{1}{l}(1-\frac{1}{l})^{t}\sum_{i=1}^{m}f_{i}(O_{i})-\frac{2}{ l}\mathbb{E}_{T_{1},\cdots,T_{m}}[f_{i}(T_{i}))]\] (29) \[= \frac{1}{l}(1-\frac{1}{l})^{t}F(O)-\frac{2}{l}\mathbb{E}_{T_{1}, \cdots,T_{m}}[f_{i}(T_{i}))]. \tag{30}\]
The second inequality is by the observation that \(\mathbb{E}_{T_{1},\cdots,T_{m}}[\sum_{i=1}^{m}(f_{i}(O_{i}\cup T_{i}))]\geq(1 -\frac{1}{l})^{t}\sum_{i=1}^{m}f_{i}(O_{i})\). To prove this inequality, recall that in each round, Sampling-Greedy randomly picks an item from \(M\) to be included in \(S\). Hence, right before entering round \(t\) of
Sampling-Greedy, each item \(x\in\Omega\) has a probability of at most \(p=1-(1-\frac{1}{l})^{t}\) of being included in \(S\) and consequently in \(T_{i}\) for all \(i\in[m]\). By Lemma 2.2 of (Buchbinder et al. 2014), we have \(\mathbb{E}_{T_{i}}[f_{i}(O_{i}\cup T_{i})]\geq(1-p)f_{i}(O_{i})=(1-\frac{1}{l} )^{t}f_{i}(O_{i})\) for all \(i\in[m]\). It follows that \(\mathbb{E}_{T_{1},\cdots,T_{m}}[\sum_{i=1}^{m}(f_{i}(O_{i}\cup T_{i}))]\geq(1- \frac{1}{l})^{t}\sum_{i=1}^{m}f_{i}(O_{i})\).
Let \(X_{t}\) denote the value of \(\mathbb{E}_{T_{1},\cdots,T_{m}}\big{[}\mathbb{E}_{x^{*}}[\sum_{i=1}^{m}f_{i}( T_{i})]\big{]}\) at the end of round \(t\). Inequality (30) implies that
\[X_{t+1}-X_{t} \geq \frac{1}{l}(1-\frac{1}{l})^{t}F(O)-\frac{2}{l}X_{t} \tag{31}\] \[\Rightarrow 2(X_{t+1}-X_{t}) \geq \frac{1}{l}(1-\frac{1}{l})^{t}F(O)-\frac{2}{l}X_{t}\] (32) \[\Rightarrow 2X_{t+1}-2X_{t} \geq \frac{1}{l}(1-\frac{1}{l})^{t}F(O)-\frac{2}{l}X_{t}\] (33) \[\Rightarrow 2X_{t+1} \geq \frac{1}{l}(1-\frac{1}{l})^{t}F(O)+(2-\frac{2}{l})X_{t}. \tag{34}\]
Here the step from (31) to (32) uses the fact that \(X_{t+1}\geq X_{t}\), since the algorithm never decreases any \(f_{i}(T_{i})\). Based on the above inequality, we next prove through induction that \(2X_{t}\geq\frac{t}{l}(1-\frac{1}{l})^{t-1}F(O)\). Note that \(X_{0}=0\), meaning that the utility before the start of the algorithm is zero. The induction step is established in the following manner:
\[2X_{t+1} \geq \frac{1}{l}(1-\frac{1}{l})^{t}F(O)+(2-\frac{2}{l})X_{t} \tag{35}\] \[\Rightarrow 2X_{t+1} \geq \frac{1}{l}(1-\frac{1}{l})^{t}F(O)+(1-\frac{1}{l})\frac{t}{l}(1- \frac{1}{l})^{t-1}F(O)\] (36) \[= \frac{1}{l}(1-\frac{1}{l})^{t}F(O)+\frac{t}{l}(1-\frac{1}{l})^{t }F(O)\] (37) \[= \frac{t+1}{l}(1-\frac{1}{l})^{t}F(O). \tag{38}\]
It follows that the value of \(2X_{l}\) is at least \((1-\frac{1}{l})^{l-1}F(O)\), which itself is bounded from below by \((1/e)\cdot F(O)\). Here, \(X_{l}\) represents the expected utility of our algorithm upon completion. Hence, the expected utility of our algorithm is at least \(X_{l}\geq(1/2e)\cdot F(O)\).
### Enhanced results for monotone case
For the case when \(f_{i}\) is both monotone and submodular, we will demonstrate that the approximation ratio of Sampling-Greedy is improved to \((1-1/e^{2})/2\), which recovers the results presented in (Stan et al. 2017). Observe that if \(f_{i}\) is monotone, we have \(f_{i}(O_{i}\cup T_{i})\geq f_{i}(O_{i})\). Hence, inequality (28) implies that
\[\mathbb{E}_{T_{1},\cdots,T_{m}}\big{[}\mathbb{E}_{x^{*}}[\sum_{i=1}^{m}\nabla _{i}(x^{*},T_{i})]\big{]} \tag{39}\]
\[\geq \frac{1}{l}\mathbb{E}_{T_{1},\cdots,T_{m}}[\sum_{i=1}^{m}(f_{i}(O_{i }\cup T_{i}))]-\frac{2}{l}\mathbb{E}_{T_{1},\cdots,T_{m}}[\sum_{i=1}^{m}f_{i}(T _{i}))] \tag{40}\] \[\geq \frac{1}{l}\mathbb{E}_{T_{1},\cdots,T_{m}}[\sum_{i=1}^{m}(f_{i}(O _{i}))]-\frac{2}{l}\mathbb{E}_{T_{1},\cdots,T_{m}}[\sum_{i=1}^{m}f_{i}(T_{i}))]\] (41) \[= \frac{1}{l}\sum_{i=1}^{m}f_{i}(O_{i})-\frac{2}{l}\mathbb{E}_{T_{1 },\cdots,T_{m}}[\sum_{i=1}^{m}f_{i}(T_{i}))]\] (42) \[= \frac{1}{l}F(O)-\frac{2}{l}\mathbb{E}_{T_{1},\cdots,T_{m}}[\sum_ {i=1}^{m}f_{i}(T_{i}))] \tag{43}\]
where the first equality is because \(O_{i}\) is a fixed set for all \(i\in[m]\). Let \(X_{t}\) denote the value of \(\mathbb{E}_{T_{1},\cdots,T_{m}}\big{[}\mathbb{E}_{x^{*}}[\sum_{i=1}^{m}\nabla _{i}(x^{*},T_{i})]\big{]}\) at the end of round \(t\). Inequality (43) implies that
\[X_{t+1}-X_{t}\geq\frac{1}{l}F(O)-\frac{2}{l}X_{t}. \tag{44}\]
Previous research (Stan et al., 2017) has demonstrated that by inductively solving the equation above, we can establish that \(X_{l}\geq((1-1/e^{2})/2)\cdot F(O)\).
|
2309.03294 | MALITE: Lightweight Malware Detection and Classification for Constrained
Devices | Today, malware is one of the primary cyberthreats to organizations. Malware
has pervaded almost every type of computing device including the ones having
limited memory, battery and computation power such as mobile phones, tablets
and embedded devices like Internet-of-Things (IoT) devices. Consequently, the
privacy and security of the malware infected systems and devices have been
heavily jeopardized. In recent years, researchers have leveraged machine
learning based strategies for malware detection and classification. Malware
analysis approaches can only be employed in resource constrained environments
if the methods are lightweight in nature. In this paper, we present MALITE, a
lightweight malware analysis system, that can classify various malware families
and distinguish between benign and malicious binaries. MALITE converts a binary
into a gray scale or an RGB image and employs low memory and battery power
consuming as well as computationally inexpensive malware analysis strategies.
We have designed MALITE-MN, a lightweight neural network based architecture and
MALITE-HRF, an ultra lightweight random forest based method that uses histogram
features extracted by a sliding window. We evaluate the performance of both on
six publicly available datasets (Malimg, Microsoft BIG, Dumpware10, MOTIF,
Drebin and CICAndMal2017), and compare them to four state-of-the-art malware
classification techniques. The results show that MALITE-MN and MALITE-HRF not
only accurately identify and classify malware but also respectively consume
several orders of magnitude lower resources (in terms of both memory as well as
computation capabilities), making them much more suitable for resource
constrained environments. | Sidharth Anand, Barsha Mitra, Soumyadeep Dey, Abhinav Rao, Rupsa Dhar, Jaideep Vaidya | 2023-09-06T18:17:38Z | http://arxiv.org/abs/2309.03294v1 | # MALITE: Lightweight Malware Detection and Classification for Constrained Devices
###### Abstract
Today, malware is one of the primary cyberthreats to organizations. Malware has pervaded almost every type of computing device including the ones having limited memory, battery and computation power such as mobile phones, tablets and embedded devices like Internet-of-Things (IoT) devices. Consequently, the privacy and security of the malware infected systems and devices have been heavily jeopardized. In recent years, researchers have leveraged machine learning based strategies for malware detection and classification. Malware analysis approaches can only be employed in resource constrained environments if the methods are lightweight in nature. In this paper, we present MALITE, a lightweight malware analysis system, that can classify various malware families and distinguish between benign and malicious binaries. MALITE converts a binary into a gray scale or an RGB image and employs low memory and battery power consuming as well as computationally inexpensive malware analysis strategies. We have designed MALITE-MN, a lightweight neural network based architecture and MALITE-HRF, an ultra lightweight random forest based method that uses histogram features extracted by a sliding window. We evaluate the performance of both on six publicly available datasets (Malimg, Microsoft BIG, Dumpware10, MOTIF, Drebin and CICAndMal2017), and compare them to four state-of-the-art malware classification techniques. The results show that MALITE-MN and MALITE-HRF not only accurately identify and classify malware but also respectively consume several orders of magnitude lower resources (in terms of both memory as well as computation capabilities), making them much more suitable for resource constrained environments.
Keywords:Malware detection Malware classification Lightweight Constrained environment
## 1 Introduction
Malicious software or malware is a huge problem worldwide with over 5.5 billion attacks during 2022 [8]. Malware is an application that can potentially damage the environment in which it is executed. Cyber criminals propagate and introduce malware into various computing systems mostly using the Internet with the intent of damaging such systems, espionage and information theft thereby violating user security and privacy. Such computing systems include personal desktop computers, laptops, workstations, servers, mobile phones, tablets and even embedded devices like Internet-of-Things (IoT) devices. Out of the aforementioned computing environments, mobile phones, tablets and embedded devices are considered as resource constrained devices with respect to available memory, battery capacity and computational power. Over the past few years, there has been numerous incidents of malware attacks on different types of computing systems, with over 60 million malware attacks against IoT devices [8].
Therefore, safeguarding these systems against the various malware families is of utmost importance. Researchers have invested a considerable amount of effort in designing strategies for identifying and classifying malware. The rapid development of artificial intelligence in recent years has lead to the growing interest in leveraging machine learning and deep learning models to effectively detect and classify malware [14], [30], [34], [57]. Inspite of the emergence of a huge number of methods dedicated towards malware analysis, few of these approaches pay attention to the overhead imposed in terms of memory consumption, battery consumption and computational complexity. These overheads need to be duly accounted for and even optimized if defensive strategies against malware are to be deployed in resource constrained environments like mobile phones, tablets and IoT devices. In fact, in recent years, there has been a growing concern regarding the security of these devices due to the surge in the spread of android and IoT malware. Thus, we need lightweight malware detection and classification techniques to protect such constrained devices.
In this paper, we propose **M**alware **A**nalysis using **L**ightweight Methods or MALITE to distinguish between benign and malware binaries as well as classify the different malware families. MALITE is capable of performing accurate malware analysis and requires much lesser memory and computation power resulting in reduced battery consumption compared to several state-of-the-art methods. The main contributions of the paper are summarized as follows:
* We propose MALITE to identify malware binaries and categorize the various malware families by transforming the binaries into gray scale or RGB images. The underlying lightweight methods employed by MALITE use low cost strategies like histogram computation, random forest classifier and residual bottleneck layers [53] resulting in two variants, MALITE-HRF and MALITE-MN.
* We design an ultra lightweight technique MALITE-HRF in terms of both parameter count and computational cost. MALITE-HRF employs a sliding window to extract the histogram feature from an input image. These histogram features are then used by a random forest classifier to categorize
malware. To ensure the lightweight nature of the proposed method, we use histogram binning and restrict the number of trees (also called estimators) as well as the height of each estimator in random forest.
* We also present MALITE-MN, a lightweight neural network architecture designed using computationally inexpensive residual bottleneck layers [53]. Further, the design of this architecture ensures a low parameter count leading to reduced memory usage.
* We show the efficacy of MALITE-HRF and MALITE-MN by evaluating them using six open-source datasets like Malimg [43], Microsoft BIG [52], Dumpware10 [5], MOTIF [28], Drebin [1] and CICAndMal2017 [36].
* We compare our strategies with four existing malware categorization methods that include 3C2D [42], DTMIC [35], an approach proposed by Wong et al. [61] and MalConv2 [50] with respect to identification of various malware families and benign vs. malware classification.
* The experimental results highlight that our proposed methods are not only accurate for malware analysis but are also extremely lightweight in terms of parameter count, number of multiplication and addition operations performed and model size when compared to the above mentioned four state-of-the-art approaches. Specifically, MALITE-MN requires between 226 to 2 times lesser computational overhead while being between 375 to 6 times smaller in size than these existing methods while achieving comparable or even better performance. MALITE-HRF is even more lightweight, requiring between 528611 to 5598 times lesser computational overhead while being between 6761 to 107 times smaller in size than these existing methods and still achieves comparable or better performance.
The rest of the paper is organized as follows. Section 2 reviews the existing literature on malware detection and categorization. In Section 3, we describe MALITE, our lightweight framework for malware analysis. Dataset description and performance evaluation of MALITE are presented in Section 4. Finally, we conclude the paper in Section 5.
## 2 Related Work
Over the past several decades, researchers have focused on designing techniques for detecting and classifying different types of malware. The strategies employed for performing the detection and classification tasks include static analysis, dynamic analysis, rule-based approach and graph based methods. The rapid progress of research in the field of artificial intelligence, specially machine learning and deep learning, has led to the development of malware analysis and detection strategies using machine learning. An active learning based approach to collect potentially suspicious files in order to update existing malware databases has been proposed in [44]. Bae et al. [3] propose a machine learning based method to distinguish ransomware from benign files and classify it among other malware families. A deep learning based strategy to classify malware has been proposed
in [2]. Recently, a rule-based malware detection approach based on the industry standard of YARA rules 6 has been presented in [6].
Footnote 6: [https://yara.readthedocs.io/en/stable/](https://yara.readthedocs.io/en/stable/)
Android is currently one of the most popular operating systems for mobile phones. As a result, android has been extensively targeted for malware injection. Hence, in the recent past, several techniques have been put forth to make android resilient to malware. Li et al. in [37] have proposed Significant Permission IDentification (SigPID) to identify android malware based on the permissions used by the android applications. The authors in [46] have designed a graph based android malware detection framework. Jerbi et al. [25] have proposed a dynamic malware detection strategy and a genetic algorithm based artificial malware pattern generator. A machine learning based android malware detection approach based on dynamic analysis has been presented in [40]. A random forest and API call based strategy that can perform benign vs. malicious android app classification has been outlined in [29]. Other android malware detection and classification strategies encompassing static analysis, dynamic analysis, API call based features, machine learning and deep learning include [15], [18], [19], [24], [30], [32], [33], [34], [39], [45], [47], [48], [54], [56], [63]. Researchers have also proposed methods resilient to malware evolution and obfuscation [20], [31], [67].
A very popular direction of research in malware detection and classification involves converting the malware binaries into gray scale or RGB images and then classifying these images. Baptista et al. [4] have proposed a deep learning based method for identifying malware by transforming the malware binaries into RGB images. The authors in [22] use a Convolutional Neural Network (CNN) based architecture to detect malware after converting the malware files into gray scale and color images. Vasan et al. [58], [59] have also developed image based malware classification techniques that use CNN. In [49], the authors have converted the malware and benign files into gray scale and color images and have used CNN to perform malware vs. benign classification. An image and deep transfer learning based malware categorization strategy has been presented in [35]. The work in [65] has transformed the malware binaries into images and have used deep convolutional neural network to classify the images. Other image and deep learning based strategies to distinguish between malware and benign files and to determine the various malware families include [7], [13], [21], [26], [27], [38], [42], [50], [51], [57], [60], [61], [62].
The image based strategies for malware analysis have also been applied for android malware. In [68], Unver et al. have developed an android malware classification method by converting the android application files into gray scale images. DEXRAY, a CNN-based approach for malware detection that converts the dex files of the android apps into gray scale images has been presented in [10]. In [11], the authors have developed a method to identify android malware by converting the malware files into gray scale images and using GIST features. [14] outlines an autoencoder based android malware classification technique that produces an image representation of the API call sequences of various android apps. Several other similar image based approaches are [16], [17], [41], [64].
In spite of the presence of the wide array of works on malware analysis, few of them take into consideration the memory overhead and computational complexity of the classification models. In fact, very few of the existing approaches attempt to make the detection and classification tasks lightweight so that the resulting models are suitable for mobile and embedded devices. Some of the methods like [20] and [45] highlight the lightweight property in terms of the feature vector size and execution time. However, to the best of our knowledge, none of the current malware analysis works actually showcase the lightweight nature of the detection and classification models in terms of the memory consumed, parameter count and the number of operations performed. This paper attempts to bridge this gap by presenting MALITE for resource constrained devices.
## 3 Proposed Approach
In this section, we present our lightweight methods suitable for embedded or memory constrained devices to classify malware. Our proposed methods perform malware identification and categorization by converting the malware binary into an image. The image conversion method is described in Subsection 3.1. The converted malware images are then classified into malware families based on the features extracted from these images. We propose an end-to-end malware classification technique by designing a lightweight Convolutional Neural Network (CNN), as well as another computationally efficient classification method based on feature extraction. The proposed classification methods are discussed in Subsection 3.2.
### Binary Visualization
A visual representation of the internal static structure of a binary file can be obtained by converting the binary file into an array of 8-bit unsigned integers. This array forms an image-like plot of the binary file fragments and is known as byteplot. This technique was originally introduced in [9] and is an efficient method to interpret binary files. Byteplot was later applied by Nataraj et al. [43] for the purpose of classifying malware based on their image representations.
In this work, we apply the byteplot method to transform binary files, both malware and benign, into image files. We first convert each byte of the binary file into an unsigned 8-bit integer, ranging from 0 to 255, such that 0 corresponds to black and 255 to white. We then reshape this integer array into fixed-width images, whose height depends on the malware or benign file size. We pad the
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline
**File Size (in KB)** & \textless{}10 & 10 - 30 & 30 - 60 & 60 - 100 & 100 - 200 & 200 - 500 & 500 - 1000 & \textgreater{}1000 \\ \hline
**Width** & 32 & 64 & 128 & 256 & 384 & 512 & 768 & 1024 \\ \hline \end{tabular}
\end{table}
Table 1: Relation between File Size and Image Width [43]
images with zeros to make their heights multiples of 32. The fixed widths are chosen based on the previous work of Nataraj et al. [43], as shown in Table 1. We generate both gray scale and color images from the integer array. For gray scale images, each integer value represents the pixel intensity. For color images, we use three consecutive integer values to form the RGB components of each pixel. We resize all the images to square images, each of dimension \(256\times 256\) for our work. Some sample gray scale images are shown in Fig. 1.
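For concreteness, a minimal Python sketch of the grayscale conversion is given below; the function name and the exact padding and resizing details are illustrative choices rather than a verbatim transcription of our implementation.

```python
import numpy as np
from PIL import Image

# File size (KB) -> image width, following Table 1.
WIDTH_BINS = [(10, 32), (30, 64), (60, 128), (100, 256),
              (200, 384), (500, 512), (1000, 768), (float("inf"), 1024)]

def byteplot(path, target=256):
    """Convert a binary file into a square grayscale byteplot image."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    width = next(w for limit, w in WIDTH_BINS if data.size / 1024 < limit)
    height = -(-data.size // width)          # rows needed (ceiling division)
    height += (-height) % 32                 # pad the height to a multiple of 32
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[:data.size] = data                # zero padding at the end of the file
    img = Image.fromarray(padded.reshape(height, width), mode="L")
    return np.asarray(img.resize((target, target)))
```

The RGB variant proceeds in the same way, except that three consecutive bytes are grouped into one pixel before reshaping.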
### MALITE: Lightweight Framework for Malware Detection and Classification
To perform lightweight malware identification and classification, we present MALITE, a framework that consists of two novel methods. The first method, MALITE-HRF, extracts patchwise histogram features from malware and benign images and uses a random forest classifier to distinguish among different malware families. The second method, MALITE-MN, leverages a lightweight Convolutional Neural Network (CNN) architecture to learn discriminative features from malware images and perform classification. In the subsequent sections, we discuss these methods in detail.
**MALITE-HRF:** A lightweight technique for malware identification and classification, MALITE-HRF is developed by computing a histogram of the intensity values present in a malware or benign image. A histogram of the intensity values represents the frequency of a particular intensity value present in an image. We utilize this concept to understand the frequency of hex codes present in the original malware and benign binary files. Usually, hex codes present in the binary files refer to some assembly language instructions. Our hypothesis is that combinations of a few unique sets of such instructions constitute a specific binary file,
Figure 1: Examples of constructed images for selected malware families from Microsoft BIG dataset [52] (top part) and Dumpware10 [5] (bottom part, includes benign sample).
and one binary file differs from another based on the presence of such unique sets of instructions.
We utilize the concept of the histogram on patches of a binary image to identify the unique sets of instructions. The patches in the images are obtained in such a way that the height and the width of each patch are multiples of 8. We have experimented with different patch sizes, and we empirically found that a patch of dimension \(32\times 256\) for an image of dimension \(256\times 256\) gives the best result when patches are computed with an overlapping window of 50%. The overall block diagram of MALITE-HRF is shown in Fig. 2.
To reduce the model size and overall computational cost, we further bin the intensity range into 64 histogram bins, a value chosen empirically. Thus, each patch is represented by a 64-dimensional histogram feature, and for an image of dimension \(256\times 256\), we obtain 16 such patches. Therefore, an image of dimension \(256\times 256\) is represented with a 1024-dimensional feature vector. Typical examples of extracted histogram features for a few selected malware families from the Microsoft BIG dataset [52] are shown in Fig. 3. From this figure, it is evident that there are indeed some common and distinct patterns present between two binary files from the same and different malware families, respectively. We train a random forest classifier on this 1024-dimensional feature vector for malware identification and classification.
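A minimal sketch of the resulting feature pipeline is shown below (Python, scikit-learn). The patch geometry, bin count and forest hyperparameters are the ones reported above; the handling of the last overlapping window is an assumption we make for illustration, chosen so that a \(256\times 256\) image yields 16 patches and hence a 1024-dimensional vector.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def hrf_features(img, ph=32, pw=256, bins=64):
    """Patch-wise intensity histograms with a 50% overlapping vertical window."""
    feats = []
    for top in range(0, img.shape[0], ph // 2):   # stride = half the patch height
        patch = img[top:top + ph, :pw]            # numpy clips the last window
        hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
        feats.append(hist)
    return np.concatenate(feats)                  # 16 patches x 64 bins = 1024

# Hypothetical usage, with X_imgs a list of byteplot images and y the labels:
# X = np.stack([hrf_features(im) for im in X_imgs])
# clf = RandomForestClassifier(n_estimators=51, max_depth=15).fit(X, y)
```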
**MALITE-MN:** The design choice of MALITE-MN is influenced by efficient mobile architectures for computer vision applications, MobileNetV2 [53] being one such architecture. The key component of MALITE-MN is the residual bottleneck block, which reduces the computational complexity of convolutional layers by approximately a factor of \(k^{2}\), where \(k\) is the convolution kernel size. In our case, we choose \(k=3\), resulting in a roughly nine-fold reduction in the number of multiplication and addition operations. The typical composition and structure of the bottleneck layers are shown in Table 2.
We describe the architecture of MALITE-MN as follows. The image embedding extractor consists of a convolutional layer, followed by eight bottleneck blocks, followed by another convolutional layer. We apply batch normalization
Figure 2: MALITE-HRF: Each colored box denotes a sliding window from which histogram features are extracted that are used for classification using random forest
and activation layers after each layer in the extractor. We use a kernel size of \(3\times 3\) for all convolutional and bottleneck layers. The expansion factor \(t\) of the bottleneck blocks is set to 6, except for the first one, based on the recommendation of Sandler et al. [53]. The classification head consists of a fully connected layer with a softmax activation layer. The number of output channels of the fully connected layer matches the number of target classes of the classifier. The overall architecture of MALITE-MN is shown in Fig. 4.
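The block of Table 2 translates directly into PyTorch; a minimal sketch is given below. The per-stage channel counts and stride schedule of the eight blocks are not repeated here, so the snippet should be read as an illustration of the bottleneck structure rather than the complete MALITE-MN definition.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Bottleneck residual block of Table 2: 1x1 expansion, 3x3 depthwise, 1x1 projection."""

    def __init__(self, c_in, c_out, stride=1, t=6):
        super().__init__()
        hidden = t * c_in
        self.use_skip = stride == 1 and c_in == c_out
        self.block = nn.Sequential(
            nn.Conv2d(c_in, hidden, 1, bias=False),                  # 1x1 expansion
            nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1,                  # 3x3 depthwise
                      groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, c_out, 1, bias=False),                 # 1x1 linear projection
            nn.BatchNorm2d(c_out),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_skip else out
```

With \(k=3\), the depthwise \(3\times 3\) convolution in the middle of the block is what provides the roughly nine-fold saving in Mult-Add operations discussed above.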
## 4 Results and Discussion
We present the datasets, experimental setup, and results of our experiments in this section. We evaluate MALITE on different publicly available datasets and analyze its performance in detail.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Input Resolution** & **Layers** & **Output Resolution** \\ \hline \(h\times w\times x\) & conv2d (\(1\times 1\)), stride=1 + BN + ReLU & \(h\times w\times tx\) \\ \hline \(h\times w\times tx\) & dconv (\(3\times 3\)), stride=\(s\) + BN + ReLU & \(\frac{h}{s}\times\frac{w}{s}\times tx\) \\ \hline \(\frac{h}{s}\times\frac{w}{s}\times tx\) & conv2d (\(1\times 1\)), stride=1 + BN + Linear & \(\frac{h}{s}\times\frac{w}{s}\times\hat{x}\) \\ \hline \end{tabular}
\end{table}
Table 2: Bottleneck residual block [53], where \(t\) is the expansion factor, \(x\) and \(\hat{x}\) are input and output channels of the bottleneck layer, \(s\) is the stride size. conv2d (\(1\times 1\)), stride=1 refers to a convolution layer with kernel size of (\(1\times 1\)), dconv (\(3\times 3\)), stride=\(s\) refers to a depthwise separable convolution with kernel size of (\(3\times 3\)) and stride \(s\), whereas BN refers to batch normalization layer.
Figure 3: Typical examples of histogram features computed from Microsoft BIG dataset [52] for 3 randomly selected malware families, namely, Ramnit, Kelihos_ver3, and Obfuscator.ACY (x-axis: feature dimension, y-axis: frequency).
### Dataset Description
In this sub-section, we provide a brief description of the open-source datasets that we have used for our experiments.
**Malimg [43]**: This dataset contains 25 malware families and has an overall sample count of 9,458. Some malware families present are Allaple.L, Yuner.A, Lolyda.AA 1, Instantaccess, Fakerean, Adialer.C, Dontovo.A, Skintrim.N. The dataset contains gray scale image representations of the different malware binaries.
**Microsoft BIG [52]**: This dataset was published in 2015 with the inception of the Microsoft Malware Classification Challenge. The dataset is almost 0.5 TB in uncompressed form. It contains 9 families of malware such as Gatak, Lollipop, Vundo, Ramnit, Simda, Obfuscator.ACY, Kelihos_ver1, Kelihos_ver3 and Tracur.
**Dumpware10 [5]**: Dumpware10 contains 11 classes out of which one is the benign class and 10 are malware families. It contains a total of 4,294 samples that include 3,433 for training and 861 for validation. The samples are present in the form of RGB images. The malware families included in this dataset are Adposhel, BrowseFox, Allaple, Dinwood, Amonetize, InstallCore, AutoRun, VBA, MultiPlug and Vilsel.
**MOTIF [28]**: MOTIF contains 3,095 malicious samples spanning 454 malware families. To the best of our knowledge, this is the largest open-source malware dataset to date. The dataset was labelled using the publicly available threat reports of several cyber security organizations. However, the dataset is heavily imbalanced due to the presence of a large number of classes. In our experiments, we have considered classes having more than 5 samples. The total number of such classes is 136.
Figure 4: MALITE-MN Architecture. In this figure x, \(\hat{\text{x}}\), and s respectively represent input channels, output channels, and stride per layer, whereas, t represents the expansion factor of the bottleneck layer and cls represents the number of target classes for the classification task.
**Drebin [1]**: The Drebin dataset is an android malware dataset and was published in 2014. It contains 5,560 malware samples from 179 families of malware. The top 20 classes of this dataset include FakeInstaller, Opfake, Adrd, GingerMaster, Kmin, Plankton, Geinimi, DroidDream, FakeRun, Gappusin, MobileTx, LinuxLotoor, Iconosys, BaseBridge, DroidKungFu, SMSreg, GoldDream, FakeDoc, SendPay and Imlog and contain a total of 1,048 samples. In our experiments, we have used these top 20 classes.
**CICAndMal2017 [36]**: This android malware dataset consists of 429 malware samples and 5,065 benign samples. It contains samples from Adware, Ransomware, Scareware and SMS malware spanning 42 malware families some of which are Dowgin, Gooligan, Jisut, Pletor, AVpass, FakeTaoBao, Biige, Mazarbot.
### Experimental Setup
The experiments of MALITE-HRF involved varying several parameters for feature extraction: the bin size of the histogram, the patch height and width, and the number of estimators in the random forest classifier. Each experiment is labeled as 'bin-ph-pw', where bin, ph, and pw correspond to the parameters bin size, patch height and patch width respectively. The patch height and width were constrained by ph \(\leq\) pw, and pw was either equal to ph or 256. The patch height values were set to 8, 16, 32, 64, 128, and 256. The bin size values ranged from 16 to 256. The random forest classifier was tested with four different numbers of estimators: 11, 31, 51, and 101, all with a maximum depth of 15. The Microsoft BIG dataset [52] was used for the experiments for parameter selection. Fig. 5 presents the results. The legend for each line graph is labeled as RFe, where e denotes the number of estimators used for random forest. The figure shows that the optimal parameters are bin = 64, ph = 32, pw = 256, and 51 estimators. The
Figure 5: Parameter selection for MALITE-HRF using precision as an evaluation metric. Along x-axis, we present various experimental setups using bin size of the histogram, height and width of the patch for feature computation. The x-axis is labeled in the format of ‘bin-ph-pw’, where bin corresponds to bin size and ph and pw represent the height and width of a patch, respectively. The y-axis denotes precision.
performance improved with increasing bin size from 16 to 64, but not further. The number of estimators affected the trade-off between accuracy and model size.
In MALITE-MN model, we varied the expansion factor \(t\) of the bottleneck layers from 5 to 10. We found that the performance of the model was not sensitive to the choice of \(t\) within this range for various vision classification tasks. Therefore, we fixed \(t\) at 6 for the rest of the experiments, following the suggestion of Sandler et al. [53].
To analyze MALITE with respect to various malware classification and identification tasks on the aforementioned datasets, we have used four state-of-the-art approaches for malware analysis. Details of the models used in our experiments are as follows:
**3C2D:** The model was proposed by Mohammed et al. [42] for malware classification. This model is a simple CNN based model with three convolutional layers followed by two fully connected layers, and thus the name 3C2D.
**DTMIC:** Kumar et al. [35] proposed a transfer learning based method for malware classification. In this model, they had used VGG16 [55] network pretrained on Imagenet dataset [12]. They had frozen the initial encoder part of the network for training. During training, only the classification head of the network was trained for malware classification purposes.
**SDN-LSVM:** Wong et al. [61] proposed a transfer learning based method that leveraged two pretrained models, ShuffleNet [66] and DenseNet-201 [23], for feature extraction. These models, with 173 and 201 layers respectively, were trained on the ImageNet dataset [12]. They concatenated the feature vectors obtained from the global average pooling layer of each model and fed them to a linear SVM classifier with a one vs. one scheme. In the rest of the paper, we refer to this approach as SDN-LSVM.
**MalConv2:** The memory-improved version of the MalConv model was proposed by Raff et al. [50] for the classification of malware by consuming the whole malware binary file as sequential data. MalConv2 consists of two 1D convolutional layers. Each 1D convolutional layer uses a filter size and stride of 512, with 128 channels and an embedding dimension of 8.
We trained all the models using the Adam optimizer and the categorical cross-entropy loss. We applied cosine annealing with a quarter period to the learning rate, which decayed from \(10^{-4}\) to \(5\times 10^{-5}\) over 1000 epochs, with a warmup phase of 5000 steps. The training was performed on an Nvidia A100 GPU.
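The schedule can be reproduced with a standard LambdaLR wrapper; the snippet below is a sketch with a placeholder model and a hypothetical total step budget, combining the 5000-step warmup with a quarter-period cosine decay from \(10^{-4}\) to \(5\times 10^{-5}\).

```python
import math
import torch

model = torch.nn.Linear(1024, 10)             # placeholder standing in for MALITE-MN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

LR_MAX, LR_MIN = 1e-4, 5e-5
WARMUP_STEPS, TOTAL_STEPS = 5_000, 100_000    # hypothetical total step budget

def lr_scale(step):
    """Linear warmup, then a quarter-period cosine decay from LR_MAX to LR_MIN."""
    if step < WARMUP_STEPS:
        return step / WARMUP_STEPS
    progress = min(1.0, (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS))
    lr = LR_MIN + (LR_MAX - LR_MIN) * math.cos(0.5 * math.pi * progress)
    return lr / LR_MAX                        # LambdaLR multiplies the base lr

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_scale)
# Inside the training loop: loss.backward(); optimizer.step(); scheduler.step()
```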
### Results and Discussion
Table 3 compares the proposed models MALITE-HRF and MALITE-MN with four existing state-of-the-art approaches for malware detection and categorization based on the number of parameters, the number of multiplication-addition (Mult-Add) operations, and the size of the models. The number of parameters reflects the complexity and the capacity of the models. The number of Mult-Add operations is directly proportional to the model computational complexity which in turn relates to the inference time and the battery power consumption of the
device on which the model is deployed. The size of a model represents its memory requirement and storage overhead. In our work, we compute the number of multiplication and addition operations for convolutional layers and bottleneck layers by using the methods described in [53]. For histogram feature extraction, the number of operations is equivalent to (\(\mathrm{n}\times\mathrm{ph}\times\mathrm{pw}\)), where \(\mathrm{n}\) is the number of patches, \(\mathrm{ph}\) and \(\mathrm{pw}\) are the height and width of each patch, respectively. For the random forest, the number of computations is at most (\(\mathrm{e}\times\mathrm{ht}\)), where \(\mathrm{e}\) is the number of estimators present in the random forest classifier and \(\mathrm{ht}\) is the height of each such estimator tree.
In Table 3, all the values are computed considering input binary images of dimension 256 \(\times\) 256 and 10 output classes. In the case of MalConv2, the input image of dimension 256 \(\times\) 256 is treated as a one-dimensional array.
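As a sanity check, the MALITE-HRF row of Table 3 can be reproduced from these counting rules (the tree height is taken to be the maximum depth of 15 used in our experiments):

```python
n, ph, pw = 16, 32, 256     # patches per image and patch geometry (Section 3.2)
e, ht = 51, 15              # random forest estimators and maximum tree depth

print(n * ph * pw)          # 131072, i.e. ~0.13 million ops for feature extraction
print(e * ht)               # 765 comparisons at inference time, negligible
```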
From the table, it can be seen that the proposed methods use a significantly lower number of parameters, Mult-Add operations, and have a significantly lesser size than the existing approaches. This implies that our techniques are more efficient and lightweight for malware detection. Specifically, MALITE-HRF employs only 0.01 million parameters, 0.13 million Mult-Add operations, and has a size of 0.03 MB, which are orders of magnitude lower than the other models, thus making MALITE-HRF ultra lightweight. MALITE-MN uses 0.18 million parameters, 303.54 million Mult-Add operations, and has 0.81 MB size, which are also several times lower than 3C2D, DTMIC, SDN-LSVM and MalConv2.
The existing methods, on the other hand, have a much higher parameter count, number of Mult-Add operations, and size than the proposed models, which indicates that they are more complex and resource intensive for malware detection. Among them, 3C2D has the highest number of parameters (67.61 million) and the largest size (276.46 MB), which are mainly due to the use of high-dimensional fully connected layers. DTMIC and SDN-LSVM have higher numbers of Mult-Add operations (15353.06 million and 18724.06 million, respectively), which can be attributed to their use of deep pretrained CNN networks for feature computation. MalConv2 has a moderate number of parameters (1.07 million) and size (4.30 MB), but a very high number of Mult-Add operations (68719.51 million) because of the use of a long 1D convolutional neural network that requires a large number of sliding windows and feature maps. Our proposed approaches demonstrate a clear advantage over existing models in terms of
\begin{table}
\begin{tabular}{l r r r} \hline \hline \multirow{2}{*}{**Model**} & \# **Parameters** & \# **Mult-Adds Ops.** & **Size** \\ & **(in millions)** & **(in millions)** & **(in MB)** \\ \hline MALITE-HRF (proposed) & 0.01 & 0.13 & 0.03 \\ MALITE-MN (proposed) & 0.18 & 303.54 & 0.81 \\
3C2D [42] & 67.61 & 727.85 & 276.46 \\ DTMIC [35] & 17.92 & 15 353.06 & 71.74 \\ SDN-LSVM [61] & 23.27 & 18 724.06 & 82.96 \\ MalConv2 [50] & 1.07 & 68 719.51 & 4.30 \\ \hline \hline \end{tabular}
\end{table}
Table 3: # Parameters, # Multiplication-Addition Operations, and Sizes of Models.
efficiency and lightweight design. It is to be noted here that MALITE is 6,761 (HRF) to 375 (MN) times and 5,598 (HRF) to 2 (MN) times smaller with respect to parameter count and number of Mult-Add operations, respectively, than the largest model 3C2D. Moreover, MALITE is also smaller than the smallest model MalConv2 with respect to parameter count and Mult-Add operations by 107 (HRF) to 6 (MN) times and 528,611 (HRF) to 226 (MN) times, respectively.
To evaluate the proposed methods in terms of their effectiveness for malware classification tasks, we compare our models with four state-of-the-art techniques on six publicly available datasets. The metrics used in our comparison are accuracy which measures the overall correctness and F1-Score which is the harmonic mean of precision and recall. Precision and recall respectively measure the fraction of true positives among predicted positives, and the fraction of true positives among actual positives. The higher the values of these metrics, the better the performance of the model.
We present in Fig. 6 the results of malware family classification on four datasets namely, Malimg, Microsoft BIG, Dumpware10, and MOTIF using six different models, including our models, MALITE-HRF and MALITE-MN. From Fig. 6, it can be observed that MALITE-HRF and MALITE-MN achieve comparable or superior performance than the other four techniques on most of the datasets. MALITE-MN outperforms all the other models on Microsoft BIG, achieving the highest accuracy and F1-Score of 97.89% and 97.56% respectively. MALITE-MN gives an accuracy of 99.47% and an F1-Score of 99.50% on Malimg which is approximately within 0.25% of the best performing model 3C2D. MALITE-HRF also performs well on these two datasets, with accuracy and F1-Score values almost equal to or above 95%.
On Dumpware10 dataset, the proposed models also show good performance, with accuracy value above 91% and F1-Score above 92% for MALITE-HRF and accuracy and F1-Score values of \(\approx\)96% for MALITE-MN. However, on this dataset, the 3C2D model achieves the best performance, with accuracy and F1-Score of 97.31%, and 97.50%, respectively. This suggests that 3C2D can better handle the diversity and complexity of Dumpware10, which contains malware samples from 10 different sources. However, the performance of MALITE-MN is approximately within 1.4% of 3C2D. On the MOTIF dataset, our proposed models outperform all other models by considerable margins of 10% and 5% in terms of accuracy and F1-Score, respectively. The MOTIF dataset poses significant challenges for malware classification using machine learning and especially, Deep Neural Network (DNN) methods. First, the dataset is highly imbalanced and it consists of 136 malware families with more than 5 samples. Second, the majority of the malware classes have very few data points, which limits the ability of DNN methods to learn meaningful features and generalize well. Here, MALITE-HRF gives the best performance.
We also compare MALITE on two android malware datasets, Drebin, and CICAndMal2017. The comparison of the two proposed methods and the four state-of-the-art approaches is shown in Fig. 7.
Figure 6: Malware family classification comparison of MALITE with 3C2D, DTMIC, SDN-LSVM and MalConv2 for 4 malware datasets, Malimg, Microsoft BIG, Dumpware10 and MOTIF in terms of Accuracy and F1-Score. In each graph, a model is represented by a colored bubble and the bubble size represents the corresponding model size with respect to the size of the smallest model, MALITE-HRF. x-axis: No. of Mult-Add operations in millions, y-axis: Accuracy or F1-Score.
Figure 7: Performance comparison of MALITE with 3C2D, DTMIC, SDN-LSVM and MalConv2. Top figure shows malware family classification results for 2 android malware datasets, Drebin Top20 and CICAndMal2017 and bottom figure shows benign vs. malware classification results for CICAndMal2017 and Dumpware10 in terms of Accuracy and F1-Score. In each graph, a colored bubble represents a model and the bubble size represents the corresponding model size with respect to MALITE-HRF.
It can be observed that on both the datasets, 3C2D outperforms all other models with respect to accuracy and F1-Score with values greater than 97% and \(\approx\)98%, respectively. However, on these datasets MALITE-MN obtains an accuracy and F1-Score of \(\approx\)96% or higher. Moreover, we observe that the accuracy of MALITE-HRF is approximately equal to or more than 90% and its F1-Score is \(\approx\)92%. Thus, at least one of our models gives a performance within a 2% margin of the best model, 3C2D. Overall, the results indicate that the proposed models can effectively capture the structural and semantic features of the malware binaries and distinguish among different malware families.
We further analyze MALITE with respect to benign vs. malware classification on CICAndMal2017 and Dumpware10 since only these two datasets contain benign samples. Performance of the methods is shown in Fig. 7 (bottom part). It can be observed in this figure that our proposed methods MALITE-HRF and MALITE-MN achieve comparable results on these two datasets for malware vs. benign classification, outperforming some of the baseline models and being close to the best-performing ones.
Our experimental study demonstrates that our proposed models are capable of performing malware classification and identification with quite a high accuracy, despite being several orders of magnitude smaller than state-of-the-art methods in terms of memory and computational cost. Out of the eight sets of experiments, in terms of F1-Score, MALITE gives best performance in two cases and achieves F1-Score values within less than 0.5%, 1.5% and 2% of the best performing model in one, three and two cases respectively. Therefore, we recommend MALITE-MN for resource-constrained devices, while MALITE-HRF is recommended for devices having extreme memory and computation constraints.
## 5 Conclusion
In this paper, we have proposed MALITE, a lightweight framework for malware identification and classification. We have designed two variants of MALITE: MALITE-HRF, which combines lightweight methods like patch based histogram computation and a random forest classifier, and MALITE-MN, which is a lightweight neural network based classifier using computationally inexpensive bottleneck layers. Experimental results on six open-source datasets demonstrate the effectiveness of the proposed techniques in spite of being several times lighter in terms of computational overhead and memory consumption when compared to state-of-the-art malware analysis approaches, thus making MALITE suitable for constrained computing systems. In the future, we intend to design lightweight non-image based malware detection and classification strategies. Moreover, we would also like to design lightweight techniques that are resilient to malware evolution and obfuscation and are capable of detecting zero-day attacks.
## Acknowledgments
The authors gratefully acknowledge the computing time provided on the high performance computing facility, Sharanga, at the Birla Institute of Technology and Science - Pilani, Hyderabad Campus. |
2309.12399 | Towards exact algorithmic proofs of maximal mutually unbiased bases sets
in arbitrary integer dimension | In this paper, we explore the concept of Mutually Unbiased Bases (MUBs) in
discrete quantum systems. It is known that for dimensions $d$ that are powers
of prime numbers, there exists a set of up to $d+1$ bases that form an MUB set.
However, the maximum number of MUBs in dimensions that are not powers of prime
numbers is not known.
To address this issue, we introduce three algorithms based on First-Order
Logic that can determine the maximum number of bases in an MUB set without
numerical approximation. Our algorithms can prove this result in finite time,
although the required time is impractical. Additionally, we present a heuristic
approach to solve the semi-decision problem of determining if there are $k$
MUBs in a given dimension $d$.
As a byproduct of our research, we demonstrate that the maximum number of
MUBs in any dimension can be achieved with definable complex parameters,
computable complex parameters, and other similar fields. | Santiago Cifuentes, Nicolás Ciancaglini, Guido Bellomo, Santiago Figueira, Ariel Bendersky | 2023-09-21T18:00:42Z | http://arxiv.org/abs/2309.12399v1 | Towards exact algorithmic proofs of maximal mutually unbiased bases sets in arbitrary integer dimension
###### Abstract
In this paper, we explore the concept of Mutually Unbiased Bases (MUBs) in discrete quantum systems. It is known that for dimensions \(d\) that are powers of prime numbers, there exists a set of up to \(d+1\) bases that form an MUB set. However, the maximum number of MUBs in dimensions that are not powers of prime numbers is not known.
To address this issue, we introduce three algorithms based on First-Order Logic that can determine the maximum number of bases in an MUB set without numerical approximation. Our algorithms can prove this result in finite time, although the required time is impractical. Additionally, we present a heuristic approach to solve the semi-decision problem of determining if there are \(k\) MUBs in a given dimension \(d\).
As a byproduct of our research, we demonstrate that the maximum number of MUBs in any dimension can be achieved with definable complex parameters, computable complex parameters, and other similar fields.
## 1 Introduction
The Uncertainty Principle, famously stated by Heisenberg, outlines the fundamental limitations of quantum mechanics. It highlights the trade-off between the accuracy of knowledge about a quantum system's position and momentum.
In the realm of discrete quantum systems, the concept of MUBs extends the notion of complementarity. Bases \(\mathcal{B}_{1},\ldots,\mathcal{B}_{n}\) are considered mutually unbiased if the squared modulus of the inner product between two states, one from each basis, yields a uniform probability distribution when the bases are distinct and a delta function when they are the same. This means that if a state is prepared in basis \(k\) and then measured projectively in basis \(k^{\prime}\neq k\), all results are equally likely.
It has been established [35] that the maximum number of MUBs in a given dimension \(d\) is less than or equal to \(d+1\), with equality holding true for dimensions that are powers of a prime number. However, for dimensions that are not a power of a prime, the maximum number of MUBs is not known. Despite extensive research, the maximum number of MUBs in the lowest non-prime-power dimension, \(6\), remains uncertain. Currently, it is known that there are three MUBs in this case, but the possibility of
a fourth MUB has not been ruled out.
On a different note, First-Order Logic, also referred to as Predicate Logic, is a system of formal languages that utilizes quantified variables to express statements about various structures. An example of a sentence in First-Order Logic is:
\[(\forall y)(\exists x)\ x^{2}=y \tag{1}\]
This sentence is true for structures such as \(\mathbb{R}_{>0}\) and \(\mathbb{C}\), but false for others such as \(\mathbb{R}\), \(\mathbb{N}\), and \(\mathbb{Q}\). The versatility of First-Order Logic allows it to be applied to a wide range of theories and structures.
Algorithms for determining the truth value of a given First-Order formula exist for some theories, specifically for the theory of the reals, but not in general. This makes First-Order Logic a powerful tool when working with these specific theories. In this paper, we will leverage this property to provide algorithms that can prove the existence (or lack thereof) of \(k\) MUBs in dimension \(d\) within a finite time. Additionally, we will demonstrate the existence of proofs for the number of MUBs in a given dimension and offer insights into the nature of these proofs, the mathematical concepts involved, and the properties of MUBs in fields other than the complex numbers.
This paper is organized as follows. In section 2 we present the problem of MUBs and provide an overview of the known results in this area. In section 3, we introduce First-Order Logic and its relevant concepts, including the First-Order theory of the reals, the existential theory of the reals, and elimination of quantifiers. These concepts serve as the foundation for the exact algorithms we present in section 4, which can decide the truth of the sentence "there exists \(k\) MUBs in dimension \(d\)". Additionally, in section 5 we discuss a heuristic algorithm that can be used to prove the non-existence of \(k\) MUBs in dimension \(d\), although it is not guaranteed to halt in every case. Finally, in section 6 we prove that the maximum number of MUBs in fields other than the complex numbers is equivalent to that of the complex numbers.
## 2 Mutually unbiased bases
Given \(k\) orthonormal bases \(\mathcal{B}_{m}=\left\{\left|\psi_{i}^{m}\right\rangle,i=1,...,d\right\}\), for \(m\in\left\{1,...,k\right\}\), we say they are mutually unbiased whenever
\[\left|\langle\psi_{i}^{m}|\psi_{j}^{n}\rangle\right|^{2}=\begin{cases}\frac{1 }{d}&\text{if }m\neq n\\ \delta_{ij}&\text{otherwise.}\end{cases} \tag{2}\]
A set \(\left\{\mathcal{B}_{1},\ldots,\mathcal{B}_{k}\right\}\) is called a set of mutually unbiased bases if each pair of distinct bases is mutually unbiased. Consider a scenario where we perform two measurements, first in base \(\mathcal{B}_{1}\), and then in base \(\mathcal{B}_{2}\). Regardless of the outcome of the first measurement, each possible outcome for the second measurement is equally likely. In this sense, they are referred to as _unbiased_. In other words, bases \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) are maximally noncommutative.
While it has been established since 1989 [35] that the maximum number of MUBs in \(\mathbb{C}^{d}\) cannot exceed \(d+1\), and that a complete set of MUBs exists when \(d\) is a prime power, the upper bound on the number of MUBs remains unknown for all other dimensions. Even for the smallest non-prime-power dimension \(d=6\), the existence of a complete set of MUBs is still an open problem, and both numerical [12, 9, 29] and analytical [10, 11, 22, 27, 8] evidence indicates that it is unlikely that such a set exists.
The concept of MUBs has found multiple applications in quantum information problems such as quantum state estimation [21, 1, 37, 4, 20], entanglement detection or certification [31, 28, 17, 3], and cryptography [36, 32, 13], among others (refer to [7, 16] for more comprehensive reviews). Hence, the inquiry into the existence of a maximal set of mutually unbiased bases in
dimension \(d\) has become a topic of great relevance. Generally, there exist several explicit methods to construct a complete set of MUBs for prime-power dimensions \(d=p^{n}\), including the use of finite fields [35, 24], the Heisenberg-Weyl group [5], generalized angular momentum operators [23], and identities from number theory [2]. Furthermore, for special cases where \(d=2^{n}\) and \(d=p^{2}\), it has been demonstrated that such sets can be constructed in a simple and experimentally accessible manner [30, 33].
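As a concrete illustration, the quadratic-phase construction for odd prime dimensions can be written down and checked against condition (2) in a few lines of Python; the function names below are ours and the snippet is meant only as an illustration.

```python
import numpy as np

def is_mub_set(bases, tol=1e-9):
    """Check condition (2) for a list of orthonormal bases (columns are vectors)."""
    d = bases[0].shape[0]
    for m, B in enumerate(bases):
        for n, C in enumerate(bases):
            overlaps = np.abs(B.conj().T @ C) ** 2       # |<psi_i^m|psi_j^n>|^2
            target = np.eye(d) if m == n else np.full((d, d), 1.0 / d)
            if not np.allclose(overlaps, target, atol=tol):
                return False
    return True

def prime_mubs(p):
    """d+1 mutually unbiased bases for an odd prime dimension p (quadratic phases)."""
    omega = np.exp(2j * np.pi / p)
    bases = [np.eye(p, dtype=complex)]                   # computational basis
    for k in range(p):
        B = np.array([[omega ** (k * j * j + m * j) for m in range(p)]
                      for j in range(p)]) / np.sqrt(p)
        bases.append(B)
    return bases

print(is_mub_set(prime_mubs(5)))                         # True: 6 MUBs in d = 5
```

For \(p=2\) the quadratic phases degenerate, so the snippet restricts itself to odd primes; prime-power dimensions require the finite-field constructions cited above.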
## 3 First-Order logic
In this section we give a short and informal introduction to First-Order Logic, and the main concepts needed for this work. For a more formal explanation, the reader is referred to classical textbooks on model theory such as [25].
First-Order Logic, also known as Predicate Logic, is a formal language to reason on mathematical structures. First-Order Logic contains equality, Boolean connectives - conjunction (\(\wedge\)), disjunction (\(\vee\)), implication (\(\rightarrow\)), negation (\(\neg\))-, variables, and quantification -"for all" (\(\forall\)) and "exists" (\(\exists\))- over variables. Depending on the kind of mathematical objects under study, one fixes an appropriate language \(\mathcal{L}\) with symbols of relations, functions and constants. Hence we talk about \(\mathcal{L}\)_-formulas_ and \(\mathcal{L}\)_-structures_, to make explicit that they are constructed over the language \(\mathcal{L}\). A First-Order \(\mathcal{L}\)-formula \(\phi\) has a truth value provided it is evaluated in 1) an \(\mathcal{L}\)-structure \(\mathcal{M}\), which is a set of elements \(X\) (called domain), and an interpretation of all symbols in \(\mathcal{L}\) as mathematical objects in \(X\): an \(n\)-ary relation symbol of \(\mathcal{L}\) is interpreted as a subset of \(X^{n}\), an \(n\)-ary function symbol of \(\mathcal{L}\) is interpreted as a function \(X^{n}\to X\) and a constant symbol of \(\mathcal{L}\) is interpreted as an element of \(X\), and 2) a _valuation_\(v\) which maps all free (i.e. non-quantified) occurrences of variables in \(\phi\) to elements of \(X\). For an \(\mathcal{L}\)-structure \(\mathcal{M}\) and valuation \(v\), the \(\mathcal{L}\)-formula \(\phi\) is _true_ in \(\mathcal{M},v\) if the mathematical property stated by \(\phi\) holds in \(\mathcal{M}\) when mapping free variables of \(\phi\) to the domain of \(\mathcal{M}\) as stated by \(v\) and interpreting the relation, function and constant symbols of \(\mathcal{L}\) as stated by \(\mathcal{M}\). In this case, we also say that \(\mathcal{M},v\)_satisfies_\(\phi\). If \(\phi\) does not have any free variables then the valuation \(v\) above plays no role and we simply talk about \(\phi\) being true in \(\mathcal{M}\) or \(\mathcal{M}\) satisfying \(\phi\). An \(\mathcal{L}\)-formula with no free variables is called \(\mathcal{L}\)_-sentence_, and represents a property that any structure may or may not satisfy. If \(T\) is a set of \(\mathcal{L}\)-sentences then we say that \(\mathcal{M}\) is a _model_ of \(T\) if \(\mathcal{M}\) satisfies all the \(\mathcal{L}\)-sentences of \(T\).
**Theories.** An \(\mathcal{L}\)_-theory_ \(T\) is simply a set of \(\mathcal{L}\)-sentences. The _consequences_ of \(T\) is the set of all \(\mathcal{L}\)-formulas \(\phi\) such that for any \(\mathcal{L}\)-structure \(\mathcal{M}\), if \(\mathcal{M}\) satisfies \(T\) then \(\mathcal{M}\) satisfies \(\phi\). The consequences of \(T\) are also called _theorems_ of \(T\). Some \(\mathcal{L}\)-theories may be axiomatized by a finite or a finitely represented set of axioms, meaning that there is a finite or finitely represented \(\mathcal{L}\)-theory \(T^{\prime}\) (whose formulas are called _axioms_) such that the consequences of \(T\) and \(T^{\prime}\) coincide. An \(\mathcal{L}\)-theory \(T\) is _complete_ if for any \(\mathcal{L}\)-sentence \(\phi\), either \(\phi\) or its negation is a theorem of \(T\). In that case, any two models \(\mathcal{M},\mathcal{M}^{\prime}\) satisfying \(T\) will be _elementary equivalent_: for any \(\mathcal{L}\)-sentence \(\phi\), \(\mathcal{M}\) satisfies \(\phi\) if and only if \(\mathcal{M}^{\prime}\) satisfies \(\phi\).
**The language of ordered rings.** The language \(\mathcal{L}_{\rm or}\) of ordered rings contains the relational symbol \(<\), the binary function symbols \(+\), \(-\) and \(\cdot\), and the constant symbols \(0\) and \(1\). It is important to notice that the syntactic rules for constructing First-Order formulas over this language allow us to write formulas like
\[(\forall x)\ a\cdot x^{2}+b\cdot x+c>0 \tag{3}\]
with free variables \(a\), \(b\) and \(c\) and a (bound) variable \(x\) universally quantified1, but not formulas like \((\exists x)\ \sin(x)=1\) since \(\sin\) is not part of \(\mathcal{L}_{\mathrm{or}}\). The validity, of course, depends on the structure in which the formula is evaluated - recall the example of sentence (1). Observe that First-Order Logic only allows quantification over elements, but not over other objects such as sets of elements, polynomials, etc.
Footnote 1: Strictly speaking, \(x^{2}\) is not an allowed term in \(\mathcal{L}_{\mathrm{or}}\), but for any \(n\in\mathbb{N}\) we use the standard notation of \(x^{n}\) to denote \(x\cdot\ldots\cdot x\) (\(n\) times).
### Real Closed Fields
The theory of the Real Closed Fields (RCF) is the set of \(\mathcal{L}_{\mathrm{or}}\)-sentences which contains \(1\)) all the axioms for fields, \(2)\) an axiom stating that no square or sum of squares is equal to \(-1\), \(3)\) an axiom stating that any \(x\) is either \(y\) or \(-y\), where \(y\) is a square, and \(4)\) an axiom stating that any polynomial of odd degree has a root. It is not difficult to see that all these properties can be expressed as \(\mathcal{L}_{\mathrm{or}}\)-sentences. Alfred Tarski showed that RCF is a complete theory and, since \(\mathbb{R}\) is a model of RCF, any \(\mathcal{L}_{\mathrm{or}}\)-sentence \(\phi\) is a consequence of RCF if and only if \(\phi\) is true in \(\mathbb{R}\).
**Quantifier elimination.** A _decision problem_ is a problem that can be formulated as a question with a binary answer _yes_ or _no_ depending on a (finite) set of input parameters. A decision problem is _decidable_ if there exists an algorithm which is capable of computing the decision output for each input parameter.
The problem of deciding whether an \(\mathcal{L}_{\mathrm{or}}\)-sentence is a consequence of RCF (and, therefore, deciding if it is true over \(\mathbb{R}\)) is decidable, since it is possible to exhaustively enumerate all theorems of RCF by applying proper derivation rules until either \(\phi\) or its negation occurs. However, this algorithm is extremely impractical regarding both the memory and time that it requires.
Nonetheless, RCF has one more interesting property, also shown by Tarski: _quantifier elimination_. This means that for any \(\mathcal{L}_{\mathrm{or}}\)-formula \(\phi\) there is a quantifier-free formula \(\psi\) which is _equivalent_ to \(\phi\) in RCF; in other words, for any valuation \(v\), we have that \(\phi\) is true in \(\mathbb{R},v\) if and only if \(\psi\) is true in \(\mathbb{R},v\). For example, a formula like (3) is equivalent in RCF to2
Footnote 2: For \(n\in\mathbb{N}\) the term \(n\) is short for the \(\mathcal{L}_{\mathrm{or}}\)-term \(1+\cdots+1\) (\(n\) times).
\[(a=0\wedge b=0\wedge c>0)\ \vee\ (a\geq 0\wedge b=0\wedge c>0\wedge 4\cdot a\cdot c>b^{2})\ \vee\ (a>0\wedge 4\cdot a\cdot c>b^{2})\]
Furthermore, there is an algorithm that transforms any input \(\mathcal{L}_{\mathrm{or}}\)-formula \(\phi\) into an equivalent quantifier-free formula \(\psi\). This suffices to determine if \(\phi\) is true in \(\mathbb{R}\) for a given valuation \(v\), because it is easy to evaluate the quantifier-free formula \(\psi\) over \(v\).
The importance of quantifier elimination is that it allows for real computer implementations (at least compared with the algorithm based on enumerating all theorems of RCF). The most popular algorithm for quantifier elimination in the theory of Real Closed Fields is the cylindrical algebraic decomposition (CAD) algorithm, which was developed mainly by George E. Collins in 1975 [14]. This algorithm is doubly exponential in the number of variables of the input formula, and is implemented in many popular computer algebra systems, such as Maple, Mathematica or Singular. Also, it is one of the most important algorithms from the field of computational algebraic geometry and has received a lot of attention and improvements since its origin.
Moreover, regarding the computational complexity of these tasks, it is known that when restricted to existential formulas (those of the
form \((\exists x_{1}),\ldots,(\exists x_{n})\)\(\psi\), for \(\psi\) free of quantifiers), the problem of deciding whether a \(\mathcal{L}_{\text{or}}\)-formula is true is NP-hard and belongs to PSPACE [26] (namely, it is decidable by a deterministic Turing machine that uses \(O(n^{k})\) many cells of memory, where \(n\) is the size of the input formula and \(k\) is fixed). The same applies if the number of blocks of quantifiers of the input formula is fixed [6].
**Our setting.** For our developments we will need to express properties on the complex numbers \(\mathbb{C}\). Obviously RCF cannot be used straightforwardly since truths in \(\mathbb{R}\) do not coincide with truths on \(\mathbb{C}\) (e.g. axiom 2 of RCF stated above). However, the properties \(\phi\) on \(\mathbb{C}\) that we will use can be expressed as \(\mathcal{L}_{\text{or}}\)-formulas by duplicating each complex variable \(z\) to match the meaning of the real and imaginary part of \(z\). For example, if \(z\) is a complex variable, the property \(|z|>3\) can be translated to the \(\mathcal{L}_{\text{or}}\)-sentence \(z_{1}^{2}+z_{2}^{2}>3^{2}\) by identifying \(z\) with a pair of real variables \((z_{1},z_{2})\). We can translate in this way a number of properties on the complex numbers as properties on the reals. Observe, however, that trigonometric functions cannot be mapped straightforwardly as they are not part of \(\mathcal{L}_{\text{or}}\).
## 4 Exact algorithms to prove the existence of \(k\) MUBs in dimension \(d\)
Proving the existence of \(k\) MUBs in dimension \(d\) amounts to deciding whether there are orthonormal bases \(\mathcal{B}_{1}\ldots\mathcal{B}_{k}\) of \(\mathds{C}^{d}\) that satisfy (2), where the inner product is the usual one for \(\mathds{C}^{d}\), namely
\[\langle v|w\rangle=\sum_{i=1}^{d}v_{i}\overline{w_{i}}\]
Considering the equations related to orthonormality conditions as well, we are left with a system of equations over \(\mathbb{C}\) involving \(\binom{kd}{2}+kd\) equations, and \(kd^{2}\) complex variables. As explained in the previous section, we can also decompose each complex variable in its real and imaginary part and develop the equations correspondingly, obtaining a multivariate polynomial system over \(\mathbb{R}\) involving twice as many equations and variables. Note that the system of equations over the complex variables is not a polynomial system because the conjugate operator is used in the definition of the inner product.
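Concretely, writing \(v_{i}=a_{i}+\mathrm{i}\,b_{i}\) and \(w_{i}=p_{i}+\mathrm{i}\,q_{i}\), the inner product splits as

\[\langle v|w\rangle=\sum_{i=1}^{d}\left(a_{i}p_{i}+b_{i}q_{i}\right)+\mathrm{i}\sum_{i=1}^{d}\left(b_{i}p_{i}-a_{i}q_{i}\right),\]

so each condition in (2), as well as the orthonormality constraints, becomes a polynomial equation in the real variables \(a_{i},b_{i},p_{i},q_{i}\).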
Taking this into account, it is then possible to compute, given \(k\) and \(d\), an \(\mathcal{L}_{\text{or}}\)-sentence \(\phi_{k,d}\) stating that "there are at least \(k\) MUBs in dimension \(d\)". The truth value of \(\phi_{k,d}\) in \(\mathbb{R}\) can be found using any available quantifier elimination algorithm for RCF and, since \(\phi_{k,d}\) only uses existential quantifiers, it can be also computed in polynomial space.
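To illustrate how \(\phi_{k,d}\) can be assembled mechanically, the sketch below builds the real polynomial constraints and hands the purely existential sentence to Z3, an SMT solver whose nlsat engine decides nonlinear real arithmetic. This is not the CAD pipeline discussed above, only very small instances are within reach, and the variable naming and encoding details are illustrative choices rather than our actual implementation.

```python
from itertools import combinations
from z3 import Reals, Solver, Q

def mub_constraints(k, d):
    """Real polynomial constraints encoding 'there exist k MUBs in dimension d'."""
    names = [f"x_{b}_{i}_{c}_{p}" for b in range(k) for i in range(d)
             for c in range(d) for p in ("re", "im")]
    xs = Reals(" ".join(names))

    def entry(b, i, c):                      # (real, imaginary) part of a component
        base = ((b * d + i) * d + c) * 2
        return xs[base], xs[base + 1]

    def inner(b1, i1, b2, i2):               # <v|w> split into real/imaginary parts
        re = sum(entry(b1, i1, c)[0] * entry(b2, i2, c)[0] +
                 entry(b1, i1, c)[1] * entry(b2, i2, c)[1] for c in range(d))
        im = sum(entry(b1, i1, c)[1] * entry(b2, i2, c)[0] -
                 entry(b1, i1, c)[0] * entry(b2, i2, c)[1] for c in range(d))
        return re, im

    cons = []
    for b in range(k):                       # orthonormality within each basis
        for i in range(d):
            re, im = inner(b, i, b, i)
            cons += [re == 1, im == 0]
        for i, j in combinations(range(d), 2):
            re, im = inner(b, i, b, j)
            cons += [re == 0, im == 0]
    for b1, b2 in combinations(range(k), 2): # unbiasedness between distinct bases
        for i in range(d):
            for j in range(d):
                re, im = inner(b1, i, b2, j)
                cons.append(re * re + im * im == Q(1, d))
    return cons

s = Solver()
s.add(*mub_constraints(2, 2))                # tiny instance: 2 MUBs in dimension 2
print(s.check())                             # sat (can already take a while)
```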
In the context of the problem of determining the existence of 4 MUBs in dimension 6, the corresponding multivariate polynomial system over \(\mathbb{R}\) has 288 variables. Even after exploiting some symmetries (such as fixing the first base \(\mathcal{B}_{1}\) to the canonical one and the phase of all vectors) the whole system contains 180 variables, and therefore is still unfeasible to solve using any of the methods described in the previous section. This motivates us to approach the problem through a heuristic algorithm.
To emphasise on the unfeasibility of employing the CAD algorithm to solve the whole formula we estimate its running time considering Renegar's bound [19], which states that there exists a CAD implementation whose time complexity is
\[L(\log L)(\log\log L)(md)^{O(n)}\]
where \(L\) is the coefficients bit length (i.e. the number of bits required to express the coefficients of the polynomials), \(m\) the number of
polynomials, \(d\) the total degree and \(n\) the number of variables. Even assuming small hidden constants (\(1\) for the exponent and for the whole complexity) the estimated number of operations is in the order of \(10^{520}\). Note, nonetheless, that this is the worst case complexity, and considering the particular highly symmetrical instances that arise due to the MUB conditions it might be possible to speed up the quantifier elimination procedure by some particular algorithm exploiting this structure.
## 5 A heuristic algorithm to disprove the existence of \(k\) MUBs in dimension \(d\)
We develop a heuristic to attempt to disprove the existence of solutions for a given polynomial system. Suppose we have a system \(\{p_{i}(x_{1},\ldots,x_{k})=0\}_{1\leq i\leq n}\) of \(n\) polynomial equations on \(k\) variables over the reals. Deciding the existence of a real root (i.e. a \(\vec{z}=(z_{1},\ldots,z_{k})\in\mathbb{R}^{k}\) such that \(p_{i}(\vec{z})=0\) for all \(1\leq i\leq n\)) amounts to deciding whether the \(\mathcal{L}_{\text{or}}\)-sentence
\[\Phi\equiv\exists x_{1},\ldots,x_{k}\bigwedge_{i=1}^{n}\Phi_{i}\]
is true in \(\mathbb{R}\), where \(\Phi_{i}\equiv p_{i}(x_{1},\ldots,x_{k})=0\).
If a conjunction is satisfiable then any subset of its conjuncts is satisfiable as well. Therefore, for \(S\subseteq\{1,\ldots,n\}\), if \(\Phi_{S}=\exists x_{1},\ldots,x_{k}\bigwedge_{i\in S}\Phi_{i}\) is false in \(\mathbb{R}\) we can conclude that \(\Phi\) is false in \(\mathbb{R}\) as well. Furthermore, if \(|S|\) is small enough, it is possible to use the CAD algorithm to find the truth value of \(\Phi_{S}\) in a _reasonable_ time.
CAD can also be used to perform quantifier elimination considering only a subset \(V\) of the \(k\) variables \(x_{1},\ldots,x_{k}\). Given the formula \(\Phi_{S,V}=\exists\{x_{j}\}_{j\in V}\bigwedge_{i\in S}\Phi_{i}\) the algorithm will return a new formula \(\Phi^{\prime}\) that does not use the variables \(\{x_{j}\}_{j\in V}\) such that \(\Phi^{\prime}\) is true if and only if \(\Phi_{S,V}\) is true. Observe that this new formula \(\Phi^{\prime}\) is not necessarily of the form \(q(x_{1},\ldots,x_{k})=0\) with \(q\) a multivariate polynomial, but rather a set of equations defining the union of semialgebraic sets [6].
More formally, given a set of formulas \(\Psi=\{\Phi_{1},\ldots,\Phi_{n}\}\) over the variables \(x_{1},\ldots,x_{k}\) we define a _creation step_ as picking a subset \(S\subseteq\{1,\ldots,n\}\) of formula indices, a subset \(V\subseteq\{1,\ldots,k\}\) of variable indices and performing quantifier elimination over the formula \(\Phi_{S,V}=\exists\{x_{j}\}_{j\in V}\bigwedge_{i\in S}\Phi_{i}\) to obtain a new formula \(\Phi^{\prime}\). Observe that if the chosen formula \(\Phi_{S,V}\) is false and has all its variables quantified then the new formula \(\Phi^{\prime}\) will be the False atom (\(\Phi^{\prime}=\bot\)), in which case one can already conclude that the formula \(\Phi=\exists x_{1},\ldots,x_{k}\bigwedge_{i=1}^{n}\Phi_{i}\) is false as well.
Our heuristic algorithm to disprove the existence of \(k\) MUBs in dimension \(d\) will define a starting set \(\Psi_{0}\) with all formulas related to the multivariate polynomial system described in Section 4. Iteratively, we will define \(\Psi_{i+1}=\Psi_{i}\cup\{\Phi_{i}^{\prime}\}\) where \(\Phi_{i}^{\prime}\) is a formula obtained by applying a creation step to \(\Psi_{i}\). \(\Psi_{i+1}\) has one more formula than \(\Psi_{i}\), and they are equal in terms of validity (the quantified conjunction of all formulas of \(\Psi_{i}\) is true if and only if the same happens for \(\Psi_{i+1}\)). If for some \(i\) it is the case that \(\Phi_{i}^{\prime}=\bot\) then the formula defined by the set \(\Psi_{i}\) is false, in which case the starting \(\Psi_{0}\) is false as well, and we conclude that there is no set of \(k\) MUBs in dimension \(d\). Meanwhile, if \(\Phi_{i}^{\prime}\neq\bot\), we add it to the set of formulas in order to consider it for the next creation steps. The idea is that the formulas iteratively added to \(\Psi_{0}\) can be employed in the following creation steps to reach an ultimate contradiction by merging different simplified conditions of the original system.
If the initial set of formulas \(\Psi_{0}\) is valid then this procedure will never halt. Meanwhile, if it is false, it might be able to find a proof of that
fact, represented as a deductive reduction from \(\Psi_{0}\) to \(\bot\).
We note that this scheme does not rely on the CAD algorithm, but rather on any implementation of the quantifier elimination procedure. In particular, we use the Wolfram Engine method Resolve, which aims to reduce the given formula in any possible way (i.e. it might use a different quantifier elimination algorithm than CAD). Nonetheless, the method allows forcing a certain strategy (for example, the algorithm defined in [18]).
The heuristic described above depends heavily on the way we choose the sets \(S\) and \(V\), especially on their sizes. We experimented to find values for them that allow _1)_ the Resolve method to finish in the order of minutes, without consuming too much memory,3 and _2)_ the chosen subsystem to be large and complex enough (i.e. involving equations that share variables) to capture relevant constraints of the original system. We observed that in order to accomplish _1)_ the size of \(S\) has to depend on the number of variables present in the selected equations (as expected, considering that the CAD algorithm depends doubly exponentially on the number of variables), and to achieve _2)_ we need to quantify a _reasonable_ number of the variables in the chosen equations: quantifying few of them makes the resulting \(\Phi^{\prime}\) extremely long and unusable for the rest of the computations, while quantifying many of them causes the resulting subsystem to be valid, and therefore \(\Phi^{\prime}\) will be the True atom (\(\Phi^{\prime}=\top\)), which is useless for our purpose.
Footnote 3: A big issue present in most implementations of quantifier elimination is memory consumption: in the worst case, intermediate calculations can turn out to be exponential in size.
Alongside the iterations of this heuristic we will periodically perform a _resolution step_ (in contrast with the _creation step_ described above): we will take a bigger set of formulas and try to reduce them quantifying all its variables. This step is meant to take advantage of the new formulas combining them together, focusing on reaching an ultimate contradiction instead of creating new constraints.
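A compact sketch of the resulting loop is given below. It assumes that the equations are kept as Wolfram-language strings and evaluated through the wolframclient Python package; the function names, the bookkeeping and the way results are passed back as InputForm strings are simplifications of our actual implementation.

```python
import random
from wolframclient.evaluation import WolframLanguageSession
from wolframclient.language import wlexpr

def eliminate(session, formulas, variables, timeout=30):
    """Resolve[Exists[variables, conjunction], Reals] with a time limit.

    Formulas are Wolfram-language strings; the result is returned as an
    InputForm string so that it can be reused in later steps.
    """
    body = " && ".join("(%s)" % f for f in formulas)
    query = "ToString[TimeConstrained[Resolve[Exists[{%s}, %s], Reals], %d], InputForm]" % (
        ", ".join(variables), body, timeout)
    return session.evaluate(wlexpr(query))

def refute(initial_formulas, all_vars, n_f=4, n_v=6, res_every=5, res_size=13,
           max_iter=1000):
    """Alternate creation and resolution steps until the False atom is derived."""
    psi = list(initial_formulas)
    with WolframLanguageSession() as session:
        for it in range(1, max_iter + 1):
            if it % res_every == 0:                        # resolution step
                chosen = random.sample(psi, min(res_size, len(psi)))
                quantified = list(all_vars)                # quantify everything
            else:                                          # creation step
                chosen = random.sample(psi, min(n_f, len(psi)))
                present = [v for v in all_vars if any(v in f for f in chosen)]
                quantified = random.sample(present, min(n_v, len(present)))
            reduced = eliminate(session, chosen, quantified)
            if reduced == "False":
                return True                                # the system is refuted
            if reduced not in ("True", "$Aborted") and len(quantified) < len(all_vars):
                psi.append(reduced)                        # keep the new constraint
    return False                                           # no refutation found
```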
As a proof of concept we use our algorithm to prove the nonexistence of 4 MUBs in dimension 2. In this case there are 36 equations and 16 variables. After some testing we parameterize the algorithm in the following way:
* The set \(S\) is picked uniformly at random, conditioning that \(|S|=4\).
* The set \(V\) is picked uniformly at random from the variables of the formulas indicated by \(S\), conditioning that \(|V|=6\).
* The _resolution step_ takes 13 different formulas, and runs every 5 steps of the _creation step_.
* All Wolfram queries have a timeout limit of 30 seconds.
On such a run, after 20 iterations we obtained a set of equations of the system that could not be satisfied. The formula can be found in the Appendix.
We conclude this section by observing that the described algorithm can be easily modified to work on weaker and easier to solve conjectures involving the existence of MUBs. For example, by removing some equations from the system one could test whether there are three MUBs in dimension 6 _and_ some vector \(v\) that is unbiased with respect to all other vectors from the mutually unbiased bases, or even ask for independent sets of normalized vectors instead of bases (i.e. allowing them to contain less than \(d\) vectors).
## 6 Reals and other structures
A consequence of the fact mentioned in section 3.1 is that _any_\(\mathcal{L}_{\mathrm{or}}\)-structure satisfying the axioms of RCF has the same truths as \(\mathbb{R}\), and is
therefore indistinguishable from \(\mathbb{R}\) by means of \(\mathcal{L}_{\text{or}}\)-First-Order formulas. In the context of this work, this means that the maximum number of MUBs in dimension \(d\) with coefficients with real part and imaginary part in the reals, is the same as the maximum number of MUBs in dimension \(d\) with coefficients with real and imaginary part in _any_ real closed field. Examples of Real Closed Fields other than the reals are the real algebraic numbers (those which are roots of a non-zero polynomial in one variable with integer coefficients) or the computable real numbers (those for which there is an algorithm that approximates them with a precision given as parameter to the algorithm). Since the studied properties regarding the existence of MUBs can be written in the language \(\mathcal{L}_{\text{or}}\) over the reals (via the explained mapping of complex numbers to the real and imaginary part), then all our results automatically hold in any Real Closed Field, as the algebraic or computable complex numbers.
## 7 Closing remarks
In this work we presented an exponential-time algorithm based on First Order Logic tools able to decide the existence of \(k\) MUBs in dimension \(d\) for any values of \(k,d\in\mathds{N}\). We also showed that the problem is in PSPACE, since it reduces to deciding the truth value of a formula from the theory of the Real Closed Fields using only existential quantification (the "Existential theory of the reals").
Since this algorithm requires an enormous amount of time to solve the decision problem even when \(d=6\), we defined a heuristic approach to design a semi-decision procedure that can detect that there are no \(k\) mutually unbiased bases in dimension \(d\), and that will not halt if they do exist. This algorithm does not actually exploit any particularities of the MUB problem other than the fact that it can be expressed in the RCF logic. We implemented the defined heuristic and provided a proof of concept by using it to show that there are no 4 MUBs in dimension 2.
As a byproduct of these results it can be proved that, given any model \(M\) of RCF (such as \(\mathds{R}\) or the algebraic reals), if \(m\) is the maximum number of MUBs in any dimension \(d\), then there is a set \(\{\mathcal{B}_{1},\ldots,\mathcal{B}_{m}\}\) of mutually unbiased bases such that the real and imaginary parts of all complex numbers involved in the vectors from \(\mathcal{B}_{1},\ldots,\mathcal{B}_{m}\) belong to \(M\). More formally, for any \(1\leq i\leq m\) and \(v\in\mathcal{B}_{i}\) it is the case that \(\Re(v_{j}),\Im(v_{j})\in M\) for \(1\leq j\leq d\). This implies that when studying the maximum number of MUBs in any dimension there is no loss of generality in assuming that the coefficients of the vectors belong to any simpler model of RCF, thereby limiting their algebraic complexity.
Finally, we conclude this work by noting that the decidability of other quantum information problems has already been addressed, obtaining both positive and negative results [15, 34].
## 8 Appendix
In our implementation the bases \(\{\mathcal{B}_{1},\mathcal{B}_{2},\mathcal{B}_{3},\mathcal{B}_{4}\}\) are denoted with variables \(x,y,z\) and \(w\), and subindices are used to refer to the particular components of the vectors. For example, the real part of the \(i\)th component of the \(k\)th vector from the base \(y\) is represented by the variable \(yki0\), while the complex part is represented by \(yki1\). The base \(\mathcal{B}_{1}\) is fixed to be the canonical one, and it is assumed that every vector is phase shifted in such a way that the first component is a real number, and therefore the corresponding imaginary part is 0.
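As an illustration of this encoding, the constraints can be generated directly from the naming convention just described. The sketch below (ours, in Wolfram Language) produces the 36 polynomial constraints of the three types in 32 real variables; the symmetry reductions of Section 4, which fix \(\mathcal{B}_{1}\) and the phases and cut the variables down to the 16 used in the text, are not applied here.

```
(* Sketch: generate the MUB constraints for d = 2 from the naming convention
   vki0 / vki1 for the real / imaginary part of component i of vector k of basis v. *)
d = 2;
comp[v_, k_, i_] := Symbol[v <> ToString[k] <> ToString[i] <> "0"] +
  I Symbol[v <> ToString[k] <> ToString[i] <> "1"];
vec[v_, k_] := Table[comp[v, k, i], {i, 0, d - 1}];

(* |<u,v>|^2 written as a polynomial in the real variables *)
absInner2[u_, v_] := ComplexExpand[Re[Conjugate[u].v]]^2 + ComplexExpand[Im[Conjugate[u].v]]^2;

orthonormal[v_] := Flatten@Table[
    absInner2[vec[v, j], vec[v, k]] == If[j == k, 1, 0], {j, 0, d - 1}, {k, j, d - 1}];
unbiased[u_, v_] := Flatten@Table[
    absInner2[vec[u, j], vec[v, k]] == 1/d, {j, 0, d - 1}, {k, 0, d - 1}];

bases = {"x", "y", "z", "w"};
equations = Join[Flatten[orthonormal /@ bases], Flatten[unbiased @@@ Subsets[bases, {2}]]];
Length[equations]   (* 36 *)
```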
During the _creation steps_ one of the formulas that was created is the following one:
(w011 < 0 && -(1/Sqrt[2]) <= w111 <= 1/Sqrt[2] && w110 == -(Sqrt[1 - 2*w111^2]/Sqrt[2])) || (w110 == Sqrt[1 - 2*w111^2]/Sqrt[2]) || (w011 == 0 && w010 < 0 && w111 == -(1/Sqrt[2]) && w110 == 0) || ...
Listing 1: Example of a formula deduced by performing quantifier elimination over a subset of formulas and variables.
This expression represents, as explained in Section 5, the union of some semialgebraic sets.
The subset of formulas that was found to forbid the existence of 4 MUBs in dimension 2 included the previous one, as well as some others such as
(1/Sqrt[2]*1/Sqrt[2] + w010*w110 + w011*w111)^2 + (w011*w110 - w010*w111)^2 == 0
(1/Sqrt[2]*1/Sqrt[2] + z110*z110 + z111*z111)^2 + (z111*z110 - z110*z111)^2 == 1
Listing 2: Examples of initial equations that were used in the last resolution step to reach an ultimate contradiction
The formulas listed in Listing 2 correspond, in order, to the orthogonality condition between vectors of the same basis, the normality condition, and the unbiasedness equation relating vectors of different bases.
Note that these equations are already simplified using the symmetries described in Section 4. For instance, in the orthogonality condition one can see that w000 and w100 are assumed to be \(\frac{1}{\sqrt{2}}\), while w001 and w101 are 0.
|
2303.18189 | Towards a Classification of Charge-3 Monopoles with Symmetry | We classify all possible charge-3 monopole spectral curves with non-trivial
automorphism group and within these identify those with elliptic quotients. By
focussing on elliptic quotients the transcendental constraints for a monopole
spectral curve become ones regarding periods of elliptic functions. We
construct the Nahm data and new monopole spectral curves with $D_6$ and $V_4$
symmetry, the latter based on an integrable complexification of Euler's
equations, and for which energy density isosurfaces are plotted. Extensions of
our approach to higher charge and hyperbolic monopoles are discussed. | H. W. Braden, Linden Disney-Hogg | 2023-03-31T16:45:41Z | http://arxiv.org/abs/2303.18189v3 | # Towards a classification of charge-3 monopoles with symmetry
###### Abstract.
We classify all possible charge-3 monopole spectral curves with non-trivial automorphism group and within these identify those with elliptic quotients. By focussing on elliptic quotients the transcendental constraints for a monopole spectral curve become ones regarding periods of elliptic functions. We construct the Nahm data and new monopole spectral curves with \(D_{6}\) and \(V_{4}\) symmetry, the latter based on an integrable complexification of Euler's equations, and for which energy density isosurfaces are plotted. Extensions of our approach to higher charge and hyperbolic monopoles are discussed.
**Acknowledgements.** We are grateful to Conor Houghton for correspondence and to Paul Sutcliffe for discussions and providing code (discussed below) that we have modified to plot the accompanying monopole solutions. The research of LDH is supported by a UK Engineering and Physical Sciences Research Council (EPSRC) studentship.

**Data Availability.** The datasets generated during the current study and the code for their creation/analysis are available from the second author upon reasonable request. EMPG-23-06.
third description of monopoles, the _spectral curve_, and proved the equivalence of all three descriptions. Here the spectral curve \(\mathcal{C}\subset T\mathbb{P}^{1}\stackrel{{\pi}}{{\to}}\mathbb{P}^{1}\) is a compact algebraic curve with no multiple components of genus \((k-1)^{2}\) such that (i) \(\mathcal{C}\) is real with respect to a (to be given) anti-holomorphic involution \(\tau\); (ii) there is a family of line bundles \(\mathcal{L}^{s}\) on \(\mathcal{C}\) such that \(\mathcal{L}^{2}\) is trivial and \(\mathcal{L}(k-1):=\mathcal{L}\otimes\pi^{*}\mathcal{O}_{\mathbb{P}^{1}}(k-1)\) is real; and (iii) \(H^{0}(\mathcal{C},\mathcal{L}^{s}(k-2))=0\) for all \(s\in(0,2)\). Introducing coordinates \(\zeta,\eta\) on \(T\mathbb{P}^{1}\) corresponding to the base \(\mathbb{P}^{1}\) and fibre respectively then \(\mathcal{L}^{s}\to T\mathbb{P}^{1}\) (and by restriction, to \(\mathcal{C}\)) is the (holomorphic) line bundle defined by \(\exp(s\eta/\zeta)\) with \(\eta/\zeta\in H^{1}(T\mathbb{P}^{1},\mathcal{O})\). Note that \(T\mathbb{P}^{1}\) arises here because it is the mini-twistor space of oriented geodesics in Euclidean 3-space [10]. The action of \(\tau\) is \((\zeta,\eta)\mapsto(-1/\bar{\zeta},-\bar{\eta}/\bar{\zeta}^{2})\), and so a generic spectral curve satisfying the reality constraints may be written as the zero set of a polynomial
\[0=\eta^{k}+\sum_{r=1}^{k}p_{2r}(\zeta)\eta^{k-r},\quad p_{2r}(\zeta)=(-1)^{r} \zeta^{2r}\overline{p_{2r}(-1/\bar{\zeta})},\]
where \(p_{2r}\) is a polynomial of degree \(2r\) in \(\zeta\). In what follows we shall refer to _Nahm data_ as matrices \(\{T_{i}\}\) satisfying Nahm's three constraints and a _monopole spectral curve_ as a curve \(\mathcal{C}\) satisfying Hitchin's three constraints.
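This reality constraint is easy to verify symbolically for any candidate curve. As a sketch (ours; it anticipates case 2 of Theorem 1.1 below, with \(a,b,c\in\mathbb{R}\)), one may check the coefficients \(p_{4}=a(\zeta^{4}+1)+b\zeta^{2}\) and \(p_{6}=ic\zeta(\zeta^{4}-1)\) as follows, where `bar` conjugates the numerical coefficients and the placeholder `zb` stands for \(\bar{\zeta}\):

```
(* Check p_{2r}(zeta) == (-1)^r zeta^(2r) conj(p_{2r}(-1/conj(zeta))) for real a, b, c. *)
bar[expr_] := expr /. {Complex[x_, y_] :> Complex[x, -y], zb -> zeta};
realityCheck[p_, r_] := Simplify[p - (-1)^r zeta^(2 r) bar[p /. zeta -> -1/zb]];

p4 = a (zeta^4 + 1) + b zeta^2;     (* coefficient of eta, r = 2 *)
p6 = I c zeta (zeta^4 - 1);         (* eta-independent term, r = 3 *)
{realityCheck[p4, 2], realityCheck[p6, 3]}    (* {0, 0} *)
```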
Our understanding of monopoles, the self-duality equations and Nahm's equations have developed greatly in the 40 years since [10]. The moduli space of monopoles of a given charge has attracted much attention, and a rational map description [12] allows different insights and facilitates numerical solutions; integrable systems techniques have also been brought to the fore [1, 2]. Yet despite this progress very few monopole spectral curves have been found in the intervening period owing to the transcendentality of the Hitchin conditions (see [1]). While monopoles of charge 1 and 2 are well-understood (for a review, see [2]), little progress has been made for higher charges. In all cases known the use of symmetry to simplify the conditions has been required; in nearly all of these we may quotient by the group of symmetries to an elliptic curve. Motivated by this history our first result here is to classify all possible charge-3 monopole spectral curves by their automorphism groups and within these identify those with elliptic quotients. From these we will then construct new monopole spectral curves with \(D_{6}\) and \(V_{4}\) symmetry. In §2 we prove:
**Theorem 1.1**.: _Let \(\mathcal{C}\subset T\mathbb{P}^{1}\) be a charge-3 monopole spectral curve with \(H\leq\operatorname{Aut}(\mathcal{C})\) such that the quotient genus \(g(\mathcal{C}/H)=1\). Then, up to an automorphism of \(T\mathbb{P}^{1}\), the curve is given by the vanishing of one of the following 5 forms:_
1. \(\eta^{3}+\eta[(a+ib)\zeta^{4}+c\zeta^{2}+(a-ib)]+[(d+ie)\zeta^{6}+(f+ig)\zeta^{ 4}-(f-ig)\zeta^{2}-(d-ie)]\)_,_
2. \(\eta^{3}+\eta[a(\zeta^{4}+1)+b\zeta^{2}]+ic\zeta(\zeta^{4}-1)\)_,_
3. \(\eta^{3}+a\eta\zeta^{2}+ib\zeta(\zeta^{4}-1)\)_,_
4. \(\eta^{3}+a\eta\zeta^{2}+b(\zeta^{6}-1)\)_,_
5. \(\eta^{3}+ia\zeta(\zeta^{4}-1)\)_,_
_where \(a,b,c,d,e,f,g\in\mathbb{R}\)._
Our result does not itself guarantee the existence of monopole spectral curves in these families, and the classes intersect (for example, 5 is a special case of 3). In previous works monopole spectral curves of the form 3 and 5 have been understood in [11] and [12] as corresponding to charge-3 twisted line scattering and the tetrahedrally-symmetric monopole respectively, while one special case of the form 2 was understood in [11] as the
class of inversion-symmetric monopoles, with another in [11] as the axially-symmetric 3-monopole. Curves of the form 2 had been observed in [12, (3.71)], but the Hitchin constraints were only imposed for a restricted subset. In Theorem 3.2 we determine the general monopole spectral curve and Nahm data in class 4 and in Theorem 4.3 the same for class 2. This provides the necessary data in order to plot energy density isosurfaces following [10];1 Figure 1 gives, for example, a previously unknown \(V_{4}\) configuration. Along with the class of charge-3 monopoles described via an implicit condition in [1], these form all the charge-3 monopole spectral curves currently known, which fit together as shown in Figure 2 for some parameter values. Figure 3 shows the relations between the symmetry groups of the curves.
Footnote 1: We are grateful to Paul Sutcliffe for providing us with the initial code which we modified to make our plots.
Our approach is as follows. In §2 of the paper we will prove Theorem 1.1. Once we have the automorphism groups of interest we will take the procedure introduced in [14] and developed in [10, 11, 12] and apply this to the relevant symmetry. This procedure is recalled in Appendix B where Nahm's equations for case 4 are reduced to the Toda equations before further reduction is described in the text. Similarly in Appendix C Nahm's equations for the \(V_{4}\) symmetric case are determined. This yields a complex extension of the Euler equations. We then show how these equations are solved in terms of elliptic functions on the quotient elliptic curve, first in §3 for the \(D_{6}\)-symmetric monopole and then in §4 for the \(V_{4}\)-symmetric monopole. Here the rationale for focussing on elliptic quotients is most evident: the transcendental constraints implicit in the works of Hitchin and Ercolani-Sinha become ones regarding periods of elliptic functions. We relegate to Appendix A a number of properties of elliptic and related functions used in the text and proofs of some statements requiring these. We will not deal with the remaining \(C_{2}\)-symmetric case here, discussing it further in the concluding §5.

Fig. 1: Surface of constant energy density \(\mathcal{E}=0.18\) for the \(V_{4}\) monopole given by the parameters (see Theorem 4.3) \(m=0.6\), \(\alpha=-2.0\), \(\mathrm{sgn}=1\).
## 2. Classifying Curves by Automorphism Group
In this section we determine the charge-3 monopole spectral curves we shall focus on, beginning with minimal restrictions and gradually imposing these.
A monopole spectral curve is a compact algebraic curve \(\mathcal{C}\) lying in Euclidean mini-twistor space \(\mathbb{MT}\), the space of oriented lines in Euclidean 3-space. If the direction of the oriented line is given by \(\zeta\), an affine coordinate of \([\zeta_{0}:\zeta_{1}]\in\mathbb{P}^{1}\), and \(\eta\in\mathbb{C}\) describes the point in the plane perpendicular to this through which the line passes then we have \(\eta\partial_{\zeta}\in T\mathbb{P}^{1}\cong\mathbb{MT}\). A generic charge-\(k\) monopole spectral curve may then be written as the zero set of a polynomial \(0=\eta^{k}+\sum_{r=1}^{k}p_{2r}(\zeta_{0},\zeta_{1})\eta^{k-r}\), where \(p_{2r}\) is a homogeneous polynomial of degree \(2r\) in \(\zeta_{0}\), \(\zeta_{1}\); equivalently a polynomial of degree \(2r\) in \(\zeta\). Now \(T\mathbb{P}^{1}\) is non-compact and two compactifications of this are common, either by inclusion in the (singular) weighted
projective space \(\mathbb{P}^{1,1,2}=\{(\zeta_{0},\zeta_{1},\eta)\in\mathbb{C}^{3}\setminus\{0\}\,|\,(\zeta_{0},\zeta_{1},\eta)\sim(\lambda\zeta_{0},\lambda\zeta_{1},\lambda^{2}\eta),\ \lambda\in\mathbb{C}^{*}\}\), or (as done by Hitchin) in the Hirzebruch surface \(\mathbb{F}_{2}\). We adopt the former view and note that the singular point \([0:0:1]\) does not lie in \(T\mathbb{P}^{1}\) and hence not on \(\mathcal{C}\). Next, via the Veronese embedding, we have \(\iota:\mathbb{P}^{1,1,2}\hookrightarrow\mathbb{P}^{3}\), \(\iota([\zeta_{0}:\zeta_{1}:\eta])=[\zeta_{0}^{2}:\zeta_{0}\zeta_{1}:\zeta_{1}^{2}:\eta]\). Under this a homogeneous polynomial of degree \(2r\) becomes a homogeneous polynomial of degree \(r\) in the new coordinates and \(\mathbb{P}^{1,1,2}\) becomes a quadric cone over the cone point \(\iota([0:0:1])\) (see, for example, [20, §8.2.11]). Thus a monopole spectral curve may be viewed as the complete intersection of a quadric cone and a degree-\(k\) hypersurface in \(\mathbb{P}^{3}\). This is known to be a curve of genus \((k-1)^{2}\) (see for example [14, Exercise V.2.9]) which is non-hyperelliptic for \(k\geq 3\) ([14, Exercise IV.5.1]). For \(k=3\) we then have that \(\mathcal{C}\) is a non-hyperelliptic genus-4 curve.

Figure 2. Known charge-3 spectral curves and their relations. We do not specify the constraints on the parameters.

Figure 3. Automorphism groups of known charge-3 spectral curves and their relations, presented as \(G\) or \(H\leq G\) where \(G\) is the full automorphism group and \(H\) is the subgroup quotienting to an elliptic curve when it exists.
In 1895 Wiman [15] classified all non-hyperelliptic genus-4 curves by their automorphism group and gave explicit defining equations for these. Wiman's classification had two families: curves arose either as the intersections of a cubic surface and non-singular quadric in \(\mathbb{P}^{3}\), or as the intersection of a cubic surface and quadric cone in \(\mathbb{P}^{3}\). Thus charge-3 monopole spectral curves with non-trivial automorphism group must lie in Wiman's second family. (The two rulings of the non-singular quadric of Wiman's first family lead to projections from the curve to \(\mathbb{P}^{1}\times\mathbb{P}^{1}\), which is relevant for spectral curves of hyperbolic monopoles; this will be developed elsewhere.) We note that although \(\operatorname{Aut}(\mathbb{P}^{3})=\operatorname{PGL}(4,\mathbb{C})\) differs from \(\operatorname{Aut}(\mathbb{P}^{1,1,2})\cong\mathbb{C}^{3}\rtimes(\operatorname{GL}(2,\mathbb{C}))/\{\pm\operatorname{Id}_{2}\}\) (with the natural action of \(\operatorname{GL}(2,\mathbb{C})\) on \((\zeta_{0},\zeta_{1})\) inducing that on \((\zeta_{0}^{2},\zeta_{0}\zeta_{1},\zeta_{1}^{2})\)), Wiman, in determining his normal forms, considered only those transformations of \(\operatorname{Aut}(\mathbb{P}^{3})\) that preserved the cone, and so his normal forms include all possible charge-3 monopole spectral curves. In Table 1 we give those curves in Wiman's classification which lie on a cone, presenting these2 in terms of a curve given by the vanishing of a polynomial \(P(x,z)\). We also write down their full automorphism group \(G:=\operatorname{Aut}(\mathcal{C})\) and the corresponding signature \(c:=c_{G}=(g_{0};c_{1},\dots,c_{r})\) giving the quotient genus \(g_{0}=g(\mathcal{C}/G)\) and the ramification indices \(c_{i}\) of the quotient map \(\mathcal{C}\to\mathcal{C}/G\) (see [13]). These have been calculated with the help of the information available from [14]. We make some remarks about Table 1.
Footnote 2: We set \(y=1\) in Wiman’s notation, so as to make clear the connection to monopole spectral curves.
* The label \(D_{n}\) refers to the dihedral group of order \(2n\), following the convention [14].
* Wiman's parameters are to be understood as generic: there may exist specific values of the parameters for which the automorphism group is larger than that indicated.
* Wiman provides a form where the \(z^{2}\) term is always zero, equivalent to centring the monopole.
* All the curves given are irreducible, so we can only find reducible spectral curves as limiting members of the above families.
* The completeness of the above data on signatures and elliptic quotients is reliant on the completeness of the data of the LMFDB.
* We recognise the curve with \(C_{3}\times S_{4}\) symmetry as corresponding to the tetrahedrally-symmetric monopole.
Not all curves on the list will yield monopoles spectral curves; for example by the following result.
**Proposition 2.1** ([1]).: _There are only two curves in the family \(\eta^{3}+\chi(\zeta^{6}+b\zeta^{3}+1)=0\), \(\chi,b\in\mathbb{R}\), that correspond to BPS monopoles; these are tetrahedrally-symmetric monopole spectral curves._
### Moduli Space Dimension
We now also briefly discuss another aspect of this group theoretic approach that can be used to identify particularly tractable monopole curves.
There is a moduli space \(N_{k}\) of charge-\(k\) monopoles up to gauge transform, which one typically enlarges by a phase to \(M_{k}\) which has a natural action of the Euclidean group \(\mathrm{E}(3)\) and circle group \(U(1)\). This restricts to a moduli space of (strongly-) centred charge-\(k\) monopoles \(M_{k}^{0}=M_{k}/(S^{1}\times\mathbb{R}^{3})\cong N_{k}/\mathbb{R}^{3}\) with action of the orthogonal group \(\mathrm{O}(3)\) which parametrises monopoles up to gauge transform with fixed centre [1]. The spectral curves corresponding to such monopoles have \(a_{1}(\zeta)=0\), while the corresponding Nahm data has \(\mathrm{Tr}(T_{i})=0\). \(M_{k}^{0}\) is a totally geodesic manifold of real dimension \(4(k-1)\), and given \(G\leq\mathrm{O}(3)\) we may consider the submanifold of the moduli space of \(G\)-invariant strongly-centred monopoles, which is also totally geodesic [1]. As scattering of magnetic monopoles is approximated by geodesic motion in the moduli space [21], if one can find \(G\leq\mathrm{O}(3)\) for which \(\dim(M_{k}^{0})^{G}=4\), it is known that after a rotation the corresponding \(1\)-parameter family of monopoles corresponds to a scattering process. We may use group theory to help identify such families via the following result.
**Proposition 2.2**.: \(\dim_{\mathbb{R}}(M_{3}^{0})^{G}\leq 3g_{0}+r\)_._
Proof.: [17, Lemma 3.1] gives that the complex dimension of each component of the locus of equivalence classes of genus \(g\geq 2\) curves admitting a \(G\)-action with signature \(c\) is (provided it is non-empty) \(\delta(g,G,c):=3(g_{0}-1)+r=\dim_{\mathbb{C}}\mathcal{M}_{g_{0},r}\) the moduli space of genus
\begin{table}
\begin{tabular}{c|c|c|c|c} \(P\) & \(G\) & \(c_{G}\) & \(H\) & \(c_{H}\) \\ \hline \hline \(z^{3}+z(ax^{4}+bx^{2}+c)+(dx^{6}+ex^{4}+fx^{2}+g)\) & \(C_{2}\) & \((1;2^{6})\) & \(C_{2}\) & \((1;2^{6})\) \\ \(z^{3}+z(ax^{4}+bx^{2}+c)+dx(x^{4}+ex^{2}+f)\) & \(C_{2}\) & \((2;2^{2})\) & & \\ \(z^{3}+z[a(x^{4}+1)+bx^{2}]+x[c(x^{4}+1)+dx^{2}]\) & \(C_{2}\times C_{2}\) & \((0;2^{7})\) & & \\ \(z^{3}+z[a(x^{4}+1)+bx^{2}]+x(x^{4}-1)\) & \(C_{2}\times C_{2}\) & \((1;2^{3})\) & \(C_{2}^{2}\) & \((1;2^{3})\) \\ \(z^{3}+azx^{2}+x(x^{4}+1)\) & \(D_{4}\) & \((0;2^{4},4)\) & \(C_{4}\) & \((1;4^{2})\) \\ \(z^{3}+z(x^{4}+a)+(bx^{4}+c)\) & \(C_{4}\) & \((0;2,4^{4})\) & & \\ \(z^{3}+azx^{2}+x^{6}+bx^{3}+1\) & \(S_{3}\) & \((0;2^{6})\) & & \\ \(z^{3}+azx^{2}+x^{6}+1\) & \(D_{6}\) & \((0;2^{5})\) & \(S_{3}\) and \(C_{6}\) & \((1;2^{2})\) \\ \(z^{3}+z(ax^{3}+b)+(x^{6}+cx^{3}+d)\) & \(C_{3}\) & \((1;3^{3})\) & \(C_{3}\) & \((1;3^{3})\) \\ \(z^{3}+az(x^{3}+1)+(x^{6}+20x^{3}-8)\) & \(A_{4}\) & \((0;2,3^{3})\) & & \\ \(z^{3}+az+x^{6}+b\) & \(C_{6}\) & \((0;2,6^{3})\) & & \\ \(z^{3}+z+x^{6}\) & \(C_{12}\) & \((0;4,6,12)\) & & \\ \(z^{3}+az+x^{5}+b\) & \(C_{5}\) & \((0;5^{4})\) & & \\ \(z^{3}+z+x^{5}\) & \(C_{10}\) & \((0;5,10^{2})\) & & \\ \(z^{3}-(x^{6}+ax^{4}+bx^{2}+1)\) & \(C_{6}\) & \((0;2^{2},3^{3})\) & & \\ \(z^{3}-x(x^{4}+ax^{2}+1)\) & \(C_{6}\times C_{2}\) & \((0;2^{2},3,6)\) & & \\ \(z^{3}-(x^{6}+ax^{3}+1)\) & \(C_{3}\times S_{3}\) & \((0;2^{2},3^{2})\) & & \\ \(z^{3}-(x^{5}+1)\) & \(C_{15}\) & \((0;3,5,15)\) & & \\ \(z^{3}-(x^{6}+1)\) & \(C_{6}\times S_{3}\) & \((0;2,6^{2})\) & & \\ \(z^{3}-x(x^{4}+1)\) & \(C_{3}\times S_{4}\) & \((0;2,3,12)\) & \(A_{4}\) & \((1;2)\) \\ \end{tabular}
\end{table}
Table 1. Potential charge-3 monopole spectral curves with nontrivial automorphism group and those (with subgroups) quotienting to genus \(1\).
\(g_{0}\) curves with \(r\) marked points. There is an \(\operatorname{SO}(3)\) action on the moduli space of monopoles that is trivial on the moduli space of curves because it induces a birational isomorphism, and \(\dim_{\mathbb{R}}\operatorname{SO}(3)=3\). The result then follows as the \(\operatorname{SO}(3)\) orbits of the moduli space of monopoles will form a component of this locus, hence \(\dim_{\mathbb{R}}(M_{3}^{0})^{G}-3=\dim_{\mathbb{R}}\left(\mathcal{M}_{g_{0},r}\right)^{\tau}\), and using the fact from Teichmüller theory that \(\dim_{\mathbb{R}}\left(\mathcal{M}_{g_{0},r}\right)^{\tau}=\dim_{\mathbb{C}}\mathcal{M}_{g_{0},r}\) [1].
**Remark 2.3**.: _Note in the above we could have used \(H\leq G\) and its corresponding signature, but this would have given a weaker bound as \(\delta(g,G,c_{G})\leq\delta(g,H,c_{H})\) [13]._
The remarkable fact about this is that the bound depends on the signature only. The curves of [13], with automorphism group \(H\) of order \(2k(k-1)\) for \(k=3,4,6\), have signature \(c_{H}=(1;k-1)\)[13, Proposition 4]. For these curves and all known monopole spectral curves of charge \(3\) we have \(\dim(M_{k}^{0})^{H}-3=\delta(g,H,c_{H})-1\), and hence one might conjecture that this is always true in the case \(g_{0}=1\). A calculation for the case of inversion-symmetric monopoles considered in [1] for which there is a \(C_{2}\) action \(\eta\mapsto-\eta\) shows that such a conjecture would certainly not be true for all \(g_{0}\).
### Genus-1 Reductions
Table 1 gives us a list of putative spectral curves with symmetry before we have imposed the further constraints of Hitchin. We know from [11] that Nahm's equations correspond to a linear flow in the Jacobian of the corresponding spectral curve \(\mathcal{C}\); the direction of this linear flow is given by the Ercolani-Sinha vector \(\boldsymbol{U}\) [10]. Braden [12] has shown that when we have a symmetry group \(G\) we may be able to reduce to the quotient curve \(\mathcal{C}\stackrel{\pi}{\to}\mathcal{C}^{\prime}:=\mathcal{C}/G\) and reduced Ercolani-Sinha vector \(\boldsymbol{U}^{\prime}\) when \(\boldsymbol{U}=\pi^{*}\boldsymbol{U}^{\prime}\). For example charge-\(k\) monopoles with \(C_{k}\) symmetry reduce to questions about a genus-\((k-1)\) hyperelliptic curve. The \(k=3\) case was studied in [1].
Notwithstanding the attendant simplifications, the list of Table 1 is too long for the purposes of this letter and we require a further criterion to reduce this. Here we adopt the following: does the genus-\(4\) spectral curve (assumed with real structure) quotient (either by \(\operatorname{Aut}(\mathcal{C})\) or a subgroup) to an elliptic curve? The rationale for this is that the remaining Hitchin conditions are most straightforwardly answered for elliptic curves; equivalently the Ercolani-Sinha constraint becomes one on the real period of an elliptic curve. There are also a number of curves known with this property [13, 14, 15, 16].
Thus we seek curves \(\mathcal{C}\) with real structure from Wiman's list for which there exists \(H\leq\operatorname{Aut}(\mathcal{C})\) such that \(g(\mathcal{C}/H)=1\). Here we may use the database of [10] which has enumerated all the possible \(H\) and the corresponding signatures for genus-\(4\) curves. We may then use our knowledge of the explicit forms of the curves to match up these cases, which leaves us with the reduced list in the final two columns of Table 1. As previously noted, the \(H=A_{4}\) case corresponds to the tetrahedrally-symmetric monopole [13], and the \(H=C_{4}\) case has already been solved in [14]. We also see that the cases \(H=S_{3}\) and \(H=C_{6}\) arise from the same curve, indicating that the curve has two distinct quotients to an elliptic curve.
In the following sections we will investigate in more detail the two new cases \(H=C_{6}\) (or equivalently \(H=S_{3}\)) with full automorphism group \(G=D_{6}\), and \(H=V_{4}\) (with full automorphism group \(G=V_{4}\)); we do not treat the \(C_{2}\) case here. We will begin with the \(D_{6}\) case which is both illustrative and simpler, though ultimately the new solutions and their scattering family are less interesting.
Before turning to these however we may complete the proof of Theorem 1.1. With the exception of the \(H=C_{3}\) curve, imposing reality on the curves with groups \(H\) listed in Table 1 yields the curves of Theorem 1.1 (and in the same order). Imposing reality on the \(G=H=C_{3}\) family of curves means that \(a=b=0\) in the corresponding defining
equation \(P\); the resulting curve then lies in the family described by Proposition 2.1. Only the tetrahedrally-symmetric monopole within this family quotients to an elliptic curve and by a rotation this may be written as \(\eta^{3}+ia\zeta(\zeta^{4}-1)=0\), the final entry of the Theorem. We have thus established Theorem 1.1.
## 3. \(D_{6}\) Monopoles
To understand the spectral curves of this section it helps to first begin with the general centred \(C_{k}\) invariant spectral curve \(\mathcal{C}\) (with reality imposed),
\[\eta^{k}+\alpha_{2}\eta^{k-2}\zeta^{2}+\alpha_{3}\eta^{k-3}\zeta^{3}+\ldots+ \alpha_{k-1}\eta\,\zeta^{k-1}+\alpha_{k}\zeta^{k}+\beta[\zeta^{2k}+(-1)^{k}]=0, \tag{2}\]
where \(\alpha_{k},\beta\in\mathbb{R}\). This is invariant under \(s:(\eta,\zeta)\to(\omega\eta,\omega\zeta)\), \(\omega=\exp(2\pi i/k)\) with \(C_{k}=\langle s\rangle\). The work of [1] shows (2) is the unbranched cover of the hyperelliptic curve
\[y^{2}=(x^{k}+\alpha_{2}x^{k-2}+\alpha_{3}x^{k-3}+\ldots+\alpha_{k})^{2}-(-1)^{ k}4\beta^{2}, \tag{3}\]
where \(x=\eta/\zeta\) and \(y=\beta[\zeta^{k}-(-1)^{k}\zeta^{-k}]\). The curve (2) also has the symmetry \(t:(\zeta,\eta)\mapsto(-1/\zeta,-\eta/\zeta^{2})\) and \(G=\langle s,t\rangle=D_{k}\) is the full automorphism group. The transformation \(t\) corresponds to a reflection \(\underline{\mathbf{y}}\to\mathrm{diag}(1,-1,1)\underline{\mathbf{y}}\) in \(\mathrm{O}(3,\mathbb{R})\) and it becomes the hyperelliptic involution \(t:(x,y)\to(x,-y)\) on the quotient curve. For \(k=3\) we are describing the curve in Table 1 with full automorphism group \(G=S_{3}\) (\(=D_{3}\) in [14] notation). Further, the work of [1] shows that Nahm's equations for every charge-\(k\) monopole with \(C_{k}\) rotational symmetry are equivalent to the \(A_{k-1}^{(1)}\)3 affine Toda equations (in real Flaschka variables)
Footnote 3: This is the notation of [11].
\[a_{i}^{\prime}=\frac{1}{2}a_{i}(b_{i}-b_{i+1}),\quad b_{i}^{\prime}=a_{i}^{2} -a_{i-1}^{2}, \tag{4}\]
where \(i\) is taken mod \(k\), and we use \({}^{\prime}\) to denote \(\frac{d}{ds}\). These equations may also be found in other ways. In Appendix B we describe how taking the \(C_{k}\) invariant polynomials \(Q_{i}=\zeta_{0}^{i}\zeta_{1}^{i},i=1,\ldots,k\) and \(Q_{k+1}=\zeta_{0}^{k}-\zeta_{1}^{k}\) as the inputs to the procedure of [13] we obtain the equations (4).
In general the solutions to the Toda system (4) linearise on the genus-\((k-1)\) Jacobian of the curve (3). For \(k=3\) this was the approach taken in [1] where a family of monopoles including the tetrahedrally-symmetric monopole was investigated. Are simplifications possible? In the remainder of this section we shall show that for \(k=3\) a one-parameter family of elliptic Nahm data exists with \(C_{3}\) symmetry. As this is contrary to results in the literature we begin with four results that lead to such an elliptic reduction before determining the parameters that yield Nahm data. Three particular points in the family will be identified before concluding with a description of the scattering described by the family.
### Four Lessons
For \(k=3\) the equations (4) take the form (with \(a_{0}\equiv a_{3}\), \(b_{0}\equiv b_{3}\))
\[a_{0}^{\prime}= \frac{1}{2}a_{0}(b_{3}-b_{1}),\quad a_{1}^{\prime}= \frac{1}{2}a_{1}(b_{1}-b_{2}),\quad a_{2}^{\prime}= \frac{1}{2}a_{2}(b_{2}-b_{3}),\] \[b_{1}^{\prime}= a_{1}^{2}-a_{0}^{2},\qquad\qquad b_{2}^{\prime}= a_{2}^{2}-a_{1}^{2},\qquad\qquad\,b_{3}^{\prime}= a_{0}^{2}-a_{2}^{2}. \tag{5}\]
Here we find the constants
\[\alpha_{2}=b_{1}b_{2}+b_{1}b_{3}+b_{2}b_{3}+a_{0}^{2}+a_{1}^{2}+a_{2}^{2}, \quad\alpha_{3}=b_{1}b_{2}b_{3}+b_{1}a_{2}^{2}+b_{2}a_{0}^{2}+b_{3}a_{1}^{2}, \quad\beta=a_{0}a_{1}a_{2},\]
and the (centred) spectral curve
\[\eta^{3}+\alpha_{2}\eta\zeta^{2}+\alpha_{3}\zeta^{3}+\beta(\zeta^{6}-1)=0\]
which covers
\[y^{2}=(x^{3}+\alpha_{2}x+\alpha_{3})^{2}+4\beta^{2}. \tag{6}\]
We have 6 differential equations, 6 variables and three conserved quantities.
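These statements are easily tested numerically; a short Wolfram Language check (ours, from arbitrarily chosen initial data) integrates (5) and confirms that \(\alpha_{2},\alpha_{3},\beta\) stay constant along the flow:

```
(* Integrate the k = 3 Toda system (5) and monitor the three conserved quantities. *)
eqs = {a0'[s] == a0[s] (b3[s] - b1[s])/2, a1'[s] == a1[s] (b1[s] - b2[s])/2,
       a2'[s] == a2[s] (b2[s] - b3[s])/2, b1'[s] == a1[s]^2 - a0[s]^2,
       b2'[s] == a2[s]^2 - a1[s]^2, b3'[s] == a0[s]^2 - a2[s]^2};
ics = {a0[0] == 0.7, a1[0] == 1.3, a2[0] == -0.4, b1[0] == 0.2, b2[0] == -0.5, b3[0] == 0.3};
sol = NDSolve[Join[eqs, ics], {a0, a1, a2, b1, b2, b3}, {s, 0, 5}][[1]];

alpha2[s_] := (b1[s] b2[s] + b1[s] b3[s] + b2[s] b3[s] + a0[s]^2 + a1[s]^2 + a2[s]^2) /. sol;
alpha3[s_] := (b1[s] b2[s] b3[s] + b1[s] a2[s]^2 + b2[s] a0[s]^2 + b3[s] a1[s]^2) /. sol;
beta[s_]   := (a0[s] a1[s] a2[s]) /. sol;

(* the spread of each quantity over the integration range is at round-off level *)
Max[#] - Min[#] & /@ (Table[#[t], {t, 0, 5, 0.1}] & /@ {alpha2, alpha3, beta})
```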
#### 3.1.1. Direct Simplification
Appendix B shows that we may use the constants \(\alpha_{2},\alpha_{3},0=\sum b_{i}\) to eliminate the \(b_{i}\), resulting in the equations
\[0=\sum_{i=0}^{2}a_{i}^{2}-\alpha_{2}-\frac{1}{3}(d_{1}^{2}+d_{1}d_{2}+d_{2}^{2} ),\ 0=a_{1}^{2}d_{2}-a_{2}^{2}d_{1}+\alpha_{3}+\frac{1}{3}\alpha_{2}(d_{1}-d_{2})+ \frac{1}{27}(d_{1}-d_{2})^{3},\]
where we have introduced \(d_{i}=2a_{i}^{\prime}/a_{i}\). Using \(\beta=a_{0}a_{1}a_{2}\) to eliminate \(a_{0}\) we then have two non-linear ODE's in two variables, the maximal reduction one can achieve with generic \(\alpha_{i}\) and \(\beta\). The appendix shows further that if \(a_{1}^{2}=a_{2}^{2}\) additional simplification is possible; and that we may consistently set \(a_{1}^{2}-a_{2}^{2}=0\) provided \(b_{2}a_{1}^{2}=0\). As \(b_{2}^{\prime}=a_{2}^{2}-a_{1}^{2}\), this means we can consistently set \(a_{1}^{2}=a_{2}^{2}\) and \(b_{2}=0\). Making these restrictions we find \(\alpha_{3}=0\) and that we reduce to one equation
\[a_{1}^{2}\left(2\frac{da_{1}}{ds}\right)^{2}=\beta^{2}+2a_{1}^{6}-\alpha_{2}a _{1}^{4}.\]
Upon setting \(u=x^{2}\) this becomes
\[\left(\frac{du}{ds}\right)^{2}=\beta^{2}+2u^{3}-\alpha_{2}u^{2}, \tag{7}\]
to which we shall return. We record that the \(j\)-invariant of the associated elliptic curve \(y^{2}=\beta^{2}+2u^{3}-\alpha_{2}u^{2}\) is \(16\alpha_{2}^{6}/(\beta^{2}[\alpha_{2}^{3}-27\beta^{2}])\).
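Both the reduction to Weierstrass form used in §3.2 below (via \(\tilde{u}=u-\alpha_{2}/6\), \(\tilde{s}=s/\sqrt{2}\)) and the quoted \(j\)-invariant can be confirmed by elementary algebra; a two-line check (ours) is:

```
(* Verify 2(beta^2 + 2u^3 - alpha2 u^2) == 4 utilde^3 - g2 utilde - g3 with the shift
   utilde = u - alpha2/6, and the j-invariant quoted above. *)
g2 = alpha2^2/3;  g3 = alpha2^3/27 - 2 beta^2;  utilde = u - alpha2/6;
Simplify[2 (beta^2 + 2 u^3 - alpha2 u^2) - (4 utilde^3 - g2 utilde - g3)]              (* 0 *)
Simplify[1728 g2^3/(g2^3 - 27 g3^2) - 16 alpha2^6/(beta^2 (alpha2^3 - 27 beta^2))]     (* 0 *)
```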
#### 3.1.2. Sutcliffe's ansatz
Some time ago, in the context of Seiberg-Witten theory, Sutcliffe [10] gave an ansatz for charge-\(k\) cyclically symmetric monopoles in terms of affine Toda theory. With \(a_{j}=\gamma e^{(q_{j}-q_{j+1})/2}\), \(b_{j}=q_{j}^{\prime}\) equations (4) follow from a Hamiltonian4\(H=\frac{1}{2}\sum_{j=1}^{k}b_{j}^{2}-\sum_{j=0}^{k-1}a_{j}^{2}=\frac{1}{2} \sum_{j=1}^{k}p_{j}^{2}-\gamma^{2}\sum_{j=0}^{k-1}e^{q_{j}-q_{j+1}}.\) (The constant \(\gamma\) here is to account for the constant \(\prod_{i=0}^{k-1}a_{i}=\gamma^{k}=(-1)^{k-1}\beta\).) Sutcliffe showed that for \(k=2\) Nahm data could be constructed, but for \(k=3\) although he could solve the equations he couldn't find solutions with the correct pole behaviour. The solution was obtained from the infinite chain solution as follows. We have from
Footnote 4: This Hamiltonian is unbounded from below: such is necessary as the monopole boundary conditions require a pole at \(s=0,2\) hence the momenta and correspondingly the potential must also be unbounded below. Thus while the dynamical system being described is integrable, a corresponding interpretation in terms of a mechanical system is less helpful. Further, while the \(a_{i}\) will always be real throughout we have freedom to choose their sign and we will make sign choices for the \(a_{i}\) where we cannot take \(\log a_{i}\) and obtain real values for the associated Toda position variables.
\[\left(\ln a_{j}^{2}\right)^{\prime\prime}=-a_{j-1}^{2}+2a_{j}^{2}-a_{j+1}^{2}\]
and the standard elliptic function identity for the Weierstrass \(\wp\)-function
\[\frac{d^{2}}{du^{2}}\,\ln[\wp(u)-\wp(v)]=-\wp(u+v)+2\wp(u)-\wp(u-v)\]
that with \(u=ju_{0}+t+t_{0}\) and \(v=u_{0}\) then
\[\frac{d^{2}}{dt^{2}}\,\ln[\wp(ju_{0}+t+t_{0})-\wp(u_{0})]=-\wp([j+1]u_{0}+t+t_ {0})+2\wp(ju_{0}+t+t_{0})-\wp([j-1]u_{0}+t+t_{0})\]
and we may identify \(a_{j}=\wp(ju_{0}+t+t_{0})-\wp(u_{0})\). This yields the solution for the infinite chain and we must still impose periodicity to obtain a solution. Imposing periodicity yields (for \(k=3\)) that \(a_{j}=\wp(2jK/3+t)-\wp(2K/3)\) which is equivalent to the solution of [13] which is given in Jacobi elliptic functions.5
Footnote 5: To make connection with [13] we use Lawden's notation [14, §6.3.1, §6.9]. Thus for \(k=3\) we take \(u_{0}=2K/3\). Now
\[dc^{2}(u) =\frac{\wp(u)-e_{2}}{\wp(u)-e_{1}}=1+\frac{e_{1}-e_{2}}{\wp(u)-e _{1}}=1+\frac{1}{e_{1}-e_{3}}\left[\wp(u+\omega_{1})-e_{1}\right]=1+\left[\wp( u+\omega_{1})-e_{1}\right],\] \[cs^{2}(2K/3) =\wp(2K/3)-e_{1}.\]
Note Sutcliffe's '\(q_{j}^{2}\); is our \(a_{j}\). Then [13, 3.41] is \(dc^{2}(u)-1-cs^{2}(2K/3)=\wp(u+\omega_{1})-\wp(2K/3)=\wp(u+K)-\wp(2K/3)\) and so with \(u=2jK/3+t+K\) (his choice of \(\delta\)) we get \(a_{j}=\wp(2jK/3+t)-\wp(2K/3)\) and the corresponding asymptotics given in [13, 3.42-45]. Now the ansatz employed here forces only one of the \(a_{j}\) to be singular at any point, and this means the pole condition on the Nahm matrices cannot be satisfied. If we are to find an alternative solution that does indeed yield a monopole then this would suggest that one appropriate route would be to pick a simplification which forces multiple variables to have poles simultaneously. Such is the case when \(a_{1}^{2}=a_{2}^{2}\) found previously.
#### 3.1.3. Imposing Symmetry on Nahm Matrices
We next observe that \(a_{1}^{2}=a_{2}^{2}\) follows from the symmetry6
Footnote 6: We have the correspondences between the \(\mathbb{R}^{3}\) transformations and \((\zeta,\eta)\) actions
\[r:=\begin{pmatrix}1&0&0\\ 0&-1&0\\ 0&0&-1\end{pmatrix}\leftrightarrow(\zeta,\eta)\mapsto(1/\zeta,-\eta/\zeta^{2}), \quad rt:=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&-1\end{pmatrix}\leftrightarrow(\zeta,\eta)\mapsto(1/\bar{\zeta},-\bar{ \eta}/\bar{\zeta}^{2}).\]
If the rotation \(s:=(\eta,\zeta)\to(\omega\eta,\omega\zeta)\) takes place in the \(xy\)-plane then we obtain the point groups \(D_{k}=\langle s,r\rangle\), \(C_{kv}\cong D_{k}=\langle s,t\rangle\) and \(C_{kh}\cong C_{k}\times C_{2}=\langle s,rt\rangle\). The prismatic dihedral group \(D_{kh}=\langle s,r,t\rangle\) is obtained by adding any two of the above to the rotations, so giving the third. Abstractly \(D_{kh}\cong D_{k}\times C_{2}\).
\[T_{1}=\frac{1}{2}\begin{pmatrix}0&a_{1}&-a_{0}\\ -a_{1}&0&a_{2}\\ a_{0}&-a_{2}&0\end{pmatrix},\quad T_{2}=\frac{1}{2i}\begin{pmatrix}0&a_{1}&a_ {0}\\ a_{1}&0&a_{2}\\ a_{0}&a_{2}&0\end{pmatrix},\quad T_{3}=\frac{-i}{2}\begin{pmatrix}b_{1}&0&0 \\ 0&b_{2}&0\\ 0&0&b_{3}\end{pmatrix}, \tag{8}\]
and so
\[T_{1}^{A}=\frac{1}{2}\begin{pmatrix}0&-a_{1}&a_{0}\\ a_{1}&0&a_{2}\\ -a_{0}&-a_{2}&0\end{pmatrix},\quad T_{2}^{A}=\frac{1}{2i}\begin{pmatrix}0&-a_{1 }&-a_{0}\\ -a_{1}&0&a_{2}\\ -a_{0}&a_{2}&0\end{pmatrix},\quad T_{3}^{A}=\frac{-i}{2}\begin{pmatrix}b_{1}&0 &0\\ 0&b_{2}&0\\ 0&0&b_{3}\end{pmatrix}.\]
Equivalence of the spectral curves means that there exists a constant invertible matrix \(C\) such that
\[C(-T_{1}^{A}+iT_{2}^{A})C^{-1}=T_{1}+iT_{2},\quad C(-T_{3}^{A})C^{-1}=T_{3}, \quad C(-T_{1}^{A}-iT_{2}^{A})C^{-1}=T_{1}-iT_{2}.\]
Because \(T_{3}^{A}=T_{3}\) is diagonal and traceless, the only way to achieve this is if at least one of the \(b_{i}\) is \(0\) and \(C\) permutes the other two. We can, without loss of generality, pick \(b_{2}=0\) so \(b_{1}=-b_{3}\) which gives that the generic \(C\) is \(C=\left(\begin{smallmatrix}0&0&a\\ 0&b&0\\ c&0&0\end{smallmatrix}\right)\). Picking a generic \(a,b,c\) we get
\[a_{0}(a-c)=a_{1}a+a_{2}b=a_{1}c+a_{2}b=a_{1}b+a_{2}c=a_{2}a+a_{1}b=0.\]
To avoid having an \(a_{i}=0\) we required \(a=c\), and so these reduce to
\[a_{1}a+a_{2}b=0=a_{1}b+a_{2}a,\]
and consequently \((a/b)^{2}=1\) and \(a_{1}=\pm a_{2}\) yielding the desired \(a_{1}^{2}=a_{2}^{2}\). Note that this also means \(\alpha_{3}=0\).
#### 3.1.4. Reduction of the spectral curve; folding
In order to get the curve with \(D_{6}\) symmetry of Table 1 we must set \(\alpha_{3}=0\). We have seen that for \(k=3\) this is a consequence of the symmetry \(r:(\zeta,\eta)\to(1/\zeta,-\eta/\zeta^{2})\). For general \(k\) this means we keep only the even terms of (2),
\[\eta^{k}+\alpha_{2}\eta^{k-2}\zeta^{2}+\alpha_{4}\eta^{k-4}\zeta^{4}+\ldots+ \beta[\zeta^{2k}+(-1)^{k}]=0. \tag{9}\]
The full automorphism group of this curve is \(D_{k}\times C_{2}\); for \(k=3\) this is the curve with full automorphism group \(D_{6}\cong D_{3}\times C_{2}\) that we are interested in. Setting \(x=\eta/\zeta\) in (9) we have
\[x^{k}+\alpha_{2}x^{k-2}+\alpha_{4}x^{k-4}+\ldots+\alpha_{k}+\beta[\zeta^{k}+ \zeta^{-k}] =0, k\text{ even},\]
\[x^{k}+\alpha_{2}x^{k-2}+\alpha_{4}x^{k-4}+\ldots+\alpha_{k-1}x+\beta[\zeta^{k} -\zeta^{-k}] =0, k\text{ odd}.\]
If \(y=\beta[\zeta^{k}-(-1)^{k}\zeta^{-k}]\) then \(r:(x,y)\to(-x,(-1)^{k-1}y)\); thus \(y\) is invariant under \(r\) only for \(k\) odd, in which case it will be a function on the quotient curve \(\hat{\mathcal{C}}/\left\langle s,r\right\rangle\); for \(k\)-even \(v=xy\) is invariant. Thus we have curves
\[v^{2} =x^{2}(x^{k}+\alpha_{2}x^{k-2}+\alpha_{4}x^{k-4}+\ldots+\alpha_{k })^{2}-4\beta^{2}x^{2} k\text{ even},\] \[y^{2} =(x^{k}+\alpha_{2}x^{k-2}+\alpha_{4}x^{k-4}+\ldots+\alpha_{k-1}x )^{2}+4\beta^{2} k\text{ odd}.\]
Setting \(k=2l\) or \(k=2l-1\) for the even and odd cases of the curves then with \(u=x^{2}\) we have these curves covering \(2:1\) the curves
\[v^{2} =u(u^{l}+\alpha_{2}u^{l-1}+\alpha_{4}u^{l-2}+\ldots+\alpha_{k})^{ 2}-4\beta^{2}u k\text{ even}, \tag{11}\] \[y^{2} =u(u^{l-1}+\alpha_{2}u^{l-2}+\alpha_{4}u^{l-3}+\ldots+\alpha_{k-1 })^{2}+4\beta^{2} k\text{ odd}. \tag{10}\]
The first has genus \(l\) and the second has genus \(l-1\). Under the cyclic transformation, it was shown in [1] that
\[\frac{\zeta^{k-2}d\zeta}{\partial_{\eta}P}=\pi^{*}\left(-\frac{1}{k}\frac{x^{k- 2}dx}{y}\right)\]
for the curve (3) and we observe that this differential is invariant under \(r\) for \(k\) both even and odd. Further
\[\frac{x^{k-2}dx}{y}=\begin{cases}\frac{x^{2l-2}dx}{y}=\frac{x^{2l-2}du}{2xy}= \frac{u^{l-1}du}{2v},\\ \frac{x^{2l-3}dx}{y}=\frac{x^{2l-4}du}{2y}=\frac{u^{l-2}du}{2y}.\end{cases}\]
In each case we obtain the maximum degree in \(u\) differential on the corresponding hyperelliptic curve and the work of [1] tells us the Ercolani-Sinha vector, if it exists, will reduce to one on the quotient curve.
In particular the \(k=3\) curve \(y^{2}=(x^{3}+\alpha_{2}x)^{2}+4\beta^{2}\) covers the elliptic curve \(\mathcal{E}=\mathcal{C}/H\),
\[y^{2}=u(u+\alpha_{2})^{2}+4\beta^{2},\]
with \(H=\left\langle s,r\right\rangle\cong S_{3}\). The \(j\)-invariant of this curve is \(j_{\mathcal{E}}=16\alpha_{2}^{6}/(\beta^{2}[\alpha_{2}^{3}-27\beta^{2}])\), the value observed earlier. We note that the genus-2 curve also covers the elliptic curve \(\mathcal{E}^{\prime}=\mathcal{C}/H^{\prime}\),
\[w^{2}=u^{2}(u+\alpha_{2})^{2}+4\beta^{2},\]
where now \(H^{\prime}=\langle s,rt\rangle\cong C_{6}\) with \(w=xy\) the invariant coordinate. Because \(\pi^{*}(du/(2w))=dx/y\) does not pull back to the differential appearing in the Ercolani-Sinha constraint we cannot solve the Hitchin constraints in terms of \(\mathcal{E}^{\prime}\). We record that this curve is in general distinct, with \(j_{\mathcal{E}^{\prime}}=\left(\alpha_{2}^{4}+48\beta^{2}\right)^{3}/\left(\beta^{4}\left[\alpha_{2}^{4}+64\beta^{2}\right]\right)\). We have that \(\mathcal{E}\) and \(\mathcal{E}^{\prime}\) are the two quotients identified in Table 1.
We remark that the reduction of the spectral curve we have just described may be understood directly in terms of the Toda equations and 'folding'. For the \(k=3\) case at hand set \(e^{\rho_{i}}:=a_{i}^{2}=\beta^{2/3}e^{q_{i}-q_{i+1}}\) (so that \(a_{0}a_{1}a_{2}=\beta\)) and again take \(b_{j}=q_{j}^{\prime}\) and Hamiltonian \(H=\frac{1}{2}\sum_{j=1}^{k}b_{j}^{2}-\sum_{j=0}^{k-1}a_{j}^{2}=\frac{1}{2} \sum_{j=1}^{k}p_{j}^{2}-\beta^{2/3}\sum_{j=0}^{k-1}e^{q_{j}-q_{j+1}}.\) Then the Toda equations take the form
\[\rho_{i}^{\prime\prime}=2e^{\rho_{i}}-e^{\rho_{i-1}}-e^{\rho_{i+1}}=\overline {K}_{ij}e^{\rho_{j}}\]
where \(\overline{K}_{ij}\) is the extended Cartan matrix of \(A_{2}\). Folding [10] corresponds to the action \(\rho_{i}\to\rho_{\sigma(i)}\) by a diagram automorphism \(\sigma\) of the extended Dynkin diagram: this retains integrability and here corresponds to identifying \(\rho_{1}=\rho_{2}:=\rho_{12}\), equivalently \(a_{1}^{2}=a_{2}^{2}\). Using \(e^{\rho_{0}}=\beta^{2}e^{-2\rho_{12}}\) the equations of motion \(\rho_{12}^{\prime\prime}=e^{\rho_{12}}-e^{\rho_{0}}\) and \(\rho_{0}^{\prime\prime}=2(e^{\rho_{0}}-e^{\rho_{12}})\) reduce to the one equation,
\[\rho_{12}^{\prime\prime}=e^{\rho_{12}}-\beta^{2}e^{-2\rho_{12}},\]
the ODE reduction of the Bullough-Dodd equation, a known integrable equation. This may be directly integrated. With \(u=e^{\rho_{12}}\) we obtain precisely (7). More generally we are seeing the reduction by folding \(A_{2l-1}^{(1)}\to C_{l}^{(1)}\) for \(k=2l\) even, and \(A_{2[l-1]}^{(1)}\to A_{2[l-1]}^{(2)}\) for \(k=2l-1\) odd, both coming from an order-2 symmetry of the Dynkin diagram.
### Solving for Nahm Data
A number of different arguments lead us then to an elliptic reduction of the Toda equations for \(k=3\) with corresponding ODE (7). The aim of this subsection is to show that Nahm data can be constructed from this. In doing so we will use properties of hypergeometric functions, and we lay out some of these details in Appendix A.
We have seen that the reduction leads to \(a_{1}^{2}=a_{2}^{2}\) and \(b_{2}=0\). In continuing to solve for the Nahm data one finds that the choice of sign of \(a_{2}\) relative to \(a_{1}\) does not affect the ability to impose the Hitchin constraints. Indeed, changing the choice of sign merely corresponds to changing the sign of \(\beta\), and again as we will see this does not restrict the spectral curve. As such we take \(a_{2}=-a_{1}\) in what follows. Now setting \(\tilde{u}=u-\frac{\alpha_{2}}{6}\) and \(\tilde{s}=s/\sqrt{2}\) we may transform (7) into standard Weierstrass form with solution
\[\tilde{u}=\wp((s-s_{0})/\sqrt{2};g_{2},g_{3}),\]
where \(g_{2}=\frac{\alpha_{2}^{2}}{3}\) and \(g_{3}=\frac{\alpha_{2}^{3}}{27}-2\beta^{2}\). Here we assume \(\Delta:=g_{2}^{3}-27g_{3}^{2}=4\beta^{2}(\alpha_{2}^{3}-27\beta^{2})\neq 0\) to avoid nonsingularity, commenting on the singular limits at the appropriate junctions. The \(j\)-invariant of the elliptic curve is as we have already seen
\[j=1728\frac{g_{2}^{3}}{g_{2}^{3}-27g_{3}^{2}}=\frac{16\alpha_{2}^{6}}{\beta^{2 }(\alpha_{2}^{3}-27\beta^{2})}.\]
To be Nahm data we require that the Nahm matrices have a pole at \(s=0\) which can be achieved by setting \(s_{0}=0\). We can then express all the Flaschka variables as
\[a_{1} =\pm\sqrt{\wp(s/\sqrt{2};g_{2},g_{3})+\frac{\alpha_{2}}{6}}, a_{2} =-a_{1}, a_{0} =\frac{\beta}{a_{1}a_{2}}, \tag{13}\] \[b_{1} =\pm\sqrt{2a_{1}^{2}+a_{0}^{2}-\alpha_{2}}, b_{2} =0, b_{3} =-b_{1}. \tag{12}\]
We have some signs of the square roots to set above.
1. Using that, around \(s=0\), \(\wp(s/\sqrt{2};g_{2},g_{3})\sim 2s^{-2}\Rightarrow a_{1}^{2}\sim\frac{2}{s^{2}}\), we have \(a_{0}\sim\frac{\beta s^{2}}{2}\). The ODE for \(a_{0}^{\prime}\), with \(b_{3}=-b_{1}\), gives \[b_{1}=-\frac{a_{0}^{\prime}}{a_{0}}\sim-\frac{(\beta s)}{(\beta s^{2}/2)}=- \frac{2}{s}.\] This requires us to take the negative square root for \(b_{1}\) around \(s=0\). We will want residues at \(s=2\), and it will turn out by applying similar analysis that we need the positive root around \(s=2\). These swap over when \(b_{1}=0\), which corresponds to \(a_{1}^{\prime}=0\). As we see later this must happen at \(s=1\). Alternatively one can see this from the observation that \(a_{1}\) is even about \(s=1\) by a judicious choice of period, and so \(b_{1}=\frac{2a_{1}^{\prime}}{a_{1}}\) is odd about the same point.
2. The sign of \(a_{1}\) is a free choice, and does not affect the geometry of the monopole, hence in what follows below we always take the positive sign.
The corresponding Nahm matrices (8) have residues at \(s=0\) given by
\[R_{1}=\frac{1}{\sqrt{2}}\left(\begin{array}{ccc}0&1&0\\ -1&0&-1\\ 0&1&0\end{array}\right),\quad R_{2}=\frac{i}{\sqrt{2}}\left(\begin{array}{ cccc}0&-1&0\\ -1&0&1\\ 0&1&0\end{array}\right),\quad R_{3}=i\left(\begin{array}{ccc}1&0&0\\ 0&0&0\\ 0&0&-1\end{array}\right)\]
which yield a \(3\)-dimensional irreducible representation.
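It is straightforward to confirm (our check below) that these residues satisfy the commutation relations \([R_{j},R_{k}]=-R_{i}\) for \((i,j,k)\) cyclic, as required for \(T_{i}\sim R_{i}/s\) to solve Nahm's equations near \(s=0\), and that the quadratic Casimir takes its spin-1 value:

```
(* Residues at s = 0: commutation relations and Casimir. *)
r1 = {{0, 1, 0}, {-1, 0, -1}, {0, 1, 0}}/Sqrt[2];
r2 = I {{0, -1, 0}, {-1, 0, 1}, {0, 1, 0}}/Sqrt[2];
r3 = I {{1, 0, 0}, {0, 0, 0}, {0, 0, -1}};
comm[x_, y_] := x.y - y.x;
Simplify[{comm[r2, r3] + r1, comm[r3, r1] + r2, comm[r1, r2] + r3}]   (* three zero matrices *)
Simplify[r1.r1 + r2.r2 + r3.r3 + 2 IdentityMatrix[3]]                 (* zero matrix: spin 1 *)
```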
Next we require a simple pole at \(s=2\) again forming a \(3\)-dimensional irreducible representation. There are two ways to achieve a residue at \(s=2\):
1. have that \(2/\sqrt{2}=\sqrt{2}\) is in the lattice corresponding to the values \(g_{2},g_{3}\), or
2. have that around \(s=2\), \(\wp(s/\sqrt{2};g_{2},g_{3})\sim-\frac{\alpha_{2}}{6}+\mathcal{O}(s-2)\).
These correspond to having \(a_{1}\) and \(a_{0}\) be singular at \(s=2\) respectively. (Because of the constant \(\beta\) they cannot both be singular.) One can check that the second condition would give a reducible representation at \(s=2\) (as again only one of the \(a_{i}\) have a pole here) and so we discount it.
Focusing then on the first condition, one way to fix the real period of the associated lattice is to invert the \(j\)-invariant of the elliptic curve corresponding to \(g_{2},g_{3}\) to give the period \(\tau\). Here this is most readily achieved by solving the quadratic (for example, see [1])
\[4\alpha(1-\alpha)=\frac{1728}{j}=108(\beta^{2}/\alpha_{2}^{3})\left[1-27(\beta ^{2}/\alpha_{2}^{3})\right],\]
for which we see the two solutions are \(\alpha=\frac{27\beta^{2}}{\alpha_{2}^{3}}\), and \(1-\alpha\). The corresponding normalised period is
\[\tau=\tau(\alpha):=i\frac{{}_{2}F_{1}(1/6,5/6,1;1-\alpha)}{{}_{2}F_{1}(1/6,5/6,1;\alpha)}.\]
Some analytic properties of this function we need are given in Appendix A.1.
**Remark 3.1**.: _If we had instead taken the other root, \(1-\alpha\), in place of \(\alpha\) then this would give the period \(-1/\tau\)._
As we want the lattice corresponding to \(g_{2},g_{3}\) to be \(\sqrt{2}\mathbb{Z}+\sqrt{2}\tau\mathbb{Z}\), we get the transcendental equations
\[\frac{1}{3}\alpha_{2}^{2}=\frac{1}{4}g_{2}(1,\tau),\quad\frac{1}{27}\alpha_{2} ^{3}-2\beta^{2}=\frac{1}{8}g_{3}\left(1,\tau\right).\]
For any given value of \(\alpha\in(0,1)\), let \(\alpha_{2}^{2}=\frac{3}{4}g_{2}(1,\tau)\). We then have two equations defining \(\beta\):
\[\beta^{2}=\frac{\alpha\alpha_{2}^{3}}{27},\quad\beta^{2}=\frac{1}{2}\left[ \frac{1}{27}\alpha_{2}^{3}-\frac{1}{8}g_{3}(1,\tau)\right].\]
To have a valid solution we must have that the two equations are consistent with each other, which one can check (see Appendix A.2) is equivalent to \(\operatorname{sgn}(g_{3}(1,\tau))=\operatorname{sgn}(\alpha_{2})\operatorname {sgn}(1-2\alpha)\). A consideration of the information given about \(\tau\) and \(g_{3}\) in Appendix A tells us that we only get solutions in the region \(\alpha\in[0,1]\), where \(\alpha=0,1\) really correspond to the limits \(\lim_{\epsilon\to 0^{+}}\epsilon,1-\epsilon\) respectively.
In order to exclude the possibility of other poles of the Nahm matrices in the region \(s\in(0,2)\), it is necessary that for all \(s\in(0,2)\)
\[\wp(s/\sqrt{2};g_{2},g_{3})+\frac{\alpha_{2}}{6}>0.\]
We know that (i) \(\wp\) takes its minimum at \(s=1\); (ii) that the minimum value is the most-positive root of the corresponding cubic \(4\wp^{3}-g_{2}\wp-g_{3}=0\); (iii) that this root is positive [6, §23.5]. Therefore there are no other poles in \((0,2)\). Further, as \(\alpha_{2}\neq 0\) has the same sign as \(\alpha\), we have \(\alpha_{2}>0\) and so \(a_{1}^{2}>0\). Therefore \(a_{1}\) is always real, and hence so are all the Flaschka variables, so that all the Nahm variables are real as desired.
The remaining condition required for valid Nahm data is that \(T_{i}(s)=T_{i}(2-s)^{T}\). The nature of the Weierstrass \(\wp\) is such that \(\wp((2-s)/\sqrt{2};g_{2},g_{3})=\wp(s/\sqrt{2};g_{2},g_{3})\), so we automatically have that \(a_{1}(2-s)=a_{1}(s)\), \(a_{0}(s)=a_{0}(2-s)\). Moreover, because of the change in the sign of the square root giving \(b_{1}\) at \(s=1\), we have that \(b_{1}(2-s)=-b_{1}(s)\). Taken together these ensure the desired symmetry of the Nahm matrices and we have a one-parameter family of new solutions.
As such, we have now proven the following theorem.
**Theorem 3.2**.: _Given \(\alpha\in[0,1]\), define_
\[\tau=\tau(\alpha)=i\frac{{}_{2}F_{1}(1/6,5/6,1;1-\alpha)}{{}_{2}F_{1}(1/6,5/6,1 ;\alpha)}.\]
_Solving_
\[\frac{1}{3}\alpha_{2}^{2}=\frac{1}{4}g_{2}(1,\tau),\quad\frac{1}{27}\alpha_{ 2}^{3}-2\beta^{2}=\frac{1}{8}g_{3}\left(1,\tau\right),\]
_with \(\operatorname{sgn}(\alpha_{2})=\operatorname{sgn}(\alpha)\) yields a monopole spectral curve with \(D_{6}\) symmetry_
\[\eta^{3}+\alpha_{2}\eta\zeta^{2}+\beta(\zeta^{6}-1)=0.\]
_Moreover, the Nahm data is given explicitly in terms of \(\wp\)-functions by (8) and (12)._
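The two transcendental conditions in Theorem 3.2 are readily solved numerically. The following sketch (ours; it uses the built-in Weierstrass invariants of the lattice \(\mathbb{Z}+\tau\mathbb{Z}\), with half-periods \(1/2,\tau/2\)) returns \((\alpha_{2},|\beta|)\) for a given \(\alpha\) and reproduces the special values discussed in the next subsection:

```
(* Numerically solve the conditions of Theorem 3.2 for a given alpha in (0,1). *)
tau[alpha_] := I Hypergeometric2F1[1/6, 5/6, 1, 1 - alpha]/Hypergeometric2F1[1/6, 5/6, 1, alpha];
curveData[alpha_?NumericQ] := Module[{g2, g3, a2, bsq},
  {g2, g3} = N@WeierstrassInvariants[{1/2, tau[alpha]/2}];   (* g2(1,tau), g3(1,tau) *)
  a2  = Re@Sqrt[3 g2/4];                 (* alpha2 > 0 since sgn(alpha2) = sgn(alpha) *)
  bsq = Re[(a2^3/27 - g3/8)/2];          (* beta^2 from the second condition          *)
  {a2, Sqrt[Max[bsq, 0]]}];

curveData[10.^-6]   (* ~ {Pi^2, 0}: the axially-symmetric monopole (alpha -> 0) *)
curveData[0.5]      (* ~ {Sqrt[3] Gamma[1/4]^4/(8 Pi), Gamma[1/4]^6/(32 (Sqrt[3] Pi)^(3/2))} *)
```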
### Distinguished Curves
Having solved for general \(\alpha\in[0,1]\) we now investigate the special values of \(\alpha=0,1/2,1\).
#### 3.3.1. \(\alpha=0^{+}\)
The limit \(\alpha\to 0\) corresponds to \(\tau\to+i\infty\), and we have using the asymptotic expansion of the Eisenstein series that \(g_{2}(1,\tau)\to\frac{4\pi^{4}}{3}\), \(g_{3}(1,\tau)\to\frac{8\pi^{6}}{27}\), so \(\alpha=0\) is indeed a solution with \(\beta=0\), \(\alpha_{2}=\pi^{2}\). This recreates the well known axially-symmetric monopole with spectral curve \(\eta(\eta^{2}+\pi^{2}\zeta^{2})=0\)[11, 12].
If we had \(\beta=0\) from the beginning (and so \(\Delta=0\), and for \(\alpha_{2}\neq 0\) then \(\alpha=0\)), we would have found a singular elliptic curve
\[4\tilde{u}^{3}-\frac{1}{3}\alpha_{2}^{2}\tilde{u}-\frac{1}{27}\alpha_{2}^{3}= 4\left(\tilde{u}+\frac{\alpha_{2}}{6}\right)^{2}\left(\tilde{u}-\frac{\alpha_{ 2}}{3}\right),\]
with solution to the corresponding ODE (using known integrals) given by
\[\tilde{u}=\frac{\alpha_{2}}{3}+\frac{\alpha_{2}}{2}\tan^{2}\left[\frac{\sqrt{ \alpha_{2}}}{2}(s-s_{0})\right],\quad a_{1}=\sqrt{\frac{\alpha_{2}}{2}}\sec \left[\frac{\sqrt{\alpha_{2}}}{2}(s-s_{0})\right].\]
We could then manufacture the right residue at \(s=0\) by having \(s_{0}=\frac{\pi}{2}\cdot\frac{2}{\sqrt{\alpha_{2}}}\). To get the correct periodicity, we would require that \(\frac{\pi}{2}=\frac{\sqrt{\alpha_{2}}}{2}(2-s_{0})\) and consequently that \(\alpha_{2}=\pi^{2}\) again giving the axially-symmetric monopole.
#### 3.3.2. \(\alpha=1^{-}\)
To get this limit, we use \(\tau(1^{-})=-1/\tau(0^{+})\), so
\[g_{2}(1,\tau(1^{-}))=g_{2}(1,-1/\tau(0^{+}))=\tau(0^{+})^{4}g_{2}(1,\tau(0^{+} ))=\frac{1}{\tau(1^{-})^{4}}\frac{4\pi^{4}}{3},\]
and likewise for \(g_{3}\). Solving gives
\[\alpha_{2}\sim-\left(\frac{\pi}{\tau}\right)^{2},\quad\beta\sim\pm\frac{i}{3 \sqrt{3}}\left(\frac{\pi}{\tau}\right)^{3},\]
or equivalently writing \(\tau=i\epsilon\) for \(0<\epsilon\ll 1\),
\[\alpha_{2}\sim 3\left(\frac{\pi}{\sqrt{3}\epsilon}\right)^{2},\quad\beta \sim\pm\left(\frac{\pi}{\sqrt{3}\epsilon}\right)^{3},\]
The corresponding spectral curve thus factorises as
\[0 =\eta^{3}+3\left(\frac{\mp\pi}{\sqrt{3}\epsilon}\right)^{2}\eta\zeta^{2}-\left(\frac{\mp\pi}{\sqrt{3}\epsilon}\right)^{3}(\zeta^{6}-1),\] \[=\left[\eta-\left(\frac{\mp\pi}{\sqrt{3}\epsilon}\right)(\zeta^{2}-1)\right]\left[\eta-\left(\frac{\mp\pi}{\sqrt{3}\epsilon}\right)(\omega\zeta^{2}-\omega^{2})\right]\left[\eta-\left(\frac{\mp\pi}{\sqrt{3}\epsilon}\right)(\omega^{2}\zeta^{2}-\omega)\right].\]
This corresponds to three well-separated 1-monopoles on the vertices of an equilateral triangle in the \(x,y\)-plane with side length \(\frac{\pi}{\epsilon}\)[13]. As \(\epsilon\) tends to zero these three vertices tend to the point \(\infty\), the singular degeneration to the cuspidal elliptic curve with \(\Delta=0\) and \(\alpha=1\).
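The factorisation into three 1-monopole curves can be checked symbolically (our verification, valid for any value of the common factor \(c=\mp\pi/(\sqrt{3}\epsilon)\)):

```
(* Verify the factorisation above; omega is a primitive cube root of unity. *)
omega = (-1 + I Sqrt[3])/2;
Simplify[(eta - c (zeta^2 - 1)) (eta - c (omega zeta^2 - omega^2)) (eta - c (omega^2 zeta^2 - omega))
   - (eta^3 + 3 c^2 eta zeta^2 - c^3 (zeta^6 - 1))]     (* 0 *)
```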
#### 3.3.3. \(\alpha=1/2\)
In this case \(\tau=i\), and the lattice is the square lattice. The values of \(g_{2},g_{3}\) for this lattice are known explicitly [13, 23.5.8], giving the equations
\[\frac{1}{3}\alpha_{2}^{2}=\frac{1}{4}\frac{\Gamma(1/4)^{8}}{16\pi^{2}},\quad \frac{1}{27}\alpha_{2}^{3}-2\beta^{2}=0\Rightarrow\alpha_{2}=\frac{\sqrt{3} \Gamma(1/4)^{4}}{8\pi},\quad\beta=\pm\frac{\Gamma(1/4)^{6}}{32(\sqrt{3}\pi)^{ 3/2}}.\]
**Remark 3.3**.: _The coefficients seen here are the same, up to a sign, as those of a distinguished monopole found in [10]. This is no accident, but arises because the square lattice is behind the distinguished "twisted figure-of-eight" monopole, as we show later in §4.2._
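The square-lattice values used here are easily checked numerically (our check):

```
(* tau = I: the lattice Z + I Z has g3 = 0 and g2 = Gamma[1/4]^8/(16 Pi^2). *)
{g2sq, g3sq} = N@WeierstrassInvariants[{1/2, I/2}];
Chop[{g2sq - Gamma[1/4]^8/(16 Pi^2), g3sq}]    (* {0, 0} *)
```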
### Scattering
To complete our understanding of these monopoles we discuss the corresponding scattering. This has already been described using the rational map approach in [14]. The \(D_{6}\)-symmetric monopoles described here correspond to the prismatic subgroup \(D_{3h}\) of \(\mathrm{O}(3)\): this confines the monopoles to lie in a plane, and thus any scattering observed must be planar. Note for each value of \(\alpha\neq 0\) there are two choices of \(\beta\) from the defining equations, and these two branches coalesce where \(\beta=0\Leftrightarrow\alpha=0\). This gives us a view of scattering from \(\alpha=1\) with three initially well-separated 1-monopoles with a choice of sign. They move inwards along the axes of symmetry of the corresponding equilateral triangle through \(\alpha=0\) where the 3-monopole instantaneously takes the configuration of the axially-symmetric monopole. Here we change branch (i.e. sign of \(\beta\)), and move back out to \(\alpha=1\) where now, because of the change of sign, these three well-separated 1-monopoles are
deflected by \(\pi/3\) radians. Note that as with the planar scattering of \(2\)-monopoles [1], because of symmetry one cannot associate a given in-going monopole with an out-going one but rather interpret the scattering process as the \(3\) in-going monopoles splitting into thirds which then recombine to form the out-going monopoles.
## 4. \(V_{4}\) Monopoles
### Solving for Nahm Data
For our curve with \(V_{4}\) symmetry the generators of the automorphism group are \((\zeta,\eta)\mapsto(-\zeta,-\eta)\) and \((\zeta,\eta)\mapsto(-1/\zeta,\eta/\zeta^{2})\); equivalently these correspond to the rotations \(\operatorname{diag}(-1,-1,1)\) and \(\operatorname{diag}(-1,1,-1)\) whose product is the earlier \(r\). If we impose further the involution \((\zeta,\eta)\mapsto(\zeta,-\eta)\) as a symmetry (the composition of inversion with the anti-holomorphic involution) we restrict to the case of the inversion-symmetric \(3\)-monopoles known in [10]. Here they solve for Nahm matrices given in terms of \(3\) real-valued functions \(f_{i}\) satisfying \(f_{1}^{\prime}=f_{2}f_{3}\) (and cyclic), with the corresponding spectral curve being
\[\eta^{3}+\eta\left[\left(f_{1}^{2}-f_{2}^{2}\right)(\zeta^{4}+1)+(2f_{1}^{2}+2f_{2}^{2}-4f_{3}^{2})\zeta^{2}\right]=0.\]
We find in Appendix C that the same procedure, now without imposing the extra symmetry, yields Nahm matrices in terms of \(3\) complex-valued functions satisfying7
Footnote 7: These equations are also found in [11, (3.57)] where they are attributed to [1]; they also appear as the \(x\)-independent solutions in the description of \(3\)-wave scattering [23, (17) p177]. We thank Pol Vanhaecke and Sasha Mikhailov for this latter reference.
\[\bar{f}_{1}{}^{\prime}=f_{2}f_{3}\quad\text{(and cyclic)}, \tag{14}\]
with the corresponding spectral curve being
\[\eta^{3}+\eta\left[a(\zeta^{4}+1)+b\zeta^{2}\right]+c\zeta(\zeta^{4}-1)=0, \tag{15}\]
where
\[a=\left|f_{1}\right|^{2}-\left|f_{2}\right|^{2},\quad b=2\left|f_{1}\right|^{2}+2\left|f_{2}\right|^{2}-4\left|f_{3}\right|^{2},\quad c=2(f_{1}f_{2}f_{3}-\bar{f}_{1}\bar{f}_{2}\bar{f}_{3}).\]
The Nahm matrices are given by
\[T_{1}=\begin{pmatrix}0&0&0\\ 0&0&-\bar{f}_{1}\\ 0&f_{1}&0\end{pmatrix},\qquad T_{2}=\begin{pmatrix}0&0&f_{2}\\ 0&0&0\\ -\bar{f}_{2}&0&0\end{pmatrix},\qquad T_{3}=\begin{pmatrix}0&-\bar{f}_{3}&0\\ f_{3}&0&0\\ 0&0&0\end{pmatrix}. \tag{16}\]
**Remark 4.1**.: _We observe that equations (14) come from the Poisson structure \(\left\{f_{i},\bar{f}_{j}\right\}=\delta_{ij}\), with Hamiltonian \(c/2=f_{1}f_{2}f_{3}-\bar{f}_{1}\bar{f}_{2}\bar{f}_{3}\). This complex extension of the Euler equations is integrable._
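As a quick symbolic check of this (our own sketch, not part of the original derivation), one can verify with SymPy that \(a\), \(b\) and \(c\) defined above are first integrals of the flow (14), treating \(f_{i}\) and \(g_{i}=\bar{f}_{i}\) as independent symbols.

```python
# SymPy sketch (ours): a, b, c are constants of the motion for (14).
import sympy as sp

f1, f2, f3, g1, g2, g3 = sp.symbols('f1 f2 f3 g1 g2 g3')
flow = {f1: g2*g3, f2: g3*g1, f3: g1*g2,   # f_i'  = conj(f_{i+1} f_{i+2})
        g1: f2*f3, g2: f3*f1, g3: f1*f2}   # g_i' = bar(f_i)' = f_{i+1} f_{i+2}

ddt = lambda q: sum(sp.diff(q, v)*rhs for v, rhs in flow.items())

a = f1*g1 - f2*g2
b = 2*f1*g1 + 2*f2*g2 - 4*f3*g3
c = 2*(f1*f2*f3 - g1*g2*g3)

print([sp.expand(ddt(q)) for q in (a, b, c)])   # expect [0, 0, 0]
```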
**Remark 4.2**.: _We have not fully used up the gauge symmetry available to us. Namely, if we conjugate the \(T_{i}\) by \(U=\operatorname{diag}(u_{1},u_{2},u_{3})\) where \(u_{j}=e^{i\phi_{j}}\) and \(\sum\phi_{j}=0\), we get_
\[f_{1}\mapsto u_{3}u_{2}^{-1}f_{1},\quad f_{2}\mapsto u_{1}u_{3}^{-1}f_{2}, \quad f_{3}\mapsto u_{2}u_{1}^{-1}f_{3},\]
_which preserves the form of the equations._
A consequence of this remark and the form (16) is that for the \(T_{i}\) to have residues which form an irreducible representation of \(\mathfrak{su}(2)\) it is sufficient for the \(f_{i}\) to have simple poles at \(s=0,2\).
In order to find a solution we note that \(a_{ij}=\left|f_{i}\right|^{2}-\left|f_{j}\right|^{2}\) and \(c=2(f_{1}f_{2}f_{3}-\bar{f}_{1}\bar{f}_{2}\bar{f}_{3})\) are now constants. As \(c\) is imaginary it will be useful to introduce \(\tilde{c}:=-ic\). Setting \(F=\left|f_{1}\right|\) we have
\[(F^{\prime})^{2} =\left\{\left[\left(f_{1}\bar{f}_{1}\right)^{1/2}\right]^{\prime} \right\}^{2}=\left\{\frac{1}{2}(f_{1}\bar{f}_{1})^{\prime}(f_{1}\bar{f}_{1})^{ -1/2}\right\}^{2}=\frac{1}{4}(f_{1}f_{2}f_{3}+\bar{f}_{1}\bar{f}_{2}\bar{f}_{3 })^{2}F^{-2},\] \[=\frac{1}{4}F^{-2}\left[(c/2)^{2}+4|f_{1}|^{2}|f_{2}|^{2}|f_{3}|^{ 2}\right],\] \[=\frac{1}{4}F^{-2}\left[(c/2)^{2}+4F^{2}(F^{2}-a_{12})(F^{2}+a_{ 31})\right],\]
and so with \(G=F^{2}\) we get
\[(G^{\prime})^{2}=\frac{1}{4}c^{2}+4G(G-a_{12})(G+a_{31}),\]
which then has solutions in terms of elliptic functions. In terms of the coefficients of the spectral curve we already have \(a_{12}=a\), and we can moreover find \(a_{31}=\frac{-1}{4}(b+2a)\), so we can rewrite the equation as
\[(\tilde{G}^{\prime})^{2}=4\tilde{G}^{3}-g_{2}\tilde{G}-g_{3}, \tag{17}\]
where \(\tilde{G}=G-\frac{b+6a}{12}\), \(g_{2}=a^{2}+\frac{b^{2}}{12}\), and \(g_{3}=\frac{b(b^{2}-36a^{2})}{216}+\frac{1}{4}\tilde{c}^{2}\). Then \(\tilde{G}=\wp\), the Weierstrass \(\wp\)-function. The \(j\)-invariant for this elliptic curve is
\[j=1728\frac{g_{2}^{3}}{g_{2}^{3}-27g_{3}^{2}}=\frac{(12a^{2}+b^{2})^{3}}{\left(a^{6}-\frac{1}{2}a^{4}b^{2}+\frac{1}{16}a^{2}b^{4}+\frac{9}{4}a^{2}b\tilde{c}^{2}-\frac{1}{16}b^{3}\tilde{c}^{2}-\frac{27}{16}\tilde{c}^{4}\right)}, \tag{18}\]
which is precisely that of the quotient of (15) by the \(V_{4}\) symmetry. We also note that the pull-back of the invariant differential of this quotient is exactly that needed when discussing the Ercolani-Sinha constraint.
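The completion to Weierstrass form is mechanical but error-prone, so the following SymPy sketch (ours, writing \(c=i\tilde{c}\)) verifies (17) with the stated \(g_{2},g_{3}\) and the \(j\)-invariant (18).

```python
# SymPy sketch (ours): check the Weierstrass reduction and the j-invariant (18).
import sympy as sp

a, b, ct, G, Gt = sp.symbols('a b ct G Gt')     # ct stands for \tilde c
a12, a31 = a, -(b + 2*a)/4
rhs = -ct**2/4 + 4*G*(G - a12)*(G + a31)        # (G')^2 with c = i*ct

g2 = a**2 + b**2/12
g3 = b*(b**2 - 36*a**2)/216 + ct**2/4
print(sp.expand(rhs.subs(G, Gt + (b + 6*a)/12) - (4*Gt**3 - g2*Gt - g3)))  # 0

j = 1728*g2**3/(g2**3 - 27*g3**2)
j18 = (12*a**2 + b**2)**3 / (a**6 - a**4*b**2/2 + a**2*b**4/16
        + 9*a**2*b*ct**2/4 - b**3*ct**2/16 - 27*ct**4/16)
print(sp.cancel(j - j18))                       # expect 0, matching Eq. (18)
```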
Before going on to solve this completely, let's recall what remains to be shown to get a monopole spectral curve (i.e. to have our Nahm matrices satisfy all the conditions to give Nahm data). We need to have that the \(\wp\)-function associated to the above elliptic curve has real period \(2\), but we will be able to impose this by tuning the coefficients. Also, as the right-hand side of
\[\wp=\left|f_{1}\right|^{2}-\frac{b+6a}{12} \tag{19}\]
is always real this requires \(\wp\) to be real and so to be taken on a rectangular or rhombic lattice. Also for reality we need that
\[G(s)=\wp(s)+\frac{b+6a}{12},\quad G(s)-a_{12}=\wp(s)+\frac{b-6a}{12},\quad G(s )+a_{31}=\wp(s)-\frac{b}{6}\]
are always positive. Once we have achieved this we will have regularity in the region \((0,2)\), and so get the right pole structure. The final condition is symmetry about \(s=1\), which is enforced on the \(\left|f_{j}\right|\) (because \(\left|f_{j}\right|\sim\sqrt{\wp}\)), and so the remaining Nahm constraint \(T_{j}(s)=T_{j}(2-s)^{T}\) becomes simply \(f_{j}(s)=-\bar{f}_{j}(2-s)\): that is we require \(\arg f_{j}(s)=\pm\pi-\arg f_{j}(2-s)\).
Indeed writing \(f_{j}=\left|f_{j}\right|e^{i\theta_{j}}\) we can work out the equations for the angles, using
\[f_{j}^{\prime}=\left(\left|f_{j}\right|^{\prime}+i\theta_{j}^{\prime}\left|f_{ j}\right|\right)e^{i\theta_{j}}=\left(\frac{\left|f_{j}\right|^{\prime}}{ \left|f_{j}\right|}+i\theta_{j}^{\prime}\right)f_{j}\Rightarrow\theta_{j}^{ \prime}=\frac{1}{i}\left[\frac{f_{j}^{\prime}}{f_{j}}-\frac{\left|f_{j} \right|^{\prime}}{\left|f_{j}\right|}\right]=\frac{-\tilde{c}}{4\left|f_{j} \right|^{2}}. \tag{20}\]
The \(\theta_{j}\) are thus strictly monotonic (unless \(\tilde{c}=0\), in which case they are constant), and symmetry about \(s=1\) of \(\left|f_{j}\right|\) then necessitates that \(\theta_{j}(s)-\theta_{j}(1)\) is antisymmetric about \(s=1\).
We also have that
\[\tilde{c}=4\left|f_{1}\right|\left|f_{2}\right|\left|f_{3}\right|\sin(\theta_{1}+ \theta_{2}+\theta_{3})=\sqrt{\tilde{c}^{2}+4(G^{\prime})^{2}}\sin(\theta_{1}+ \theta_{2}+\theta_{3}).\]
At \(s=1\) where \(G^{\prime}(s)=0\) we need \(\sin(\theta_{1}+\theta_{2}+\theta_{3})=1\), and by our gauge freedom we can choose \(\theta_{1}(1)=\pi/2=\theta_{2}(1)\) and so \(\theta_{3}(1)=-\pi/2\), thus enforcing our condition of symmetry about \(s=1\). We then see that the anti-symmetry of \(\theta_{j}(s)-\theta_{j}(1)\) about \(s=1\) enforces the remaining reality condition. We also note that, as \(|f_{j}(s)|^{2}=\wp(s)-c_{j}:=\wp(s)-\wp(v_{j})\) for appropriate \(v_{j}=\int_{\infty}^{c_{j}}\left[4u^{3}-g_{2}u-g_{3}\right]^{-1/2}du\), we have [1, (6.14.6)]
\[\int\frac{du}{\wp(u)-\wp(v)}=\frac{1}{\wp^{\prime}(v)}\left[2u\zeta(v)+\ln \frac{\sigma(u-v)}{\sigma(u+v)}\right], \tag{21}\]
allowing us to find the \(\theta_{j}(s)\) explicitly which is done in (22) in Appendix A.3.
It remains to fix the real period of the corresponding elliptic curve. We describe two methods. The first makes use of the Jacobi elliptic functions to express the lattice invariants in terms of complete elliptic integrals [1, SS18.9]. We explain this in Appendix A.4, where, by showing that we may fix the real period, we establish the following theorem.
**Theorem 4.3**.: _Given \(\alpha\in\mathbb{R}\), \(m\in[0,1]\), and \(\operatorname{sgn}=\pm 1\), define \(g_{2},g_{3}\) by \(g_{2}=12\left(K(m)^{2}/3\right)^{2}q_{1}(m)\), \(g_{3}=4\left(K(m)^{2}/3\right)^{3}(2m-1)q_{2}(m)\), where_
\[q_{1}(m)=\left\{\begin{array}{cc}1-m+m^{2}&\operatorname{sgn}=1,\\ 1-16m+16m^{2}&\operatorname{sgn}=-1,\end{array}\right.\quad q_{2}(m)=\left\{ \begin{array}{cc}(m-2)(m+1)&\operatorname{sgn}=1,\\ 2(32m^{2}-32m-1)&\operatorname{sgn}=-1.\end{array}\right.\]
_If \(m\) is such that \(g_{2}>0\) and the polynomial \((4-2\alpha)x^{3}-g_{2}x-g_{3}\) has a real root \(x_{*}\) with \(|x_{*}|<\sqrt{g_{2}/3}\) and \(\operatorname{sgn}(x_{*})=-\operatorname{sgn}(\alpha)\), then we may solve_
\[a^{2}+\frac{b^{2}}{12}=g_{2},\quad\frac{b(b^{2}-36a^{2})}{216}+\frac{\tilde{c }^{2}}{4}=g_{3}\]
_for \(a,b,\tilde{c}\in\mathbb{R}\). Then_
\[\eta^{3}+\eta\left[a(\zeta^{4}+1)+b\zeta^{2}\right]+i\tilde{c}\zeta(\zeta^{4}- 1)=0\]
_is a monopole spectral curve with \(V_{4}\) symmetry. Moreover the Nahm data is given explicitly in terms of elliptic functions by (16), (19) and (22)._
A second approach to fixing the correct real period to give Nahm data is to invert the \(j\)-invariant (18) as done in the earlier \(D_{6}\) case. Though we are unable to invert in terms of a single rational \(\alpha\) as with the \(D_{6}\)-symmetric monopole, we may use [1, (4)] which gives
\[\tau=i\left[\frac{2\sqrt{\pi}}{\Gamma(7/12)\Gamma(11/12)}\frac{{}_{2}F_{1}(1/1 2,5/12,1/2;x)}{{}_{2}F_{1}(1/12,5/12,1;1-x)}-1\right],\]
where \(x=1-\frac{1728}{j}=\frac{(1-2\alpha-3\gamma)^{2}}{(1+\gamma)^{3}}\), with \(\alpha=-\frac{27\tilde{c}^{2}}{b^{3}}\), \(\gamma=\frac{12a^{2}}{b^{2}}\). One may then fix the real period of the lattice, which will give solutions consistent with the definition of \(x\) for some range of the parameters \(\alpha,\gamma\). We investigate one particular restriction of this kind in SS4.2. We remark that [11] solved the associated Nahm data only for the (one-parameter) case \(\Delta=0\) in which the elliptic curve degenerates and has trigonometric solutions.
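For completeness, here is a minimal numerical sketch (ours, using mpmath) of the inversion formula above; the spot check \(\tau(x=0)=i\) corresponds to \(j=1728\), i.e. the square lattice.

```python
# mpmath sketch (ours) of the quoted j-inversion formula tau(x), x = 1 - 1728/j.
from mpmath import mp, hyp2f1, gamma, sqrt, pi, mpc

mp.dps = 30

def tau_from_x(x):
    pref = 2*sqrt(pi)/(gamma(mp.mpf(7)/12)*gamma(mp.mpf(11)/12))
    ratio = hyp2f1(mp.mpf(1)/12, mp.mpf(5)/12, mp.mpf(1)/2, x) \
            / hyp2f1(mp.mpf(1)/12, mp.mpf(5)/12, 1, 1 - x)
    return mpc(0, 1)*(pref*ratio - 1)

print(tau_from_x(0))      # expect i: j = 1728, the square lattice
print(tau_from_x(0.5))    # a point on the imaginary axis
```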
### \(D_{4}\) Monopoles
In [10] a subfamily of (15) with \(D_{4}\) symmetry was studied. To the existing \(V_{4}\) symmetries is appended the order-4 element \((\zeta,\eta)\mapsto(i\zeta,-i\eta)\) (corresponding to the composition of inversion with a rotation of \(\pi/2\) in the \(xy\)-plane). This symmetry then requires \(a=0\). By a dimension argument we expect the \(j\)-invariant inversion to yield a geodesic 1-parameter family for the enlarged symmetry group, and this was the case considered in [10] where the \(C_{4}\) quotient yields an elliptic curve. Placing this curve in our \(V_{4}\) family allows us a different approach to this family of curves. The restriction \(a=0\) means that \(\frac{1728}{j}=4\alpha(1-\alpha)\) with \(\alpha=-\frac{27\tilde{c}^{2}}{b^{3}}\) and we can then fix the real period via the same approach as for the \(D_{6}\) monopole. The equations we get are
\[\frac{b^{2}}{3}=\frac{1}{4}g_{2}(1,\tau),\quad\frac{b^{3}}{27}+2\tilde{c}^{2}= \frac{1}{8}g_{3}(1,\tau),\]
with these being consistent with the definition of \(\alpha\) provided \(\operatorname{sgn}(g_{3}(1,\tau))=\operatorname{sgn}(b)\operatorname{sgn}(1-2\alpha)\). To also have that \(\tilde{c}\) is real, we must have \(\operatorname{sgn}(b)=-\operatorname{sgn}(\alpha)\) and hence our consistency condition is \(\operatorname{sgn}(g_{3}(1,\tau))=-\operatorname{sgn}(\alpha)\operatorname{sgn}(1-2\alpha)\). We thus have solutions in the region \(\alpha\in(0,1/2)\) if \(\operatorname{sgn}(g_{3}(1,\tau))<0\), which requires \(\tau=-1/\tau(\alpha)\). We can extend this to \(\alpha\in(1/2,1)\) still taking \(\tau=-1/\tau(\alpha)\). Moreover, for \(\alpha<0\), we require \(g_{3}(1,\tau)>0\), which can be achieved taking \(\tau=\tau(\alpha)\). Finally, for \(\alpha>1\), we require \(\operatorname{sgn}(g_{3}(1,\tau))>0\), achievable with \(\tau=-1/\tau(\alpha)\). As such the parameter region in this case is the whole of \(\mathbb{R}\). A case-by-case consideration shows that \(G\), \(G-a_{12}\), \(G+a_{31}\) are always positive on the interval \([0,2]\), so we do indeed get Nahm data as desired.
As with the \(D_{6}\)-symmetric monopoles we may identify special values of \(\alpha\) and the curves they give. A similar analysis gives those found in [10], namely
* \(\alpha=\pm\infty\) gives the tetrahedrally-symmetric monopole,
* \(\alpha=0^{+},0^{-}\) gives three well-separated 1-monopoles and the axially-symmetric monopole respectively,
* \(\alpha=1/2\) gives the "twisted figure-of-eight" monopole. Note \(\alpha=1/2\) corresponds to the square lattice we saw as distinguished for the \(D_{6}\) monopole.
We additionally see the curve with \(\alpha=1\) as distinguished in our parametrisation, which gives the curve
\[\eta^{3}-\pi^{2}\eta\zeta^{2}\pm\frac{i}{\sqrt{27}}\pi^{3}\zeta(\zeta^{4}-1)=0.\]
In terms of the parameters \(a,\epsilon\) of [10], this curve is given by \(a=2\sqrt{2}\), \(\epsilon=-1\).
#### 4.2.1. Scattering
As such we can now understand our scattering as starting at \(\alpha=0^{+}\) with three well-separated 1-monopoles. As \(\alpha\) increases to \(\infty\) we have to pick a choice of \(\tilde{c}\) continuously (though there is no specific choice at \(\alpha=0^{+}\), as the map \(\zeta\mapsto-\zeta\) which swaps the choice of \(\tilde{c}\) is a symmetry of our well-separated configuration), and we pass through two distinguished curves, arriving at the tetrahedrally-symmetric monopole in one orientation. We match that to \(\alpha=-\infty\) taking the tetrahedrally-symmetric monopole with the same orientation there, allowing \(\alpha\) to then increase up to \(0^{-}\) where it takes the configuration of the axially-symmetric monopole. Here the two branches of \(\tilde{c}\) coalesce, we change branch and do the process in reverse.
## 5. Conclusion
In this letter we have begun systematising the classification of charge-3 monopole spectral curves with automorphisms, providing an exhaustive list of candidate curves; we nevertheless expect this list to contain curves that do not correspond to monopole spectral curves. We have also identified how one may use group theory to identify the subset of these candidates
that quotient to an elliptic curve. This was done because such curves are amenable to the construction of Nahm matrices in terms of elliptic functions using the procedure of [12]. Here the imposition of Hitchin's conditions (or equivalently those of Ercolani-Sinha) reduces to questions about the real periods of elliptic functions. Having provided new candidate spectral curves we solved for the Nahm data in two new cases, those of \(D_{6}\) and \(V_{4}\) symmetry. The latter led us to an integrable system (14) that may be viewed as the complexified Euler equations. Given Nahm matrices and the corresponding group action, what is not yet clear is how to methodically extract from the resulting coupled ODEs the relevant elliptic equations; providing such an understanding would simplify the construction of the solutions to Nahm's equations from the spectral data. This is the reason for our not treating the \(C_{2}\)-symmetric monopoles here: in this case we have 13 coupled ODEs with 7 conserved quantities.
One can generalise to higher charge several of the viewpoints put forward in this paper. We have seen that, compactifying mini-twistor space in \(\mathbb{P}^{1,1,2}\) and then looking at its image in \(\mathbb{P}^{3}\), a possible charge-\(k\) spectral curve is represented by the intersection of the cone and a degree-\(k\) hypersurface. There may be value in this viewpoint for providing a candidate list of monopole spectral curves in higher charge. Further, the methods used to calculate the group-signature pairs giving elliptic quotients in genus 4 extend to higher genus, and so may be used to provide candidate spectral curves potentially amenable to solutions in terms of elliptic functions at higher charges. At present, this data has not been computed in the LMFDB, and so a first step would be the tabulation of those results. In the event that such a computation produced too extensive a list, we suggest restricting to the case where \(\delta(g,G,c)=1,2\), for which we expect any corresponding monopole spectral curves to be either isolated points in the moduli space or to correspond to geodesic motion respectively, as we conjectured.
Finally, the geometry we introduced here may have applications for the understanding of spectral curves of hyperbolic monopoles. Spectral curves corresponding to hyperbolic monopoles live in the mini-twistor space of hyperbolic space, which is isomorphic to \(\mathbb{P}^{1}\times\mathbb{P}^{1}\), and specifically charge-\(k\) hyperbolic monopoles are bidegree-\((k,k)\) curves in this surface [1]. As \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) is isomorphic to the non-singular quadric in \(\mathbb{P}^{3}\), and bidegree-\((3,3)\) curves in this correspond to the other class of non-hyperelliptic curves classified by Wiman, our work highlights the potential of classifying certain hyperbolic monopole spectral curves.
## Appendix A Hypergeometric Functions and Lattice Invariants
We gather here some of the properties of elliptic and related functions used in the text, and prove the statements noted there. We follow the conventions of [13, Chapter 23]. First we recall that the Weierstrass \(\wp\)-function is defined by
\[\wp^{\prime 2}=4\wp^{3}-g_{2}\wp-g_{3}=4(\wp-e_{1})(\wp-e_{2})(\wp-e_{3}).\]
Here \(g_{k}=g_{k}(\omega,\omega^{\prime})\) are defined by the lattice \(\Lambda:=2\omega\mathbb{Z}+2\omega^{\prime}\mathbb{Z}\). Let \(\tau=\omega^{\prime}/\omega\). We have
1. \(g_{k}(\lambda\omega,\lambda\omega^{\prime})=\lambda^{-2k}g_{k}(\omega,\omega^ {\prime})\),
2. given \(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\in\operatorname{SL}(2,\mathbb{Z})\), \(g_{k}(1,(a\tau+b)(c\tau+d)^{-1})=(c\tau+d)^{2k}g_{k}(1,\tau)\),
3. \(\lim_{\operatorname{Im}\tau\to\infty}g_{2}(1,\tau)=\frac{4\pi^{4}}{3}\), \(\lim_{\operatorname{Im}\tau\to\infty}g_{3}(1,\tau)=\frac{8\pi^{6}}{27}\).
4. When \(\tau=i\), \(g_{2}(1,\tau)=\frac{\Gamma(1/4)^{8}}{256\pi^{2}}\), \(g_{3}(1,\tau)=0\).
5. When \(\tau=e^{2\pi i/3}\), \(g_{2}(1,\tau)=0\), \(g_{3}(1,\tau)=\frac{\Gamma(1/3)^{18}}{(2\pi)^{6}}\).
Our parametrization of the Nahm matrices requires us to know the reality properties of the \(\wp\)-function. We have [13, Theorem 3.16.2]
\[g_{2},g_{3}\in\mathbb{R}\Leftrightarrow\forall z\in\mathbb{C},\ \wp(\bar{z};g_{2},g_{3})= \overline{\wp(z;g_{2},g_{3})}\Leftrightarrow\Lambda=\overline{\Lambda}.\]
Lattices for which \(\Lambda=\overline{\Lambda}\) are called real lattices, and they fall into two classes: rectangular lattices (\(\omega\in\mathbb{R}\), \(\omega^{\prime}\in i\mathbb{R}\)), and rhombic lattices (\(\overline{\omega}=\omega^{\prime}\)). The rhombic lattices correspond to \(\tau\) being on the boundary of the fundamental domain of the \(\operatorname{SL}(2,\mathbb{Z})\) action on the upper half plane, while the rectangular lattices correspond to \(\tau\) on the imaginary axis with \(\operatorname{Im}(\tau)\geq 1\). When restricted to rectangular or rhombic lattices we can say more about the values of \(g_{2}(1,\tau)\) and \(g_{3}(1,\tau)\). This is done by relating the \(g_{k}\) to the roots \(e_{i}\) of the corresponding cubic equation by
\[g_{2}(1,\tau)=2(e_{1}^{2}+e_{2}^{2}+e_{3}^{2}),\quad g_{3}(1,\tau)=4e_{1}e_{2}e_{3}.\]
1. On a rectangular lattice we have \(e_{i}\in\mathbb{R}\) so \(g_{2}>0\); further, \(g_{3}>0\) if \(|\tau|>1\), \(g_{3}<0\) if \(|\tau|<1\).
2. On a rhombic lattice, \(e_{1}\in\mathbb{R}\), \(e_{2}=\bar{e}_{3}\), and \(\operatorname{sgn}(e_{1})=\operatorname{sgn}(g_{3})\).
### Properties of \(\tau(\alpha)\)
In order to use the process of \(j\)-invariant inversion to impose the correct periodicity constraints, and to give the limiting behaviours noted in the text enabling us to find certain distinguished monopoles, we require the properties of
\[\tau=\tau(\alpha)=i\frac{{}_{2}F_{1}(1/6,5/6,1;1-\alpha)}{{}_{2}F_{1}(1/6,5/6, 1;\alpha)}.\]
This is multi-valued when \(\alpha<0\)[1, 15.2.3], with a principal branch \(\tau_{p}\) and second branch \(\tau_{p}+1\), but for our purposes this difference will not be important. The specific properties we require are that
1. \(\forall\alpha\in(0,1)\), \(\tau(\alpha)\in i\mathbb{R}_{>0}\),
2. \(\tau(0^{+})=+i\infty\), \(\tau(1/2)=i\), \(\tau(1^{-})=0\),
3. \(\forall\alpha<0\), \(\operatorname{Re}(\tau(\alpha))\equiv 1/2\) mod \(1\),
4. \(\tau(-\infty)=e^{2\pi i/3}\), \(\tau(0^{-})=\frac{1}{2}+i\infty\).
Evaluated at the specific \(\tau(\alpha)\) above we find that
\[\operatorname{sgn}(g_{3}(1,\tau(\alpha)))=\left\{\begin{array}{cc}1&\alpha< 1/2,\\ -1&\alpha\in(1/2,1).\end{array}\right.\]
Here we provide the necessary definitions and proofs.
We can understand the behaviour of \(\tau\) using known results about hypergeometric functions (see for example [1, SS15]). First, in the region \(\alpha\in(0,1)\), we may use the series expression for \({}_{2}F_{1}(a,b,c;z)\) when \(|z|<1\):
\[{}_{2}F_{1}(a,b,c;z)=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}}\frac{z^ {n}}{n!},\]
where \((a)_{n}\) is the rising Pochhammer symbol
\[(a)_{n}=\left\{\begin{array}{cc}1&n=0,\\ a(a+1)\ldots(a+n-1)&n\geq 1.\end{array}\right.\]
This means we have (i) that for all \(\alpha\in(0,1)\), \(\tau(\alpha)\in i\mathbb{R}_{>0}\). This is important as it makes the lattice rectangular, which forces the Weierstrass \(\wp\)-function to be real on the real axis
[DLMF, SS23.5]. Moreover, as \({}_{2}F_{1}(a,b,c;z)\) is increasing in \(z\in(0,1)\), \(\operatorname{Im}\tau(\alpha)\) is strictly decreasing in \(\alpha\). We can calculate the limits to be
\[\tau(0^{+})=+i\infty,\quad\tau(1^{-})=0,\]
so giving (ii). We may use [1, 15.3.10] which says that when \(|1-z|<1\), \(|\arg(1-z)|<\pi\),
\[{}_{2}F_{1}(a,b,a+b;z)=\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\sum_{n=0}^{ \infty}\frac{(a)_{n}(b)_{n}}{(n!)^{2}}\left[2\psi(n+1)-\psi(a+n)-\psi(n+b)-\log (1-z)\right](1-z)^{n},\]
where \(\psi\) is the digamma function, to understand exactly this limiting behaviour, namely that the divergence is logarithmic. We can also highlight a special value in this region, namely \(\tau(1/2)=i\).
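These statements on \((0,1)\) are easy to confirm numerically; the following is a small mpmath sketch of ours (not part of the original argument), restricted to \(\alpha\in(0,1)\) where both hypergeometric arguments lie in \((0,1)\) and no branch issues arise.

```python
# mpmath sketch (ours): tau(alpha) on (0,1) is purely imaginary, decreasing,
# with tau(1/2) = i, large Im near alpha = 0+ and small Im near alpha = 1-.
from mpmath import mp, hyp2f1, mpf

mp.dps = 25
F = lambda z: hyp2f1(mpf(1)/6, mpf(5)/6, 1, z)
tau = lambda alpha: 1j * F(1 - alpha) / F(alpha)

for alpha in (mpf('0.001'), mpf('0.25'), mpf('0.5'), mpf('0.75'), mpf('0.999')):
    print(alpha, tau(alpha))
```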
For \(\alpha\not\in[0,1]\) we no longer have that \(\tau\) lies on the imaginary axis, and we would thus need to get a rhombic lattice (that is \(\operatorname{Re}\tau=1/2\)) for the reality of \(\wp\). Numerical tests suggest that while this happens for \(\alpha<0\), for \(\alpha>1\) we instead get \(\operatorname{Re}(-1/\tau)=1/2\). Indeed we may use [1, 15.10.29] to say8
Footnote 8: We are very grateful to Adri Olde Daalhuis for this argument.
\[{}_{2}F_{1}(1/6,5/6,1;1-\alpha) =e^{5\pi i/6}\frac{\Gamma(1)\Gamma(1/6)}{\Gamma(1)\Gamma(1/6)}{}_ {2}F_{1}(1/6,5/6,1;\alpha)\] \[\quad+e^{-\pi i/6}\frac{\Gamma(1)\Gamma(1/6)}{\Gamma(5/6)\Gamma(1 /3)}{\alpha^{-1/6}}{}_{2}F_{1}(1/6,1/6,1/3;1/\alpha),\] \[=e^{5\pi i/6}{}_{2}F_{1}(1/6,5/6,1;\alpha)\] \[\quad+(-\alpha)^{-1/6}\frac{\Gamma(1/6)}{\Gamma(5/6)\Gamma(1/3)}{ }_{2}F_{1}(1/6,1/6,1/3;1/\alpha),\]
and hence when \(\alpha<0\) (and taking the principal branch of the hypergeometric function) we get
\[\tau(\alpha)=i\left[e^{5\pi i/6}+T(\alpha)\right]\]
with
\[T(\alpha)=(-\alpha)^{-1/6}\frac{\Gamma(1/6)}{\Gamma(5/6)\Gamma(1/3)}\frac{{}_ {2}F_{1}(1/6,1/6,1/3;1/\alpha)}{{}_{2}F_{1}(1/6,5/6,1;\alpha)}\in\mathbb{R}.\]
This means \(\operatorname{Re}(\tau(\alpha))\equiv 1/2\mod 1\), which yields (iii).
To get the asymptotics as \(\alpha\to-\infty\), we use [1, 15.3.7]
\[{}_{2}F_{1}(a,b,c;z)=\frac{\Gamma(c)\Gamma(b-a)}{\Gamma(b)\Gamma(c-a)}(-z)^{-a}{}_{2}F_{1}(a,a+1-c,a-b+1;z^{-1})+\frac{\Gamma(c)\Gamma(a-b)}{\Gamma(a)\Gamma(c-b)}(-z)^{-b}{}_{2}F_{1}(b,b+1-c,b-a+1;z^{-1}).\]
Taking \(\alpha=-\epsilon^{-1}\), this gives that as \(\epsilon\to 0^{+}\),
\[{}_{2}F_{1}(a,b,c;-\epsilon^{-1})\sim\frac{\Gamma(2/3)}{\Gamma(5/6)^{2}} \epsilon^{1/6},\quad{}_{2}F_{1}(a,b,c;1+\epsilon^{-1})\sim\frac{\Gamma(2/3)}{ \Gamma(5/6)^{2}}(-\epsilon)^{1/6},\]
and so \(\tau(-\infty)=e^{2\pi i/3}=\frac{-1}{2}+\frac{i\sqrt{3}}{2}\). To get the remaining asymptotics of (iv), as \(\alpha\to 0^{-}\) we write \(\alpha=-\epsilon\). Then
\[{}_{2}F_{1}(a,b,c;-\epsilon)\sim 1,\quad{}_{2}F_{1}(a,b,c;1+\epsilon)\sim\frac{- \Gamma(1)}{\Gamma(1/6)\Gamma(5/6)}\log(-\epsilon)=\frac{-1}{2\pi}(i\pi+\log \epsilon),\]
and so \(\tau(0^{-})=\frac{1}{2}+i\infty\).
To get the asymptotics as \(\alpha\to 1^{+}\) we recognise that \(\tau(1-\alpha)=-1/\tau(\alpha)\) and so \(-1/\tau(1^{+})=\frac{1}{2}+i\infty\). Finally to get the asymptotics as \(\alpha\to\infty\) we do the same, so \(-1/\tau(\infty)=\frac{1}{2}+\frac{i\sqrt{3}}{2}\).
### Check of Consistency
We see that \(\alpha_{2}\) must be the same sign as \(\alpha\) to get \(\beta\in\mathbb{R}\). Moreover, as \(g_{2}(1,\tau)>0\) because \(\alpha_{2}\) is real, we can check that
\[\frac{1}{2}\left[\frac{1}{27}\alpha_{2}^{3}-\frac{1}{8}g_{3}(1, \tau)\right] =\frac{1}{2}\left[\frac{\operatorname{sgn}(\alpha_{2})}{27}\left( 3g_{2}(1,\tau)/4\right)^{3/2}-\frac{1}{8}g_{3}(1,\tau)\right],\] \[=\frac{\operatorname{sgn}(\alpha_{2})g_{2}(1,\tau)^{3/2}}{16 \sqrt{27}}\left[1-\frac{\operatorname{sgn}(g_{3}(1,\tau))}{\operatorname{ sgn}(\alpha_{2})}\sqrt{\frac{27g_{3}^{2}}{g_{2}^{3}}}\right],\] \[=\frac{\operatorname{sgn}(\alpha_{2})\left(4\alpha_{2}^{2}/3 \right)^{3/2}}{16\sqrt{27}}\left[1-\frac{\operatorname{sgn}(g_{3}(1,\tau))}{ \operatorname{sgn}(\alpha_{2})}\left(1-\frac{1728}{j}\right)^{1/2}\right],\] \[=\frac{\alpha_{2}^{3}}{2\times 27}\left[1-\frac{\operatorname{sgn}(g_ {3}(1,\tau))}{\operatorname{sgn}(\alpha_{2})}\left(1-4\alpha(1-\alpha)\right) ^{1/2}\right],\] \[=\frac{\alpha_{2}^{3}}{2\times 27}\left[1-\frac{\operatorname{sgn}(g_ {3}(1,\tau))}{\operatorname{sgn}(\alpha_{2})\operatorname{sgn}(1-2\alpha)}(1- 2\alpha)\right],\] \[=\frac{\alpha\alpha_{2}^{3}}{27}\quad\text{ if }\quad \operatorname{sgn}(g_{3}(1,\tau))=\operatorname{sgn}(\alpha_{2})\operatorname{ sgn}(1-2\alpha).\]
Hence the two equations are consistent, provided the stated sign condition holds, or if \(\alpha=0\).
### The Theta Integration
We note that \(|f_{j}(s)|^{2}=\wp(s)-c_{j}:=\wp(s)-\wp(v_{j})\) doesn't fix the sign of \(v_{j}\) for \(\wp(\pm v_{j})=c_{j}\). We fix the sign as follows. First observe that
\[(\tilde{G}^{\prime}(s))^{2}=(G^{\prime}(s))^{2}=\frac{1}{4}c^{2}+4[\wp(s)- \wp(v_{1})][\wp(s)-\wp(v_{2})][\wp(s)-\wp(v_{3})],\]
and so \(\wp^{\prime\,2}(v_{i})=c^{2}/4\); we fix the sign so that \(\wp^{\prime}(v_{i})=c/2=i\tilde{c}/2\). Further consider the elliptic function \(\wp^{\prime}(s)-c/2\) with three zeros (at \(s\in\{v_{1},v_{2},v_{3}\}\)) and three poles (at \(s=0\)). Then with the base of the Abel-Jacobi map at \(s=0\) (as is standard) we have that \(\sum_{i}v_{i}\) is a lattice point. Also observe that
\[\zeta(v_{i})+\zeta(v_{j})=\zeta(v_{i}+v_{j}).\]
We find from (20,21) that
\[\theta_{i}(s):=\theta_{i}(1)+i\left[s\zeta(v_{i})+\frac{1}{2}\ln\frac{\sigma(s-v_{i})\sigma(1+v_{i})}{\sigma(s+v_{i})\sigma(1-v_{i})}\right], \tag{22}\]
where \(\theta_{i}(1)\) is a constant of integration and chosen as described in the text. Then \(\theta_{i}(-s)-\theta_{i}(1)=-\left[\theta_{i}(s)-\theta_{i}(1)\right]\) is anti-symmetric as required. Using the Legendre relation we find that \(\sin(\theta_{1}+\theta_{2}+\theta_{3})\) is periodic in \(s\) as required for consistency.
### Restrictions on Elliptic Function Parameters
Here we prove Theorem 4.3. Given the discriminant \(\Delta\) for the cubic defining \(\wp\), [1, SS18.9] gives equations for the lattice invariants in terms of complete elliptic integrals. With our earlier definitions \(g_{2}=12\left(K(m)^{2}/3\right)^{2}q_{1}(m)\) and \(g_{3}=4\left(K(m)^{2}/3\right)^{3}(2m-1)q_{2}(m)\), and with \(\operatorname{sgn}=\operatorname{sgn}(\Delta)\), we have for \(\Delta>0\)
\[g_{2}=12\left(\frac{K^{2}}{3\omega_{1}}\right)^{2}\left(1-m+m^{2}\right),\quad g _{3}=4\left(\frac{K^{2}}{3\omega_{1}}\right)^{3}(m-2)(2m-1)(m+1);\]
whereas for \(\Delta<0\)
\[g_{2}=12\left(\frac{K^{2}}{3\omega_{2}}\right)^{2}\left(1-16m+16m^{2}\right), \quad g_{3}=8\left(\frac{K^{2}}{3\omega_{2}}\right)^{3}(2m-1)(32m^{2}-32m-1).\]
Here \(m=k^{2}\in(0,1)\) is the argument of \(K\); the underlying lattice has periods \(2\omega\), \(2\omega^{\prime}\), with \(\omega_{1}=\omega\) and \(\omega_{2}=\omega+\omega^{\prime}\). Fixing \(2\) as a period of the lattice, and that the lattice is real, sets \(\omega_{1}=1\) for \(\Delta>0\) and \(\omega_{2}=1\) for \(\Delta<0\). Observe that for \(\operatorname{sgn}(\Delta)=\pm 1\), \(g_{2}(m)\) takes its minimum value at \(m=1/2\), while for \(m\in(0,1/2)\) we have \(g_{3}(m)>0\).
Our elliptic curve gave the equations \(a^{2}+\frac{b^{2}}{12}=g_{2}\), \(\frac{b(b^{2}-36a^{2})}{216}+\frac{1}{4}\tilde{c}^{2}=g_{3}\). These equations are undetermined, but we may substitute for \(a^{2}\) and take \(\alpha=-27\tilde{c}^{2}/b^{3}\) to find
\[(4-2\alpha)\tilde{b}^{3}-g_{2}\tilde{b}-g_{3}=0, \tag{23}\]
where \(\tilde{b}=\frac{b}{6}\). The discriminant of this cubic is
\[\Delta_{\alpha}(m)=4(4-2\alpha)g_{2}^{3}-27(4-2\alpha)^{2}g_{3}^{2}=4(4-2 \alpha)\left[g_{2}^{3}-27(1-\alpha/2)g_{3}^{2}\right].\]
Note \(\Delta_{0}=\Delta\). For a given generic value of \(\alpha\) in some region we may solve (23), determining \(b\), \(\tilde{c}\) and \(a\) in turn.
In order to get Nahm data, we require that \(b\), \(\tilde{c}\), and \(a\) are real. We know that this cubic has real coefficients, and so there will always be a real root of the cubic. To get reality of \(\tilde{c}\), we need that this real root \(\tilde{b}\) satisfies \(\operatorname{sgn}(\tilde{b})=-\operatorname{sgn}(\alpha)\), and for reality of \(a\) we need \(\left|\tilde{b}\right|\leq\sqrt{g_{2}/3}\). Necessary conditions to find such solutions are as follows.
First consider \(\Delta>0\). Then \(g_{2}>0\) and \(g_{3}\) is monotonically decreasing for \(m\in(0,1)\) with \(\operatorname{sgn}(g_{3})=-\operatorname{sgn}(m-1/2)\). We have the following properties:
1. If \(\alpha>2\) the discriminant \(\Delta_{\alpha}(m)<0\). Then (23) has one real root whose sign is opposite that of \(g_{3}\). Now \(\operatorname{sgn}(\tilde{b})=-\operatorname{sgn}(\alpha)<0\) is opposite that of \(g_{3}\); hence we require \(g_{3}>0\) and so \(m\in(0,1/2)\).
2. For \(\alpha\in(0,2)\) the discriminant \(\Delta_{\alpha}(m)>0\) upon comparison with \(\Delta=g_{2}^{3}-27g_{3}^{2}>0\). Then, because the sum of the roots is zero, they cannot all be the same sign.
3. When \(\alpha<0\), from the derivative of the cubic we know it will have a local maximum and minimum at \(\tilde{b}=\pm\sqrt{\frac{g_{2}}{3(4-2\alpha)}}\); it is the minimum when the sign is positive. Recalling that we require a root with sign \(\operatorname{sgn}\tilde{b}=-\operatorname{sgn}\alpha=1\), the local minimum must be non-positive, and the value at this \(\tilde{b}\) is \(-\frac{2}{3}\tilde{b}g_{2}-g_{3}\). As the value at this minimum is monotonically increasing for \(m>1/2\), and negative at \(m=1/2\), the value at the minimum is negative for all \(m<m_{*}\), the value for which it is zero. Solving, one gets the condition \(\Delta_{\alpha}(m_{*})=0\), taking the root greater than \(1/2\).
Therefore necessary conditions for a real root of the right sign to exist for \(\Delta>0\) are that
* if \(\alpha>2\), \(m<1/2\),
* if \(\alpha\in(0,2)\), any \(m\) is valid
* if \(\alpha<0\), \(m<m_{2}(\alpha)\), where \(m_{2}\) is the root of \(\Delta_{\alpha}(m)=0\) in \((1/2,1)\).
To get Nahm data we require that this real root is bounded in magnitude by \(\sqrt{g_{2}/3}\), with the case that it is equal corresponding to \(a=0\), i.e. to the \(D_{4}\) monopoles of [10]. Figures showing these parameter regions are given in Figure 4.
In the case \(\Delta<0\), in order to get real roots of the right sign one analogously gets restrictions on \(m\) relative to \(\alpha\) such that
* if \(\alpha<0\), \(m<1/2\),
* if \(\alpha\in(0,2)\), \(m>1/2\) or \(m<m_{1}(\alpha)\), defined to be the root \(<1/2\) of the polynomial \(\Delta_{\alpha}(m)=0\).
* if \(\alpha>2\), \(m<m_{2}(\alpha)\), now defined to be the root \(>1/2\) of the polynomial \(\Delta_{\alpha}(m)=0\).
Fixing the size of the root in this case requires more work, complicated by the fact that \(g_{2}\) is positive only if \(|m-1/2|>\sqrt{3}/4\). Using explicit formulas for the roots \(\tilde{b}\) from Cardano's formula one can obtain explicit bounds, but we omit these here. In practice, when using this approach to plot monopoles, numerical methods can be used to find the appropriate \(m\) region for a given \(\alpha\), as done to generate Figure 4.
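For concreteness, the following numerical sketch (ours) implements the \(\Delta>0\) recipe end-to-end: choose \(\alpha\) and \(m\), build \(g_{2},g_{3}\) from \(K(m)\), solve the cubic (23) for an admissible root, and recover \(a\), \(b\), \(\tilde{c}\). The sample values \(\alpha=-1\), \(m=0.3\) are an arbitrary admissible choice.

```python
# Numerical sketch (ours) of the Theorem 4.3 recipe for Delta > 0.
import numpy as np
from mpmath import ellipk

alpha, m = -1.0, 0.3
K2 = float(ellipk(m))**2
g2 = 12*(K2/3)**2*(1 - m + m**2)
g3 = 4*(K2/3)**3*(2*m - 1)*(m - 2)*(m + 1)

roots = np.roots([4 - 2*alpha, 0.0, -g2, -g3])          # cubic (23) in b/6
real = [r.real for r in roots if abs(r.imag) < 1e-9]
admissible = [r for r in real
              if np.sign(r) == -np.sign(alpha) and abs(r) <= np.sqrt(g2/3)]
bt = admissible[0]
b = 6*bt
ct2 = -alpha*b**3/27        # tilde-c^2, non-negative for an admissible root
a2 = g2 - 3*bt**2           # a^2, non-negative by the bound on |b/6|
print(b, np.sqrt(ct2), np.sqrt(a2))
```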
Note that, for certain admissible \(\alpha,m\) there may be two possible monopoles because two roots of the cubic defining \(b\) satisfy the required conditions. Numerical investigations indicate that this phenomenon only occurs for \(\Delta>0\). We plot two examples of this, seen in Figure 5, to investigate the difference in the associated monopoles.
## Appendix B Initial Computation of \(D_{6}\) Nahm Matrices
Here we will take the procedure introduced in [13] and developed in [11, 12, 13] and apply this to the \(C_{k}\) symmetric monopole. We describe the procedure for general \(k\) before applying this in the \(k=3\) context. The steps of the procedure are:
1. Take the input polynomials to be \(Q_{i}=\zeta_{0}^{i}\zeta_{1}^{i},i=1,\ldots,k\) and \(Q_{k+1}=\zeta_{0}^{k}-\zeta_{1}^{k}\).
2. Construct the invariant vectors \((\rho_{i}),(S_{i}^{(j)})\), and scale them so they are all anti-Hermitian. The degree of the input polynomial \(d\) determines how many direct summands of the \((d+1)\)-dimensional irreducible representation space of \(SO(3)\) there are in \(\mathbb{R}^{3}\otimes\mathfrak{gl}(k)\) when decomposed into irreducibles. There are \(3k-2\) invariant \(S\)-vectors and so variables \(y_{j}\) associated with them; together with \((\rho_{i})\) and the associated variable \(x\) we have \(3k-1\) variables.
3. Calculate the (now anti-Hermitian) \(T_{i}\). Note the variables \(x,y_{j}\) are now always real-valued.
4. Diagonalise the matrix \(T_{3}\) with a unitary matrix \(U\) whose columns are the normalised eigenvectors of \(T_{3}\), that is construct \(U^{-1}T_{3}U\). As \(T_{3}\) is anti-Hermitian and linear in the invariant vectors, the diagonal entries, which are the eigenvalues, will be pure imaginary and linear in \(\{x,y_{j}\}\).
Fig. 4: Valid parameter regions for \(V_{4}\) monopoles, with the subset corresponding to \(D_{4}\) monopoles highlighted.
5. Now from [1], conjugating by the same unitary matrix will give \[U^{-1}(T_{1}+iT_{2})U=\sum_{j=1}^{k}\alpha_{j}E_{j,j+1},\quad U^{-1}(T_{1}-iT_{2} )U=\sum_{j=1}^{k}-\bar{\alpha}_{j}E_{j+1,j},\] for some \(\alpha_{1},\ldots\alpha_{n}\in\mathbb{C}\).
6. Writing \(\alpha_{j}=r_{j}e^{i\phi_{j}}\) (as generically \(\alpha_{j}\neq 0\)) and solving \[\phi_{j}+\theta_{j+1}-\theta_{j}=0,j=1,\ldots k-1,\quad\sum_{j}\theta_{j}=0,\]
for \(\theta_{1},\ldots,\theta_{k}\in\mathbb{R}\), then conjugating by the unitary matrix \(D=\operatorname{diag}(e^{i\theta_{1}},\ldots,e^{i\theta_{k}})\) preserves the form of \(T_{3}\), but acts to make each \(\alpha_{j}\) real for \(j=1,\ldots,k-1\) (as it multiplies \(\alpha_{j}\) by \(e^{i(\theta_{j+1}-\theta_{j})}\)). The effect on \(\alpha_{k}\) is to multiply this by \(e^{i(\theta_{1}-\theta_{k})}=e^{i(\phi_{1}+\cdots+\phi_{k-1})}=\prod_{j=1}^{k -1}(\alpha_{j}/r_{j})\). After quotienting by this action the number of independent variables we have is \(3k-2+1-(k-1)=2k\).
We now apply the algorithm in the case \(k=3\). This yields Nahm matrices
\[T_{1} =\left(\begin{array}{ccc}0&iy_{0}&iy_{1}+y_{5}+iy_{6}\\ iy_{0}&0&-2x-y_{2}+iy_{3}\\ iy_{1}-y_{5}+iy_{6}&2x+y_{2}+iy_{3}&0\end{array}\right),\] \[T_{2} =\left(\begin{array}{ccc}iy_{0}&0&2x+y_{2}-iy_{3}\\ 0&-iy_{0}&iy_{1}+y_{5}+iy_{6}\\ -2x-y_{2}-iy_{3}&iy_{1}-y_{5}+iy_{6}&0\end{array}\right),\] \[T_{3} =\left(\begin{array}{ccc}iy_{1}+iy_{4}-\frac{2}{3}iy_{6}&-2x+2y _{2}&0\\ 2x-2y_{2}&iy_{1}+iy_{4}-\frac{2}{3}iy_{6}&0\\ 0&0&-2iy_{1}+iy_{4}+\frac{4}{3}iy_{6}\end{array}\right).\]
with accompanying ODEs in 8 real variables
\[x^{\prime} =2x^{2}-\frac{1}{3}y_{0}^{2}-\frac{5}{6}y_{1}^{2}-\frac{1}{2}y_{ 2}^{2}+\frac{1}{6}y_{3}^{2}+\frac{1}{6}y_{5}^{2}+\frac{5}{6}y_{6}^{2},\] \[y_{0}^{\prime} =-4xy_{0}+4y_{0}y_{2},\] \[y_{1}^{\prime} =-4xy_{1}-\frac{16}{5}y_{1}y_{2}-\frac{6}{5}y_{3}y_{5}-\frac{6}{5 }y_{2}y_{6},\] \[y_{2}^{\prime} =\frac{2}{3}y_{0}^{2}-\frac{4}{3}y_{1}^{2}-2xy_{2}-y_{2}^{2}- \frac{1}{3}y_{3}^{2}-\frac{1}{3}y_{5}^{2}-y_{1}y_{6}+\frac{1}{3}y_{6}^{2},\] \[y_{3}^{\prime} =2xy_{3}-2y_{2}y_{3}-3y_{1}y_{5}+2y_{5}y_{6},\] \[y_{4}^{\prime} =0,\] \[y_{5}^{\prime} =-3y_{1}y_{3}+2xy_{5}-2y_{2}y_{5}+2y_{3}y_{6},\] \[y_{6}^{\prime} =-\frac{9}{5}y_{1}y_{2}+\frac{6}{5}y_{3}y_{5}+6xy_{6}+\frac{6}{5}y _{2}y_{6}.\]
The associated spectral curve is
\[\eta^{3}+\alpha_{1}\eta^{2}\zeta+\alpha_{2}\eta\zeta^{2}+\alpha_{3}\zeta^{3}+ \beta\zeta^{6}-\bar{\beta}=0\]
where
\[\alpha_{1} =-6y_{4},\] \[\alpha_{2} =4y_{0}^{2}-8y_{1}^{2}+48xy_{2}-12y_{2}^{2}+4y_{3}^{2}+12y_{4}^{2}+4y_{5}^{2}+24y_{1}y_{6}-\frac{4}{3}y_{6}^{2},\] \[\alpha_{3} =-160x^{2}y_{1}+16y_{0}^{2}y_{1}+8y_{1}^{3}+128xy_{1}y_{2}-40y_{1}y_{2}^{2}-8y_{1}y_{3}^{2}-8y_{0}^{2}y_{4}\] \[\quad+16y_{1}^{2}y_{4}-96xy_{2}y_{4}+24y_{2}^{2}y_{4}-8y_{3}^{2}y_{4}-8y_{4}^{3}-32xy_{3}y_{5}+32y_{2}y_{3}y_{5}\] \[\quad-8y_{1}y_{5}^{2}-8y_{4}y_{5}^{2}-\frac{32}{3}y_{0}^{2}y_{6}-\frac{128}{3}y_{1}^{2}y_{6}-32xy_{2}y_{6}+80y_{2}^{2}y_{6}+\frac{16}{3}y_{3}^{2}y_{6}\] \[\quad-48y_{1}y_{4}y_{6}+\frac{16}{3}y_{5}^{2}y_{6}+24y_{1}y_{6}^{2}+\frac{8}{3}y_{4}y_{6}^{2}+\frac{16}{27}y_{6}^{3},\] \[\beta =-16x^{2}y_{0}+4y_{0}y_{1}^{2}-16xy_{0}y_{2}-4y_{0}y_{2}^{2}+8iy_{0}y_{1}y_{3}-4y_{0}y_{3}^{2}-16ixy_{0}y_{5}\] \[\quad-8iy_{0}y_{2}y_{5}+4y_{0}y_{5}^{2}+8y_{0}y_{1}y_{6}+8iy_{0}y_{3}y_{6}+4y_{0}y_{6}^{2}.\]
In order to make the variables real we have imposed the anti-Hermiticity condition required of the Nahm matrices at the beginning, by making the invariant vectors corresponding to each variable anti-Hermitian.
We may consistently set \(y_{3}=0=y_{5}\), which we may view as using the conjugation action of diagonal matrices \(\operatorname{diag}(e^{i\theta_{1}},e^{i\theta_{2}},e^{i\theta_{3}})\), \(\theta_{1}+\theta_{2}+\theta_{3}=0\). This leaves us with the \(2\times 3=6\) real variables we would expect to have from the corresponding Toda. Note that because \(\alpha_{1}^{\prime}=0\), the centre of mass of the Toda system is already fixed. Moreover, we may centre to consistently set \(y_{4}=0\), and so we now have the equations in the remaining 5 variables as
\[x^{\prime} =2x^{2}-\frac{1}{3}y_{0}^{2}-\frac{5}{6}y_{1}^{2}-\frac{1}{2}y_{2 }^{2}+\frac{5}{6}y_{6}^{2},\] \[y_{0}^{\prime} =-4xy_{0}+4y_{0}y_{2},\] \[y_{1}^{\prime} =-4xy_{1}-\frac{16}{5}y_{1}y_{2}-\frac{6}{5}y_{2}y_{6},\] \[y_{2}^{\prime} =\frac{2}{3}y_{0}^{2}-\frac{4}{3}y_{1}^{2}-2xy_{2}-y_{2}^{2}-y_{1 }y_{6}+\frac{1}{3}y_{6}^{2},\] \[y_{6}^{\prime} =-\frac{9}{5}y_{1}y_{2}+6xy_{6}+\frac{6}{5}y_{2}y_{6},\]
with conserved quantities
\[\alpha_{2} =4y_{0}^{2}-8y_{1}^{2}+48xy_{2}-12y_{2}^{2}+24y_{1}y_{6}-\frac{4}{ 3}y_{6}^{2},\] \[\alpha_{3} =-160x^{2}y_{1}+16y_{0}^{2}y_{1}+8y_{1}^{3}+128xy_{1}y_{2}-40y_{1} y_{2}^{2}\] \[\quad-\frac{32}{3}y_{0}^{2}y_{6}-\frac{128}{3}y_{1}^{2}y_{6}-32 xy_{2}y_{6}+80y_{2}^{2}y_{6}\] \[\quad+24y_{1}y_{6}^{2}+\frac{16}{27}y_{6}^{3},\] \[\beta =-16x^{2}y_{0}+4y_{0}y_{1}^{2}-16xy_{0}y_{2}-4y_{0}y_{2}^{2}\] \[\quad+8y_{0}y_{1}y_{6}+4y_{0}y_{6}^{2}.\]
At this stage the resulting ODEs are somewhat opaque and we may use the connection to Toda to clarify. Following the steps of the procedure from [1] outlined earlier we may put the Nahm Lax pair in Toda form, namely with
\[T_{1}+iT_{2} =\left(\begin{array}{ccc}0&-2\sqrt{2}x-\sqrt{2}y_{1}-\sqrt{2}y_ {2}-\sqrt{2}y_{6}&0\\ 0&0&2\sqrt{2}x-\sqrt{2}y_{1}+\sqrt{2}y_{2}-\sqrt{2}y_{6}\\ 2y_{0}&0&0\end{array}\right),\] \[T_{1}-iT_{2} =\left(\begin{array}{ccc}0&0&-2y_{0}\\ 2\sqrt{2}x+\sqrt{2}y_{1}+\sqrt{2}y_{2}+\sqrt{2}y_{6}&0&0\\ 0&-2\sqrt{2}x+\sqrt{2}y_{1}-\sqrt{2}y_{2}+\sqrt{2}y_{6}&0\end{array}\right),\] \[-2iT_{3} =\left(\begin{array}{ccc}-4x+2y_{1}+4y_{2}-\frac{4}{3}y_{6}&0&0 \\ 0&-4y_{1}+\frac{8}{3}y_{6}&0\\ 0&0&4x+2y_{1}-4y_{2}-\frac{4}{3}y_{6}\end{array}\right).\]
This gives us variables
\[a_{0} =2y_{0},\quad a_{1}=-2\sqrt{2}x-\sqrt{2}y_{1}-\sqrt{2}y_{2}- \sqrt{2}y_{6},\quad a_{2}=2\sqrt{2}x-\sqrt{2}y_{1}+\sqrt{2}y_{2}-\sqrt{2}y_{6},\] \[b_{1} =4x-2y_{1}-4y_{2}+\frac{4}{3}y_{6},\quad b_{2}=4y_{1}-\frac{8}{3} y_{6},\quad b_{3}=-4x-2y_{1}+4y_{2}+\frac{4}{3}y_{6}.\]
These variables are the Flaschka coordinates for the periodic Toda system. (Any 6-tuple satisfying \(\sum_{i}b_{i}=0\) gives valid \(x,y_{j}\).) In these new variables we have
\[a_{0}^{\prime}= \frac{1}{2}a_{0}(b_{3}-b_{1}),\quad a_{1}^{\prime}=\frac{1}{2}a_{1 }(b_{1}-b_{2}),\quad a_{2}^{\prime}=\frac{1}{2}a_{2}(b_{2}-b_{3}),\] \[b_{1}^{\prime}= a_{1}^{2}-a_{0}^{2},\quad b_{2}^{\prime}=a_{2}^{2}-a_{1}^{2}, \quad b_{3}^{\prime}=a_{0}^{2}-a_{2}^{2}, \tag{24}\]
together with the constants
\[\alpha_{2}=b_{1}b_{2}+b_{1}b_{3}+b_{2}b_{3}+a_{0}^{2}+a_{1}^{2}+a_{2}^{2},\quad \alpha_{3}=b_{1}b_{2}b_{3}+b_{1}a_{2}^{2}+b_{2}a_{0}^{2}+b_{3}a_{1}^{2},\quad \beta=a_{0}a_{1}a_{2}.\]
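As a consistency check (our own sketch, not part of the original computation), SymPy confirms that \(\alpha_{2}\), \(\alpha_{3}\) and \(\beta\) are indeed first integrals of the flow (24).

```python
# SymPy sketch (ours): alpha_2, alpha_3, beta are conserved under (24).
import sympy as sp

a0, a1, a2, b1, b2, b3 = sp.symbols('a0 a1 a2 b1 b2 b3')
flow = {a0: a0*(b3 - b1)/2, a1: a1*(b1 - b2)/2, a2: a2*(b2 - b3)/2,
        b1: a1**2 - a0**2, b2: a2**2 - a1**2, b3: a0**2 - a2**2}

ddt = lambda q: sum(sp.diff(q, v)*rhs for v, rhs in flow.items())

alpha2 = b1*b2 + b1*b3 + b2*b3 + a0**2 + a1**2 + a2**2
alpha3 = b1*b2*b3 + b1*a2**2 + b2*a0**2 + b3*a1**2
beta = a0*a1*a2

print([sp.expand(ddt(q)) for q in (alpha2, alpha3, beta)])   # expect [0, 0, 0]
```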
At this stage we have 6 variables and 3 constraints. One could in principle solve these explicitly using the fact that the flow linearises on the Jacobian of the associated hyperelliptic curve as in [13, Theorem 5.1]. Are simplifications possible? We may use Gröbner bases in Sage to utilise the constants \(\alpha_{2},\alpha_{3},0=\sum b_{i}\) to eliminate the \(b_{i}\), and we get the equations described in the text,
\[0 =\sum_{i=0}^{2}a_{i}^{2}-\alpha_{2}-\frac{1}{3}(d_{1}^{2}+d_{1}d_ {2}+d_{2}^{2}),\] \[0 =a_{1}^{2}d_{2}-a_{2}^{2}d_{1}+\alpha_{3}+\frac{1}{3}\alpha_{2}(d _{1}-d_{2})+\frac{1}{27}(d_{1}-d_{2})^{3},\]
where we have introduced \(d_{i}=\frac{2a_{i}^{\prime}}{a_{i}}\). This in principle is the maximal reduction one can achieve with the variables provided when the \(\alpha_{i}\) and \(\beta\) are generic.
One simplification which can be achieved is by attempting to make the second equation a polynomial in \(d_{1}-d_{2}\). To do this we would need \(a_{1}^{2}=a_{2}^{2}\). We can calculate that
\[\frac{d}{ds}(a_{1}^{2}-a_{2}^{2}) =2\left[a_{1}\left(\frac{1}{2}a_{1}(b_{1}-b_{2})\right)-a_{2} \left(\frac{1}{2}a_{2}(b_{2}-b_{3})\right)\right],\] \[=a_{1}^{2}(b_{1}-b_{2})-a_{2}^{2}(b_{2}-b_{3}),\] \[=a_{1}^{2}(b_{1}-2b_{2}+b_{3})+(a_{1}^{2}-a_{2}^{2})(b_{2}-b_{3}),\] \[=-3b_{2}a_{1}^{2}+(a_{1}^{2}-a_{2}^{2})(b_{2}-b_{3}).\]
Hence we can consistently set \(a_{1}^{2}-a_{2}^{2}=0\) provided \(b_{2}a_{1}^{2}=0\). As \(b_{2}^{\prime}=a_{2}^{2}-a_{1}^{2}\), this means we can consistently set \(a_{1}^{2}=a_{2}^{2}\) and \(b_{2}=0\). Making these restrictions we can now eliminate the one remaining equation to find
\[0=a_{0}^{2}+2a_{1}^{2}-\alpha_{2}-d_{1}^{2}\Rightarrow a_{1}^{2}\left(2\frac {da_{1}}{ds}\right)^{2}=\beta^{2}+2a_{1}^{6}-\alpha_{2}a_{1}^{4}.\]
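The last implication can be checked symbolically; a short SymPy sketch of ours (under the restriction \(a_{1}^{2}=a_{2}^{2}\), \(b_{2}=0\)) follows.

```python
# SymPy sketch (ours): with a2 = a1 and b2 = 0, the constraint
# 0 = a0^2 + 2 a1^2 - alpha2 - d1^2 (d1 = 2 a1'/a1 = b1) is equivalent to
# a1^2 (2 a1')^2 = beta^2 + 2 a1^6 - alpha2 a1^4.
import sympy as sp

a0, a1, b1, alpha2 = sp.symbols('a0 a1 b1 alpha2')
d1 = b1                     # d1 = b1 - b2 with b2 = 0
a1p = a1*d1/2               # a1' = a1 (b1 - b2)/2 from (24)
beta = a0*a1*a1             # beta = a0 a1 a2 with a2 = a1

constraint = a0**2 + 2*a1**2 - alpha2 - d1**2
target = a1**2*(2*a1p)**2 - (beta**2 + 2*a1**6 - alpha2*a1**4)
print(sp.expand(target + a1**4*constraint))   # expect 0
```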
## Appendix C Initial Computation of \(V_{4}\) Nahm Matrices
Taking the polynomials \(\zeta_{0}\zeta_{1}(\zeta_{0}^{4}-\zeta_{1}^{4})\), \(\zeta_{0}^{2}\zeta_{1}^{2}\), and \(\zeta_{0}^{4}+\zeta_{1}^{4}\) as the inputs to the procedure outlined in Appendix B gives the ODEs in the six real-valued variables
\[x^{\prime} =2x^{2}-\frac{1}{6}y_{0}^{2}+\frac{1}{2}y_{1}^{2}-\frac{1}{2}y_{ 2}^{2}+\frac{1}{6}y_{3}^{2}-\frac{1}{2}y_{4}^{2}, y_{2}^{\prime} =\frac{1}{3}y_{0}^{2}+y_{1}^{2}-2xy_{2}-y_{2}^{2}-\frac{1}{3}y_{3}^ {2}-y_{1}y_{4},\] \[y_{0}^{\prime} =-2xy_{0}+2y_{0}y_{2}+2y_{1}y_{3}+y_{3}y_{4}, y_{3}^{\prime} =2y_{0}y_{1}+2xy_{3}-2y_{2}y_{3}+y_{0}y_{4},\] \[y_{1}^{\prime} =2xy_{1}+2y_{1}y_{2}+\frac{2}{3}y_{0}y_{3}-y_{2}y_{4}, y_{4}^{\prime} =-2y_{1}y_{2}+\frac{2}{3}y_{0}y_{3}-4xy_{4},\]
with the corresponding spectral curve
\[\mathcal{C}:\quad\eta^{3}+\eta\left[a(\zeta^{4}+1)+b\zeta^{2}\right]+c\zeta( \zeta^{4}-1)=0,\]
where
\[a =8xy_{0}+4y_{0}y_{2}-4y_{1}y_{3}+4y_{3}y_{4},\] \[b =4y_{0}^{2}-12y_{1}^{2}+48xy_{2}-12y_{2}^{2}+4y_{3}^{2}-24y_{1}y_{4},\] \[c =-8iy_{0}^{2}y_{1}-8iy_{1}^{3}+48ixy_{1}y_{2}+24iy_{1}y_{2}^{2}-16ixy _{0}y_{3}+16iy_{0}y_{2}y_{3}\] \[\quad+8iy_{1}y_{3}^{2}+48ix^{2}y_{4}-4iy_{0}^{2}y_{4}+12iy_{1}^{2} y_{4}-12iy_{2}^{2}y_{4}+4iy_{3}^{2}y_{4}-4iy_{4}^{3}.\]
The full Nahm matrices are
\[T_{1}=\begin{pmatrix}0&0&0\\ 0&0&-\bar{f}_{1}\\ 0&f_{1}&0\end{pmatrix},\qquad T_{2}=\begin{pmatrix}0&0&f_{2}\\ 0&0&0\\ -\bar{f}_{2}&0&0\end{pmatrix},\qquad T_{3}=\begin{pmatrix}0&-\bar{f}_{3}&0\\ f_{3}&0&0\\ 0&0&0\end{pmatrix}.\]
where the \(f_{i}\) are given by
\[f_{1} =2x+y_{0}-iy_{1}+y_{2}+iy_{3}+iy_{4},\] \[f_{2} =2x-y_{0}-iy_{1}+y_{2}-iy_{3}+iy_{4},\] \[f_{3} =2x+2iy_{1}-2y_{2}+iy_{4},\]
One can check that setting \(y_{1}=y_{3}=y_{4}=0\) is consistent, and corresponds to the inversion-symmetric case. Note that the condition on the residues of the Nahm data now becomes that the residue of each \(f_{i}\) at the poles is \(1\).
|
2305.00448 | General construction scheme for geometrically nontrivial flat band
models | A singular flat band(SFB), a distinct class of the flat band, has been shown
to exhibit various intriguing material properties characterized by a geometric
quantity of the Bloch wave function called the quantum distance. We present a
general construction scheme for a tight-binding model hosting an SFB, where the
quantum distance profile can be controlled. We first introduce how to build a
compact localized state(CLS), a characteristic eigenstate of the flat band,
providing the flat band with a band-touching point, where a specific value of
the maximum quantum distance is assigned. Then, we develop a scheme designing a
tight-binding Hamiltonian hosting an SFB starting from the obtained CLS,
satisfying the desired hopping range and symmetries by applying the
construction scheme. While the scheme can be applied to any dimensions and
lattice structures, we propose several simple SFB models on the square and
kagome lattices. Finally, we establish a bulk-boundary correspondence between
the maximum quantum distance and the boundary modes for the open boundary
condition, which can be used to detect the quantum distance via the electronic
structure of the boundary states. | Hyeongseop Kim, Chang-geun Oh, Jun-Won Rhim | 2023-04-30T11:14:01Z | http://arxiv.org/abs/2305.00448v1 | # General construction scheme for geometrically nontrivial flat band models
###### Abstract
A singular flat band(SFB), a distinct class of the flat band, has been shown to exhibit various intriguing material properties characterized by a geometric quantity of the Bloch wave function called the quantum distance. We present a general construction scheme for a tight-binding model hosting an SFB, where the quantum distance profile can be controlled. We first introduce how to build a compact localized state(CLS), a characteristic eigenstate of the flat band, providing the flat band with a band-touching point, where a specific value of the maximum quantum distance is assigned. Then, we develop a scheme designing a tight-binding Hamiltonian hosting an SFB starting from the obtained CLS, satisfying the desired hopping range and symmetries by applying the construction scheme. While the scheme can be applied to any dimensions and lattice structures, we propose several simple SFB models on the square and kagome lattices. Finally, we establish a bulk-boundary correspondence between the maximum quantum distance and the boundary modes for the open boundary condition, which can be used to detect the quantum distance via the electronic structure of the boundary states.
Footnote †: These authors contributed equally to this work
## I Introduction
When a band has a macroscopic degeneracy, we call it a flat band [1; 2]. Flat band systems have received great attention because their van Hove singularity is expected to stabilize various many-body states when the Coulomb interaction is introduced. Examples of such correlated states induced by flat bands are unconventional superconductivity [3; 4; 5; 6; 7; 8; 9; 10; 11], ferromagnetism [12; 13; 14; 15; 16; 17; 18], Wigner crystal [19; 20; 21], and fractional Chern insulator [22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. Recently, it was revealed that the flat band could be nontrivial from the perspective of geometric notions, such as the quantum distance, quantum metric, and cross-gap Berry connection [32; 33; 34; 35; 36]. The quantum distance is related to the resemblance between two quantum states defined by
\[d^{2}=1-|\langle\psi_{1}|\psi_{2}\rangle|^{2}, \tag{1}\]
which is positive-valued and ranges from 0 to 1 [37; 38; 39]. If a flat band has a band-touching point with another parabolic band and the maximum value of the quantum distance, denoted by \(d_{\text{max}}\), between eigenvectors around the touching point is nonzero, we call it a singular flat band(SFB) [40]. The singular flat band hosts non-contractible loop states featuring exotic topological properties in real space [41; 42]. The Landau level structure of the singular flat band is shown to be anomalously spread into the band gap region [32; 33], and the maximum quantum distance determines the magnitude of the Landau level spreading. Moreover, if we introduce an interface in the middle of a singular flat band system by applying different electric potentials, an interface mode always appears, and the maximum quantum distance determines its effective mass [43].
Diverse unconventional phenomena characterized by quantum distance are expected to occur in the singular flat band systems. However, we lack good tight-binding models hosting the singular flat band where one can control the quantum distance, although numerous flat band construction methods have been developed [44; 45; 46; 47; 48; 49; 50; 51; 52]. This paper suggests a general construction scheme for the tight-binding Hamiltonians with a singular flat band and the controllable maximum quantum distance. The construction process's essential part is designing a compact localized state(CLS), which gives the desired maximum quantum distance. The CLS is a characteristic eigenstate of the flat band, which has finite amplitudes only inside a finite region in real space [40]. The CLS can be transformed into the Bloch eigenstate, and any Hamiltonian having this as one of the eigenstates must host a flat band [40]. Among infinitely many possible tight-binding Hamiltonians for a given CLS, one can choose several ones by implementing the wanted symmetries and hopping range into the construction scheme. Using the construction scheme, we suggest several simple tight-binding models hosting a singular flat band and characterized by the maximum quantum distance on the square and kagome lattices. Using the obtained tight-binding models, we propose a bulk-boundary correspondence of the flat band system from the maximum quantum distance to address a question of how to measure the maximum quantum distance in experiments. The previous work established the bulk-interface correspondence for the interface between two domains with different electric potentials in the same singular flat band system, where the maximum quantum distance of the bulk determines the interface mode's effective mass [43]. We show that the same correspondence applies to open boundaries if a boundary mode exists.
The paper is organized as follows. In Sec. II, we introduce a general flat band construction scheme, which starts from a given CLS. In Sec. III, we present how to construct a CLS characterized by a desired value of \(d_{\text{max}}\). Combining these two methods, we build two tight-binding models hosting a singular flat band characterized by \(d_{\text{max}}\) in the kagome lattice and square lattice bilayer in Sec. IV. Then, in Sec. V, we propose the bulk-boundary correspondence characterized by the quantum distance. Finally, we summarize and discuss our results in Sec. VI.
## II General flat band construction scheme
Since the key ingredient of the flat band construction scheme is designing a CLS, we begin with a brief review of it. The general form of the Bloch wave function of the \(n\)-th band with momentum \(\mathbf{k}\) is given by
\[|\psi_{n,\mathbf{k}}\rangle=\frac{1}{\sqrt{N}}\sum_{\mathbf{R}}\sum_{q=1}^{Q}e^{i\mathbf{k }\cdot\mathbf{R}}v_{n,\mathbf{k},q}\,|\mathbf{R},q\rangle, \tag{2}\]
where \(N\) is the number of unit cells in the system, \(\mathbf{R}\) represents the position vectors of the unit cells, \(|\mathbf{R},q\rangle\) corresponds to the \(q\)-th orbital among \(Q\) orbitals in a unit cell, and \(v_{n,\mathbf{k},q}\) is the \(q\)-th component of the eigenvector \(\mathbf{v}_{n,\mathbf{k}}\) of the \(Q\times Q\) Bloch Hamiltonian [53]. Then it was shown that if the \(n_{0}\)-th band is flat, one can always find a linear combination of the Bloch wave functions resulting in the CLS of the form:
\[|\chi_{\mathbf{R}}\rangle=c_{\chi}\sum_{\mathbf{k}\in\text{BZ}}\sum_{\mathbf{R}^{\prime}} \sum_{q=1}^{Q}\alpha_{\mathbf{k}}v_{n_{0},\mathbf{k},q}e^{i\mathbf{k}\cdot(\mathbf{R}^{\prime }-\mathbf{R})}|\mathbf{R}^{\prime},q\rangle, \tag{3}\]
where \(c_{\chi}\) is the normalization constant and \(\alpha_{\mathbf{k}}\) is a mixing coefficient of the linear combination [40]. It is important to note that \(\alpha_{\mathbf{k}}v_{n_{0},\mathbf{k},q}\) is a finite sum of exponential factors \(e^{i\mathbf{k}\cdot\mathbf{R}}\) so that the range of \(\mathbf{R}^{\prime}\) in (3) with the nonzero coefficient of \(|\mathbf{R}^{\prime},q\rangle\) is finite. If \(\alpha_{\mathbf{k}}v_{n_{0},\mathbf{k},q}=0\) at \(\mathbf{k}=\mathbf{k}_{0}\) for all kinds of \(\alpha_{\mathbf{k}}\) satisfying the above properties, we call the band the singular flat band because \(v_{n_{0},\mathbf{k},q}\) becomes discontinuous at \(\mathbf{k}_{0}\) in this case. From (3), one can note that the constants in front of each exponential factor of \(\alpha_{\mathbf{k}}v_{n_{0},\mathbf{k},q}\) become the amplitudes of the CLS.
We construct a flat band Hamiltonian from a CLS arbitrarily designed on a given lattice. This part corresponds to the third and fourth stages of the construction scheme sketched in Fig. 1. By using the correspondence between the CLS and Bloch eigenvector in (3), one can obtain \(\alpha_{\mathbf{k}}v_{n_{0},\mathbf{k},q}\) in the form of the finite sum of exponential factors from the designed CLS. Then, by normalizing \(\alpha_{\mathbf{k}}v_{n_{0},\mathbf{k},q}\), one can have the flat band's eigenvector \(v_{n_{0},\mathbf{k},q}\) corresponding to the CLS. Our purpose is to find a tight-binding Hamiltonian of the form
\[H_{ij}^{\text{lattice}}(\mathbf{k})=\sum_{\Delta\mathbf{R}}t_{ij}(\Delta\mathbf{R})e^{-i \mathbf{k}\cdot\Delta\mathbf{R}}, \tag{4}\]
which satisfies
\[\left[H_{ij}^{\text{lattice}}(\mathbf{k})-E_{\text{flat}}\right]\alpha_{\mathbf{k}} \mathbf{v}_{n_{0},\mathbf{k}}=0, \tag{5}\]
where \(E_{\text{flat}}\) is the flat band's energy and \(\mathbf{v}_{n_{0},\mathbf{k}}\) is a column vector with components \(v_{n_{0},\mathbf{k},q}\). Here, \(t_{ij}(\Delta\mathbf{R})\) represents the hopping parameter between the \(i\)-th and \(j\)-th orbitals in unit cells separated by \(\Delta\mathbf{R}=\sum_{\nu=1}^{d}n_{\nu}\mathbf{a}_{\nu}\), where \(n_{\nu}\) is an integer, \(d\) is the spatial dimension, and \(\mathbf{a}_{\nu}\) is the primitive vector. For convenience, we denote \(t_{ij}^{n_{1},n_{2}\ldots n_{\nu}}\equiv t_{ij}(\Delta\mathbf{R})\) and \(e_{\nu}\equiv e^{-i\mathbf{k}\cdot\mathbf{a}_{\nu}}\). We use a bar notation for the complex conjugate such that \(\overline{t_{ij}^{n_{1},n_{2}\ldots n_{\nu}}}=(t_{ij}^{n_{1},n_{2}\ldots n_{\nu}})^{*}\) and \(\overline{e_{\nu}}=(e_{\nu})^{*}\). Then, the matrix element of the tight-binding Hamiltonian is rewritten as
\[H_{ij}^{\text{lattice}}(\mathbf{k})=\sum_{n_{1},n_{2}\ldots n_{\nu}}t_{ij}^{n_{1},n_{2}\ldots n_{\nu}}\prod_{\nu^{\prime}}e_{\nu^{\prime}}^{n_{\nu^{\prime}}}. \tag{6}\]
Here, the hopping parameters \(t_{ij}^{n_{1},n_{2}\ldots n_{\nu}}\) can be considered complex unknowns determined by the matrix equation in (5). One can encode the desired hopping range and symmetries by manipulating the number of unknown hopping parameters and setting relations between them, respectively. Noting that \(\alpha_{\mathbf{k}}\mathbf{v}_{n_{0},\mathbf{k}}=\sum_{n_{1},n_{2},\ldots,n_{\nu}}c_{n_{1},n_{2},\ldots,n_{\nu}}\prod_{\nu^{\prime}}e_{\nu^{\prime}}^{n_{\nu^{\prime}}}\) as described above, the matrix equation (5) leads to a system of linear equations obtained from the coefficients of the independent exponential factors.
Figure 1: A scheme for the construction of a tight-binding model hosting a singular flat band(SFB) characterized by the maximum quantum distance(\(d_{\text{max}}\)). First, we find two vectors with complex components to yield the desired \(d_{\text{max}}\). Then, we build a CLS from the two vectors in the second and third steps. Finally, we obtain an SFB Hamiltonian from the CLS using the general flat band construction scheme given in Sec. II.

Let us consider a simple example, the flat band Hamiltonian on the checkerboard lattice, which is illustrated in Fig. 2(a). We design a CLS in the shape of a square, represented by a gray region in Fig. 2(a), having amplitudes \(a\) and \(b\) on the A and B sites, respectively. From the CLS, one can obtain the flat band's eigenvector \(\alpha_{\mathbf{k}}\mathbf{v}_{n_{0},\,\mathbf{k}}\) in momentum space such that the CLS's amplitude in the unit cell \(\Delta\mathbf{R}=\sum_{\nu=1}^{d}n_{\nu}\mathbf{a}_{\nu}\) becomes the coefficient of the exponential factor \(\prod_{\nu}e_{\nu}^{n_{\nu}}\). As a result, we have
\[\alpha_{\mathbf{k}}\mathbf{v}_{n_{0},\mathbf{k}}=\begin{pmatrix}a+ae_{1}\\ b+b\overline{e_{2}}\end{pmatrix}. \tag{7}\]
The next step is to design the tight-binding Hamiltonian (6). We seek one with real-valued hopping parameters up to the next-nearest hopping range. Then, the matrix elements of \(H^{\text{CB}_{1}}\) are of the form
\[H^{\text{CB}_{1}}_{11} =t_{11}^{0,0}+t_{11}^{0,-1}\overline{e_{2}}+t_{11}^{0,1}e_{2}, \tag{8}\] \[H^{\text{CB}_{1}}_{12} =t_{12}^{0,0}+t_{12}^{1,0}e_{1}+t_{12}^{0,1}e_{2}+t_{12}^{1,1}e_{1}e_{2}, \tag{9}\] \[H^{\text{CB}_{1}}_{22} =t_{22}^{0,0}+t_{22}^{-1,0}\overline{e_{1}}+t_{22}^{1,0}e_{1}. \tag{10}\]
From the flat band condition (5) and by enforcing hermiticity, one can find relationships between the tight-binding parameters, which lead to the following form of the Hamiltonian:
\[H^{\text{CB}_{1}}=\begin{pmatrix}-2(1+\cos k_{y})&\frac{a}{b}(1+e_{1})(1+e_{2} )\\ \frac{a}{b}(1+\overline{e_{1}})(1+\overline{e_{2}})&-\frac{2a^{2}}{b^{2}}(1+ \cos k_{x})\end{pmatrix}, \tag{11}\]
where we further assume that \(a\) and \(b\) are real constants and \(t_{11}^{0,0}=-2\) for convenience. This Hamiltonian yields a zero-energy flat band and a lower parabolic band with a singular band-touching point at \(\mathbf{k}=(\pi,\pi)\), as plotted in Fig. 2(b). In fact, this band-crossing is already designed at the construction stage of the CLS in (7) by assigning a simultaneous zero of all the components of \(\alpha_{\mathbf{k}}\mathbf{v}_{n_{0},\,\mathbf{k}}\) at \(\mathbf{k}=(\pi,\pi)\).
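As a quick consistency check, one can verify that the Hamiltonian (11) indeed annihilates the eigenvector (7), as required by the flat band condition (5) with \(E_{\text{flat}}=0\):

\[H^{\text{CB}_{1}}\begin{pmatrix}a(1+e_{1})\\ b(1+\overline{e_{2}})\end{pmatrix}=\begin{pmatrix}a(1+e_{1})\left[-2(1+\cos k_{y})+(1+e_{2})(1+\overline{e_{2}})\right]\\ \frac{a^{2}}{b}(1+\overline{e_{2}})\left[(1+e_{1})(1+\overline{e_{1}})-2(1+\cos k_{x})\right]\end{pmatrix}=0,\]

since \((1+e_{\nu})(1+\overline{e_{\nu}})=2+2\cos(\mathbf{k}\cdot\mathbf{a}_{\nu})\).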
## III Maximum quantum distance
In this section, we discuss how to endow the band-crossing of the flat band with the desired value of the maximum quantum distance \(d_{\text{max}}\) when we construct a flat band model. Specifically, the quantum distance between two Bloch eigenstates with momenta \(\mathbf{k}\) and \(\mathbf{k}^{\prime}\) is defined by \(d(\mathbf{k},\mathbf{k}^{\prime})^{2}=1-|\mathbf{v}_{\mathbf{k}^{\prime}}^{*}\cdot\mathbf{v}_{\mathbf{k}}|^{2}\), and \(d_{\text{max}}\) is defined as
\[d_{\text{max}}^{2}=\lim_{r_{D}\to 0}\max\left[d(\mathbf{k},\mathbf{k}^{\prime})^{2} \right]\Big{|}_{\mathbf{k},\mathbf{k}^{\prime}\in D(\mathbf{k}_{0})}, \tag{12}\]
where \(\mathbf{v}_{\mathbf{k}}\) is the flat band's eigenvector and \(D(\mathbf{k}_{0})\) is a closed disk with radius \(r_{D}\) centered at the band-crossing point \(\mathbf{k}_{0}\)[32]. In the previous study, \(d_{\text{max}}\) was proposed as a measure of the strength of the singularity at \(\mathbf{k}_{0}\). Note that if there is a singularity at \(\mathbf{k}_{0}\), the quantum distance between two Bloch eigenstates can remain finite even if their momenta are arbitrarily close to \(\mathbf{k}_{0}\)[32]. For the well-known singular flat band models, such as the kagome and checkerboard lattice models, \(d_{\text{max}}\) is found to be unity. While \(0\leq d_{\text{max}}\leq 1\) in general [32], there have been almost no examples of tight-binding models hosting \(d_{\text{max}}\) smaller than 1.
One can design \(\mathbf{v}_{\mathbf{k}}\) of the flat band to have a specific value of \(d_{\text{max}}\) by manipulating the form of the linear expansion of \(\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}\) around the band-crossing point. Denoting \(q_{\mu}=k_{\mu}-k_{0,\mu}\), where \(\mathbf{k}_{0}\) is the band-crossing point, the eigenvector \(\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}\) can be written as
\[\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}\simeq q_{1}\mathbf{u}_{1}+q_{2} \mathbf{u}_{2}, \tag{13}\]
in the vicinity of \(\mathbf{k}_{0}\) up to the linear order of \(\mathbf{q}\). Here, \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\) are \(Q\times 1\) constant normalized vectors. Then, one can show that
\[d_{\text{max}}^{2}=\frac{1-|\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2}|^{2}}{1-( \text{Re}\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2})^{2}}. \tag{14}\]
See Appendix A for the detailed derivations. By using
Figure 2: (a) The checkerboard flat band model with \(d_{\text{max}}=1\), denoted by \(\text{CB}_{1}\). A red box represents the unit cell. The hopping amplitudes are 1 for the dashed lines along the \(y\)-axis, \(-a/b\) for black solid lines along diagonal directions, and \(a^{2}/b^{2}\) for the blue solid lines along the \(x\)-axis. The CLS corresponding to the flat band is drawn by a gray region. The CLS’s amplitudes are \(a\) at the A-sites and \(b\) at the B-sites. (b) The band structure of the checkerboard model for \(a=1\) and \(b=2\).
Figure 3: (a) The checkerboard flat band model with \(d_{\text{max}}=1/\sqrt{2}\), denoted by \(\text{CB}_{2}\). A red box represents the unit cell. The hopping parameters are given below the figure. For the complex hopping processes, the hopping direction is represented by the arrow. (b) The band structure of the checkerboard model \(\text{CB}_{2}\).
this relationship, one can choose two constant vectors \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\), giving the desired value of \(d_{\text{max}}\). Then, performing a regularization of (13) by applying transformations, such as \(q_{i}\rightarrow\sin q_{i}\) and \(q_{i}\to 1-e^{iq_{i}}\), one can obtain \(\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}\), the Fourier transform of a CLS, in the form of a finite sum of exponential factors \(e_{\nu}\) and \(\overline{e_{\nu}}\). In this stage, corresponding to the first to third steps in Fig. 1, one can control the size of the CLS, which is closely related to the hopping range of the tight-binding model obtained from this CLS. Once we obtain \(\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}\), the tight-binding Hamiltonian with the desired \(d_{\text{max}}\) can be built by using the construction scheme in the previous section.
From the \(d_{\text{max}}\)-formula (14), one can note that \(d_{\text{max}}\) can be less than one and larger than zero only when \(\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2}\) has a nonzero imaginary part. Namely, \(u_{1,m}^{*}u_{2,m}\) should have a nonzero imaginary part for at least one \(m\), where \(u_{i,m}\) is the \(m\)-th component of \(\mathbf{u}_{i}\). Let us denote such an index \(m\) by \(m_{0}\). Then, the \(m_{0}\)-th component of \(\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}\), given by \(\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}|_{m_{0}}=u_{1,m_{0}}q_{1}+u_{2,m_{0}}q_{2}\), must be regularized into a form in which the coefficients of the exponential factors contain both real and imaginary parts. This implies that the CLS corresponding to a singular flat band with \(0<d_{\text{max}}<1\) cannot be constructed with real amplitudes only. Note that the CLS of the flat band of the kagome lattice can be represented by only real amplitudes because the corresponding \(d_{\text{max}}\) is unity. However, the CLS should consist of different complex amplitudes on at least two atomic sites for generic flat bands with \(0<d_{\text{max}}<1\). The tight-binding Hamiltonian stabilizing such a CLS usually requires complex hopping parameters. Moreover, it is shown in Appendix B that we need more than two exponential factors for at least one component of \(\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}\). This implies that we usually need hopping processes between atoms at a longer distance than the nearest-neighbor ones.
Let us consider the checkerboard lattice example again. We assume that the touching point is at \(\mathbf{k}=(0,0)\). First, to obtain a model with \(d_{\text{max}}=1\), we can choose \(\mathbf{u}_{1}=(i,0)^{\text{T}}\) and \(\mathbf{u}_{2}=(0,-i)^{\text{T}}\) in (13), using the formula (14). Then, we apply the regularization \(ik_{1}\to 1-e^{-ik_{1}}\) and \(ik_{2}\to 1-e^{ik_{2}}\) to obtain the CLS's Fourier transform. Second, on the other hand, one can let the CLS have \(d_{\text{max}}=1/\sqrt{2}\) by choosing \(\mathbf{u}_{1}=(i,-1)^{\text{T}}/\sqrt{2}\) and \(\mathbf{u}_{2}=(0,-i)^{\text{T}}\). In this case, an example of the regularization gives \(\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}=(1-e^{-ik_{1}},1+i-ie^{-ik_{1}}-e^{ik_{2}})^ {\text{T}}\). The CLS corresponding to this eigenvector is drawn in Fig. 3(a). An example of the flat band tight-binding Hamiltonian obtained from this choice of the CLS is given by
\[H^{\text{CB}_{2}}=\begin{pmatrix}v_{2}v_{2}^{*}&-v_{1}v_{2}^{*}\\ -v_{2}v_{1}^{*}&v_{1}v_{1}^{*}\end{pmatrix}, \tag{15}\]
where \(v_{1}=1-e^{-ik_{1}}\) and \(v_{2}=1+i-ie^{-ik_{1}}-e^{ik_{2}}\). The band structure of this model is shown in Fig. 3(b). One can note that the band has non-zero slopes at the X and M points due to the broken time-reversal, mirror, and inversion symmetries. As discussed above, in the \(d_{\text{max}}=1/\sqrt{2}\) case the CLS contains both real and imaginary amplitudes, and the Hamiltonian possesses complex hopping processes.
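Indeed, inserting these choices into the formula (14) reproduces the stated values: for the first choice,

\[\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2}=(-i)\cdot 0+0\cdot(-i)=0\;\;\Rightarrow\;\;d_{\text{max}}^{2}=\frac{1-0}{1-0}=1,\]

while for the second choice,

\[\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2}=\frac{(-i)\cdot 0+(-1)\cdot(-i)}{\sqrt{2}}=\frac{i}{\sqrt{2}}\;\;\Rightarrow\;\;d_{\text{max}}^{2}=\frac{1-\tfrac{1}{2}}{1-0}=\frac{1}{2}.\]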
## IV Flat band models characterized by the quantum distance
### Kagome lattice model
We construct a simple tight-binding model hosting a SFB characterized by \(d_{\text{max}}\) in the kagome lattice. When we consider only the nearest neighbor hopping processes in the kagome lattice, which is the most popular case, the flat band already has a quadratic band-touching, but the corresponding \(d_{\text{max}}\) is fixed to \(1\)[32]. We generalize this conventional kagome lattice model so that \(d_{\text{max}}\) can vary by adding some next-nearest neighbor hopping processes.
We begin with two vectors \(\mathbf{u}_{1}=c_{1}(-i,-2\alpha,-i)^{\text{T}}\) and \(\mathbf{u}_{2}=c_{2}(0,-i-\alpha,-i)^{\text{T}}\), where \(c_{1}=(2+4\alpha^{2})^{-1/2}\) and
Figure 4: The kagome lattice model hosting a flat band characterized by the quantum distance. (a) The nearest and the next nearest neighbor hopping processes are denoted by the black solid and green dashed lines, respectively. The CLS corresponding to the flat band of this model is represented by the gray region. (b) The phase parts of the hopping parameters are highlighted. The magnetic fluxes for the complex hopping parameters are given by \(\phi_{A}=\pi/2-\theta\), \(\phi_{B}=\theta\), and \(\phi_{C}=-\pi\). (c-e) Band dispersions for \(\alpha=0\), \(\alpha=0.766\), and \(\alpha=3.464\). (f) \(d_{\text{max}}\) as a function of \(\alpha\). The formula (16) drawn by a black curve is compared with the numerically calculated \(d_{\text{max}}\) from the lattice model, represented by circles.
\(c_{2}=(2+\alpha^{2})^{-1/2}\) are normalization constants. This set of vectors yields
\[d_{\rm max}=\sqrt{\frac{3+2\alpha^{2}}{3+6\alpha^{2}}}, \tag{16}\]
where \(\alpha\) can take any real number from \(-\infty\) to \(\infty\). As shown in Fig. 4(f), \(d_{\rm max}\) of the constructed SFB model can take values from \(1/\sqrt{3}\) to \(1\). Then, we regularize the linearized vector \(\mathbf{v}_{\rm fb}=\mathbf{u}_{1}k_{1}+\mathbf{u}_{2}k_{2}\) to
\[\mathbf{v}_{\rm fb}=\begin{pmatrix}1-\overline{e}_{1}\\ -1+i\alpha\overline{e}_{1}+e_{2}-i\alpha e_{3}\\ e_{1}-\overline{e}_{2}\end{pmatrix}, \tag{17}\]
where \(e_{3}=e_{1}e_{2}\). The CLS corresponding to this eigenvector of the flat band is drawn in Fig. 4(a) by the gray region. From this choice of the CLS, we construct a tight-binding Hamiltonian as follows:
\[H_{\rm kag}(\mathbf{k})=\begin{pmatrix}g_{1}&g_{2}^{*}&g_{3}^{*}\\ g_{2}&2&g_{4}^{*}\\ g_{3}&g_{4}&g_{1}\end{pmatrix}, \tag{18}\]
where \(g_{1}=2|t|^{2}\), \(g_{2}=t(1+\overline{e_{3}})\), \(g_{3}=t(1+e_{2})+i\alpha t(\overline{e_{1}}+e_{3})\), \(g_{4}=t(1+\overline{e_{1}})\), \(t=e^{i\theta}\sqrt{1+\alpha^{2}}\), and \(\theta=\cos^{-1}(1/\sqrt{1+\alpha^{2}})\). Note that when \(\alpha=0\), where \(d_{\rm max}=1\), the model reduces to the kagome lattice model with only nearest-neighbor hopping processes. As the parameter \(\alpha\) grows, the nearest-neighbor hopping parameters become complex-valued, and next-nearest-neighbor hopping processes develop, as represented by the green dashed lines in Fig. 4(a). One can assign threading magnetic fluxes corresponding to the complex hopping parameters as illustrated in Fig. 4(b), similar to the Haldane model in graphene. In Figs. 4(c) to (e), we plot band dispersions for various values of \(\alpha\), where we have a zero-energy flat band at the bottom. Fig. 4(c) is the well-known band diagram of the kagome lattice with nearest-neighbor hopping processes. If \(\alpha\) is nonzero, the Dirac point is gapped out due to the broken \(C_{6}\) symmetry, but the quadratic band-crossing at the \(\Gamma\) point is maintained. We calculate \(d_{\rm max}\) of this model directly using (12) and check that the continuum formula (16) works well, as shown in Fig. 4(f).
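As a cross-check, inserting the normalized vectors \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\) above into the general formula (14) gives

\[\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2}=c_{1}c_{2}\left(1+2\alpha^{2}+2i\alpha\right),\qquad d_{\rm max}^{2}=\frac{1-\dfrac{(1+2\alpha^{2})^{2}+4\alpha^{2}}{(2+4\alpha^{2})(2+\alpha^{2})}}{1-\dfrac{(1+2\alpha^{2})^{2}}{(2+4\alpha^{2})(2+\alpha^{2})}}=\frac{3+2\alpha^{2}}{3+6\alpha^{2}},\]

which reproduces (16).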
### Square lattice bilayer model
We also construct an SFB tight-binding model in the square lattice bilayer, where one can adjust \(d_{\rm max}\). The lattice structure is illustrated in Fig. 5(a) and (b). As in the kagome lattice case, the construction scheme starts from setting two constant vectors. Our choice is \(\mathbf{u}_{1}=c(i\alpha+\gamma,-\alpha-i\gamma)^{\rm T}\) and \(\mathbf{u}_{2}=\overline{\mathbf{u}}_{1}\), where \(c=(2\alpha^{2}+2\gamma^{2})^{-1/2}\) is a normalization constant. One can show that \(d_{\rm max}\) calculated from these vectors is given by
\[d_{\rm max}=\sqrt{1-\frac{4\alpha^{2}\gamma^{2}}{(\alpha^{2}+\gamma^{2})^{2}}}, \tag{19}\]
where \(\alpha\) and \(\gamma\) can take any real values, so that \(d_{\rm max}\) can vary from \(0\) to \(1\): it equals 1 if \(\alpha\) or \(\gamma\) is zero, and it vanishes when \(\alpha=\gamma\).
As shown in Fig. 5(d), \(d_{\rm max}\) of the constructed SFB model can take values from \(0\) to \(1\). Then, we regularize a vector \(\mathbf{v}_{\rm fb}=\mathbf{u}_{1}k_{1}+\mathbf{u}_{2}k_{2}\) to
\[\mathbf{v}_{\rm fb}=\begin{pmatrix}-i\gamma(1-e_{1}e_{2})-\alpha(e_{1}-e_{2} )\\ i\alpha(1-e_{1}e_{2})+\gamma(e_{1}-e_{2})\end{pmatrix}. \tag{20}\]
The CLS corresponding to this eigenvector of the flat band is drawn in Fig. 5(e) by a gray region. From this choice of the CLS, we construct a tight-binding Hamiltonian as follows:
\[H_{\rm sq}(\mathbf{k})=\begin{pmatrix}|f_{2}|^{2}&f_{3}\\ \overline{f_{3}}&|f_{1}|^{2}\end{pmatrix}, \tag{21}\]
where \(f_{1}=-i\gamma(1-e_{1}e_{2})-\alpha(e_{1}-e_{2})\), \(f_{2}=i\alpha(1-e_{1}e_{2})+\gamma(e_{1}-e_{2})\) and \(f_{3}=-f_{1}\overline{f_{2}}\). When \(\alpha\) or \(\gamma\) is zero, \(d_{\rm max}=1\), while \(\alpha=\gamma\) gives \(d_{\rm max}=0\). As the parameters \(\alpha\) and \(\gamma\) grow, interlayer and intralayer hopping appears, and if \(\alpha\neq\gamma\), complex-valued hopping processes develop, as represented by the blue arrows in Fig. 5(a). Unlike the kagome lattice model, this model has an isotropic band dispersion. As shown in Fig. 5(c), we plot the zero-energy flat band at the bottom and a band dispersion whose shape is independent of the parameters \(\alpha\) and \(\gamma\) once the energy is scaled by \(\alpha^{2}+\gamma^{2}\). Fig. 5(d) shows \(d_{\rm max}\) of this model, calculated both from the continuum formula (14) and directly from (12).
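The direct check of (12) against (19) is straightforward to reproduce numerically. The following minimal sketch (hypothetical parameter values \(\alpha=1\), \(\gamma=2\); lattice constants set to unity) samples the flat band's eigenvector of (21) on a small circle around the touching point \(\mathbf{k}_{0}=(0,0)\) and evaluates the maximum quantum distance:

```python
import numpy as np

def flat_band_vector(k1, k2, alpha, gamma):
    """Normalized flat-band eigenvector (f1, f2) of H_sq(k) in Eq. (21)."""
    e1, e2 = np.exp(-1j * k1), np.exp(-1j * k2)
    f1 = -1j * gamma * (1 - e1 * e2) - alpha * (e1 - e2)
    f2 = 1j * alpha * (1 - e1 * e2) + gamma * (e1 - e2)
    v = np.array([f1, f2])
    return v / np.linalg.norm(v)

def d_max_numerical(alpha, gamma, radius=1e-4, n_angles=400):
    """Maximum quantum distance, Eq. (12), sampled on a small circle around k0 = (0, 0).

    For a small radius the eigenvector direction depends only on the angle of k - k0,
    so sampling the circle captures the supremum over the disk."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    vs = np.array([flat_band_vector(radius * np.cos(t), radius * np.sin(t), alpha, gamma)
                   for t in angles])
    overlaps = np.abs(vs.conj() @ vs.T) ** 2      # |<v_k'|v_k>|^2 for all pairs
    return np.sqrt(np.max(1.0 - overlaps))

alpha, gamma = 1.0, 2.0
d_continuum = np.sqrt(1 - 4 * alpha**2 * gamma**2 / (alpha**2 + gamma**2) ** 2)  # Eq. (19)
print(d_max_numerical(alpha, gamma), d_continuum)
```

Both numbers come out at about \(0.6\) for this choice of parameters, as expected from (19).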
Figure 5: The square lattice bilayer model hosting a flat band characterized by the quantum distance. We plot the interlayer and intralayer hopping processes in (a) and (b), respectively. In (c), we plot the band structure, where the energy is scaled by \(\alpha^{2}+\gamma^{2}\). The relation between \(d_{\rm max}\) and the band parameter \(\gamma\) is presented in (d).
## V Bulk-boundary correspondence
The bulk-boundary correspondence is the essential idea behind the topological analysis of materials [54; 55; 56; 57; 58; 59; 60; 61]. Based on it, one can detect the topological information of the bulk by probing the electronic structure of the boundary states. Recently, a new kind of bulk-interface correspondence based on the quantum distance was developed for flat band systems [43]. Here, a specific type of interface is considered, which is generated between two domains of a singular flat band system with different onsite potentials \(U_{R}\) and \(U_{L}\). Note that the two domains are characterized by the same geometric quantity \(d_{\rm max}\), unlike the topological bulk-boundary correspondence, where the boundary is formed between two regions with different topologies. In the case of the singular flat band systems, an interface state is guaranteed to exist if the value of \(d_{\rm max}\) is nonzero, and the corresponding band dispersion around the band-crossing point is given by
\[E_{\rm IF}(k)\approx\frac{d_{\rm max}^{2}}{2m_{b}}k^{2}+U_{0}, \tag{22}\]
where \(k\) and \(m_{b}\) are the crystal momentum and the bulk mass along the direction of the interface, respectively, and \(U_{0}=\min(U_{R},U_{L})\). This formula implies that the effective mass of the interface mode is \(m^{*}=m_{b}/d_{\rm max}^{2}\).
Now, we examine the formula (22) for finite systems with open boundary conditions. In the previous work, (22) was obtained by presuming an exponentially decaying edge mode, and the existence of such a state was guaranteed for the specific interface created by a step-like potential. While open boundaries arise naturally when a sample is prepared, applying a step-like potential is usually not straightforward in experiments. Therefore, it is worthwhile to investigate the bulk-boundary correspondence for open boundary systems. In the case of the open boundary, the bulk-boundary correspondence states that if edge-localized modes exist, their energy spectrum is given by (22). Note that the edge modes are not guaranteed to appear within the open boundary condition.
We first consider the kagome lattice model. We note that boundary modes exist for the ribbon geometry of this system illustrated in Fig. 6(a), which respects the translational symmetry along \((1/2,\sqrt{3}/2)\) while being terminated along the \(x\)-axis. The width \(W\) of the kagome ribbon is defined as the number of unit cells along the \(x\)-axis. For example, the width of the kagome ribbon shown in Fig. 6(a) is 4. We plot the band dispersions of the kagome ribbons with \(W=20\) for \(d_{\rm max}=1\) and \(d_{\rm max}=0.8\) in Fig. 6(b) and (c), respectively. The red and blue lines represent the boundary modes stemming from the band-crossing point at \(k_{y}=0\). While the band dispersions of the left- and right-localized modes are precisely the same for the \(d_{\rm max}=1\) case, they are not for the \(0<d_{\rm max}<1\) case due to the broken time-reversal symmetry. For this reason, we distinguish the left- and right-localized modes by the red and blue colors in Fig. 6(c). We check that the blue and red curves, although they look asymmetric with respect to \(k_{y}=0\), follow the same parabolic equation (22) in the vicinity of the
Figure 6: The Bulk-boundary correspondence of singular flat band systems. The upper(lower) panels correspond to the results of the kagome lattice(square lattice bilayer) model. (a) and (e) illustrate the lattice structures of the ribbon geometries of the kagome lattice and square lattice bilayer, respectively. The red boxes are the unit cells. The system is terminated along the \(x\)-axis, and there is a finite number of unit cells, denoted by \(W\), along this direction. (b,c) and (f,g) are the band structures of the kagome lattice and square lattice bilayer with \(W=20\), respectively. Red and blue lines represent the boundary modes. In (d) and (h), we plot the effective mass of the boundary modes around \(k_{y}=0\) as a function of \(d_{\rm max}\) and compare it with the continuum result in (22).
touching-point at \(k_{y}=0\). We numerically calculate the effective mass of the boundary modes from the kagome lattice model and compare it with the analytic result of the effective mass \(m^{*}=m_{b}/d_{\rm max}^{2}\) in (22). As plotted in Fig. 6(d), the formula (22) describes the numerical results perfectly for any values of \(d_{\rm max}\). Second, we also investigate the edge state of the square lattice bilayer ribbon shown in Fig. 6(e). As in the kagome model, the width \(W\) of this system is defined as the number of unit cells along the \(x\)-axis. In Fig. 6(f) and (g), we plot the band structures of the square lattice bilayer ribbon with \(W=20\). The red curves, which are doubly degenerate, correspond to the boundary modes. We confirm that the effective mass of the boundary modes obeys the continuum formula (22) well as plotted in Fig. 6(h).
## VI Conclusions
In summary, we propose a construction scheme for tight-binding Hamiltonians hosting a flat band whose band-touching point is characterized by \(d_{\rm max}\), the maximum value of the quantum distance between Bloch eigenstates around the touching point. Based on the scheme, we built several flat band tight-binding models with simple hopping structures on the kagome lattice and the square lattice bilayer, where one can control \(d_{\rm max}\). We note that complex and longer-range (at least next-nearest-neighbor) hopping amplitudes are necessary to tune \(d_{\rm max}\) between \(0\) and \(1\). This implies that candidate materials hosting an SFB with \(0<d_{\rm max}<1\) could be found among materials with strong spin-orbit coupling. We believe that our construction scheme could inspire the material search for geometrically nontrivial flat band systems. If we extend the category of materials to artificial systems, our lattice models with fine-tuned complex hopping parameters are expected to be realizable in synthetic dimensions [62; 63; 64; 65; 66; 67; 68] and circuit lattices [69; 70]. We also propose a bulk-boundary correspondence between the bulk number \(d_{\rm max}\) and the shape of the low-energy dispersion of the boundary modes within the open boundary condition. The information about \(d_{\rm max}\) is embedded in the effective mass of the band dispersion of the edge states. This correspondence provides us with a tool to detect \(d_{\rm max}\) from the spectroscopy of finite SFB systems. Notably, the bulk-boundary correspondence is obtained from the continuum Hamiltonian around the band-crossing point. This implies that even if the flat band obtained from our construction scheme is slightly deformed in real systems, one can still investigate the geometric properties of the singular flat band.
## Appendix A Derivation of the \(d_{\rm max}\) formula
Let us consider a linearized quantum state of the form
\[\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}\approx q_{1}\mathbf{u}_{1}+q_{2}\mathbf{u}_{2}, \tag{A1}\]
where \(\mathbf{u}_{\mu}\) is represented by a column vector of size equal to the number of orbitals in a unit cell and \(\alpha_{\mathbf{k}}\) is a factor introduced in Sec. II. Here, \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\) can take complex numbers as their components and do not have to be orthogonal to each other. Without loss of generality, one can express \(q_{\mu}\) as \(q_{1}=q_{x}=q\cos\theta\) and \(q_{2}=q_{x}\sin\alpha+q_{y}\cos\alpha=q\sin(\theta+\alpha)\). After normalization, we have
\[\mathbf{v}_{\mathbf{k}}\approx c(\alpha,\theta)\left(\cos\theta\,\mathbf{u}_{1}+\sin(\theta+\alpha)\,\mathbf{u}_{2}\right), \tag{A2}\]
where
\[\begin{split} c(\alpha,\theta)=&\big{[}\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{1}\cos^{2}\theta+\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2}\cos\theta\sin(\theta+\alpha)\\ &+\mathbf{u}_{2}^{*}\cdot\mathbf{u}_{1}\cos\theta\sin(\theta+\alpha)+\mathbf{u}_{2}^{*}\cdot\mathbf{u}_{2}\sin^{2}(\theta+\alpha)\big{]}^{-\frac{1}{2}}.\end{split} \tag{A3}\]
Then, the quantum distance between two states at \(\theta_{1}\) and \(\theta_{2}\) is given by
\[d_{\alpha}^{2}(\theta_{1},\theta_{2})=1-\left|c(\alpha,\theta_{1})\,c(\alpha,\theta_{2})\left\{\cos\theta_{1}\left[\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{1}\cos\theta_{2}+\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2}\sin(\theta_{2}+\alpha)\right]+\sin(\theta_{1}+\alpha)\left[\mathbf{u}_{2}^{*}\cdot\mathbf{u}_{1}\cos\theta_{2}+\mathbf{u}_{2}^{*}\cdot\mathbf{u}_{2}\sin(\theta_{2}+\alpha)\right]\right\}\right|^{2}. \tag{A4}\]
One can show that the maximum value of \(d_{\alpha}^{2}(\theta_{1},\theta_{2})\) is independent of \(\alpha\) and \(\theta_{1}\). Therefore, we assume that \(\alpha=\theta_{1}=0\), which leads to
\[d_{0}^{2}(0,\theta)=\frac{\left(||\mathbf{u}_{1}||^{2}||\mathbf{u}_{2}||^{2}-|\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2}|^{2}\right)\sin^{2}\theta}{||\mathbf{u}_{1}||^{2}\,||\mathbf{u}_{1}\cos\theta+\mathbf{u}_{2}\sin\theta||^{2}}, \tag{A5}\]
where \(||\mathbf{v}||^{2}=\mathbf{v}^{*}\cdot\mathbf{v}\). From \(dd_{0}^{2}(0,\theta)/d\theta=0\), we obtain
\[\tan\theta_{c}=-2\frac{||\mathbf{u}_{1}||^{2}}{\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2}+\mathbf{u}_{2}^{*}\cdot\mathbf{u}_{1}}, \tag{A6}\]
at which the quantum distance shows an extremum. Then, the maximum quantum distance is evaluated as
\[d_{\rm max}^{2}=d_{0}^{2}(0,\theta_{c})=\frac{||\mathbf{u}_{1}||^{2}||\mathbf{u}_{2}||^{2}-|\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2}|^{2}}{||\mathbf{u}_{1}||^{2}||\mathbf{u}_{2}||^{2}-(\text{Re}\,\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2})^{2}}=\frac{1-|\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2}|^{2}}{1-(\text{Re}\,\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2})^{2}}, \tag{A7}\]
where, in the last equality, \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\) are assumed to be normalized.
## Appendix B A condition for the CLS to have a noninteger \(d_{\text{max}}\)
In this section, we show that at least one component of \(\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}\), the Fourier transform of the CLS, should contain more than two different exponential factors \(e^{-i(mq_{1}+nq_{2})}\). Here, \(q_{i}\) is the momentum with respect to the band-crossing point, and \(m\) and \(n\) are integers. To this end, we verify that if all the components of \(\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}\) consist of two or fewer exponential factors, \(d_{\text{max}}\) of the corresponding flat band is one or zero. The \(q\)-th component of such an eigenvector can be written as
\[\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}|_{q}=A_{m_{1},n_{1}}e^{-i(m_{1}q_{1}+n_{1}q_{2})}+A_{m_{2},n_{2}}e^{-i(m_{2}q_{1}+n_{2}q_{2})}. \tag{B1}\]
Since we assume that the flat band is singular at the band-touching point, the coefficients satisfy
\[A_{m_{1},n_{1}}+A_{m_{2},n_{2}}=0. \tag{B2}\]
As a result, the linear expansion of \(\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}|_{q}\) becomes
\[\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}|_{q}\approx-iA_{m_{1},n_{1}}\left[(m_{1}-m_{2})q_{1}+(n_{1}-n_{2})q_{2}\right], \tag{B3}\]
leading to \(u_{1,q}^{*}u_{2,q}=|A_{m_{1},n_{1}}|^{2}(m_{1}-m_{2})(n_{1}-n_{2})\), where \(u_{i,q}\) is the \(q\)-th component of \(\mathbf{u}_{i}\) defined in (A1). Therefore \(\mathbf{u}_{1}^{*}\cdot\mathbf{u}_{2}=\sum_{q}u_{1,q}^{*}u_{2,q}\) is a real number, which proves the statement at the beginning of this section. Namely, we need at least three different exponential factors in at least one component of \(\alpha_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}\).
|
2309.11690 | Explosive growth from AI automation: A review of the arguments | We examine whether substantial AI automation could accelerate global economic
growth by about an order of magnitude, akin to the economic growth effects of
the Industrial Revolution. We identify three primary drivers for such growth:
1) the scalability of an AI "labor force" restoring a regime of increasing
returns to scale, 2) the rapid expansion of an AI labor force, and 3) a massive
increase in output from rapid automation occurring over a brief period of time.
Against this backdrop, we evaluate nine counterarguments, including regulatory
hurdles, production bottlenecks, alignment issues, and the pace of automation.
We tentatively assess these arguments, finding most are unlikely deciders. We
conclude that explosive growth seems plausible with AI capable of broadly
substituting for human labor, but high confidence in this claim seems currently
unwarranted. Key questions remain about the intensity of regulatory responses
to AI, physical bottlenecks in production, the economic value of superhuman
abilities, and the rate at which AI automation could occur. | Ege Erdil, Tamay Besiroglu | 2023-09-20T23:45:14Z | http://arxiv.org/abs/2309.11690v3 | # Explosive growth from AI automation: A review of the arguments
###### Abstract
We examine whether substantial AI automation could accelerate global economic growth by about an order of magnitude, akin to the economic growth effects of the Industrial Revolution. We identify three primary drivers for such growth: 1) the scalability of an AI labor force restoring a regime of increasing returns to scale, 2) the rapid expansion of an AI labor force, and 3) a massive increase in output from rapid automation occurring over a brief period of time. Against this backdrop, we evaluate nine counterarguments, including regulatory hurdles, production bottlenecks, alignment issues, and the pace of automation. We tentatively assess these arguments, finding most are unlikely deciders. We conclude that explosive growth seems plausible with AI capable of broadly substituting for human labor, but high confidence in this claim seems currently unwarranted. Key questions remain about the intensity of regulatory responses to AI, physical bottlenecks in production, the economic value of superhuman abilities, and the rate at which AI automation could occur.
###### Contents
* 1 Introduction
* 2 Arguments in favor of the explosive growth hypothesis
* 2.1 Increasing returns to scale in production gives rise to explosive growth
* 2.2 The stock of digital workers could grow fast
* 2.3 AI automation could have massive transitory effects
* 3 Arguments against the explosive growth hypothesis
* 3.1 Regulations can slow down the economic impact of AI
* 3.2 Output is bottlenecked by other non-accumulable factors of production
* 3.3 Technological progress and task automation by AI will be slow
* 3.4 Alignment difficulties could reduce the economic impact of AI
* 3.5 R&D may be harder than expected
* 3.6 AI automation will fail to show up in the productivity statistics
* 3.7 Human preferences for human-produced goods will bottleneck growth
* 3.8 Previous technological revolutions did not lead to growth acceleration
* 3.9 Fundamental physical limits restrict economic growth
* 4 Discussion
* 4.1 Open questions
## 1 Introduction
Artificial intelligence (AI) possesses enormous potential to transform the economy by automating a large share of tasks performed by human labor. There has been growing interest in the possibility that advanced AI systems could drive explosive economic growth, meaning growth an order of magnitude faster than current rates (Davidson 2021). AI could rekindle the increasing-returns dynamics that have historically led to super-exponential growth. This report aims to build on Davidson's analysis by providing a comprehensive assessment of the key arguments for why AI that can meaningfully substitute for human labor may or may not produce an acceleration of economic growth as large as that of the Industrial Revolution, a factor of ten or more.
The idea of AI's potential to automate many or even all tasks presently undertaken by labor has drawn considerable interest from economists. Various mechanisms, through which this transformation may or may not happen, have been proposed (for a review, see Trammell and Korinek 2020). While current scholarship provides valuable qualitative insights and intuitions about the limitations of accelerated economic growth due to AI (e.g. Aghion, B. F. Jones, and C. I. Jones 2018), our work aims to extend this foundation by offering a more detailed quantitative analysis. We focus on identifying the specific conditions under which significant growth accelerations may or may not occur. For example, arguments involving economic bottlenecks, including so-called 'Baumol effects' that are invoked to suggest accelerations might be unlikely, often permit substantial level effects from AI automation--effects which could easily produce accelerated growth if automation were to take place in a sufficiently short period of time. Our approach attempts to pin down the precise quantitative ranges of growth rates that might be blocked by such previously identified mechanisms. We take stock of some of the key arguments as to why or why not we might expect explosive growth from AI--growth an order of magnitude greater than is typical in today's frontier economies. We spell out these arguments in detail and tentatively assess their force. We focus on providing a quantitative basis to key arguments (such as arguments involving bottlenecks to automation, preferences for human-produced outputs, technical and regulatory difficulties in automation, among others) to better understand the plausible range of growth rates.
We describe three related arguments in favor of explosive growth that rest on the idea that the development of AI offers an accumulable substitute to human labor--the first relying on increasing returns to scale due to technological progress, the second based on fast expansions on the amount of total effective labor force (including AI) and the third from transitory growth effects due to rapid automation.
The bulk of this work analyzes arguments against explosive growth from AI, which we articulate and tentatively assess (see Figure 1). Overall, our assessment highlights four themes:
**Growth theory models predict explosive growth**. Growth theory models often predict explosive growth by default when AI is able to substitute for human labor on most or all tasks in the economy. These predictions are quantitatively robust under various modeling assumptions, such as whether we assume increasing returns to scale or constant returns to scale and whether or not we consider delays to investment, if realistic parameter values are used.
**Regulation could restrict AI-driven growth**. Regulation of AI and various restraints (arising from political economy, or risk-related concerns) could be sufficient to keep growth from increasing by an order of magnitude. However, such paths generally require lasting global coordination and potentially exerting control over many distributed actors, which might be infeasible given both the strengths of relevant incentives to develop and deploy advanced AI and the falling costs of AI training stemming from algorithmic and hardware technology advances.
**Many arguments against explosive growth lack quantitative specificity or are otherwise weak**. There are numerous arguments against explosive growth from AI that falter in providing quantitative specifics. For instance, some posit that fundamental physical limits or non-accumulable factors of production will rapidly bottleneck growth post AI automation, yet they fall short in quantitatively bounding the growth accelerations permitted by such constraints in a compelling manner. Other objections, such as that humans might strongly disprefer consuming AI-produced goods or services, may also fail to take seriously "good AI" that is actually able to flexibly substitute for human labor across a wide range of tasks.
**It is difficult to rule out explosive growth from AI, but that this should happen is far from certain**. We think that the odds of widespread automation and subsequent explosive growth by the end of this century are about even. Yet, high confidence in this claim seems unwarranted, given numerous plausible counterarguments and the fact that the prediction of explosive growth involves the extrapolation of models beyond the regime in which they have been observed to work.
In this work, we will refer to "explosive growth" as growth an order of magnitude greater than what is typical in today's frontier economies. Specifically, we define this as annual real gross world product (GWP) exceeding \(130\%\) of its maximum value over all previous years. This definition is consistent with prior definitions (e.g. Davidson 2021), and it precludes scenarios in which the level of GWP crashes (due to, e.g., some disaster) and then recovers quickly. Moreover, by economic output, we refer to the measured output figures produced by relevant statistical agencies operating in at least as favorable measurement conditions as those today in frontier economies, i.e. incorporating new product varieties and adequate sampling intervals, etc. at least as adequately as the Bureau of Labor Statistics (BLS), Office of National Statistics (ONS), and so on.
We analyze a dozen key arguments for and against explosive growth from AI capable of substantially automating economically valuable tasks. Each argument is first summarized concisely before a deeper examination aims to give a quantitative sense of how it might permit or rule out certain growth rates. After thoroughly assessing each argument, we evaluate its importance in assessing the probability of AI-induced explosive growth. To ground our quantitative estimates, appendices provide relevant economic growth models and data. We offer calibrated probability estimates for each argument being decisive in determining if explosive growth occurs. These judgments are based on a defined likelihood scale that we introduce in Appendix A.
It is worth noting a few key limitations upfront. This work is perhaps not very balanced in two ways. Firstly, we searched perhaps more thoroughly for reasons why substantial growth accelerations could not happen compared to arguments in favor. As a result, it contains a much larger treatment of all the ways in which explosive growth could not happen, relative to the ways in which it could. The fact that many such counterarguments are featured in our work might then inadvertently give the impression that there are many more plausible ways for the overall hypothesis to fail than to succeed. On the other hand, we have become more partial towards the idea that explosive growth looks highly plausible, likely more so than informal polls suggest economists are. Partly in light of this, our evaluation of some counterarguments may come across as succinct, reflecting our updated perspective on their relative merit.
## 2 Arguments in favor of the explosive growth hypothesis
In this section, we delve into three reasons to expect explosive economic growth driven by the advent of artificial general intelligence (AGI). Firstly, we demonstrate that increasing returns to scale in semi-endogenous growth models generally produce explosive growth when labor is accumulable (in the sense that the stock can be increased by reinvestment of production). Secondly, we extend our analysis to exogenous growth models, highlighting how explosive growth can emerge even without increasing returns to scale and while considering current hardware prices. Thirdly, we argue that substantial automation occurring within a brief window of time could raise the level of output sufficiently to give rise to explosive growth. Throughout, we emphasize that the rapid expansion of the total labor force, which encompasses human and AI workers, likely leads to explosive growth.
### Increasing returns to scale in production gives rise to explosive growth
One argument for explosive growth from AI invokes the increasing-returns production implied by standard R&D-based growth models. In such models, when AI suitably substitutes for human labor, all factors of production become
"accumulable" so that these can be increased through investment. Notably, this gives rise to a feedback mechanism where greater output gives rise to an increase in inputs that give rise to a greater-than-proportional increase in output. Hence, such models generically predict super-exponential growth conditional on AI that suitably substitutes for human labor. The striking feature of endogenous growth models to produce explosive growth was previously pointed out, by among others, Trammell and Korinek 2020.
If AI offers a suitable substitute for human labor, standard R&D-based growth models with increasing returns to scale predict super-exponential growth as long as the diminishing returns to R&D are not very steep. Consider a generalized version of an R&D-based growth model, which--due to the nonrivalry of ideas--gives rise to increasing returns:
\[Y(A,K)=AK^{\beta} \tag{1}\]
where \(A\) represents total factor productivity and \(K\) is the stock of capital (machines, computers, etc.). Capital accumulates in line with dedicated investment, as does total factor productivity. However, investments in total factor productivity have diminishing marginal returns as ideas get "harder to find" (as is well-documented in, for example, Bloom et al. 2020). Formally:
\[\frac{1}{A}\frac{dA}{dt}\propto A^{-\phi}I_{A}^{\lambda} \tag{2}\]
Standardly, motivated by the so-called "replication argument", we might suppose that \(\beta=1\). However, this assumption is not at all needed for our conclusion. Indeed, \(\beta<1\) still produces increasing returns to scale as long as the returns to idea-production diminish sufficiently slowly.
In particular, we show that as long as \(\lambda/\phi+\beta>1\), the economy exhibits increasing returns, which implies that such an economy will grow hyperbolically, i.e. as described by the differential equation \(\frac{dY}{dt}\sim Y^{c}\), where \(c\) is the returns to scale parameter (which is \(>1\) whenever \(\lambda/\phi+\beta>1\)).
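To see where this condition comes from, consider a brief sketch in which reinvestment shares are held constant, so that \(I_{A}\propto Y\) and \(dK/dt\propto Y\). Writing \(g_{X}\) for the growth rate of a variable \(X\), the model implies

\[g_{A}\propto A^{-\phi}Y^{\lambda},\qquad g_{K}\propto\frac{Y}{K},\qquad g_{Y}=g_{A}+\beta g_{K}.\]

If all growth rates were constant and positive, constancy of \(g_{A}\) would require \(\phi g_{A}=\lambda g_{Y}\) and constancy of \(g_{K}\) would require \(g_{K}=g_{Y}\); substituting into \(g_{Y}=g_{A}+\beta g_{K}\) gives

\[g_{Y}=\left(\frac{\lambda}{\phi}+\beta\right)g_{Y},\]

which is impossible whenever \(\lambda/\phi+\beta>1\). No balanced growth path with a constant positive growth rate exists in that regime; growth rates instead rise over time, producing the hyperbolic behavior described above.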
Using the estimates of Bloom et al. 2020, which provides perhaps the best evidence on the extent to which ideas get "harder to find", we find that hyperbolic growth occurs with values of \(\beta\) as low as 0.68. Hence, while standard economic arguments suggest we might expect \(\beta\approx 1\), hyperbolic growth with AI is predicted by R&D-based growth models even under highly conservative assumptions. This point has been noted elsewhere, notably by Davidson 2021: increasing returns to scale are difficult to avoid even when ideas get "harder to find" over time. Indeed, this outcome is consistent with fairly conservative assumptions of decreasing returns to inputs in final goods production.
Why might we take the conclusions from these models seriously? R&D-based growth models, and in particular, the semi-endogenous version, offer adequate explanations of recent and distant economic history, as has been noted in the literature. As such, the fact that it robustly predicts explosive growth from AI that suitably substitutes for human labor should be considered a relatively strong argument. Although obtaining high-quality empirical evidence to decide between competing growth theories remains challenging, the semi-endogenous account predicting explosive growth performs relatively well (see Table 1).
\begin{table}
\begin{tabular}{p{142.3pt} p{142.3pt}} \hline \hline
**Prediction** & **Explanation and References** \\ \hline Economic growth acceleration under Malthusian conditions & An acceleration of economic growth when the size of the population is limited by the available technology (see also Kremer 1993). This prediction is in line with the observed acceleration over recorded economic history, such as that of Bolt and Van Zanden 2020. While there is reasonable debate about how closely models that predict gradual economic acceleration fit distant economic data, and the extent to which such data is reliable (see, e.g. Garfinkel 2020; Roodman 2020), this model arguably captures key dynamics of the data. \\ \hline Non-increasing growth in global output & Non-increasing growth in global output in the mid-20th century concurrent with the observed slowing rates of population growth in middle and high-income countries (C. I. Jones 2022). The model predicts that slowing rates of population growth produce slowing rates of output growth, all things equal, and therefore does a decent job accounting for the general pattern of 20th-century growth. There is furthermore evidence that the semi-endogenous growth model fits recent empirical data on output, multifactor productivity, research intensity better than other models (see, e.g. Kruse-Andersen 2017; Herzer 2022). \\ \hline Maximum observed rate of long-term economic growth & The maximum observed rate of long-term economic growth should be on the order of the maximum rate of population growth.1 Semi-endogenous growth theory predicts that growth in output should be close to the rates of population growth (see Appendix C), and should therefore be on the order of 3\% per year, which is consistent with historical data. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of key predictions from the semi-endogenous growth theory with corresponding explanations and references.
There are alternative accounts of economic history that put more weight on culture and institutions compared to scale effects from population, capital, and ideas. We agree that these factors matter, but think they are best viewed as corrections on top of the semi-endogenous model. For instance, just as culture and institutions influenced which countries were the first to undergo the Industrial Revolution, we also believe that they will influence which countries will be the first ones to start experiencing explosive growth.
Nevertheless, it might still be appropriate to put some weight on such alternative explanations and appropriately be less confident that simply scaling the labor force will lead to explosive growth. However, the basic picture here still seems persuasive to us even if for some reason we believe this is not a good account of what happened in economic history, and so we still think this argument is strong in the absence of more specific critiques.
Semi-endogenous growth theory offers a comprehensive framework for understanding historical trends and patterns of economic growth. Although obtaining high-quality empirical evidence on growth theories remains challenging, the semi-endogenous account predicting explosive growth from AI systems that provide suitable substitutes for human labor presents moderately strong evidence supporting the explosive growth hypothesis. There is a possibility that current growth rates are shaped by additional bottlenecks beyond the fact that the current labor-stock is non-accumulable or that new bottlenecks may emerge shortly after AI substitutes for human labor. The exact nature of such a bottleneck remains uncertain, which warrants a cautious approach when evaluating future growth prospects (for a more in-depth discussion on this topic, see Section 3.2).
### The stock of digital workers could grow fast
The stock of AI systems that substitute for human workers could grow very fast once such systems have become technically feasible, which by itself could potentially expand output massively. Relaxing our earlier assumption of increasing returns to scale, we can show that even a simple exogenous growth model predicts explosive growth from AI because the stock of AI systems performing tasks that human labor previously did could grow sufficiently rapidly. Consider an exogenous growth model without technological progress:
\[Y(t)=AL(t)^{\alpha}K(t)^{1-\alpha},\]
Here \(L\) refers to workers: either digital workers in the form of AI systems or human workers. With the development of AI that presents a suitable substitute for human labor, we can suppose that the stocks of labor and capital grow as a result of investment:
\[\frac{dL(t)}{dt}=sfY(t)/\bar{c}-\delta_{L}L,\;\frac{dK(t)}{dt}=s(1-f)Y(t)-\delta _{K}K, \tag{3}\]
where \(f\) is the fraction of investment channelled towards AI and \(s\) is the saving rate of the economy. \(\bar{c}\) denotes the average dollar cost (on compute and electricity) of building an AI system that performs the same amount of work as a human laborer. \(\delta_{L},\delta_{K}\) are the depreciation rates for the effective labor and capital stocks, respectively. Assuming that \(A\) is constant, some algebra (see Appendix D) combined with the parametric assumptions presented in Table 2 reveals that the steady-state rate of growth in this model exceeds 30% per year if
\[\bar{c}\leq s^{10/7}\cdot 150,000\,\$/\text{worker}. \tag{4}\]
Hence if the cost of running an AI that substitutes for a human worker (\(\bar{c}\) whose units are \(\$/\text{worker}\)) is sufficiently low, exogenous growth models predict that the effective labor stock should grow sufficiently fast to give rise to explosive growth. A similar argument to the effect that a rapidly expanding "digital workforce" can result in massive expansions in output has previously been made by Karnofsky 2021.
We can provide a rough estimate of AGI runtime costs by relying on estimates of the cost of computation and the estimated cost of running the human brain. Right now, machine learning hardware costs around \(2\times 10^{18}\,\text{FLOP}/(\$\cdot year)\),2 and Carlsmith 2020 provides a best-guess estimate of \(10^{15}\,\text{FLOP}/\text{s}\approx 3\times 10^{22}\,\text{FLOP}/\text{year}\) for the rate
\begin{table}
\begin{tabular}{c c} \hline \hline
**Parameter** & **Value** \\ \hline Value of US capital stock & \$70T \\ US Labor Force & 165M \\ \(\alpha\) & 0.7 \\ \(\delta_{L},\delta_{K}\) & \(\ll\) 30\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of key parameters and their values. \(\delta_{L},\delta_{K}\) denote the depreciation rates for effective labor and capital stocks, respectively.
of computation done by the human brain. Combining these two estimates suggests a value of around \(\bar{c}=1.5\times 10^{4}\,\$/\)worker. In our model, this is consistent with explosive growth if
\[(1.5\times 10^{4}) \leq s^{10/7}\cdot(1.5\times 10^{5}) \tag{5}\] \[0.1 \leq s^{10/7}\] (6) \[0.2 \leq s. \tag{7}\]
In other words, this would hold if saving rates are in line with those historically observed in Western countries, and significantly lower than those observed in East Asian countries such as Japan, China, and Singapore. In addition, saving rates could be higher under AI-driven growth, given that AI could increase the productivity of capital investments and concentrate wealth among those with a high propensity to save (Trammell and Korinek, 2020).
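A quick numerical restatement of the arithmetic above (a sketch; the hardware and brain-compute figures are the point estimates quoted in the text):

```python
flop_per_dollar_year = 2e18            # ML hardware cost-efficiency, FLOP per ($ * year)
brain_flop_per_year = 1e15 * 3.15e7    # ~1e15 FLOP/s (Carlsmith 2020) over one year
c_bar = brain_flop_per_year / flop_per_dollar_year
print(f"c_bar ≈ {c_bar:.2g} $/worker")                          # ≈ 1.6e4 $/worker
print(f"required saving rate ≈ {(c_bar / 1.5e5) ** 0.7:.2f}")   # ≈ 0.2, from Eq. (4)
```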
Overall, this calculation suggests even if we very conservatively assume that hardware technology stops improving, that we operate in a constant-returns to scale regime, and that AIs are only as productive as the average worker in the US, explosive growth is still a plausible outcome of labor becoming accumulable if our AGI software can match the performance of the human brain.
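To complement the static calculation, here is a minimal dynamic sketch of the model around Eq. (3). The calibration is assumed rather than taken from Appendix D: \(A\) is set so that initial output is roughly \$23T/year given the Table 2 stocks, investment flows each period to whichever factor has the higher marginal product per dollar, and depreciation is neglected:

```python
alpha, s, c_bar = 0.7, 0.2, 1.5e4        # c_bar: $/worker cost of a human-equivalent AI
L, K = 165e6, 70e12                      # Table 2: US labor force and capital stock
A = 23e12 / (L**alpha * K**(1 - alpha))  # assumed calibration to ~$23T/year of output

dt = 0.01
for year in range(1, 31):
    Y_start = A * L**alpha * K**(1 - alpha)
    for _ in range(int(1 / dt)):
        Y = A * L**alpha * K**(1 - alpha)
        # Greedy rule: invest in AI labor if its marginal product per dollar is higher.
        if alpha * Y / (c_bar * L) > (1 - alpha) * Y / K:
            L += s * Y / c_bar * dt
        else:
            K += s * Y * dt
    Y_end = A * L**alpha * K**(1 - alpha)
    if year % 5 == 0:
        print(f"year {year:2d}: annual output growth ≈ {Y_end / Y_start - 1:.0%}")
```

Under these assumptions the printed annual growth rate settles well above the 30% threshold within a few years, consistent with the steady-state condition (4).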
It should be noted that the depreciation rates of the stocks of compute and capital (\(\delta_{L}\) and \(\delta_{K}\)) are assumed to be negligible compared to the growth rate (see Table 2). If we were to relax this assumption, then we would need precise estimates of these numbers and, in general, higher saving rates. However, even with depreciation rates of \(\sim 30\%/\)year, we only need to double the saving rate to \(s=0.4\) to still get explosive growth, a rate which has historically been observed in e.g. East Asian countries.
Moreover, we might need to account for the cost of robotic systems in addition to the computational costs of running the software. While state-of-the-art industrial robotic systems, e.g. for spot welding, currently cost on the order of \$100k per unit (Sirkin, Zinser, and Rose, 2015), it is difficult to predict how much this would add to the cost basis, \(\bar{c}\). This is because there could be substantial reductions in prices as we proceed along a learning curve when robotics usage expands (Korus, 2019).
While the model above is only a toy model, it nevertheless illustrates the key importance of the parameter \(\bar{c}\) or something else fulfilling the same role for any endogenous growth model involving AI-driven automation.
The preceding analysis relies on a static calculation that does not account for potential price effects. In other words, it overlooks how demand could influence the price of computation. Additionally, the model's conclusions rest heavily on estimates of the computational requirements of the human brain, which are marked by considerable uncertainty. If we were to consider the higher-end estimates of the computational cost of running the human brain in Carlsmith 2020 of \(10^{16}\,\text{FLOP}/\text{s}\), explosive growth looks unlikely at current prices.
Remarkably, this result holds even if we assume that there are delays to investment, in the sense that "realized investment" is an exponential moving average of past inputs to investment. In other words, we model investment to move more gradually, thereby avoiding short and perhaps unrealistic bursts of capital accumulation and output growth (see Appendix E).
However, it is important to note that hardware prices are expected to decrease considerably over time, with a current halving time of roughly 2.5 years (Hobbhahn and Besiroglu, 2022). This indicates that the cost of running a human-equivalent AI is likely to become more affordable in the future. Therefore, the argument presented in the analysis becomes more persuasive if one anticipates that AGI will take around 10 to 20 years to develop, a period during which computer hardware could become one or two orders of magnitude more cost-effective. This dynamic could potentially amplify the economic growth impact of labor substitution by AI. In addition to this, it's also plausible that \(\bar{c}\) is lower because AIs could be more capable than humans at runtime compute parity. Here are a few reasons why we might expect this to be the case:
1. A single AI system trained only once can be deployed in many different settings in the economy given a sufficient runtime compute budget, while this is impossible to do for humans. In other words, it's much easier to copy AI systems than it is to copy humans. This has many beneficial effects. It allows us to amortize the cost of training large systems over a vast number of runtime instances, something impossible to do with human lifetime learning. In addition, it means we can pick the best-performing systems at a given runtime compute level and simply copy those, instead of sampling from a wide distribution of conscientiousness, intelligence, communication skills, etc. that we must do when the labor force is made up of humans.
2. Software progress on AI capabilities might not stop at human levels. Indeed, there's no particularly good reason to suppose that human brains are optimal from the point of view of converting runtime compute into capabilities, given that humans are evidence that previous species were not optimal. Even one or two orders of magnitude of decrease in \(\bar{c}\) from software progress would strengthen the argument in this section considerably.
An important criticism of this argument is that scaling GWP along the _intensive margin_ and the _extensive margin_ might be meaningfully different. For instance, it might very well be true that doubling the world population over a sufficiently long period of time leads to a doubling in gross world product, but without this increase in population leading to faster technological progress, per capita income would stay the same. If we do not count the consumption of AIs as part of GWP in our model, then our thesis is that increasing the number of AIs will lead to higher per capita consumption among humans, and perhaps it is more difficult to get explosive growth this way without being able to scale the quality of the services in the economy.
We think there is some kernel of truth in this argument, and we expect it to make explosive growth significantly more difficult in worlds where AI-driven automation is unable to meaningfully accelerate R&D, but some scaling along the intensive margin is possible even without technological advances. There are already substantial differences in personal income across the world, and even within rich countries. In most countries, simply raising the average standard of living in the country to the standards enjoyed by the wealthiest residents would lead to orders of magnitude increase in gross domestic product, and we know that if resource constraints are sufficiently loose, doing this requires no new technology. Resource constraints could of course pose obstacles, but those are no more binding when we're talking about an increase along the intensive margin than they are when the increase happens along the extensive margin instead.
Even without the assumption of increasing returns to scale, standard economic growth models predict substantial acceleration in economic growth rates if we assume substitutes for human labor at realistic costs in the model. While we do not strongly endorse the conclusions of this calculation due to the many simplifications we make throughout, we think the argument still provides evidence that explosive growth is more likely than we might think, as it occurs even in the absence of endogenous technological progress and hardware efficiency growth.
### AI automation could have massive transitory effects
In growth theory, there is an important qualitative distinction between _growth effects_ and _level effects_ (Lucas Jr 1988). A growth effect is assumed to be either permanent or last for a long time (e.g. changes to the steady state or balanced growth path), while a level effect is a one-time, transitory increase in the level of economic output that does not translate into higher growth in the future.
It might be the case that even if AI fails to lead to a long-term _growth effect_, there might still be a _level effect_ from human-level artificial intelligence being deployed throughout the economy, and a change in the level of gross world product that happens over a sufficiently short window of time could lead to transitory growth rates that clear the threshold of "explosive growth".
To quantify these effects, consider a toy model in which output is produced by a CES production function over a unit continuum of tasks:
\[Y=A\left(\int_{0}^{1}I_{i}^{\rho}\,di\right)^{1/\rho} \tag{8}\]
where \(A>0\) is a measure of productivity. We will not be explicit about what the inputs \(I_{i}\) represent for the sake of generality, but we assume that there is some total stock of inputs \(I\) available in the economy that can be allocated across different tasks. Moreover, \(\rho<0\) so that tasks are complements, thereby giving rise to 'bottlenecks' in production: the more negative the value of \(\rho\), the more severe the bottlenecks.
Let \(f\) denote the fraction of tasks that _cannot_ be cheaply automated. We show that even when \(f\) is relatively large (e.g. 10%), the level effect from AI automation is very substantial, despite significant bottlenecks in production. Given that there is some total stock of inputs \(I\) available in the economy that can be allocated across different tasks, we have the constraint:
\[\int_{0}^{1}I_{i}\,di=I,\ \ \forall i\,I_{i}\geq 0 \tag{9}\]
When \(\rho<0\) so that the tasks are complements, \(Y\) is optimized when \(I_{i}=I\) for all \(i\) and therefore \(Y=AI\).
To see the impact of automation, let's suppose that for some \(0\leq f\leq 1\), a fraction \(1-f\) of tasks are "cheaply automated". In practice, this is likely to mean that we get many orders of magnitude more inputs on these tasks after the automation than before. When complementarity effects are strong (so when \(\rho\ll 0\)) we can approximate by assuming infinite input on automated tasks instead, as this simplifies the calculation without making much of a difference to the
final result. In this case, our optimization problem becomes
\[Y=A\left(\int_{0}^{f}I_{i}^{\rho}\,di\right)^{1/\rho},\text{ subject to }\int_{0}^{f}I_{i}\,di=I,\ \ \forall i\,I_{i}\geq 0. \tag{10}\]
This problem, as before, is solved by setting the inputs of all tasks equal to each other: \(I_{i}=I/f\) for all \(i\). In this case, we get that \(Y=AIf^{(1-\rho)/\rho}\), so GWP is higher by a factor of \(f^{(1-\rho)/\rho}\). Figure 1 contains the values of this function evaluated at some plausible values of \(f\) and \(\rho\).
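To make the magnitudes concrete, here is a minimal Python sketch of the level-effect factor \(f^{(1-\rho)/\rho}\) derived above. The helper for converting an elasticity of substitution into \(\rho\) and the specific parameter grid are our own illustrative choices, not values taken from the text.

```python
# Back-of-envelope check of the level effect f**((1 - rho) / rho) that results
# from cheaply automating a fraction (1 - f) of tasks in the CES toy model above.
# A minimal sketch; the parameter grid below is illustrative only.

def sigma_to_rho(sigma: float) -> float:
    """Convert an elasticity of substitution sigma into the CES exponent rho = (sigma - 1) / sigma."""
    return (sigma - 1.0) / sigma

def level_effect(f: float, rho: float) -> float:
    """Factor by which post-automation output Y = A * I * f**((1 - rho) / rho) exceeds Y = A * I."""
    return f ** ((1.0 - rho) / rho)

if __name__ == "__main__":
    for f in (0.05, 0.10, 0.25, 0.50):
        for rho in (-0.5, -1.0, -2.0):
            print(f"f = {f:>4.0%}, rho = {rho:>4}: GWP scales by {level_effect(f, rho):8.1f}x")
```

For example, with \(f=25\%\) and \(\rho=-2\) the sketch returns a factor of 8, matching the worked case in the text.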
Indeed, the level effects from partial AI automation are substantial, even when the assumptions are relatively pessimistic. For instance, Knoblach, Roessler, and Zwerschke 2020 looks at the elasticity of substitution between capital and labor in the US economy and finds a plausible range from \(0.45\) to \(0.87\), which corresponds to values of \(\rho=(\sigma-1)/\sigma\) ranging from \(-1.2\) to \(-0.14\). \(\rho=-2\) is perhaps below the standard range that is considered plausible, implying stronger complementarities between tasks than we currently believe exist between capital and labor.
Overall, even if the other arguments fail to go through and we cannot even attain AI that perfectly substitutes for humans across all tasks, there's still room for explosive growth if AI can automate e.g. \(90\%\) of tasks in the economy and it can do so in a period of less than \(10\) years. We can relax these assumptions even further if we do not make the pessimistic assumption of \(\rho=-2\).
The argument we present above can fail to hold for many different reasons:
1. The approximation that AI will be infinitely productive on automated tasks makes the numbers look more impressive than they should when \(\rho,f\) are close to zero. For instance, if we believe AI will only ever contribute nine times the input intensity on any automated task, the most we can get out of AI automation is an order of magnitude increase in gross world product, even assuming full automation. Indeed, if we rely on our estimates of the computational cost of running the human brain from Section 2.2, we can estimate that all the computing hardware available in the world today can perhaps run 100 million simulated workers at most. If that's the best we can do, human-level AI software will fall far short of increasing input intensities at all, let alone setting them to infinity. In the future, this can be overcome by manufacturing more chips, improving hardware efficiency, etc., but these are outside the scope of this argument.
2. We do not have good information about what value for \(f\) should be considered plausible. In a world where full automation is attained, we can turn our model into a coarse approximation and interpret \(f\) as the fraction of tasks where running human-equivalent AIs is comparable to or more expensive than employing humans to perform the same tasks, but there's no obvious reason within the framework of this argument why this quantity should be e.g. less than \(25\%\). Our previous calculations based on the cost of human brains are suggestive that this number should be small, but conditional on the increasing returns to scale and digital worker cost arguments failing, we might not want to put substantial weight on this argument either.
3. Even if the argument gives us correct values for the factor increase in GWP we should expect, this increase can simply drag out over a sufficiently long period of time such that we do not get an explosive rate of growth
Figure 1: Level effects of partial AI automation for various values of the substitution parameter \(\rho\) of the CES aggregator function.
at any point. We address some objections which argue for such a possibility in Section 3, but our responses are not decisive and we have to concede that long delays are indeed possible.
This argument suggests that explosive growth remains possible even if AI does not result in full automation and even if humans continue to occupy roles in the economy that bottleneck production. As such, it's a "worst case argument" which leads us to put some probability on explosive growth even in such worlds. However, we consider it to be on shakier ground than the other arguments we present for explosive growth, and recommend against taking this argument too seriously in worlds where our other arguments for explosive growth have failed.
## 3 Arguments against the explosive growth hypothesis
In this section, we provide accounts of arguments against AI-driven explosive growth. For each, we assess their plausibility and, where possible, attempt to estimate the permitted growth rates the argument implies. While several of the arguments initially seem concerning, upon closer analysis most do not appear decisive. However, a few remain non-trivial objections that could plausibly reduce the probability of explosive growth, especially in conjunction. We examine each argument in turn and aim to draw tentative conclusions about their effects on the likelihood of explosive growth.
### Regulations can slow down the economic impact of AI
This objection states that the training or deployment of AI systems will be sufficiently impeded by regulation to reduce the economic growth effects of AI. The possibility of the growth effects from AI automation being curtailed by regulation features, for example, in B. Jones 2021; Yudkowsky 2021; Garfinkel 2021. Presumably, there are many reasons this might happen: generic fear or reluctance regarding powerful new technologies, concerns about privacy or intellectual property leading to a shortage of training data, unwillingness to let AI systems perform tasks that can be automated without human supervision due to concerns about legal liabilities, etc. Such regulations may very well be appropriate and prudent, and their negative growth effects could plausibly be outweighed by other considerations around safety and social welfare. The basic argument, though, is that even if AI would indeed produce explosive growth if this were allowed to happen by governments or relevant international bodies, this possibility may not be realized due to dedicated efforts coordinating to slow this process down.
Within the deep learning paradigm that has been dominant in AI research over the past decade, what seems to matter most for the performance of AI systems are the number of examples they see during training and the number of parameters they have - which is in turn dictated by the amount of compute developers have at their disposal. Quantitative support for this statement is provided by the growing literature on _scaling laws_, which describe the performance of a deep learning model in terms of a few macroscopic properties of the model such as the parameter count and the training dataset size. For more on this in the context of large language models, see Kaplan et al. 2020 and Hoffmann et al. 2022.
In light of this, the regulation objection looks more plausible than it did a decade ago because it seems that AI development will be largely driven by access to vast amounts of data and computation. The large physical footprint of the computation capacity required for training and deploying advanced AI would likely make the process easier to regulate, and intellectual property laws can be a significant impediment to the data scaling part of the equation if they were to be interpreted in a manner unfavorable to AI labs. So we do not think we can rule out this scenario as it stands, especially if in the future there are large and visible alignment failures of AI systems that scare people into action.
However, there are effects pushing in the opposite direction. Insofar as being in possession of better AI systems becomes a matter of national security, we can expect any coordination by governments across the world to slow down AI development to be imperfect. Furthermore, the scale of the potential economic value that AGI can create is enormous: it's orders of magnitude beyond any other recent innovation we can think of, mainly because of its credible potential to restore the historical trajectory of accelerating growth. These factors create strong incentives for governments to allow the widespread deployment of AGI systems.
We also have to consider algorithmic progress and improving hardware efficiency. While scaling laws give a good description of the performance of ML systems at a particular level of algorithmic efficiency, over time we develop better software and this means we need fewer resources to achieve the same level of performance. Hernandez and Brown 2020 estimates the pace of algorithmic efficiency improvements in computer vision as one doubling every 16 months and Erdil and Besiroglu 2022 estimates one doubling every 9 months, though with wide confidence intervals. If these rates of progress are at least within the right ballpark and hold up across many orders of magnitude of progress, eventually AGI systems could become quite cheap to train.
In addition, the falling price of computation over time due to hardware efficiency progress means this represents an increasingly smaller fraction of global spending on computation. To keep up with these two effects, increasingly strict regimes of surveillance could eventually be required. The theoretical lower bound on the resource needs of AGI set by
the human brain should loom large in our thoughts here: the existence of the human brain means that in principle we do not need more energy or data than is used by a human to achieve human-level performance, and tracking every human born in the world would require a surveillance regime the likes of which we have never seen so far. We think the first AGI systems will require substantially more computation and data than the human brain does, but over time there's no reason why these costs should not fall to the level of the human brain or even further below.
In light of the above discussion, we think our baseline scenario here for AI regulation should be more like nuclear arms control and less like the regulation of nuclear energy: coordination on nuclear arms control does happen, but it is quite imperfect and hasn't stopped nuclear proliferation from taking place. This is because we think the incentives for AI adoption are more similar to the incentives for nuclear proliferation than the incentives for using nuclear energy, as the economic value that would be unlocked by AGI is far greater and this also has the potential to directly translate into overwhelming military advantage against adversaries.
Here are some concrete ways in which regulation could be used to slow down the economic impact of AI:
1. Place restrictions or otherwise impose additional costs on large training runs, similar to the restrictions that now exist on nuclear power. The large resource footprint of training runs past the \(10^{27}\) FLOP scale or so should make these enforceable for some time.
2. Prohibit the use of AI for certain economic activities. For instance, laws could be created or interpreted to bar the use of AI in courtrooms or at hospitals without adequate human supervision. This would introduce an artificial bottleneck that would stop AI from fully automating some tasks.
3. Use intellectual property laws to prevent the use of certain kinds of data for the training of AI systems. A sufficiently expansive interpretation of existing intellectual property legislation could prevent AI from being usefully monetized, reducing the incentive for private actors to invest resources into developing better AI systems.
While implementing such regulations may hinder the development or deployment of AI, the feasibility of enacting and enforcing them remains uncertain. Firstly, it is unclear whether such policies can reliably remain enforced over a sufficiently large, possibly global, jurisdiction for multiple decades or longer. The potential value of AI deployment could be immense, with the prospect of increasing output by several orders of magnitude. Consequently, this would likely create formidable disincentives for imposing restrictions, as well as powerful incentives for eliminating or bypassing any existing constraints. Secondly, the difficulties with enforcing such restrictions might become large as software improvements bring the capital costs of AI training down. Over time, enforcing such restrictions will require increasingly ubiquitous global surveillance.
The historical record of regulating technologies that could boost output tenfold is sparse because few, if any, such technologies have previously been developed. Perhaps the closest possible analogs are a cluster of agricultural technologies that were introduced during the Neolithic Revolution or the manufacturing technologies that contributed to the Industrial Revolution (the steam engine, the spinning jenny, the cotton gin). While England attempted to forestall the diffusion of some key Industrial Revolution technologies by prohibiting the emigration of skilled workers and the export of machinery, these protectionist policies proved largely ineffective (Jeremy 1977). From the 1780s to the 1840s, skilled workers, machines, and blueprints were frequently smuggled out of the country despite the bans, and by the 1840s, with industrialization advancing rapidly, the policies were seen as futile and repealed (Ibid.). In summary, England failed to meaningfully slow the international diffusion of its industrial technologies. The experience highlights the challenges of restricting technologies that offer major economic gains.
As far as we can tell, there is no compelling evidence to suggest that technologies involved in prior shocks to production technologies could have been effectively regulated with the effect of not just delaying such shocks but also substantially dampening their growth effects.
Overall, we conclude that regulating the training and deployment of AI may delay its economic impact, but there is no compelling reason to be confident that its development and application would be sufficiently prolonged to maintain historical economic growth rates for an extended period of several decades. We do not rule out the possibility, but we would judge it to be **unlikely** that regulation of the training and deployment of AI will block explosive growth.
### Output is bottlenecked by other non-accumulable factors of production
The endogenous growth theory argument for explosive growth from Section 2.1 only implies that we should expect constant returns to scale on all physically embodied inputs jointly. Labor and capital are physically embodied inputs, but they might not be the only important ones: other inputs such as energy or land could be just as important, and if they cannot be accumulated through better technology, perhaps this means AGI-driven growth can get short-circuited by its dependence on these non-accumulable factors before reaching the threshold of "explosive growth". In addition,
just like population, there might be intrinsic timescales that block currently accumulable inputs such as physical capital from being accumulated arbitrarily quickly. If true, this could be a strong objection against the explosive growth view.
Some version of this argument is certainly sound: there must eventually be some resource constraints that prevent output from growing arbitrarily large. The important question about this argument is not whether it holds _eventually_, but whether it holds quickly enough to preclude explosive growth.
We estimate that the diminishing returns structure on idea production implied by Bloom et al. 2020 means that we need the returns to scale on accumulable inputs to be at least around \(d\approx 0.68\) for explosive growth to occur (see Appendix B). Theoretically, we have reason to believe that \(d=1\) (i.e. we have constant returns to scale) if we consider _all_ physically embodied inputs. However, not all such inputs may be accumulable: as a naive example, if empty space becomes a valuable resource, then regardless of how much output we invest the speed at which we can grow our access to space might be bounded by the speed of light. There's no _a priori_ argument which can settle the question of the returns to scale on accumulable inputs, and we must also consider the possibility that there might be strong complementarity between presently accumulable inputs such as capital and non-accumulable inputs such as land or empty space. We must examine the argument in greater detail to make a judgment about its strength.
The outside view consideration is that economic growth has accelerated by many orders of magnitude in the past: indeed, this is the empirical regularity for which the semi-endogenous growth theory provides an explanation. Factors which bottleneck this acceleration do not seem commonplace. We might look at the \(1.5\) order of magnitude increase in growth rates since the agricultural era as evidence that new bottlenecking factors such as population growth appear at a rate of roughly once every \(1.5\) orders of magnitude of acceleration. Using the time-invariant version of Laplace's rule from Sevilla and Erdil 2022, this suggests a probability of \(\sim 1-1/(1+1/1.5)=40\%\) that one such bottleneck appears before world economic growth accelerates by another order of magnitude.
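Spelling out the arithmetic behind this figure (with \(1.5\) orders of magnitude of 'elapsed' acceleration and a one order of magnitude forward 'window', which is our reading of the calculation above):

\[1-\frac{1}{1+1/1.5}=\frac{1/1.5}{1+1/1.5}=\frac{1}{1.5+1}=0.4.\]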
When we get down to specifics, the most plausible bottleneck factors that we can think of are land, energy, and capital. On the energy front: on average, about \(4.4\times 10^{16}\) W of solar power hits the Earth (National Aeronautics and Space Administration 2005), while global yearly energy consumption is about \(4\times 10^{13}\) W (Ritchie, Roser, and Rosado 2022), suggesting that energy consumption could expand by roughly 3 orders of magnitude. Similarly, only around 1.5 million km\({}^{2}\) of the Earth's 100 million km\({}^{2}\) of habitable land is urban or built-up, which suggests around 2 orders of magnitude of land that could be urbanized or built up. Even if these constraints cannot be overcome, they still leave at least \(2\) orders of magnitude of room to scale up gross world product. If we assume no improvements in efficiency, so that resource consumption needs to be scaled up proportionally to output, such constraints would still permit explosive growth if the transition to full automation took 20 years or less. Clearly, then, such constraints do not block growth accelerations, at least when AI automation occurs swiftly.
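As a quick sanity check, the headroom implied by the figures quoted above can be computed directly. This is a minimal sketch using only the numbers cited in this paragraph.

```python
import math

# Rough headroom estimates using the figures quoted in the text above
# (incident solar power, current energy consumption, built-up vs habitable land).
solar_power_w = 4.4e16        # solar power hitting the Earth (NASA 2005 figure cited above)
energy_use_w = 4e13           # current global energy consumption
habitable_land_km2 = 100e6    # habitable land area
built_up_land_km2 = 1.5e6     # urban / built-up land area

energy_headroom_oom = math.log10(solar_power_w / energy_use_w)
land_headroom_oom = math.log10(habitable_land_km2 / built_up_land_km2)

print(f"Energy headroom: ~{energy_headroom_oom:.1f} orders of magnitude")  # ~3.0
print(f"Land headroom:   ~{land_headroom_oom:.1f} orders of magnitude")    # ~1.8
```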
We examine quantitatively the prospect that some form of physical capital could end up being a bottlenecking factor and conclude that for the argument to block explosive growth, we need adjustments in investment to be significantly slower than the growth rate of the broader economy (see Appendix E). In particular, if the worldwide stock of physical capital can be grown at a rate of \(30\%/\)year, there's no reason to suppose that explosive growth would be prevented by investment delays or adjustment costs.
The assessment of the likelihood of physical capital becoming a bottlenecking factor, therefore, comes down to the quantitative question of how long we can expect fundamental delays to investment to be. The experience of Chinese catch-up growth shows that sustained growth rates on the order of \(10\%/\)year and one-time growth rates on the order of \(15\%/\)year have precedent in economic history. To reach the threshold of explosive growth, we only need a doubling of this final rate of increase in a world where AI will also be capable of assisting with the process of capital stock adjustment, which does not seem sufficiently far out of distribution for us to seriously doubt its feasibility.
Overall, the inside view here seems somewhat ambiguous and it's difficult to know in which direction we should update given the above paragraph. The fact that the joint returns to scale on labor and capital right now seem to be well over the threshold of \(d=0.68\) required for explosive growth is reasonably good evidence that we should expect at least some period of growth acceleration after full automation, but this period might be short and it might stop before we actually reach \(30\%/\)year in gross world product growth rates.
Still, we think the threshold of \(0.68\) is likely quite easy to clear: it sits well below where existing empirical evidence places the actual returns to scale (e.g. Kariel and Savagar 2022; Basu and Fernald 1997). In addition, even in a world where this objection is valid, the argument from accumulating digital workers from Section 2.2 could still produce explosive growth for some time during the transition from human labor to AI labor. As a consequence, our estimate of the probability that this objection blocks explosive growth is substantially smaller than the naive outside view figure of \(40\%\).
Our final conclusion is that this argument is plausible on the outside view and the inside view evidence makes the argument seem somewhat less compelling, though it is by no means sufficient to rule it out. Our final judgment is that it's **unlikely** this objection blocks explosive growth.
### Technological progress and task automation by AI will be slow
This argument posits that the requirements for automating different tasks in the economy span a wide range in computation, data or both. As these resources can only be accumulated in a gradual fashion, it will take a long time to get from the point where AI starts to have a large economic impact by automating tasks that are the easiest to automate to the point where AI is able to fully automate the economy, and this long waiting period will spread out the economic impact sufficiently that we end up not observing explosive growth.
The effect of this argument is similar to the argument from regulation, but the underlying driver is different. Here, the reason is a physical property of AI systems as such, and not a property of how human civilization will react to the prospect of full automation of the world economy by AI. In both cases, however, there is some force that causes the large impact of full automation to be spread out over a long period of time, and this is what precludes explosive growth.
This objection rests on an empirical claim about the relative difficulty and resource requirements of automating different tasks in the economy, specifically that the distribution of the amount of computation, data, etc. required to use AI to automate different tasks in the economy is wide and/or fat-tailed. In other words, we need _some_ tasks to be easy and automated early on, and _some_ tasks to be very difficult and to take many orders of magnitude more resources to automate.
If this objection holds, it could indeed be why explosive growth does not occur: a 4 order of magnitude (\(4\,\)OOM hereafter) increase in gross world product spread out evenly over \(80\) years would not produce explosive growth, for instance. A specific plausible story here is that "physically embodied" tasks such as general-purpose robotics will be quite difficult to automate - solving them will require large amounts of computation, data, and researcher effort.
This is among the more compelling reasons why we might not get explosive growth. However, on the inside view, it still seems rather unlikely to be correct. There are two main reasons for this:
1. Slow deployments and automation require large gaps in compute and data requirements between the point where AI starts to accelerate economic growth and the point where AI is able to fully automate the world economy. However, inside-view investigations into AI (such as Cotra 2020; Davidson 2023) do not usually support such large gaps. The largest plausible gap in training computation between AI starting to have a noticeable macroeconomic impact and full automation that has been suggested in such inside-view investigations is around 10 orders of magnitude, and even this gap would be crossed fairly quickly if we add up the effects of hardware scaling, improving hardware and software efficiency, etc. It's implausible that we could get a delay that's as long as 80 years, and delays on the order of 30-40 years seem like the slowest that takeoff could end up being.
Figure 2: The distribution of growth effects from automation on gross world product across different values of the substitution parameter \(\rho\). The figure illustrates how output growth becomes increasingly back-loaded as \(\rho\) becomes more negative, indicating stronger complementarity effects. For instance, \(\rho=-5\) corresponds to an elasticity of substitution of \(\sigma=1/6\) (extremely strong complementarity), and \(\rho=-0.5\) corresponds to \(\sigma=2/3\) (moderate complementarity). All scenarios assume a 100x total level effect from full automation.
2. Even if the delay period is much longer than 40 years, in a straightforward constant elasticity of substitution (CES) world, where tasks performed in the economy are gross complements so that automated 'outputs' are imperfect substitutes for non-automated 'outputs', the final tasks to get automated are substantially more valuable than earlier tasks. This means that a constant _rate_ of task automation (say, automate \(1\%\) of tasks that humans can do in the year \(2020\) every year) leads to initially slow growth that becomes extremely fast towards the end, as can be seen in Figure 2 and in the numerical sketch following this list. This intuition seems compelling: the final tasks to get automated remove the final bottlenecks in production, so if we believe full automation is actually possible it's difficult to construct a scenario in which we do not get explosive growth here.
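The following minimal sketch illustrates the back-loading. The setup is a simplification of ours, not the exact model behind Figure 2: each non-automated task produces one unit of output, each automated task produces \(q\) units, and \(q\) is calibrated so that full automation yields a 100x level effect.

```python
# A minimal sketch of how a constant *rate* of task automation produces
# back-loaded output growth under a CES aggregator (cf. Figure 2).
# Assumptions (ours, for illustration): non-automated tasks produce 1 unit each,
# automated tasks produce q units each, with q set so full automation gives 100x.

def output(a: float, rho: float, full_automation_gain: float = 100.0) -> float:
    """CES output when a fraction `a` of tasks has been automated (rho < 0)."""
    q = full_automation_gain
    return ((1.0 - a) * 1.0 ** rho + a * q ** rho) ** (1.0 / rho)

if __name__ == "__main__":
    for rho in (-0.5, -2.0, -5.0):
        levels = [output(a, rho) for a in (0.0, 0.5, 0.9, 0.99, 1.0)]
        print(f"rho = {rho:>4}: " + ", ".join(f"{y:6.1f}x" for y in levels))
    # Most of the output gain arrives only once the last few tasks are automated,
    # and this back-loading is stronger the more negative rho is.
```

With \(\rho=-2\), for example, output has only risen by a factor of about 3 when 90% of tasks are automated, and the bulk of the 100x gain arrives with the final few percent of tasks.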
While we consider this objection unlikely to be correct, it's internally coherent and more compelling compared to most of the other objections. We expect it to be **unlikely** that this objection blocks explosive growth.
### Alignment difficulties could reduce the economic impact of AI
If AI alignment--the challenge of steering artificial intelligence (AI) systems to behave according to intended goals and avoid unintended harmful behaviors--turns out to be so difficult that it's hard to get AI systems to reliably do what we want in real-world deployment, then aside from these systems being regulated more strictly, it could also simply not be in the private interest of any actor to deploy such systems at large scale. The capabilities of an AI system may seem impressive in the lab, but if private actors are unable to confidently align these systems to accomplish the tasks they want done safely, it's hard to foresee such unaligned AI generating major economic impact before these alignment problems are solved.
It might also be challenging for AI systems to be deployed to perform certain tasks without human supervision. For instance, as outlined in Ji et al. 2023, a common alignment problem of modern large language models is their tendency to hallucinate facts that are wrong: when asked to provide references for a claim they have made, they will often respond with references formatted according to the proper guidelines but referring to papers that do not exist. If the tendency for models to hallucinate facts cannot be entirely fixed, it might be necessary for a human in the loop to be present in any application where strict agreement with facts is highly important, which would mean there are limits to how far poorly aligned AI systems are able to automate such tasks.
There are many other paths to alignment problems leading to AI performing below the economic potential we might attribute to it based strictly on capabilities. As another example, if humans are concerned about misaligned AI systems having too much agency, they might deliberately try to engineer AI systems to be less independent in their decision-making than humans. This would then require humans to play critical decision-making roles in the economy, and then human decision-making capabilities could end up being a bottleneck in the way of explosive growth.
While the motivation behind the alignment difficulty argument is quite different from the other arguments we consider, formally its effects are likely going to be equivalent to limiting the fraction of tasks AIs are able to perform in the economy. For instance, if humans must occupy key decision-making roles in the economy, this means that effectively these tasks cannot be automated from an economic point of view. The effect of some tasks requiring human supervision to perform is similar.
This means our quantitative basis for assessing the above argument should be similar to what we outline in Section 2.3. If we think misalignment is likely to be so bad that e.g. \(f=25\%\) of tasks are likely to remain unautomated, and the elasticity of substitution across tasks \(\sigma\approx 1/3\), then it's quite plausible that this argument blocks explosive growth. However, as discussed in the aforementioned section; \(25\%\) is a large fraction of the tasks in the economy, and \(\sigma\approx 1/3\) is a high degree of complementarity across tasks. As before, we consider both of these parameter choices to be rather unfavorable, but the alignment difficulty argument pushes us to think that perhaps a \(25\%\) lower bound on the fraction of unautomated tasks in the economy is not as implausible as it might seem.
It is rather unclear what probability distribution is implied by this argument over the parameter \(f\) of tasks that AIs won't be able to automate "early on". However, it seems likely to us that this distribution puts significant probability mass on values that are rather small and would not block explosive growth even with moderate values of \(\sigma\), with potentially long delays in the R&D process that would push this fraction down further, echoing the arguments from Section 3.2. Overall, our assessment is that this argument is most likely not going to block explosive growth, but its influence cannot be ruled out, especially in worlds where \(\sigma\) turns out to be smaller than we expect.
Overall, our conclusion is that alignment difficulties are **unlikely** to block explosive growth. Furthermore, this argument is in a family of arguments whose plausibility is correlated with one another due to the confounding influence of the elasticity of substitution parameter \(\sigma\), and therefore it's important to take care when aggregating the probabilities: the disjunction of these arguments is less likely than would be implied if we simply treated them as independent events and blithely multiplied their individual probabilities of not being blockers to get a final answer.
### R&D may be harder than expected
One argument for why we might not see explosive growth is that R&D may be simply too hard. More precisely, the idea production function for total factor productivity (TFP) may have such unfavorable diminishing returns that it blocks the whole model from exhibiting explosive growth. As we have shown in Section 3.4, to make this work, the returns to R&D and the returns to scale in economic production jointly need to be small enough so that the feedback between economic inputs and output is too decoupled to give rise to accelerating growth. If we take existing estimates of the returns to R&D for US TFP from Bloom et al. 2020, this argument works if the homogeneity of the production function for other non-idea outputs is no greater than \(d=0.68\).
We unfortunately do not have much evidence to evaluate how plausible the premise of this objection is. Although we have estimates related to \(d\) for the advanced economies today, it is unclear how much these should inform us about the returns to scale in economies that are possibly bottlenecked by other factors of production. It is also appropriate to put some probability mass on the possibility that present estimates of the returns to R&D are too aggressive, or that returns might fall over time as we make more progress in R&D. However, even if we assume the premise that returns to R&D are less favorable than present estimates suggest, this argument isn't sufficient to rule out explosive growth because of the argument based on the cost of computation we advance in Section 2.2. Indeed, even in a world with exogenous technological progress and diminishing returns to scale on labor, explosive growth still remains a plausible outcome.
Because of the uncertainty about the premises of this argument and that it does not seem easy for this effect to block explosive growth even if the premises of the argument are assumed to be valid, this argument seems rather weak. We accordingly estimate a low probability that this argument is a decisive blocker. In light of the above assessment, we conclude that it is **very unlikely** that unexpected difficulties in R&D that result in stagnating TFP growth will end up blocking explosive growth.
### AI automation will fail to show up in the productivity statistics
Even if substantial AI automation causes explosive growth in some intuitive sense, it is possible that economic measurement will be flawed in some respects and fail to capture a possible growth acceleration. Therefore, we could end up seeing a world of rapid economic transformation in which GDP growth statistics nevertheless fall far short of the threshold of \(30\%/\)year we set for explosive growth.
There are at least two related arguments for why substantial AI automation will fail to show up in productivity. The first is that economic output will be inaccurately measured and that this measurement error will result in a downward bias in the estimated rate of economic growth. The second related objection is that there are well-known issues with measured growth in economic output failing to capture growth in consumer surplus, so even if the measurement of output was highly reliable, estimated economic growth would fall short of growth in consumer surplus. The second objection also contends that consumer surplus is, in some sense, the more important metric.
The first argument that economic growth will be imperfectly measured and suffer attenuation bias is indeed plausible. There are many reasons why this might happen, such as:
* Lag in incorporating new product varieties: Official economic agencies often fail to promptly incorporate new types of products into their metrics. For instance, the advent of electric vehicles took years to be accurately reflected in GDP calculations.
* Inadequate sampling intervals: Current sampling intervals may be too long to capture short bursts of rapid economic growth.
* Random measurement errors: Factors like imperfect quality adjustments introduce random errors into growth estimates. Such error could introduce attenuation bias into the estimates of growth.
The first of these is in part the reason why the productivity effects of IT have been relatively meager (see, e.g. Brynjolfsson 1993), and the same measurement issues might similarly result in the underestimation of the effects of AI. On the other hand, the existing literature on the accuracy of GDP estimates suggests that these are not usually statistically biased. For example, preliminary estimates and later estimates derived from the comprehensive economic census tend not to differ systematically, at least in G7 countries (York and Atkinson 1997) or in the US (Landefeld, Seskin, and Fraumeni 2008; Mankiw and Shapiro 1986). Moreover, using data from six comprehensive revisions (in 2009, 2003, 1999, 1995, 1991, and 1985), Fixler, Greenaway-McGrevy, and Grimm 2011 finds that the size of BEA revisions of advance GDP estimates is not correlated much at all with the preliminary GDP estimates. This suggests that historical growth accelerations are not likely to be systematically underestimated, at least in the United States.
This leaves us with conflicting insights regarding the economic implications of AI. On one hand, GDP estimates in leading economies have generally proven to be unbiased and reliable. On the other hand, the economic contributions of past technological innovations like IT have been historically under-reported due to measurement issues.
However, as discussed in Section 2 and as we go on to discuss in Section 3.8, the economic impact of a technology that can widely substitute for human labor could far exceed that of past technological innovations like IT. Given this, it is reasonable to expect that statistical agencies, operating under conditions at least as favorable as today's, will more accurately estimate the economic gains from AI, akin to how they track overall GDP. Relevant agencies might adapt to a faster rate of change. In an AI-automation world, agencies could face pressure to ensure that tracking and monitoring are commensurate with the pace of change. Their budgets are likely to expand broadly in line with the size of the economy, and the technologies they use for monitoring are likely to keep pace with the sophistication of extant technology.
We think this argument is somewhat implausible, mostly because it relies strongly on the notion that output measurements will make predictable and large errors that we can anticipate but competent statistical agencies will predictably fail to address. Even with limited knowledge of these agencies' operations, we find the assumption hard to believe.
A weaker version in which we do not claim to predict the sign of the error in advance is somewhat more convincing. In light of this objection, one's expectations of growth rates under AI automation should be more spread out. The net effect of this depends on one's expectations of growth rates from AI automation: if one were confident in explosive growth, one should shade one's probability estimates down in light of the additional noise. On the other hand, if one were confident that explosive growth would not occur, one should assign a greater credence to statistical agencies reporting 30% growth rates. In the end, we consider it **unlikely** that GDP measurements will make errors sufficiently large and systematic for their measures to not show explosive growth occurring.
The second argument is based on the recognition that there are well-known issues with measured growth in economic output failing to capture growth in consumer surplus, as the former fails to capture the value of 'free' IT goods, such as Wikipedia, Google search, OpenCourseWare, and so on. Perhaps, an AI-driven economy will produce a relatively larger share of goods that fail to show up on the usual output accounting.
Existing attempts to estimate the contributions from 'free' goods find that the contribution is relatively small, adding roughly no more than one-tenth, in proportional terms, to GDP growth numbers. For example, Nakamura, Samuels, and Soloveichik 2017 estimate that including 'free' content would raise U.S. GDP growth by about 0.03 percentage points per year from 1995 to 2014. Relative to the average GDP growth rate of 2.5% over that period, this would represent a very small margin of error. Other attempts at similar accounting of the contributions from 'free' content like Facebook find slightly larger contributions (e.g. Brynjolfsson et al. 2019), but similarly suggest that this added growth amounts to on the order of tens of basis points of GDP growth, at least in the United States.
In addition, even if such errors did come to pass, at some level, we do not care about productivity statistics in any fundamental sense. They are simply a useful proxy for what we wish to discuss, and if they fail to be a good proxy in the future, that does not necessarily mean our thesis about explosive growth is mistaken or that we shouldn't take action to prepare for a world in which explosive growth will occur. We find it exceptionally unlikely that this argument blocks explosive growth in a sense that we would care about, as opposed to e.g. being a measurement artifact.
### Human preferences for human-produced goods will bottleneck growth
Humans may have a preference for human providers over AI counterparts even in economically significant service industries. Even if AI is physically capable of doing any task as well as a human can or better, there might be some tasks that are valued by humans only when they are performed by other humans. For example, we today have computer programs that can play chess better than any human player can, but top human chess players can still make money by winning tournaments. The fact that a tournament of computers would have a better quality of play is not important because part of what people want to watch is for _humans_ to be playing the game.
Humans might prefer to interact with human therapists, teachers, or other providers of services that involve high symbolic value and expression of identity (Granulo, Fuchs, and Puntoni 2021). Although AI systems may one day replicate some social abilities of humans, people currently tend to prefer human interaction for certain services. If such intrinsic preferences apply to a sufficient range of tasks, full automation might be impossible simply because of human preferences and not because of any physical fact about what AIs can or cannot do. This would limit gross world product as long as humans remain the ultimate consumers in the world economy and therefore the prices of goods and services are set according to their marginal utility.
This objection could in principle work assuming that all prices in the economy are set by humans, but there are two main problems with it.
1. While there might be good reasons to care about what happens to gross world product, we're fundamentally more interested in questions about the ability to manipulate the physical world to get desirable outcomes.
Importantly, scenarios in which AI poses a significant military risk or reshapes the physical environment around us in some substantial way can still be "explosive" in character even if humans are setting the prices of goods and services and therefore GWP ends up being bottlenecked by human preferences of one sort or another.
2. Even on the argument's own terms, the parameter values needed to make this story work seem quite implausible.
The first problem is relatively straightforward, so we focus on the second problem here. Suppose that consumer utility is some monotone transformation of the CES aggregator
\[U=\left(\int_{0}^{1}c_{i}^{\rho}\,di\right)^{1/\rho},\quad\rho=\frac{\sigma-1} {\sigma},\quad\rho<0 \tag{11}\]
over individual consumer goods \(c_{i}\). If markets clear in some underlying model such that goods prices are proportional to marginal utility, GDP growth would be given by
\[\frac{dY}{Y}=\frac{\int_{0}^{1}p_{i}dc_{i}\,di}{\int_{0}^{1}p_{i}c_{i}\,di}= \frac{\int_{0}^{1}U_{i}dc_{i}\,di}{\int_{0}^{1}U_{i}c_{i}\,di},\text{ where }U_{i}=\frac{\partial U}{\partial c_{i}}=c_{i}^{\rho-1}U^{1-\rho}, \tag{12}\]
so that the expression for GDP growth simplifies to
\[\frac{dY}{Y}=\frac{\int_{0}^{1}c_{i}^{\rho-1}dc_{i}\,di}{\int_{0}^{1}c_{i}^{ \rho}\,di}=\frac{1}{\rho}\frac{dU^{\rho}}{U^{\rho}}=\frac{dU}{U}. \tag{13}\]
This equation is solved by \(Y\propto U\). Therefore, for this particular specification, GDP perfectly tracks consumer utility, and we can reason about GDP growth by using the growth of \(U\) as a proxy for it.
If a fraction \(f\) of tasks can only be done by humans by definition, and initially the \(c_{i}\) are all equal, then setting the output of the automatable tasks (those that need not be performed by humans) to infinity should raise \(U\) by at least a factor of \(f^{1/\rho}\) (see the short derivation below), and this factor would increase if we could explicitly take human labor reallocation from automated tasks to human-only tasks into account - if the technology converting human labor to output on individual tasks is constant returns to scale, for instance, then we can get this up to \(f^{(1-\rho)/\rho}\).
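For completeness, here is a short derivation of this factor under the stated assumptions (all \(c_i\) initially equal to some common level \(c\), and consumption on the automatable tasks pushed towards infinity):

\[U_{\text{before}}=\left(\int_{0}^{1}c^{\rho}\,di\right)^{1/\rho}=c,\qquad U_{\text{after}}=\left(\int_{0}^{f}c^{\rho}\,di+\int_{f}^{1}c_{i}^{\rho}\,di\right)^{1/\rho}\;\longrightarrow\;\left(f\,c^{\rho}\right)^{1/\rho}=f^{1/\rho}\,c,\]

since \(c_{i}^{\rho}\to 0\) as \(c_{i}\to\infty\) when \(\rho<0\), so \(U\) rises by the factor \(f^{1/\rho}>1\). If the human labor freed up from the automated tasks is reallocated to the remaining human-only tasks under constant returns to scale, each remaining \(c_{i}\) rises from \(c\) to \(c/f\), and the factor improves to \(f^{1/\rho}/f=f^{(1-\rho)/\rho}\).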
This is just the same expression that we dealt with in Section 2.3. We present a range of parameter values to analyze the plausibility of the argument in Table 3:
The value \(\rho=-2\) corresponds to an elasticity of substitution \(\sigma=1/3\), which is conservative. Even under the pessimistic assumptions of \(f=25\%\) and \(\rho=-2\), AGI should produce at least \(\approx 1\) OOM increase in gross world product. If this happens in less than a decade, it would be sufficient to produce explosive growth.
We think this scenario is pessimistic because both parameter values seem unreasonable. We think \(f=5\%\) to \(f=10\%\) are more realistic values for the fraction of current economic tasks humans would only value if they were done by humans, and \(\sigma=0.7\) is a more realistic value for the elasticity of substitution in the human utility function. Combining these means we should expect around \(3-4\) OOM increase in GDP as a result of AGI even if we accept the argument that some tasks will not get automated as a result of human preferences for those tasks to be done by humans. This is, as mentioned previously, more than enough to produce explosive growth for an extended period of time.
As a reference point, note that \(3-4\) OOM likely matches how much gross world product has increased since the Industrial Revolution, and plenty of this came from increased task automation. So arguments based on intrinsic human preferences for some tasks being performed by humans seem like they would have made poor predictions if we had relied upon them in the past, and accordingly, we should be skeptical of them today as well.
We think that this argument will have some effect on economic growth, but do not consider it important for three main reasons:
\begin{table}
\begin{tabular}{c c c c} \hline \hline
 & \(\rho=-0.2\) & \(\rho=-0.4\) & \(\rho=-2\) \\ \hline
\(f=5\%\) & \(6.4\times 10^{7}\) & \(3.6\times 10^{4}\) & 89 \\
\(f=10\%\) & \(10^{6}\) & \(3.2\times 10^{3}\) & 32 \\
\(f=25\%\) & \(4.1\times 10^{3}\) & 128 & 8 \\ \hline \hline
\end{tabular}
Table 3: Scale-up factors in GDP for various values of the fraction of tasks that cannot be automated by AI, \(f\), and the substitution parameter \(\rho\) of the CES aggregator function.
\end{table}
1. It's not clear if all prices in the economy will actually be set by humans. If AIs can own property and are able to make consumption decisions as well, then gross world product would also take their preferences into account, and these preferences may not come with intrinsic demands that certain tasks must be performed by humans to be valuable.
2. Quantitatively, the magnitude of the complementarity in the utility function and the mass of tasks that humans wish to be intrinsically done by other humans have to be quite large for this argument to block explosive growth.
3. Even if explosive growth in gross world product is blocked, this does not necessarily mean that explosive growth is blocked in other physical variables that we might care about. These might include energy use, military strength, computer chip production, etc.
For all of these reasons, we consider this argument to be rather weak and do not think it should lead us to update our credence in explosive growth conditional on AGI downwards by a substantial amount. We consider it **very unlikely** that this argument blocks explosive growth.
### Previous technological revolutions did not lead to growth acceleration
We have seen many other technological innovations in the past that changed how we live our lives: computers, electricity, cars, airplanes, etc. Nevertheless, while these technologies allowed the trend growth rate of around \(2\%\) per year per person in the US and other developed economies to continue, they didn't lead to any noticeable growth acceleration. If this is the relevant reference class for evaluating the plausibility of AI-driven explosive growth, we ought to assign a low prior chance to the possibility of explosive growth driven by AI.
Our view is that this argument is sound in general and gives us some uninformative prior over whether any new technology is likely to lead to explosive growth. The probability of this happening for a generic technology is, indeed, quite small: for instance, while fusion reactors would no doubt be economically valuable, we do not expect them to lead to explosive growth even if they became viable and cost-effective. However, the evidence that AI that can match human performance on most or all economic tasks is likely to lead to explosive growth is strong enough to overcome this general argument.
The key reason is that almost every model in endogenous growth theory predicts that AI capable of automating most or all economic tasks humans can perform, at low cost (e.g. the cost of human subsistence), has a substantial chance of leading to explosive growth. For some models this prediction is robust to parameter choices, while in others it is sensitive, but in either case we cannot rule out the possibility. For example, the models in Section 2.1 predict explosive growth robustly conditional on full automation by AI, while, as we show in Section 2.2, constant returns to scale models make this prediction for a substantial fraction of plausible parameter values.
There is no comparable situation with most other technologies, and the reason is the important role played by labor in growth economics. Labor is unique in that it's an input that's _both_ a key driver of economic production and growth _and_ cannot be increased by reinvestment of economic output in the way that capital, compute, energy, etc. can be. In other words, labor is _non-accumulable_, while other factors of production that are of comparable importance to labor are _accumulable_.
This means the potential economic benefits of a technology that can turn labor into an accumulable input are enormous: we turn the currently most important factor of production from something that is difficult to scale to something that is easy to scale. If we also assume that the cost of producing or maintaining this stock of accumulable labor inputs is not prohibitively expensive, almost all conventional growth models will predict explosive growth in this situation.
While the generic argument outlined in this section is convincing about most technologies, we believe that in the specific case of AI that's capable of substituting for human workers, we have enough evidence to overcome the low prior that such an argument would assign to explosive growth conditional on AI. As a result, if the other objections to our argument (regulations, other bottlenecks, slow speed of automation, etc.) do not apply, we think this generic argument does not have any additional force. For this reason, we think it's **very unlikely** that this argument blocks explosive growth.
### Fundamental physical limits restrict economic growth
There might be fundamental physical limits to how much we can produce with a given amount of resources, or how quickly we can scale up production from current levels, regardless of how good our technology is. This objection may, for example, be found in (Aghion, B. F. Jones, and C. I. Jones 2018). If these limits are sufficiently tight, they might prevent explosive growth. Some examples of such limits include the speed of light, conservation of energy, the Landauer limit for irreversible computing, the Bekenstein bound for energy density, Bremermann's limit for reversible computing, Carnot's theorem, etc.
In principle, this argument is valid: there will be fundamental physical limits that block economic growth at some point. Many, if not all, of the bounds listed above will be relevant in constraining growth in the far future. However, we find the argument unconvincing insofar as it's meant to apply to explosive growth caused by AGI this century, because we're simply too far from the relevant fundamental physical limits for the constraints imposed by them to be binding.
For instance, humans currently use only on the order of \(0.01\%\) to \(0.1\%\) of the energy flux incident on Earth for production and consumption, and doing \(10^{40}\,\text{FLOP}/\text{year}\) of computation on Earth alone seems feasible based only on fundamental physical limits, which at the cost of \(\sim 10^{23}\,\text{FLOP}/\text{year}/\text{person}\) estimated in Carlsmith 2020 for running the human brain would be sufficient to simulate \(10^{17}\) virtual workers. This would be equivalent to scaling up the world population by 7 orders of magnitude. Even if every worker needs to be provided with amenities that match the current per capita energy consumption on the planet, there's still room for a scaling up of 3 to 4 orders of magnitude.
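As a sanity check on these figures, here is a minimal sketch; the world population value is our own round assumption, and the other numbers are those quoted above.

```python
import math

# Back-of-envelope check of the figures in the paragraph above.
flop_per_year_earth = 1e40   # computation feasible on Earth per year (figure quoted above)
flop_per_year_brain = 1e23   # approximate cost of running one human brain (Carlsmith 2020)
world_population = 8e9       # rough current world population (our assumption)

virtual_workers = flop_per_year_earth / flop_per_year_brain      # 1e17
scale_up_oom = math.log10(virtual_workers / world_population)    # ~7

print(f"Virtual workers supportable: ~1e{math.log10(virtual_workers):.0f}")
print(f"Scale-up over current population: ~{scale_up_oom:.0f} orders of magnitude")
```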
We simply cannot come up with any plausible scenario in which economic growth is blocked early on as a result of a fundamental physical limit, as opposed to e.g. limitations of our engineering capabilities. As a result, we think this argument is rather weak. We think the chance that this argument blocks explosive growth conditional on AGI is small and conclude that it is **very unlikely** to block explosive growth.
## 4 Discussion
Having gone through the above arguments for and against explosive growth, we think that explosive growth is plausible, both conditional on the deployment of AGI and purely unconditionally, by the end of this century. We didn't make the case for the unconditional view in this post, but it's based on our view that AGI deployment this century seems plausible based on estimates of how much resources would be needed for the creation of an AGI system and how much we can expect effective investment into AI to get scaled up by the end of this century. This case is made in greater detail elsewhere, e.g. Cotra 2020, Davidson 2023 and Barnett and Besiroglu 2023. We do not reproduce the detailed arguments here and direct the interested reader to these more comprehensive sources.
Due to the numerous arguments against this conclusion that we've discussed here and the prediction of explosive growth involving the extrapolation of models beyond the regime in which they have been observed to work, we think high confidence in explosive growth is unwarranted. However, we think low confidence is also unwarranted, especially conditional on the arrival of AGI. This is especially true as the distinct arguments are correlated with each other, so their disjunction is less likely than we might otherwise infer under an independence assumption.
We think the most plausible confounder that would induce such a correlation is the "overall level of problem-solving capabilities" of many human-level or superhuman intelligences running in parallel for long durations of subjective time. The more powerful intelligence turns out to be in general, the more easily we will be able to find ways to get around bottlenecks, and so conditional on one bottleneck not being serious, the likelihood of others being serious goes down as well.
To formally illustrate the point about correlations, we can compute:
\[\mathbb{P}\left(\bigcup_{i=1}^{n}A_{i}\right)=1-\mathbb{P}\left(\bigcap_{i=1 }^{n}A_{i}^{c}\right)=1-\mathbb{P}(A_{1}^{c})\prod_{i=2}^{n}\mathbb{P}(A_{i}^ {c}|A_{1}^{c},\ldots,A_{i-1}^{c}),\]
where the superscript \(c\) denotes taking complements and \(A_{1},\ldots,A_{n}\) are \(n\) events on a probability space. When the arguments are correlated with each other, \(\mathbb{P}(A_{i}^{c}|A_{1}^{c},\ldots,A_{i-1}^{c})>\mathbb{P}(A_{i}^{c})\), so the product is larger than it would be were the events jointly independent, and the probability of the disjunction is accordingly smaller.
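As a minimal numeric illustration of this effect (with made-up probabilities, not our actual credences for any particular obstacle), consider five obstacles that each block explosive growth with marginal probability \(10\%\):

```python
# Minimal illustration with made-up numbers: five obstacles, each with a 10%
# marginal chance of blocking explosive growth. Under independence the
# disjunction is fairly likely; if conditioning on earlier obstacles failing
# raises each "failure" probability from 0.90 to 0.95, it shrinks noticeably.
p_block = 0.10

independent = 1 - (1 - p_block) ** 5
correlated = 1 - (1 - p_block) * 0.95 ** 4  # P(A_i^c | earlier failures) = 0.95

print(f"P(at least one obstacle binds), independent: {independent:.3f}")  # ~0.410
print(f"P(at least one obstacle binds), correlated:  {correlated:.3f}")   # ~0.267
```

Positive correlation between the obstacles shrinks the probability that at least one of them binds, exactly as the product formula above indicates.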
There are other confounding influences as well. For instance, many of the arguments that function through the channel of ruling out the economic equivalent of full automation rely on the elasticity of substitution parameter \(\sigma\) being small, and many of them require it to be smaller than values often reported in the literature for the elasticity of substitution between capital and labor e.g. in the US economy, for instance in Knoblach, Roessler, and Zwerschke 2020. This influence similarly lowers the probability of the disjunction of all arguments that depend on this key parameter.
After taking both the individual strength of the arguments and their overall correlation structure into account, we end up thinking that credences of less than \(20\%\) for explosive growth conditional on AI that can do most or all tasks in the economy are unreasonably low. We estimate \(\mathbb{P}(\text{explosive growth this century}\mid\text{AGI this century})\) to be **about as likely as not**.
### Open questions
There are several important questions that would make us more or less confident in explosive growth. Below is a non-exhaustive list of these questions:
1. Are there competing theories of economic history that are similarly plausible to the semi-endogenous growth story? What do these alternative theories have to say about the deployment of AGI?
2. How does the value-add of a technology affect the strength of the regulation or coordination preventing its deployment? Do regulatory-induced delays follow a power law with respect to the value of the relevant technologies? Is there strong evidence that innovations whose value is on the order of a tenfold increase in the GDP of frontier economies are often blocked for a long time, i.e. many decades?
3. How expensive will it be to build robotic systems for AGIs with adequate motor control to do most or all embodied economic tasks humans are able to perform? Will robotics costs be of the same order of magnitude as compute costs, lower or higher? Note that economies of scale are likely to be quite important here, so looking at present robotics costs could be misleading.
4. Are early AI alignment failures going to make the deployment of otherwise capable AI systems by private actors unprofitable? While it's often assumed that misaligned AI would be deceptive and do what you want early on before it is sufficiently capable, leading to a situation in which actors who care about safety have to pay an "alignment tax", in our view this position is not supported by strong enough evidence for us to simply take it for granted. If AI becomes so unsafe that deployment is in expected value sufficiently costly even from a private actor's point of view, then slowing AI down becomes a matter of self-interest and not of global coordination, which is important for assessing the likelihood of large slowdowns actually occurring.
5. In our analysis, we consider both land and energy as physical factors which could bottleneck production. In both cases, we find that fundamental physical limits are at least some orders of magnitude away from our current use of these factors. Is this analysis flawed? If not, are there other factors that we have neglected which could similarly bottleneck output and prevent explosive growth?
6. What is the economic value of superhuman intelligence? To make this question quantitative in one way (though certainly not the only way), how much more economically valuable would a human be if they had a brain that was ten times larger or faster, and how much more overhead in energy and other costs would this incur in humans? How favorable is this scaling relationship once we take both economic benefits and costs into account? For instance, a rough intuition here could be that a brain twice as large is roughly four times as economically valuable, though this kind of scaling could be quite naive for many reasons.
All of these questions, if answered, could affect our views considerably. For instance, if superhuman intelligence is extremely powerful, then our credence in explosive growth this century should go up, as substantial expansions in our compute stock may not be needed for explosive growth. Some of these questions seem quite difficult to answer, while other questions seem amenable to progress. For instance, an in-depth investigation into the economics of robotics could plausibly answer (3), and a review of economic history from a quantitative lens could shed some light on (2).
## Appendix A Likelihood scale
To communicate our uncertainty appropriately, we use the likelihood scale in Table 4 in our assessment of the likelihood of explosive growth from AI occurring or being undercut by any of the obstacles discussed.
## Appendix B Semi-endogenous growth models and idea production
This appendix contains some technical details on the high-level argument laid out in the increasing returns to scale section.
First, let's see at a high level why we should expect hyperbolic growth to occur when accumulable inputs have increasing returns to scale in the production function.
Suppose that \(Y:(\mathbb{R}^{\geq 0})^{n}\rightarrow\mathbb{R}^{\geq 0}\) is a production function mapping factor inputs \((f_{1},f_{2},\ldots,f_{n})\) (which might be labor, capital, etc.) to economic output \(Y\). If all of these inputs are strictly accumulable, in the sense that they can be increased proportionally by reinvestment of output \(Y\), then if we assume a fraction of output \(\alpha_{k}\) is invested in the accumulation of input \(f_{k}\), these quantities will satisfy the differential equations
\[\frac{df_{k}}{dt}=\alpha_{k}Y\]
For technical reasons that will become apparent soon, we want to choose the saving rates \(\alpha_{k}\) such that the factor ratios \(f_{i}/f_{j}\) are held constant. This is equivalent to choosing \(\alpha_{i}\propto f_{i}\), so if the overall saving rate of our economy is \(0<\alpha<1\), we'll have
\[\alpha_{i}=\frac{\alpha\cdot f_{i}}{\sum_{j}f_{j}}\]
Using the chain rule on the production function \(Y\) gives
\[\frac{dY}{dt}=\sum_{k=1}^{n}\frac{\partial Y}{\partial f_{k}}\frac{df_{k}}{dt} =\sum_{k=1}^{n}\frac{\partial Y}{\partial f_{k}}\alpha_{k}Y\]
and substituting the above expression for \(\alpha_{k}\) into this sum yields
\[\frac{dY}{dt}=\alpha\times\sum_{k=1}^{n}\frac{\partial Y}{\partial f_{k}}\frac {f_{k}Y}{\sum_{j}f_{j}}\]
Now, we bring in the assumption that \(Y\) has increasing returns to scale. Suppose that \(Y\) is homogeneous of degree \(d>1\), so that it satisfies the homogeneity identity
\[Y(rf_{1},rf_{2},\ldots,rf_{n})=r^{d}Y(f_{1},f_{2},\ldots,f_{n})\]
for all nonnegative real numbers \(r\). Since we assume factor ratios are held constant, our factor vector will always be of the form \((rh_{1},rh_{2},\ldots,rh_{n})\) for some real number \(r\) and our initial factor endowments \(h_{1},h_{2},\ldots,h_{n}\). Since \(Y\propto r^{d}\) by the above identity and \(\sum_{i}f_{i}\propto r\) by assumption, in particular we deduce that \(\sum_{i}f_{i}\propto Y^{1/d}\). Substituting into the above relation for \(dY/dt\) gives
\[\frac{dY}{dt}\propto Y^{1-1/d}\sum_{k=1}^{n}\frac{\partial Y}{\partial f_{k}}f _{k}\]
\begin{table}
\begin{tabular}{c c} \hline \hline
Term & Likelihood of outcome \\ \hline
Virtually certain & \(>\)99\% probability \\
Very likely & 90\%-99\% probability \\
Likely & 66\%-90\% probability \\
About as likely as not & 33\%-66\% probability \\
Unlikely & 10\%-33\% probability \\
Very unlikely & 1\%-10\% probability \\
Exceptionally unlikely & 0\%-1\% probability \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Likelihood scale.
Finally, suppose we differentiate the homogeneity identity for \(Y\) with respect to \(r\) at \(r=1\), holding factor inputs fixed. This gives the relation
\[\sum_{k=1}^{n}f_{k}\frac{\partial Y}{\partial f_{k}}=d\times Y\]
Using this relation as the final ingredient, we get that \(Y\) satisfies a differential equation
\[\frac{dY}{dt}\propto Y^{2-1/d}\]
exactly as claimed in the increasing returns to scale section. When \(Y\) has increasing returns to scale, so that the homogeneity degree \(d>1\), \(Y\) exhibits hyperbolic growth and diverges in finite time. We also see why a transition in which a particular input shifts from being accumulable to not being accumulable can lower \(d\) and as a result shift us from a superexponential to a subexponential growth regime.
One important detail here is that the saving rule we chose for our economy, that \(\alpha_{k}\propto f_{k}\), is not necessarily optimal. However, the fact that _some_ saving rule can achieve hyperbolic growth is a sufficient condition for the economy to exhibit hyperbolic growth in the absence of severe market failures, so this is not an important issue.
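To make the finite-time divergence concrete, here is a small numerical sketch (our own illustration, with an assumed degree \(d\) and arbitrary units) of integrating \(dY/dt\propto Y^{2-1/d}\):

```python
# Our own illustration: integrate dY/dt = c * Y**(2 - 1/d) for d > 1 and watch
# the growth rate accelerate; the exact solution blows up at a finite time
# T = Y0**(1 - p) / (c * (p - 1)) with p = 2 - 1/d.
c, d, Y0 = 1.0, 1.5, 1.0          # assumed constants, arbitrary units
p = 2 - 1 / d                      # p = 4/3 > 1, so growth is hyperbolic
T_blowup = Y0 ** (1 - p) / (c * (p - 1))

Y, t, dt = Y0, 0.0, 1e-4
while t < 0.9 * T_blowup:
    Y += c * Y ** p * dt           # forward Euler step
    t += dt
print(f"analytic blow-up time: {T_blowup:.2f}")
print(f"Y at t = {t:.2f}: {Y:.1f}, instantaneous growth rate: {c * Y ** (p - 1):.2f}")
```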
**Diminishing returns in factor production**
This simple story is complicated when we consider more general laws of motion for the factors of production \(f_{i}\). Bloom et al. 2020 considers a general accumulation relationship
\[\frac{1}{f}\frac{df}{dt}\propto f^{-\phi}I^{\lambda}\]
where \(f\) denotes factor stock and \(I\) denotes investment into increasing this factor. In this formalism, the quantity \(r=\lambda/\phi\) (sometimes called the _returns on factor investment_) is of crucial importance, as it determines the relationship between the growth rate of \(I\) and the growth rate of \(f\) in an exponential growth equilibrium.
It turns out we can generalize the above argument to the case where each factor follows an individual law of motion
\[\frac{1}{f_{i}}\frac{df_{i}}{dt}\propto f_{i}^{-\phi_{i}}I_{i}^{\lambda_{i}}\]
but our result ends up being not quite as sharp. If we assume as before that factor ratios must stay constant, it follows that we must have
\[f_{i}^{-\phi_{i}}I_{i}^{\lambda_{i}}\propto f_{j}^{-\phi_{j}}I_{j}^{\lambda_{j}}\]
for all \(i,j\). It straightforwardly follows that we must have \(I_{i}\propto f_{i}^{\phi_{i}/\lambda_{i}}=f_{i}^{1/r_{i}}\) where \(r_{i}=\lambda_{i}/\phi_{i}\) is defined as above, and the budget constraint \(\sum_{i}I_{i}=\alpha Y\) once again gives
\[I_{i}=\alpha Y\times\frac{f_{i}^{1/r_{i}}}{\sum_{i}f_{i}^{1/r_{i}}}\]
As before, differentiating \(Y\) and using the chain rule gives us
\[\frac{dY}{dt}=\alpha\times\sum_{k=1}^{n}f_{k}\frac{\partial Y}{\partial f_{k} }\frac{Y}{\sum_{j}f_{j}^{1/r_{j}}}\]
The problem is that when the \(r_{j}\) are different and the ratios between the different \(f_{j}\) are fixed by assumption, the denominator here will be dominated by the factor with the least favorable returns to investment. In other words, the best we can do is to bound the denominator from above using the relation
\[\sum_{j}f_{j}^{1/r_{j}}=O(Y^{1/(d\min\{r_{1},r_{2},\ldots,r_{n}\})})\]
Denoting \(r_{\text{min}}=\min\{r_{1},r_{2},\ldots,r_{n}\}\), we can obtain a lower bound on the growth of \(Y\):
\[\frac{dY}{dt}>_{\text{up to a constant}}Y^{2-1/(dr_{\text{min}})}\]
As before, this is merely a sufficient condition, not a necessary one. However, if we make no further structural assumptions about \(Y\), this bound is the best we can do: assuming that \(Y\) is a Leontief production function, for instance, gives a concrete case in which we must keep factor endowments proportional to each other, so this worst-case bound
ends up being tight. To relax this worst-case bound, it's necessary for the factors to not be perfect complements to each other.
It's also necessary to relax this bound if we hope to get explosive growth out of the argument. In Bloom et al. 2020's formalism, the returns to idea production are by assumption equal to \(1\) (without loss of generality), so if accumulable inputs also have constant returns to scale we will have \(d=2\). In such a situation, we'll get explosive growth unconditionally if \(r_{\text{ideas}}>1/2\). However, Bloom et al. 2020 estimates \(r_{\text{ideas}}\approx 0.32\) for the whole US economy, so this weak sufficient condition alone is insufficient to deduce we will have explosive growth once labor becomes accumulable.
**Focusing on idea production**
Fortunately for us, the above calculation _is_ in fact too general, at least from the point of view of Bloom et al. 2020. This is because in their model, ideas enter the production function as an overall multiplier (total factor productivity), meaning that we can narrow down the production function of the economy to a more specific form
\[Y(A,f_{1},\ldots,f_{n})=AY_{f}(f_{1},f_{2},\ldots,f_{n})\]
where \(A\) represents total factor productivity and \(f_{1},\ldots,f_{n}\) are accumulable factors as before. \(Y_{f}\) is also assumed to be homogeneous of degree \(d\). We have the laws of motion
\[\frac{df_{i}}{dt}=I_{i}\]
\[\frac{1}{A}\frac{dA}{dt}\propto A^{-\phi}I_{A}^{\lambda}\]
We now assume that we follow the previous investment allocation rule for \(Y_{f}\), so the ratios between the accumulable factors \(f_{i}\) are preserved, but unlike in the diminishing returns in factor production section we exclude \(A\) from the set of factors among which ratios must be preserved. Instead, we assume that a share \(\alpha_{A}\) of GDP is invested into idea research, and a share \(\alpha_{f}\) is invested in aggregate into accumulable factors. Treating these as constants, this yields
\[\frac{1}{Y}\frac{dY}{dt}=\frac{1}{A}\frac{dA}{dt}+\frac{1}{Y_{f}}\frac{dY_{f}} {dt}>_{\text{up to a constant}}A^{-\phi}Y^{\lambda}+YY_{f}^{-1/d}=A^{-\phi}Y^{ \lambda}+A^{1/d}Y^{1-1/d}\]
Now, let \(x=1/(1+d\phi)\). Note that \(0<x\leq 1\). Our idea is to simplify the expression using the weighted arithmetic-geometric mean inequality
\[xa+(1-x)b\geq a^{x}b^{1-x}\]
using \(x\) as our relative weight between the two terms, which holds whenever \(a,b\) are both positive and \(0\leq x\leq 1\). So we write
\[\frac{1}{Y}\frac{dY}{dt}>_{\text{up to a constant}}A^{-\phi}Y^{\lambda}+A^{1 /d}Y^{1-1/d}>xA^{-\phi}Y^{\lambda}+(1-x)A^{1/d}Y^{1-1/d}\]
and use the weighted arithmetic-geometric mean inequality mentioned above to obtain
\[\frac{1}{Y}\frac{dY}{dt}>_{\text{up to a constant}}A^{-\phi x+(1-x)/d}Y^{ \lambda x+(1-1/d)(1-x)}\]
By our choice of the value of \(x\), the exponent of \(A\) is equal to zero, so it drops out of the expression altogether. Substituting \(x=1/(1+d\phi)\) in the exponent of \(Y\), we can simplify the right hand side to obtain
\[\frac{1}{Y}\frac{dY}{dt}>_{\text{up to a constant}}Y^{(r+d-1)/(\phi^{-1}+d)}\]
As the denominator of the exponent is always positive, it follows that \(Y\) exhibits hyperbolic growth and diverges in finite time whenever \(r+d>1\). It's easy to see that there's also an equilibrium where \(Y\) grows exponentially when \(r+d=1\), so this condition is both necessary and sufficient for explosive growth in this model.
As we mentioned earlier, the data from Bloom et al. 2020 suggests \(r\approx 0.32\), which means that the returns to scale on accumulable inputs can be as small as \(d\approx 0.68\) while still leaving open the possibility of explosive growth.
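As a quick sanity check on the algebra above (a symbolic sketch, not part of the original derivation), one can verify that the choice \(x=1/(1+d\phi)\) makes the exponent of \(A\) vanish and turns the exponent of \(Y\) into \((r+d-1)/(\phi^{-1}+d)\):

```python
import sympy as sp

d, phi, lam = sp.symbols("d phi lambda", positive=True)
x = 1 / (1 + d * phi)
r = lam / phi

# Exponents of A and Y after the weighted AM-GM step with weight x.
exp_A = -phi * x + (1 - x) / d
exp_Y = lam * x + (1 - 1 / d) * (1 - x)

print(sp.simplify(exp_A))                                  # 0
print(sp.simplify(exp_Y - (r + d - 1) / (1 / phi + d)))    # 0
```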
## Appendix C Bounds on human population growth might explain limits of historical growth
Human population growth is likely bounded from above by biological constraints on human reproduction. That is, \(L\) cannot grow faster than some rate \(\bar{n}\). If so, semi-endogenous growth theory predicts a bound on economic growth that is of a similar order as \(\bar{n}\). To see this, consider a semi-endogenous growth model described by the following equations:
\[Y(t) =A(t)\big{(}K(t)\big{)}^{\alpha}\big{(}(1-\alpha_{l})L(t)\big{)} ^{1-\alpha} \tag{14}\] \[\dot{A}(t) =\alpha_{l}L(t)^{\gamma}A(t)^{\phi}\] (15) \[\dot{K}(t) =sY(t)-\delta K(t)\] (16) \[\dot{L}(t) =nL(t). \tag{17}\]
That is, we consider a simple semi-endogenous growth model with Hicks-neutral technical change, a constant savings rate, and labor split between final goods production and R&D. Solving for the steady-state growth rates, we get:
\[g_{a}=\frac{\gamma}{1-\phi}n,\;\;g_{k}=n\bigg{[}\frac{\gamma+(1-\phi)(1-\alpha)} {(1-\phi)(1-\alpha)}\bigg{]}.\]
The steady-state rate of growth is thus:
\[g_{y}=n\bigg{(}\alpha\frac{\gamma+(1-\phi)(1-\alpha)}{(1-\phi)(1-\alpha)}+ \frac{\gamma}{1-\phi}(1-\alpha)\bigg{)}.\]
Hence, \(g_{y}\) is proportional to \(n\). For instance, if we follow the meta-analyses from Sequeira and Neves 2020 and Neves and Sequeira 2018 and adopt \(\phi=0.8\) and \(\gamma=0.2\), and, as is standard, assume \(\alpha=0.3\), then \(g_{y}\approx 1.5n\). Hence, semi-endogenous growth theory predicts that growth is capped at a rate that is, in some sense, quite close to the maximum attainable value of \(n\).
Assuming human reproduction is such that the average woman would have no more than 10 offspring who survive to adulthood throughout her reproductive years, semi-endogenous growth theory predicts that the economic growth rate is bounded from above by high single-digit or low double-digit percentages. This suggests that semi-endogenous growth theory correctly predicts the maximum rate of output growth that we have observed so far.
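A short numerical sketch of the steady-state expressions above, using the parameter values quoted from the cited meta-analyses; the cap on population growth used below is a hypothetical placeholder:

```python
# Steady-state growth rates from the semi-endogenous model above, evaluated at
# the parameter values quoted in the text (phi = 0.8, gamma = 0.2, alpha = 0.3).
phi, gamma, alpha = 0.8, 0.2, 0.3

def steady_state_growth(n):
    g_a = gamma / (1 - phi) * n
    g_k = n * (gamma + (1 - phi) * (1 - alpha)) / ((1 - phi) * (1 - alpha))
    g_y = n * (alpha * (gamma + (1 - phi) * (1 - alpha)) / ((1 - phi) * (1 - alpha))
               + gamma / (1 - phi) * (1 - alpha))
    return g_a, g_k, g_y

n_max = 0.07  # hypothetical upper bound on population growth (~7%/year)
g_a, g_k, g_y = steady_state_growth(n_max)
print(f"g_y / n = {g_y / n_max:.2f}")   # ~1.43, i.e. roughly 1.5n as in the text
print(f"g_y at n = {n_max:.0%}: {g_y:.1%} per year")
```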
## Appendix D Explosive growth from growth in stock of digital workers
Consider an exogenous growth model with technological progress, where investment is split between compute and other capital:
\[Y(t)=AL(t)^{\alpha}K(t)^{1-\alpha},\]
the stocks of effective labor and capital grow as a result of investment:
\[\frac{dL(t)}{dt}=sfY(t)/\bar{c}-\delta_{L}L,\;\frac{dK(t)}{dt}=s(1-f)Y(t)- \delta_{K}K \tag{18}\]
where \(f\) is the fraction of investment channelled towards AI, \(s\) is the saving rate of the economy, and \(\bar{c}\) the average cost of running a human-equivalent AI. \(\delta_{L},\delta_{K}\) are the depreciation rates for the effective labor and capital stocks, respectively. Assuming that \(A\) is constant, some algebra reveals that:3
Footnote 3: The depreciation rate sets the timescale over which the hardware is useful: it’s \(\sim 1/\delta_{L}\).
\[g_{y}=As\left[\alpha\left(\frac{K(t)}{L(t)}\right)^{1-\alpha}\frac{f}{\bar{c} }+(1-\alpha)\left(\frac{K(t)}{L(t)}\right)^{-\alpha}(1-f)\right]-\alpha\delta _{L}-(1-\alpha)\delta_{K} \tag{19}\]
Along a balanced growth path, the ratio \(L/K\) should be equal to \(f/(1-f)\cdot 1/\bar{c}\). Substituting this into the expression for \(g_{y}\) and optimizing over \(f\) to find the value that leads to the highest growth rate in the long run gives \(f=\alpha\), so we can assume that after labor becomes accumulable, in the long run we will have \(L/K\approx\alpha/(1-\alpha)\cdot 1/\bar{c}\).
Substituting into (19) would then lead to
\[g_{y}=As\frac{1}{\bar{c}^{\alpha}}B_{\alpha}-\alpha\delta_{L}-(1-\alpha) \delta_{K},\;B_{\alpha}=\left[\alpha^{2}\left(\frac{1-\alpha}{\alpha}\right) ^{1-\alpha}+(1-\alpha)^{2}\left(\frac{\alpha}{1-\alpha}\right)^{\alpha}\right] \tag{20}\]
We can also get an estimate for the value of \(A\) for a frontier economy. The total capital stock of the US economy was estimated in 2019 to be around 70 trillion USD in 2017 prices. Furthermore, the number of employed people in the US in 2019 was around 180 million: there were 150 million nonfarm employees according to FRED, and the same page states that nonfarm employment accounts for around \(80\%\) of the employees that contribute to gross domestic product. Finally, US real GDP was around 19 trillion 2012 USD, or 20 trillion 2017 USD, in the year 2019.
Combining all of this information and assuming \(\alpha=0.7\) gives us the equation
\[2\times 10^{13}\,\$/\text{year}=A\times(1.8\times 10^{8}\,\text{workers})^{0.7} \times(7\times 10^{13}\,\$)^{0.3}\]
Solving for \(A\) yields
\[A\approx 2337\,\$^{0.7}\text{workers}^{-0.7}\text{year}^{-1}\]
We can now put all of this together to compute the growth rate we should expect post-AGI. If we make the simplification that \(\delta_{L},\delta_{K}\ll g_{y}\), we can approximate the solution by
\[g_{y} \approx A(t)s\bar{c}^{-0.7}\big{(}0.7\cdot(0.3/0.7)^{0.3}\cdot 0.7+0.3 \cdot(0.3/0.7)^{-0.7}\cdot 0.3\big{)} \tag{21}\] \[\approx 2337\times s\bar{c}^{-0.7}\times 0.54\] (22) \[\approx 1262\times s\bar{c}^{-0.7} \tag{23}\]
where the units of the final answer will be year inverse. Explosive growth requires \(g_{y}\geq 0.3\), suggesting the bound
\[s\bar{c}^{-0.7}\geq 0.3/1262\approx 2.38\times 10^{-4}\]
or, cast slightly differently,
\[\bar{c}\leq s^{10/7}\cdot(1.5\times 10^{5})\,\$/\text{worker}\]
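The back-of-the-envelope numbers in this appendix can be reproduced with a few lines; the saving rate \(s\) below is an assumed placeholder rather than a value estimated in the text:

```python
# Reproduce the Appendix D calibration: solve for A from 2019 US data, compute
# B_alpha, and back out the bound on the cost of a human-equivalent AI worker.
alpha = 0.7
Y = 2e13          # US GDP, $/year (2017 prices)
L = 1.8e8         # employed workers
K = 7e13          # capital stock, $

A = Y / (L ** alpha * K ** (1 - alpha))
B_alpha = (alpha ** 2 * ((1 - alpha) / alpha) ** (1 - alpha)
           + (1 - alpha) ** 2 * (alpha / (1 - alpha)) ** alpha)

s = 0.3           # assumed saving rate (placeholder, not estimated in the text)
g_target = 0.3    # "explosive growth" threshold, 30%/year
c_bar_max = (s * A * B_alpha / g_target) ** (1 / alpha)

print(f"A = {A:.0f}, B_alpha = {B_alpha:.2f}")
print(f"max cost per human-equivalent AI worker at s={s}: ${c_bar_max:,.0f}")
```

At \(s=0.3\) this is consistent with the bound \(\bar{c}\leq s^{10/7}\cdot(1.5\times 10^{5})\,\$/\text{worker}\) derived above.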
## Appendix E Robustness to investment delays
Here, we show that our earlier results are robust even if we assume that there are delays to investment, in the sense that "realized investment" is an exponential moving average of past inputs to investment. Formally, with a "forgetting rate" of \(\eta>0\), such a model would look like
\[\frac{dK}{dt} =I \tag{24}\] \[\frac{dI}{dt} =\eta(sY-I)\] (25) \[Y =AK \tag{26}\]
Here, \(s\) is a constant factor saving parameter as above, \(A\) is a multiplier with dimensions of frequency that converts the capital stock (which has dimensions of dollars) into GDP (which has dimensions of dollars per unit time), and \(\eta\) is a parameter with dimensions of frequency that controls how responsive realized investment \(I\) is to changes in savings \(sY\).
As we shall see, \(1/\eta\) is the "characteristic time scale" of investment delays in this model. Specifically, \(\log(2)/\eta\) is the time it takes realized investment \(I\) to move halfway toward \(sY\).
This is a straightforward linear system of differential equations in the state \((K,I)\), and the asymptotic growth rate will be determined by the positive eigenvalue of the associated matrix
\[M=\begin{bmatrix}0&1\\ As\eta&-\eta\end{bmatrix}\]
The characteristic polynomial of this matrix is \(t^{2}+\eta t-As\eta\), which has positive root
\[\lambda=\frac{-\eta+\sqrt{\eta^{2}+4As\eta}}{2}=\eta\times\frac{-1+\sqrt{1+4As /\eta}}{2}\]
When \(\eta=\infty\) so that adjustment is instant, the steady state growth rate should be \(As\). Indeed, this is true in the limit, as can be seen from the first order approximation \(\sqrt{1+\varepsilon}=1+\varepsilon/2+O(\varepsilon^{2})\).
Quantitatively, the deviation from this limit is insignificant unless \(\eta\ll 4As\). The order of magnitude here is dominated by \(A\) as \(s\) is a dimensionless saving rate parameter and \(4\) is a constant, so roughly speaking this expression is comparing the two frequency parameters \(A\) and \(\eta\). In the calculation from the previous section, the constant that corresponds to \(A\) is \(2/3\,\text{years}^{-1}\), so this means that we won't see a substantial impact of delays to investment on the growth rate of the economy unless \(\eta\ll 2/3\,\text{years}^{-1}\) or \(1/\eta\gg 18\,\text{months}\).
The most likely values for \(1/\eta\), which is the "characteristic time scale" of investment delays in this model, are probably on the order of a few years. Therefore this simple calculation predicts a constant factor effect of investment delays on the growth rate of the economy after full automation. Indeed, if we assume \(\eta=As\), this constant factor is
\[\frac{\sqrt{5}-1}{2}=\frac{1}{\varphi}\approx 0.618\ldots\]
which is the reciprocal of the golden ratio, meaning that the growth rate is reduced to roughly \(60\%\) of what we would have predicted it to be in the naive model not taking these adjustment costs into account. Overall, we think the uncertainty this adds to the calculation is much smaller than the uncertainty already present from our estimates of \(A\) and \(s\), so this effect looks like it can be safely ignored, perhaps at the expense of choosing other model parameters a bit more conservatively.
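A small numerical sketch of how the positive eigenvalue compares with the no-delay growth rate \(As\) (using \(A=2/3\,\text{years}^{-1}\) as quoted above and an assumed saving rate \(s\)):

```python
import math

# Growth rate with investment delays: the positive eigenvalue of the (K, I)
# system above, compared with the no-delay rate A*s. The value of s is an
# illustrative placeholder in the spirit of the text.
def delayed_growth_rate(A, s, eta):
    return eta * (-1 + math.sqrt(1 + 4 * A * s / eta)) / 2

A, s = 2 / 3, 0.3            # A = 2/3 per year as quoted; s assumed
for inv_timescale in [0.5, 1.5, 3.0, 5.0]:   # 1/eta in years
    eta = 1 / inv_timescale
    ratio = delayed_growth_rate(A, s, eta) / (A * s)
    print(f"1/eta = {inv_timescale:>3} yr -> growth rate / (A*s) = {ratio:.2f}")
```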
|
2309.11848 | TeachingBot: Robot Teacher for Human Handwriting | Teaching physical skills to humans requires one-on-one interaction between
the teacher and the learner. With a shortage of human teachers, such a teaching
mode faces the challenge of scaling up. Robots, with their replicable nature
and physical capabilities, offer a solution. In this work, we present
TeachingBot, a robotic system designed for teaching handwriting to human
learners. We tackle two primary challenges in this teaching task: the
adaptation to each learner's unique style and the creation of an engaging
learning experience. TeachingBot captures the learner's style using a
probabilistic learning approach based on the learner's handwriting. Then, based
on the learned style, it provides physical guidance to human learners with
variable impedance to make the learning experience engaging. Results from
human-subject experiments based on 15 human subjects support the effectiveness
of TeachingBot, demonstrating improved human learning outcomes compared to
baseline methods. Additionally, we illustrate how TeachingBot customizes its
teaching approach for individual learners, leading to enhanced overall
engagement and effectiveness. | Zhimin Hou, Cunjun Yu, David Hsu, Haoyong Yu | 2023-09-21T07:45:25Z | http://arxiv.org/abs/2309.11848v1 | # TeachingBot: Robot Teacher for Human Handwriting
###### Abstract
Teaching physical skills to humans requires one-on-one interaction between the teacher and the learner. With a shortage of human teachers, such a teaching mode faces the challenge of scaling up. Robots, with their replicable nature and physical capabilities, offer a solution. In this work, we present TeachingBot, a robotic system designed for teaching handwriting to human learners. We tackle two primary challenges in this teaching task: the adaptation to each learner's unique style and the creation of an engaging learning experience. TeachingBot captures the learner's style using a probabilistic learning approach based on the learner's handwriting. Then, based on the learned style, it provides physical guidance to human learners with variable impedance to make the learning experience engaging. Results from human-subject experiments based on 15 human subjects support the effectiveness of TeachingBot, demonstrating improved human learning outcomes compared to baseline methods. Additionally, we illustrate how TeachingBot customizes its teaching approach for individual learners, leading to enhanced overall engagement and effectiveness.
## I Introduction
Robots play a crucial role in various physical interaction tasks with humans [1]. They go beyond being mere learners [2, 3], collaborators [4], or assistants [5, 6] of humans, and have the potential to become teachers for humans [7]. In human physical skill learning, such as writing characters, physical guidance provides a more direct learning signal than other signals, e.g., visual signals. However, physical guidance often requires personalized one-on-one interaction between the teacher and the learner. With a shortage of human teachers, robots, being highly scalable and capable of physical interaction, are promising candidates. In this work, we focus on character writing, an essential skill used in daily life. The robot serves as the _teacher_, providing guidance through physical interaction to teach humans handwriting.
Teaching humans to write effectively presents challenges even for human teachers. First, human learners have different writing styles, leading to different learning preferences. Second, striking the right level of guidance is difficult. An insufficient amount of guidance fails to provide adequate support for human learning, while excessive assistance results in dependency on the teacher's guidance [5], hindering the learner's acquisition of essential skills in the task [7]. Therefore, a successful robot teaching system for writing must adapt to the learner's style and promote engagement while providing physical guidance.
We present a robot system, illustrated in Fig. 1, referred to as _TeachingBot_, which aims to enable a human learner to follow the physical guidance of the robot teacher and learn to write given Chinese characters in their reference style. TeachingBot captures the learner's style from their handwriting and provides the necessary physical guidance in two main steps. First, the robot collects the human learner's handwriting. The dispersion of the writing trajectories represents the learner's style for writing the given reference character. In particular, inspired by [8], we apply a mixture of Gaussian distributions to encode the learner's style via the mean trajectory and its variability. Then, leveraging the learned learner style and the reference style, we use a probabilistic learning method to generate the teaching trajectory for the robot teacher. Second, utilizing the generated teaching trajectory, the robot engages in compliant interaction with the human learner using variable impedance control [9]. To offer well-balanced guidance to each learner, the impedance is increased when the learner deviates significantly from the reference and reduced to encourage the learner's engagement when the deviation is minor [5, 10, 11]. This integration allows TeachingBot to encode learner styles into adaptive physical guidance, so it can generate physical guidance tailored to an individual learner. As a result, TeachingBot provides physical guidance that does not impede individual expression and facilitates effective learning.
In this work, it's crucial to distinguish between robot learning and robot teaching. Robot learning involves robots acquiring skills or knowledge to enhance their capabilities, while robot teaching positions the robot as an instructor guiding human learners, as exemplified by TeachingBot.
Fig. 1: The pipeline of TeachingBot. It consists of three phases: 1) Pre-test: collects learner handwriting; 2) Robot teaching: (a) captures learner style, (b) generates individual impedance, and (c) generates individual training trajectory for robot; 3) Evaluation: evaluates learner improvement.
Through human subject experiments involving various Chinese characters, we demonstrate the effectiveness of TeachingBot in facilitating human learning and illustrate how it customizes its teaching approach for individual learners, thereby enhancing overall engagement in human learning. The potential of robot teaching opens up new opportunities for scaling up education and offering learning opportunities to many, even in the absence of human teachers.
## II Related Work
**Teaching Algorithm:** Various algorithms have been employed for human learning, with successes in crowd classification and concept acquisition [12, 13, 14, 15]. However, mastering intricate motor control skills through mere visual or verbal cues remains challenging. Recent advancements integrate skill discovery techniques from reinforcement learning to establish curricula based on skill decomposition, enhancing human motor skill acquisition [16]. The emergence of advanced language models has introduced language correction as a tool to facilitate human learning [17]. In the realm of robot-assisted learning, the concept of robot teaching has emerged. Robots can facilitate human learning physically by using specified dynamics [7, 18, 19]. In our work, instead of only providing physical guidance, we aim to achieve adaptive teaching by learning the learner's style and incorporating it into the physical guidance.
**Robot Training Strategies:** Extensive research has explored encoding human skills and representing robot skills [2]. Probabilistic methods like Gaussian Mixture Model (GMM)/Gaussian Mixture Regression (GMR) and Dynamic Movement Primitives (DMP) [20, 21, 22] have succeeded in generating robot reference trajectories. Others, such as Probabilistic Movement Primitives (ProMP) and Gaussian Process (GP) models [23, 8, 3], have utilized basis functions for trajectory representation and adaptation. While these methods have been applied in physical human-robot interaction (pHRI) tasks like robotic training and rehabilitation [24, 25], most of them focus on robots learning from human experts. In contrast, our framework is centered on generating appropriate trajectories for human learning.
Furthermore, varying the intensity of interaction is crucial for effective human learning [26]. Active engagement of human users can enhance neural plasticity and accelerate learning [27, 5]. Variable stiffness/impedance controllers are highly effective for modulating physical interaction [28, 10]. They adapt assistance levels by updating impedance parameters, as seen in [9, 29, 11], where time-varying impedance is iteratively adjusted based on motion tracking errors. Learning-from-demonstration (LfD) and reinforcement learning (RL) methods were also developed to achieve state-varying modulation for physical interaction based on task goals and human user characteristics [30, 31, 32]. However, model-free RL faces challenges in pHRI tasks due to high sample complexity [31, 32]. In contrast, TeachingBot proposes a variable impedance scheme to efficiently facilitate human learning by adjusting teaching intensity based on the learner's style and real-time writing performance.
## III Method
### _Problem Formulation_
We build upon prior work [7] where the target task is a Markov Decision Process (MDP) and the teaching task is a Partially Observable Markov Decision Process (POMDP). The target task is to write Chinese characters, and the teaching task is to teach humans this skill. The teaching policy is designed to influence the learner's performance through interactive actions, such as generating reference character trajectories and introducing physical interaction impedance to enrich feedback. The goal of TeachingBot is to efficiently teach the human learner to write the reference character. Specifying the elements within the POMDP is challenging due to the absence of an accurate human learning model. Instead of solving the complete POMDP, we choose a simplified solution: TeachingBot adapts support to the learner's needs purely based on the current observation of the learner's style and performance, which is practical in scenarios with limited knowledge of human learning dynamics [16, 33].
### _Overview of TeachingBot_
The overview of TeachingBot is depicted in Fig. 2. The reference character is denoted by a variable \(\mathbf{c}\in\mathbf{C}\), represented by an image. \(\mathbf{C}\) is the dataset including the images of reference Chinese characters. To facilitate the robot teaching trajectory generation, the reference waypoints \(\{\mathbf{x}_{\mathbf{c}}^{n}\}_{n=1}^{N}\) of each stroke are extracted from the image of the reference character. For instance, the reference waypoints of two strokes are plotted in Fig. 2(a). As illustrated in Fig. 1, the human learner would grasp the robot's handle, and the robot teacher guides the human learner to write the reference character by following a generated reference trajectory. For one round of finishing writing the character, we call it one _teaching iteration_. A variable impedance controller (VIC) is implemented with the sample interval \(T_{s}\) to provide correction following the given reference trajectory \(\mathbf{\tau}_{d}=[\mathbf{x}_{d}(0),\mathbf{x}_{d}(T_{s}),\cdots,\mathbf{x}_{d}(\mathbf{\Delta t }_{c})]\), \(\mathbf{x}_{d}\in\mathbb{R}^{3}\) is the reference position of the robot end-effector. Particularly, at each robot teaching iteration, a reference trajectory is generated for the robot teacher. The actual writing trajectory and interaction force trajectory are respectively collected as \(\mathbf{\tau}=[\mathbf{x}(0),\mathbf{x}(T_{s}),\ldots,\mathbf{x}(\mathbf{\Delta t}_{c})]\) and \(\mathbf{\Gamma}=[\mathbf{F}_{h}(0),\mathbf{F}_{h}(T_{s}),\ldots,\mathbf{F}_{h}(\mathbf{\Delta t }_{c})]\), \(\mathbf{F}_{h}\in\mathbb{R}^{3}\) is the interaction force and \(\Delta\mathbf{t}_{c}\) is writing time of the given reference character \(\mathbf{c}\). The writing trajectory is also downsampled to \(N\) writing waypoints \([\mathbf{\chi}(0),\mathbf{\chi}(T_{w}),\ldots,\mathbf{\chi}(\mathbf{\Delta t}_{c})]\), \(T_{w}\) is the time interval of the waypoints (\(T_{s}\ll T_{w}\)). The writing waypoints are abbreviated as \(\{\mathbf{\chi}^{n}\}_{n=1}^{N}\) in the following.
Given a reference character \(\mathbf{c}\), the robot teaching is repeated for \(I\) iterations. At the \(i\)-th teaching iteration (\(i\in I\)), the previous \(L\) actual writing trajectories are collected as \(\{\mathbf{\tau}^{l}\}_{l=1}^{L}\), which are downsampled to writing waypoints \(\{\{\mathbf{\chi}^{l,n}\}_{n=1}^{N}\}_{l=1}^{L}\) for learner style learning. The dataset \(\mathcal{D}_{L}^{i}=\{\{\mathbf{t}_{n}^{l},\ \mathbf{\chi}^{l,n}\}_{n=1}^{N}\}_{l=1}^{L}\) is constructed to include all time-driven waypoints. The key via-points are extracted from the
reference waypoints as training via-points and stored in \(\mathcal{D}_{V}^{i}\). A GMR-GP model \(f(\mathbf{\chi}_{d}|\mathbf{t},\mathcal{D}_{L}^{i},\mathcal{D}_{V}^{i})\) is learned to generate the training waypoints \(\{\mathbf{\chi}_{d}^{i,n}\}_{n=1}^{N}\) based on the given human style and training via-points. Afterward, the training waypoints are interpolated into the reference trajectory \(\mathbf{\tau}_{d}^{i}\) for robot impedance control.
### _Learner Style Learning_
A GMM with \(Z\) components is applied to model the joint distribution of the input time \(\mathbf{\xi}_{i}=\mathbf{t}\in\mathbb{R}^{d_{i}}\), and output writing waypoint \(\mathbf{\xi}_{o}=\mathbf{\chi}\in\mathbb{R}^{d_{o}}\), as
\[\mathcal{P}(\mathbf{\xi})=\sum\nolimits_{z=1}^{Z}h_{z}\mathcal{N}(\mathbf{\xi};\mathbf{ \mu}_{z},\mathbf{\Sigma}_{z}),\mathbf{\xi}=[\mathbf{\xi}_{i},\mathbf{\xi}_{o}]^{T}, \tag{1}\]
where \(h_{z}\), \(\mathbf{\mu}_{z}\), and \(\mathbf{\Sigma}_{z}\) are the prior probability, mean, and covariance of the \(z\)-th Gaussian component, respectively. These parameters are optimized by the Expectation-Maximization algorithm given the dataset \(\mathcal{D}_{L}^{i}\) [21, 34, 8]. Therefore, the learner style is represented by a probabilistic reference trajectory \(\{\widehat{\mathbf{\chi}}^{n}\}_{n=1}^{N}\). Each waypoint \(\widehat{\mathbf{\chi}}^{n}\) is retrieved from a conditional Gaussian distribution \(\mathcal{P}(\widehat{\mathbf{\chi}}^{n}|\mathbf{t}_{n})=\mathcal{N}(\widehat{\mathbf{\mu}}(\mathbf{t}_{n}),\widehat{\mathbf{\Sigma}}(\mathbf{t}_{n}))\). \(\widehat{\mathbf{\mu}}(\mathbf{t}_{n})\) and \(\widehat{\mathbf{\Sigma}}(\mathbf{t}_{n})\) are the conditional mean and covariance. The covariance \(\widehat{\mathbf{\Sigma}}(\mathbf{t}_{n})\) encapsulates the variability of the learner's potential writing trajectories.
For instance, \(L=3\) sets of writing waypoints of reference character \(\mathbf{c}\) are plotted as grey lines in Fig. 2(a) and Fig. 2(b). The GMM (\(Z=8\)) is optimized and visualized by the green ellipses in Fig. 2(e). The mean writing waypoints and variance derived from the GMM are plotted as the green line and green region in Fig. 2(f).
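The following is a minimal, self-contained sketch of this GMM/GMR step on synthetic 2-D "handwriting" data (the data, component count, and use of scikit-learn are illustrative assumptions, not the exact implementation in our system). It fits a joint GMM over \((t, x, y)\) and conditions on time to recover the mean trajectory; the conditional covariance that encodes the style variability is obtained analogously.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Toy sketch of the GMM/GMR step: fit a joint GMM over (t, x, y) waypoints from
# a few noisy "handwriting" trials, then condition on time t to get the mean
# trajectory. Data and component count are made up for illustration.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
trials = [np.column_stack([t,
                           np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(50),
                           t ** 2 + 0.05 * rng.standard_normal(50)])
          for _ in range(3)]
data = np.vstack(trials)                      # columns: [t, x, y]

gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0).fit(data)

def gmr(t_query):
    """Conditional mean of (x, y) given time, from the joint GMM."""
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    resp = np.array([w[z] * norm.pdf(t_query, means[z, 0], np.sqrt(covs[z, 0, 0]))
                     for z in range(len(w))])
    resp /= resp.sum()                                        # responsibilities h_z(t)
    mu = np.zeros(2)
    for z in range(len(w)):
        gain = covs[z, 1:, 0] / covs[z, 0, 0]                 # Sigma_xt / Sigma_tt
        mu += resp[z] * (means[z, 1:] + gain * (t_query - means[z, 0]))
    return mu

print(gmr(0.25))   # conditional mean waypoint at t = 0.25
```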
### _Training Via-points Extraction_
We perform curvature-based trajectory compression to extract the training via-points from the reference waypoints. In particular, we retain waypoints where the curvature (change in direction) is highest, as these waypoints are likely to represent features of the reference character. Given the reference waypoints \([\mathbf{\chi}_{c}^{1},\dots,\mathbf{\chi}_{c}^{N}]\), we calculate the curvature \(\kappa_{n}\) for each interior waypoint \(\mathbf{\chi}_{c}^{n}\) using the first-order derivative \(\dot{\mathbf{\chi}}_{c}^{n}\) and second-order derivative \(\ddot{\mathbf{\chi}}_{c}^{n}\). The curvature can be computed as \(\kappa_{n}=\ddot{\mathbf{\chi}}_{c}^{n}/(1+(\dot{\mathbf{\chi}}_{c}^{n})^{2})^{1.5}\). We then select the \(H\) waypoints with the highest curvature values as training via-points, indicating significant changes in trajectory direction.
The extracted training via-points are stored in \(\mathcal{D}_{V}^{i}=\{\mathbf{t}_{h}^{i},\mathbf{\chi}_{c}^{h}\}_{h=1}^{H}\) (\(H\ll N\)). In Fig. 2(b), we visualize the \(H=5\) extracted training via-points as red dots.
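A minimal sketch of this extraction step (an illustration, not our exact implementation: derivatives are taken with finite differences, and the element-wise curvature is combined across the \(x\)/\(y\) coordinates by taking a norm, which is one possible reading of the formula above):

```python
import numpy as np

def extract_via_points(waypoints, H=5):
    """Pick the H interior waypoints with the highest curvature, per the rule above.

    waypoints: (N, 2) array of reference waypoints (x, y); returns their indices."""
    d1 = np.gradient(waypoints, axis=0)                 # first-order differences
    d2 = np.gradient(d1, axis=0)                        # second-order differences
    # Element-wise curvature as in the text, combined across x/y by the norm.
    kappa = np.linalg.norm(d2 / (1 + d1 ** 2) ** 1.5, axis=1)
    kappa[[0, -1]] = -np.inf                            # keep only interior points
    idx = np.argsort(kappa)[-H:]
    return np.sort(idx)

stroke = np.column_stack([np.linspace(0, 1, 100),
                          np.sin(3 * np.linspace(0, 1, 100))])
print(extract_via_points(stroke, H=5))
```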
### _Training Waypoints Generation_
A multi-output GP (MOGP) is employed to fit the deterministic relationship \(\mathbf{\xi}_{o}=f(\mathbf{\xi}_{i})+\mathbf{\epsilon}_{t}\) from the input \(\mathbf{\xi}_{i}=\mathbf{t}\) to the vector-valued output \(\mathbf{\xi}_{o}=\mathbf{\chi}\), where \(\mathbf{\epsilon}_{t}=[\epsilon_{t}^{1},\ldots,\epsilon_{t}^{d_{o}}]\), \(\epsilon_{t}^{p}\sim\mathcal{N}(0,\sigma_{p}^{2})\). The distribution of \(\mathbf{\chi}\) given input \(\mathbf{t}\) is \(\mathbf{\chi}(\mathbf{t})\sim\mathcal{GP}(\mathbf{\mu}(\mathbf{t}),\mathbf{k}(\mathbf{t},\mathbf{t}^{\prime}))\), where \(\mathbf{\mu}(\cdot):\mathbb{R}^{d_{i}}\rightarrow\mathbb{R}^{d_{o}}\) and \(\mathbf{k}(\cdot,\cdot):\mathbb{R}^{d_{i}}\times\mathbb{R}^{d_{i}}\rightarrow \mathbb{R}^{d_{o}}\times\mathbb{R}^{d_{o}}\) are the mean and kernel functions. The joint distribution of the observed samples and the predicted output \(\mathbf{\chi}^{*}\) at input \(\mathbf{t}_{*}\) is given by
\[\begin{bmatrix}\mathbf{\chi}^{1:N}\\ \mathbf{\chi}^{*}\end{bmatrix}\sim\mathcal{N}\left(\begin{bmatrix}\mathbf{\mu}(\mathbf{t}_{1:N})\\ \mathbf{\mu}(\mathbf{t}_{*})\end{bmatrix},\quad\begin{bmatrix}\mathbf{K}(\mathbf{t},\mathbf{t})+\mathbf{\Sigma}_{c}&\mathbf{K}(\mathbf{t},\mathbf{t}_{*})\\ \mathbf{K}(\mathbf{t}_{*},\mathbf{t})&\mathbf{K}(\mathbf{t}_{*},\mathbf{t}_{*})\end{bmatrix}\right), \tag{2}\]
where \(\mathbf{\chi}^{1:N}\) are the observed values at inputs \(\mathbf{t}_{1:N}\). \(\mathbf{K}(\mathbf{t},\mathbf{t})\in\mathbb{R}^{Nd_{o}\times Nd_{o}}\), \(\mathbf{K}(\mathbf{t},\mathbf{t}_{*})\in\mathbb{R}^{Nd_{o}\times d_{o}}\), and \(\mathbf{K}(\mathbf{t}_{*},\mathbf{t}_{*})\in\mathbb{R}^{d_{o}\times d_{o}}\) are the Gram matrices whose elements are calculated by evaluating the kernel function on the corresponding input pairs. Similar to [8, 2], the kernel function \(\mathbf{k}(\mathbf{t},\mathbf{t}^{\prime})=\sum_{q=1}^{Q}\mathbf{\Xi}_{q}\mathbf{k}_{q}(\mathbf{t},\mathbf{t}^{\prime})\) is designed based on the Linear Model of Coregionalization (LMC) assumption. \(\mathbf{\Xi}_{q}\in\mathbb{R}^{d_{o}\times d_{o}}\) is a positive semi-definite coregionalization matrix, and \(\mathbf{k}_{q}(\mathbf{t},\mathbf{t}^{\prime})\) is a scalar kernel function. The design of \(\mathbf{\Xi}_{q}\) and \(\mathbf{k}_{q}(\cdot,\cdot)\) depends on the prior knowledge of \(\mathcal{GP}(\cdot)\).
For the \(i\)-th teaching iteration, given the previous writing waypoints in \(\mathcal{D}_{L}^{i-1}\), the posterior distribution of the learner writing waypoints is a multivariate Gaussian distribution (MGD), \(\mathcal{P}(\mathbf{\chi}^{*}|\mathbf{t}_{*},\mathcal{D}_{L}^{i-1})\sim\mathcal{N}(\mathbf{\mu}_{L}^{*},\mathbf{\Sigma}_{L}^{*})\). The mean \(\mathbf{\mu}_{L}^{*}\) and covariance \(\mathbf{\Sigma}_{L}^{*}\) are calculated as
\[\begin{split}\mathbf{\mu}_{L}^{*}&=\mathbf{\mu}(\mathbf{t}_{*})+\mathbf{K}(\mathbf{t}_{ *},\mathbf{t})(\mathbf{K}(\mathbf{t},\mathbf{t})+\mathbf{\Sigma}_{c})^{-1}(\mathbf{\chi}^{1:N}-\mathbf{ \mu}(\mathbf{t}))\\ \mathbf{\Sigma}_{L}^{*}&=\mathbf{K}(\mathbf{t}_{*},\mathbf{t}_{*})-\mathbf{K}(\mathbf{t}_{*}, \mathbf{t})(\mathbf{K}(\mathbf{t},\mathbf{t})+\mathbf{\Sigma}_{c})^{-1}\mathbf{K}(\mathbf{t},\mathbf{t}_{*}), \end{split} \tag{3}\]
where \(\mathbf{\Sigma}_{c}\) is the covariance matrix of the given noise. \(\mathbf{\mu}(\mathbf{t}_{*})\) is the prior mean value of the input \(\mathbf{t}_{*}\).
The training via-points in \(\mathcal{D}_{V}^{i}\) are considered as new observations in addition to the previous observations of learner style in \(\mathcal{D}_{L}^{i-1}\). The style of the learner has been fitted by GMR from \(\mathcal{D}_{L}^{i-1}\) in Section III-C. Unlike standard MOGP, the GMR-GP model replaces the prior mean \(\mathbf{\mu}(\mathbf{t}_{*})\) with the estimated GMR mean. The kernel function \(\mathbf{k}(\mathbf{t},\mathbf{t}^{\prime})=\sum_{z=1}^{Z}h_{z}(\mathbf{t})h_{z}(\mathbf{t}^{\prime})\widehat{\mathbf{\Sigma}}_{z}\mathbf{k}_{z}(\mathbf{t},\mathbf{t}^{\prime})\) is designed based on the learned variability, where \(h_{z}(\mathbf{t})\) and \(\widehat{\mathbf{\Sigma}}_{z}\) are the responsibilities and component-wise conditional covariance matrices derived from (1).
Fig. 2: Overview of TeachingBot. At each teaching iteration, a reference character is selected from the image dataset. The reference and writing waypoints are extracted from the reference image and handwritten images of the learner (as depicted by blue and gray lines in (a)). First, a GMM is fitted to capture main features of previous writing waypoints, and GMR is used to represent learner writing styles calculated from the learned GMM (as depicted in (e) and (f)). Second, training via-points are derived from reference waypoints (as depicted by red scatters in (b)). Third, a GMR-GP is fitted to generate training waypoints based on the training via-points and the learned writing style (as depicted by black lines in (c)). Then, the initial stiffness of each learner for impedance control is obtained from the difference between writing waypoints and reference waypoints (as depicted in (d)), and an impedance variation function (as depicted in (g)) is utilized to modulate the level of active participation.
The MGD for the training waypoints generation is rewritten as
\[\begin{split}\mathcal{P}(\mathbf{\chi}^{*}|\mathbf{t}_{*},\mathcal{D}_{ L}^{i-1},\mathcal{D}_{V}^{i})\propto\mathcal{P}(\mathbf{\chi}^{*}|\mathbf{t}_{*}, \mathcal{D}_{L}^{i-1})\mathcal{P}(\mathbf{\chi}^{*}|\mathbf{t}_{*},\mathcal{D}_{V}^{i} ),\\ \mathcal{P}(\mathbf{\chi}^{*}|\mathbf{t}_{*},\mathcal{D}_{L}^{i-1}, \mathcal{D}_{V}^{i})\sim\mathcal{N}(\mathbf{\mu}_{L-V}^{i},\mathbf{\Sigma}_{L-V}^{i}),\end{split} \tag{4}\]
where \(\mathbf{\mu}_{L-V}^{i}\) and \(\mathbf{\Sigma}_{L-V}^{i}\) can be derived from \(\{\mathbf{\mu}_{L}^{*},\mathbf{\Sigma}_{L}^{*},\mathbf{\mu}_{V}^{*},\mathbf{\Sigma}_{V}^{*}\}\) according to [2]. \(\mathbf{\mu}_{L}^{*}\) and \(\mathbf{\Sigma}_{L}^{*}\) are the mean and covariance of \(\mathcal{P}(\mathbf{\chi}^{*}|\mathbf{t}_{*},\mathcal{D}_{L}^{i-1})\); \(\mathbf{\mu}_{V}^{*}\) and \(\mathbf{\Sigma}_{V}^{*}\) are the mean and covariance of \(\mathcal{P}(\mathbf{\chi}^{*}|\mathbf{t}_{*},\mathcal{D}_{V}^{i})\).
Afterwards, the training waypoints \(\{\mathbf{\chi}_{d}^{i,n}\}_{n=1}^{N}\) are sampled from the learned posterior distribution in (4) given \(\{\mathbf{t}_{n}\}_{n=1}^{N}\). For instance, we use the Matérn kernel (smoothness \(\nu=2.5\)) for the above \(\mathcal{GP}\) fitting. Given the training via-points in Fig. 2(b) and the learned styles in Fig. 2(e), the sampled training waypoints are plotted as the black line in Fig. 2(c).
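The product of the two Gaussians in (4) has a standard closed form (a precision-weighted combination); a minimal sketch of that fusion step, with hypothetical numbers, is given below.

```python
import numpy as np

def fuse_gaussians(mu_L, Sigma_L, mu_V, Sigma_V):
    """Moments of the (renormalized) product of two Gaussians, as used in (4)."""
    P_L, P_V = np.linalg.inv(Sigma_L), np.linalg.inv(Sigma_V)   # precisions
    Sigma = np.linalg.inv(P_L + P_V)
    mu = Sigma @ (P_L @ mu_L + P_V @ mu_V)
    return mu, Sigma

# Hypothetical 2-D example: the via-point posterior is much more confident,
# so the fused mean is pulled strongly toward it.
mu, Sigma = fuse_gaussians(np.array([0.0, 0.0]), np.eye(2) * 0.04,
                           np.array([0.1, 0.2]), np.eye(2) * 0.0025)
print(mu, np.diag(Sigma))
```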
### _Variable Impedance Control_
The reference trajectory \(\mathbf{\tau}_{d}^{i}\) for the robot control is obtained by interpolating the generated training waypoints \(\{\mathbf{\chi}_{d}^{i,n}\}_{n=1}^{N}\). At each actuation step, the control input \(\mathbf{u}\) is sampled from the controller \(\pi(\mathbf{u}|\mathbf{x};\mathbf{x}_{d},\mathbf{\mathcal{K}}_{d}^{i},\mathbf{\mathcal{B}}_{d}^{i})\) for reference motion tracking. \(\mathbf{\mathcal{K}}_{d}^{i}\in\mathbb{R}^{3\times 3}\) and \(\mathbf{\mathcal{B}}_{d}^{i}\in\mathbb{R}^{3\times 3}\) are the reference stiffness and damping for the \(i\)-th teaching iteration.
#### Iii-F1 Control Law
A control law is designed as \(\mathbf{u}=\mathbf{J}^{T}\big{[}-\mathbf{\mathcal{K}}_{d}^{i}(\mathbf{x}-\mathbf{x}_{d})-\mathbf{\mathcal{B}}_{d}^{i}(\dot{\mathbf{x}}-\dot{\mathbf{x}}_{d})+\mathbf{F}_{fd}\big{]},\) where \(\dot{\mathbf{x}}\) and \(\dot{\mathbf{x}}_{d}\) are the actual and reference velocities. \(\mathbf{J}\) is the Jacobian matrix. Similar to [35], \(\mathbf{F}_{fd}\) is added to compensate for the robot dynamics.
#### Iii-F2 Impedance Variation
For each learner and given the reference character \(\mathbf{c}\), \(\mathbf{\mathcal{K}}_{d}^{i}\) and \(\mathbf{\mathcal{B}}_{d}^{i}\) are updated as follows \(\mathbf{\mathcal{K}}_{d}^{i}=\mathbf{\mathcal{K}}_{r}+\mathbf{\mathcal{K}}_{s}^{i},\quad \mathbf{\mathcal{B}}_{d}^{i}=\mathbf{\mathcal{B}}_{r}+\mathbf{\mathcal{B}}_{s}^{i}\), where \(\mathbf{\mathcal{K}}_{r}\) and \(\mathbf{\mathcal{B}}_{r}\) are the initial stiffness and damping based on the individual pre-test without robot teaching. \(\mathbf{\mathcal{K}}_{s}^{i}\) and \(\mathbf{\mathcal{B}}_{s}^{i}\) are updated according to the \((i-1)\)-th training performance to adjust the engagement of the learner. \(\mathbf{\mathcal{K}}_{s}^{0}\) and \(\mathbf{\mathcal{B}}_{s}^{0}\) for the first teaching iteration are set to zero.
\(L\) writing images for the reference character \(\mathbf{c}\) during the individual pre-test are collected as \(\{\widetilde{\mathbf{c}}_{l}\}_{l=1}^{L}\). The writing waypoints are extracted as \(\{\{\mathbf{\chi}_{\widetilde{\mathbf{c}}_{l}}^{l,n}\}_{n=1}^{N}\}_{l=1}^{L}\). The mean writing waypoints \(\overline{\mathbf{\chi}}_{c}\) can be estimated following Section III-C. \(\mathbf{\chi}_{\mathbf{c}}\) and \(\overline{\mathbf{\chi}}_{c}\) are aligned by Dynamic Time Warping (DTW) [36], as depicted in Fig. 2(d). \(\mathbf{\mathcal{K}}_{r}\) is obtained as follows
\[\mathbf{\mathcal{K}}_{r}=\beta_{r}|\Delta\overline{\mathbf{\chi}}|,\Delta\overline{ \mathbf{\chi}}=\overline{\mathbf{\chi}}_{c}-\mathbf{\chi}_{\mathbf{c}}, \tag{5}\]
where \(\beta_{r}\) is the coefficient. In practice, damping \(\mathbf{\mathcal{B}}_{r}\) is obtained from the stiffness \(\mathbf{\mathcal{B}}_{r}=1/2\sqrt{\mathbf{\mathcal{K}}_{r}}\). \(\mathbf{\mathcal{K}}_{s}^{i}\) and \(\mathbf{\mathcal{B}}_{s}^{i}\) are obtained as
\[\mathbf{\mathcal{K}}_{s}^{i} =\mathbf{\mathcal{K}}_{s}^{i-1}+\beta_{\mathcal{K}}\mathbf{\Psi}(\Delta \mathbf{\chi}^{i-1}),\mathbf{\mathcal{B}}_{s}^{i}=1/2\sqrt{\mathbf{\mathcal{K}}_{s}^{i}}, \tag{6}\] \[\mathbf{\Psi}(\Delta\mathbf{\chi}^{i-1,p}) =\frac{\exp{(\alpha\mathbf{\Omega}^{i-1,p}-\Pi_{p})}-1}{\exp{(\alpha \mathbf{\Omega}^{i-1,p}-\Pi_{p})}+1},p=0,1,2,\]
where \(\mathbf{\Psi}(\cdot)\) is an element-wise scalar function (as shown in Fig. 2(g)). \(\Delta\mathbf{\chi}^{i-1}=\mathbf{\chi}^{i-1}-\mathbf{\chi}_{d}^{i-1}\) and \(\Delta\mathbf{\chi}^{i-1,p}\) is the \(p\)-th element of \(\Delta\mathbf{\chi}^{i-1}\). \(\mathbf{\Omega}^{i-1,p}=\Delta^{2}\mathbf{\chi}^{i-1,p}-\Pi_{p}^{2}\). \(\beta_{\mathcal{K}}\) is the coefficient regulating the convergence speed. \(\Pi_{p}\) is the predefined threshold determining the error region in which the assistance to the learner is increased. \(\alpha\) is a positive scalar.
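A small sketch of the stiffness initialization and update in (5)-(6), transcribed from the expressions above (the numerical values are the ones later listed in the experimental setup, and interpreting \(\Delta^{2}\mathbf{\chi}\) as the element-wise square is our reading):

```python
import numpy as np

# Sketch of the impedance variation rule in (5)-(6); parameter values follow the
# experimental setup in Section IV-A. tanh((a)/2) equals (exp(a)-1)/(exp(a)+1).
beta_r, beta_K, Pi, alpha = 1000.0, 100.0, 0.05, 2000.0

def psi(delta_chi):
    """Element-wise gain: negative while |error| < Pi (relax), positive beyond it."""
    omega = delta_chi ** 2 - Pi ** 2          # our reading of Delta^2 chi - Pi^2
    return np.tanh((alpha * omega - Pi) / 2.0)

def update_stiffness(K_s_prev, delta_chi):
    """One teaching iteration of K_s^i = K_s^{i-1} + beta_K * Psi(delta chi), eq. (6)."""
    return K_s_prev + beta_K * psi(delta_chi)

K_r = beta_r * np.abs(np.array([0.03, 0.08, 0.00]))   # eq. (5), hypothetical pre-test errors
K_s = update_stiffness(np.zeros(3), np.array([0.02, 0.10, 0.05]))
print("initial stiffness K_r:", K_r)
print("stiffness adjustment K_s after one iteration:", K_s)
```

Errors well inside the \(\Pi_{p}\) band reduce the stiffness, encouraging active participation, while large errors increase it.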
## IV Experiments
We conduct human-subject experiments to answer the following questions: 1) In general, does TeachingBot enable humans to learn the target skill more efficiently compared with baselines?; 2) Is TeachingBot able to capture human style and provide adaptive guidance to learners?; and 3) Is the learning experience with TeachingBot encouraging to the learner?
### _Experimental Setups_
#### Iv-A1 Hardware Setup
We employ the Franka Emika Panda robot, a 7-DoF torque-controlled robot, for all teaching experiments (as depicted in Fig. 1). The length and width of the writing space are 350mm and 350mm, respectively. The interaction force on the end-effector handle is estimated from joint torques. The reference character \(\mathbf{c}\) is represented by a 128 \(\times\) 128 image. During the pre-test and evaluation phases,
Fig. 3: Visualization of training waypoints generation for characters \(C_{3}\) and \(C_{4}\). (a) Reference waypoints of each stroke; (b) learned GMMs (green ellipse) from writing waypoints; (c) learned style by GMR from writing waypoints; (d) training via-points; (e) generated training waypoints (black lines) by GMR-GP.
the handwritten characters of human learners are captured by a mounted camera, and the image is also resized to 128 \(\times\) 128. The reference waypoints and writing waypoints of each character are extracted from the images via feature extraction methods. For the robot-teaching phase, the learner's actual writing trajectories are represented by the actual position of the robot end-effector.
The impedance control of the robot teacher runs at 1000Hz (\(T_{s}=0.001s\)). The maximal stiffness of the VIC is set to \(\mathcal{K}_{d}^{max}=[1200,1200,1200]N/m\). All parameters for style learning and waypoints generation are set as \(N=200\), \(L=3\), \(H=5\), and \(I=9\). The parameters for impedance variation are set as \(\beta_{r}=1000N/m^{2}\), \(\beta_{\mathcal{K}}=100N/m^{2}\), \(\Pi_{p}=0.05m\) and \(\alpha=2000\).
#### Iv-A2 Data Curation
The Chinese character dataset \(\mathbf{C}\) we utilize is obtained from link. We select 5 distinct characters with varying numbers of strokes and levels of complexity from the dataset: \(C_{1}:\text{--}\), \(C_{2}:\text{\Large$\bigwedge$}\), \(C_{3}:\text{\Large$\left|\right|$}\), \(C_{4}:\text{\Large$\left|\right|$}\), and \(C_{5}:\text{\Large$\left|\right|$}\), in ascending order of complexity. As depicted in Fig. 3, we visualized the process for generating the training waypoints of characters \(C_{3}\) and \(C_{4}\) following the implementation in Section III.
### _Experimental Protocol_
_Baselines:_ We assess the effectiveness of TeachingBot by comparing it to two primary baselines:
1. _Font-Copy (FC)_: Participants are only allowed to look at the reference character without any physical correction from the robot. The subjects in this group are referred to as _Group 1_ (\(G1\)).
2. _Robot Guided Writing (RGW)_: Similarly to robotic physical training [29] and rehabilitation [9, 11], this baseline uses high stiffness \(\mathcal{K}_{d}^{max}\) to fully guide participants in replicating the reference trajectory without impedance variation for considering the learner active engagement. The subjects in this group are referred to as _Group 2_ (\(G2\)).
The group of subjects trained with TeachingBot is referred to as _Group 3_ (\(G3\)). We recruited 15 human subjects, with 4 males and 1 female in each group, who have different levels of Chinese character writing experience. The experiments for each group are run for the same number of teaching iterations (\(I=9\)) and consist of three phases:
1. _Pre-test_: The pre-test phase occurs before any training to assess the initial performance of each subject, eliminating the influence of prior knowledge. Human subjects write the given reference character \(L\) times without any physical guidance or visual guidance. The individual writing time \(\Delta t_{c}\) of each reference character \(\mathbf{c}\) is estimated for each subject.
2. _Robot Teaching/Font Copy_: For \(G2\) and \(G3\), as illustrated in Fig. 1, during the robot teaching phase, the learner holds the handle and looks at the actual writing trajectories, and the robot executes the teaching trajectory to guide the learner. The subjects of \(G1\) first look at the simulation of character writing on the screen and then handwrite the reference character.
3. _Evaluation_: The evaluation phase is conducted after the robot teaching or font copy phase. All subjects are allowed to write the reference character without any reference.
### _Experimental Results_
#### Iv-C1 Performance Comparison
Two key metrics are defined to measure the similarity between the reference and written waypoints using the DTW distance [36]. First, as illustrated in Fig. 4, _Metric 1_ (\(M_{1}\)) aims to capture global structural similarity: the centers of the written and reference characters are aligned before the DTW distance calculation. Second, as illustrated in Fig. 5, unlike \(M_{1}\), _Metric 2_ (\(M_{2}\)) aims to capture stroke-wise similarity: the starting points of the reference and written strokes are aligned before the DTW distance calculation.
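For reference, a compact generic DTW implementation is sketched below (an illustration, not the exact preprocessing or normalization used for \(M_{1}\) and \(M_{2}\); the centering in the toy example mimics the alignment used by \(M_{1}\)):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between two
    waypoint sequences (each an (N, 2) array of x/y positions)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

ref = np.column_stack([np.linspace(0, 1, 30), np.zeros(30)])
written = ref + np.array([0.02, 0.05])        # a shifted copy of the reference
print(dtw_distance(written - written.mean(0), ref - ref.mean(0)))  # ~0 after centering, as in M1
```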
The statistical results of all experiments are presented in Fig. 6. Our first finding is that the subjects of all three groups have highly similar initial performance. The average percentage of similarity improvement over all subjects of each group after training is calculated for each character and depicted in Fig. 6(a) and (b). In particular, the average similarity improvement in terms of \(M_{1}\) and \(M_{2}\) for each group over the five characters is given in Table I. Compared to both baselines (FC and RGW), the improvement of the learners trained with TeachingBot was observed to be substantially better. In terms of \(M_{1}\), however, the improvement over the baselines is not entirely significant (\(p<0.05\) and \(p\approx 0.09\), respectively). Compared to RGW, as the \(M_{1}\) metric focuses on structural similarity, the effect of adaptation to the specific human learning style is only marginally significant. That said, this also highlights the importance of physical guidance for teaching learners to write.
Fig. 4: Writing performance before teaching (red lines) and after teaching (black lines) according to \(M_{1}\) of character \(C_{2}\) by three groups. (a) FC: (b) RGW; (c) Ours.
Fig. 5: Writing performance before teaching (red lines) and after teaching (black lines) according to \(M_{2}\) of character \(C_{2}\) by three groups. (a) FC; (b) RGW; (c) Ours.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & \(M_{1}\) improvement (\%) & \(M_{2}\) improvement (\%) & Force (N) \\ \hline FC & 18.790 \(\pm\) 12.15 & 17.521 \(\pm\) 7.621 & NA \\ RGW & 25.008 \(\pm\) 13.044 & 24.837 \(\pm\) 5.362 & 6.813 \(\pm\) 1.131 \\ Ours & **34.296 \(\pm\) 14.03** & **36.436 \(\pm\) 6.986** & **19.921 \(\pm\) 4.096** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Average Similarity Improvement and Interaction Force.
In terms of \(M_{2}\), in contrast, TeachingBot leads to better improvement than both FC and RGW (\(p<0.05\) for both comparisons). This indicates the need for adaptation when learning fine-grained stroke-wise details of the reference character.
#### Iv-C2 Adaptiveness of Training Waypoints
In Fig. 7, we visualize the training waypoints of the second stroke of character \(C_{5}\) for the second subject in \(G3\) during the robot teaching phase to demonstrate the adaptation ability. In Fig. 7(a), the red scatter points are the mean of the writing waypoints learned from the data collected in the pre-test phase. They differ considerably from the reference waypoints, plotted as the blue line, and this difference is used to generate the initial reference stiffness. Fig. 7(b), Fig. 7(c), Fig. 7(d), and Fig. 7(e) show the results of the \(1\)-st, \(3\)-rd, \(6\)-th, and \(9\)-th teaching iterations, respectively. In Fig. 7(b), training via-points (\(H=5\)) are extracted to enable the learner to capture the curvature of the reference style in the dashed circle. At the \(3\)-rd and \(6\)-th teaching iterations, as illustrated in Fig. 7(c) and (d), training via-points (\(H=8\)) are extracted to enable the learner to focus on different parts of the reference style (as shown in the dashed circles). As the robot teaching progresses, the generated training waypoints gradually get closer to the reference waypoints. In particular, from Fig. 7(d), we can see that the learner can write the reference character with small variability.
#### Iv-C3 Engagement of Human Learners
As depicted in Fig. 6(a) and Fig. 6(b), learners in \(G3\) show greater similarity improvement than those in \(G2\). To validate that learner engagement is crucial for writing skill learning, we use the interaction force exerted by the learner as an indicator of engagement: a greater force suggests more active participation. We calculate the average interaction force for each character across all learners. As depicted in Fig. 6(c), the interaction force in \(G3\) significantly surpasses that in \(G2\) (\(p<0.05\)), indicating higher human engagement during robot teaching, which aids in improving writing skills. In this article, we use the same parameters of the impedance variation function for all learners; these could also be optimized for each learner to achieve higher learning efficiency.
### _Limitations and Discussion_
We conducted a post-experiment survey to assess human learners' perceived improvement in writing skills. Despite the quantitative performance advantages of \(G3\), its learners do not report higher confidence in their writing skills than those of \(G2\), consistent with previous research [18]. This contradiction leads to two hypotheses: _1)_ Lack of visual signal integration: human learners may need clear visual signals in addition to physical ones for better understanding. _2)_ Failure to emulate human instructors: due to differences in morphology and the challenge of replicating real teaching scenarios, full emulation is difficult. These hypotheses guide future robot teaching system development, emphasizing the importance of multi-modal signals (e.g., language and visual) for greater effectiveness and generality.
## V Conclusion
In this work, we introduce TeachingBot, a robot teaching system in which robots assume the role of instructors, guiding humans in character writing through adaptive physical interactions. Results from human-subject experiments demonstrate the effectiveness of TeachingBot. Moreover, we provide evidence of how TeachingBot customizes its teaching approach to meet the unique needs of individual learners, leading to enhanced overall engagement and effectiveness. The potential of robot teaching presents exciting opportunities for scaling up education and providing learning opportunities to many, even in the absence of human teachers.
Fig. 6: Statistical results of similarity improvement and interaction force of three experimental groups across five Chinese characters. (a) Percentage of similarity improvement based on \(M_{1}\); (b) percentage of similarity improvement based on \(M_{2}\); (c) average interaction force per waypoint.
Fig. 7: Visualization of training waypoint changes. Blue lines are reference waypoints, red scatter points are training via-points, red lines are the learned means of writing waypoints, black lines are the means of training waypoints, and purple lines are three sampled potential writing waypoints. (a) Initial reference stiffness generation; (b) \(1\)-st teaching iteration; (c) \(3\)-rd teaching iteration; (d) \(6\)-th teaching iteration; (e) \(9\)-th teaching iteration.
2301.13869 | Reverse engineering adversarial attacks with fingerprints from
adversarial examples | In spite of intense research efforts, deep neural networks remain vulnerable
to adversarial examples: an input that forces the network to confidently
produce incorrect outputs. Adversarial examples are typically generated by an
attack algorithm that optimizes a perturbation added to a benign input. Many
such algorithms have been developed. If it were possible to reverse engineer
attack algorithms from adversarial examples, this could deter bad actors
because of the possibility of attribution. Here we formulate reverse
engineering as a supervised learning problem where the goal is to assign an
adversarial example to a class that represents the algorithm and parameters
used. To our knowledge it has not been previously shown whether this is even
possible. We first test whether we can classify the perturbations added to
images by attacks on undefended single-label image classification models.
Taking a "fight fire with fire" approach, we leverage the sensitivity of deep
neural networks to adversarial examples, training them to classify these
perturbations. On a 17-class dataset (5 attacks, 4 bounded with 4 epsilon
values each), we achieve an accuracy of 99.4% with a ResNet50 model trained on
the perturbations. We then ask whether we can perform this task without access
to the perturbations, obtaining an estimate of them with signal processing
algorithms, an approach we call "fingerprinting". We find the JPEG algorithm
serves as a simple yet effective fingerprinter (85.05% accuracy), providing a
strong baseline for future work. We discuss how our approach can be extended to
attack agnostic, learnable fingerprints, and to open-world scenarios with
unknown attacks. | David Aaron Nicholson, Vincent Emanuele | 2023-01-31T18:59:37Z | http://arxiv.org/abs/2301.13869v2 | # Reverse engineering adversarial attacks with fingerprints from adversarial examples
###### Abstract
In spite of intense research efforts, deep neural networks remain vulnerable to adversarial examples: an input that forces the network to confidently produce incorrect outputs. Adversarial examples are typically generated by an attack algorithm that optimizes a perturbation added to a benign input. Many such algorithms have been developed. If it were possible to reverse engineer attack algorithms from adversarial examples, this could deter bad actors because of the possibility of attribution. Here we formulate reverse engineering as a supervised learning problem where the goal is to assign an adversarial example to a class that represents the algorithm and parameters used. To our knowledge it has not been previously shown whether this is even possible. We first test whether we can classify the perturbations added to images by attacks on undefended single-label image classification models. Taking a "fight fire with fire" approach, we leverage the sensitivity of deep neural networks to adversarial examples, training them to classify these perturbations. On a 17-class dataset (5 attacks, 4 bounded with 4 epsilon values each), we achieve an accuracy of 99.4% with a ResNet50 model trained on the perturbations. We then ask whether we can perform this task without access to the perturbations, obtaining an estimate of them with signal processing algorithms, an approach we call "fingerprinting". We find the JPEG algorithm serves as a simple yet effective fingerprint (85.05% accuracy), providing a strong baseline for future work. We discuss how our approach can be extended to attack agnostic, learnable fingerprints, and to open-world scenarios with unknown attacks.
deep neural network, adversarial machine learning, classification, supervised learning, adversarial examples
## I Introduction
Deep neural networks are susceptible to adversarial examples [1, 2]: inputs optimized to produce incorrect or unexpected outputs. Typically adversarial samples are generated by optimizing a perturbation \(\delta\) added to a benign image \(\mathbf{x}\)[3]. This added perturbation can be optimized by one of an ever-growing list of attack algorithms [4, 5], e.g., by maximizing the loss of the softmax function used to train single-label neural networks for image classification.
It remains unclear whether it will prove too computationally expensive or theoretically impossible [6, 7, 8, 9, 10, 11] to completely defend neural networks from adversarial attacks, at least for neural network models in their current mathematical formulation. Defenses are notoriously difficult to evaluate, in spite of concerted efforts by the community to establish good practices [12]. Given the difficulties faced in developing defenses against adversarial attacks, we consider a different question. We ask whether it is possible to reverse engineer adversarial attacks, using adversarial examples. If this were possible, it could deter bad actors from deploying adversarial example in the real world, e.g., due to the threat of attribution. Accordingly, we identified two capabilities that would be desirable in a system for classifying and searching datasets of adversarial examples. We describe these two capabilities then explain how we formulate a machine learning task that can produce models with those capabilities.
Fig. 1: Schematic of our approach to reverse engineering adversarial attacks. Top row illustrates “fingerprinting” adversarial examples to obtain an estimate of the perturbation \(\delta\) added by an attack to an image. Bottom row illustrates training deep neural networks to classify fingerprints from adversarial examples, assigning each to a class that corresponds to an attack algorithm and parameters. Our goal in formulating the problem this way is to train models on large datasets of fingerprints from attacks generated with existing software frameworks, and then classify attacked images without knowledge of the attack or access to the perturbation, by using fingerprints.
### _Classifying perturbations by attack_
The first capability we would want such a system to have is to classify adversarial examples by the attack algorithm used to generate them. Of course it may be possible to reverse engineer attacks without classifying them, e.g., by literally "reversing" the optimization. In other words, here we formulate reverse engineering as a supervised classification problem. Our motivation is pragmatic. We ask whether we can leverage widely used and well understood methods, as well as software frameworks that have been developed to generate large datasets of adversarial examples, that we can use to train a machine learning model to classify by attack algorithm.
To the best of our knowledge, it is still an open question in the literature whether adversarial examples can be classified by attack algorithm. We emphasize that classifying examples by attack algorithm is different from detecting that an image has been attacked, e.g. by monitoring the outputs of the model for outliers, or (equivalently) classifying an image as an adversarial example [13].
It could be the case that very different attack algorithms all arrive at similar perturbations, in which case it would be difficult or impossible to classify them. For example, a PGD attack with an \(L^{\infty}\) bound of \(\epsilon\)=8/255 on a specific instance of a ResNet50 model may produce perturbations that are indistinguishable from those produced by a Square attack with the same bound, on the same model. Alternatively, it could be the case that, because of the abundance of adversarial examples in image space, each algorithm can produce unique perturbations that other attacks are much less likely to find. By the same token, it could be the case that classifying adversarial examples by attack algorithm can be done using the attacked images themselves, and therefore is trivial. Alternatively, classification of attacks might require the perturbation added by an attacker, which a defender may not have access to, even when they know the image is attacked. These open questions make it important to carry out a rigorous study of whether or not this task is even possible.
#### Ii-B1 Classifying families of adversarial attacks
A related question that arises when considering how to reverse engineer adversarial attacks is whether this problem is hierarchical. Can attacks be grouped somehow, and could this grouping help with classification? In this work we again operationally define families of attacks with the goal of understanding how this relates to our ability to classify them. For brevity we avoid summarizing the history of adversarial attacks, and refer the reader to recent reviews. These reviews show that attacks can be schematized in several ways [4, 5], and grouped along several dimensions. Although it is important to consider all dimensions, in our studies we focus on two: the _threat model_, and the _constraint_. Researchers in this area often speak of a _threat model_, a term that summarizes the access that an attacker has to the machine learning model. Under a _white-box_ threat model, the attacker has total access, and thus can compute the gradient, which generally speaking allows for more powerful attacks. In contrast, under a _black-box_ threat model, the attacker only has access to outputs of the machine learning model, which may be a single predicted label or a vector of scores. Black-box attacks generally require many more queries or iterations to produce a powerful attack. Another dimension along which attacks can be grouped is the constraints placed on the perturbation. It is common for both white-box and black-box attacks to use \(L^{p}\)-ball constraints, where the size of a perturbation generated by an attack is constrained to be less than some \(\epsilon\) as computed with the corresponding \(p-\text{norm}\) (e.g., an \(L^{2}\) norm). In contrast, patch attacks do not constrain the perturbation size but constrain the attack to a confined space within the image [14]. Considering just these two dimensions, we can already begin to group each attack algorithm into a family. E.g., both Fast Gradient Sign Method (FGSM) [2] and Projected Gradient Descent (PGD) [3], could be considered part of a family of white-box, \(L^{p}\)-ball attacks, while square attack [15] could be considered part of the family of black-box, \(L^{p}\)-ball attacks. We of course recognize that attacks can be grouped in other ways --e.g., one might consider a patch attack a physical attack on real-world objects, as opposed to white-box attacks on images [16]-- but we think most researchers would not find it controversial to group attacks into families, and agree that it would be useful to ask whether the ability to classify adversarial examples depends in part on the family. An effect of family figures into our analyses below.
### _Estimating perturbations in an attack-agnostic manner_
Assume for a moment that we can classify adversarial examples by attack algorithm, but that this is best done using the perturbation added by the attack, not the attacked image itself (as our result below indicate). This suggests that the second capability we will want our system to have is to estimate the true perturbation \(\delta\) generated by an attack, ideally in an attack-agnostic fashion that does not require us to develop a reverse engineering method for each algorithm or family of attacks. Here again, little work has been done, and so we take a methodical approach. As we detail in Section II, we test whether we can obtain estimates of the perturbations using familiar signal-processing methods for image compression and restoration, namely the JPEG algorithm and a compressed sensing-based image reconstruction algorithm. We were first motivated to take this approach after observing that simply reconstructing attacked images \(\mathbf{x}^{\prime}\) and subtracting the reconstruction \(\hat{\mathbf{x}}\) from \(\mathbf{x}^{\prime}\) to obtain an estimate of the perturbation \(\hat{\delta}\) can produce a mark that is obvious upon visual inspection. We show an example in the top panel of Figure 1. We refer to the estimates of \(\hat{\delta}\) so obtained as "fingerprints". Our motivation for this approach also springs from previous work showing that such algorithms can remove the perturbation \(\delta\) added by an attacker [17]. We are of course aware of work showing that adaptive attacks can produce perturbations that successfully attack models even after input transformations such as JPEG are applied [18]. Our express goal here is _not_ to defend the model, but to obtain an estimate of the perturbation in an attack-agnostic fashion. We see the simplifying assumption
of testing on undefended models as part of our methodical approach, and consider it important to test in this simplified setting to lay the foundation for future work.
### _Our contribution_
Without a theory of adversarial examples, we cannot state unequivocally that each capability can be designed, but we can test whether each is possible empirically. Here we provide evidence that such a system can be designed with the two capabilities just described, suggesting it will be possible to reverse engineer adversarial attacks. Our contributions are as follows:
* We show that given the true perturbations \(\delta\) added to benign images, we can predict with near perfect accuracy the attack used and at least one of its parameters, the value of the bound \(\epsilon\) for bounded attacks.
* We then show that we can obtain estimates of the perturbation, which we call fingerprints, in an attack agnostic manner that still allows us to classify the perturbations according to attack.
* We demonstrate that fingerprints obtained with simple signal processing methods allow us to classify attacks with accuracy of 84.49%. Compared to our empirical upper bound of near 100% accuracy shown with true perturbations, this leaves room for improvement, but we provide a strong baseline for future work using data-driven methods, a point we return to in the discussion.
## II Approach
### _Notation_
To discuss our method, we adopt the following notation: Let \(F_{\theta}\) be a deep neural network model with parameters \(\theta\) trained to map inputs \(\mathbf{x}\) to a set of \(c\) class labels \(Y=\{y_{1},y_{2},...,y_{c}\}\). For all supervised learning problems, we update \(\theta\) by minimizing a loss function \(L\).
Where needed, we use a superscript \(F_{\theta}^{\delta}\) to distinguish a deep neural network trained to classify adversarial examples into attack classes \(Y^{\delta}=\{y_{1},y_{2},...,y_{c}\}\) from the standard network for single-label image classification \(F_{\theta}\). For such a network \(F_{\theta}^{\delta}\), each class \(c\) in \(Y\) is a specific attack algorithm from one of the families we study, where the class also denotes a bound and an epsilon parameter when the attack uses such constraints: e.g., the PGD attack from the white-box \(L^{p}\)-ball family, with an \(L^{\infty}\) bound and \(\epsilon=4/255\). (We define these further below.) Using \(\hat{\delta}\) as a superscript denotes that we have trained \(F_{\theta}^{\hat{\delta}}\) on some estimate of the perturbation \(\delta\) added by an attack to an image \(\mathbf{x}\) to produce the adversarial example \(\mathbf{x}^{\prime}\). We call these estimates \(\hat{\delta}\) "fingerprints".
### _Classifying adversarial images by attack algorithm_
Here we focus on attacks on deep neural network models for single-label image classification, as this is where much of the research on attacks has centered. To simplify the problem, we assume that attacked models are undefended, and that a researcher is able to train machine learning models on datasets of attacked images without threat of adversarial attack. Taking a "fight fire with fire" approach, we train neural networks to classify adversarial examples by the attack used to optimize the perturbation \(\delta\) added to the benign image \(\mathbf{x}\). This approach is motivated by previous work showing that deep neural networks are susceptible to adversarial examples in part because they latch on to high-frequency components of images that are largely imperceptible to humans [19, 20, 21].
### _Adversarial attacks_
#### Ii-C1 Attack families, algorithms, and parameters
Although research on adversarial attacks is rapidly evolving, the community of researchers has recognized families of attacks. As stated in Section II, we group attack algorithms into three families for our problem formulation: white box \(L^{p}\)-ball attacks, black box \(L^{p}\)-ball attacks, and patch attacks. We see this is a reasonable choice given that much work has focused on attacks from these three families.
As shown in Table I, we generated attacks with two algorithms from the white box family, FGSM [2] and PGD [3], and one each from the black box and patch families: square attack [15] and the universal patch attack of [14], respectively. For purposes of classification, we further divided attacks by the value of \(p\) used for the \(L^{p}\) norm, (i.e., \(L^{2}\) or \(L^{\infty}\)) and the value of the bound parameter \(\epsilon\), where we used 4 unique values per attack and norm. In total this gave us 17 different classes, as shown in Table I.
#### Ii-C2 Dataset
All attacks were generated on images from Imagenette1, a version of the ImageNet dataset with only 10 classes, and approximately 2000 images per class. We preserved the training and test sets from Imagenette to avoid contaminating our test set with training images, as explained further below. Thus, we generated attacked images for both the training and test sets. For all experiments here, we used only successful attacks. The number of attacked images thus generated for each combination of attack and epsilon size (for bounded attacks) ranged from 5750-6700 for the training set, and from 2400-2990 for the validation set.
Footnote 1: [https://github.com/fastai/imagenette](https://github.com/fastai/imagenette)
### _Taking "fingerprints" of perturbations_
After generating these pools of attacked images, we "took fingerprints" from them. The same pipeline was used for all combinations of fingerprint methods and parameter settings reported.

\begin{table}
\begin{tabular}{l l l l}
**Attack algorithm** & **Attack family** & \(\epsilon\) **values** & **Other parameters** \\
FGSM & White box, \(L^{p}\)-ball & \{1, 2, 4, 8\} / 255 & None \\
PGD, \(L^{\infty}\) & White box, \(L^{p}\)-ball & \{1, 2, 4, 8\} / 255 & 250 steps, step size = (2.5 * \(\epsilon\)) / steps \\
PGD, \(L^{2}\) & White box, \(L^{p}\)-ball & \{0.25, 0.5, 1.0, 2.0\} & 250 steps, step size = (2.5 * \(\epsilon\)) / steps \\
Square, \(L^{\infty}\) & Black box, \(L^{p}\)-ball & \{1, 2, 4, 8\} / 255 & 10k queries \\
Patch & Patch & None & None \\
\end{tabular}
\end{table} TABLE I: Attack algorithms, families, and parameters used.
To extract fingerprints from attacked images, we used different methods for image compression and reconstruction, under the working hypothesis that these methods will tend to remove the adversarial perturbation added to an image. If this hypothesis is true, then we should be able to subtract the reconstructed image \(\mathbf{\hat{x}}\) from the attacked image \(\mathbf{x}^{\prime}\) to obtain an estimate of the perturbation \(\hat{\delta}\) added by an attacker. Using this notation, for all reconstruction methods, we obtain our fingerprint like so:
\[\hat{\delta}=\mathbf{x}^{\prime}-\mathbf{\hat{x}}\]
_JPEG:_ To use JPEG as a fingerprint extraction technique, we simply set the quality parameter of JPEG and then used the compressed image \(x_{jpeg}\) to create a fingerprint \(\delta_{jpeg}=x^{\prime}-x_{jpeg}\).
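
As an illustration, the JPEG fingerprint can be obtained with a few lines of standard tooling. The sketch below uses Pillow; the function name and the uint8 RGB input convention are our own assumptions.

```python
import io

import numpy as np
from PIL import Image

def jpeg_fingerprint(attacked, quality=75):
    # attacked: (H, W, 3) uint8 array holding the adversarial example x'.
    buf = io.BytesIO()
    Image.fromarray(attacked).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    x_jpeg = np.asarray(Image.open(buf).convert("RGB"), dtype=np.float32)
    # delta_jpeg = x' - x_jpeg
    return attacked.astype(np.float32) - x_jpeg
```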
_Compressed sensing:_ At a high level, compressed sensing is a family of algorithms that obtains high-fidelity estimates of a signal given many fewer samples than required by classical signal processing theorems, by solving an underdetermined system of linear equations with a sparsity constraint. The system of equations typically consists of a random sampling matrix \(S\), a dictionary \(D\) that transforms the signal into a domain where it can be considered sparse (e.g., DCT), and a regularization constant \(\lambda\) that enforces sparsity. The algorithm we use also adds a \(k/n\) parameter, the ratio of random samples to the number of true samples in the signal (for images, the percentage of pixels). By randomly grabbing a subset of samples, in effect we use a Bernoulli sampling matrix.
Given this set of parameters for compressed sensing, \((S,D,k/n,\lambda)\), we solve for the attack fingerprint \(\delta_{cs}\) as
\[\chi_{cs}=\arg\min_{\chi\in\mathbb{R}^{n}}\|b-SD\chi\|_{2}+\lambda\|\chi\|_{1} \tag{1}\]
\[x_{cs}=D\chi_{cs} \tag{2}\]
\[\delta_{cs}=x^{\prime}-x_{cs}. \tag{3}\]
Equation 1 is typically solved using techniques described in [22].
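
For illustration, a patch-wise sketch of this reconstruction is shown below. It uses an inverse-DCT dictionary and an off-the-shelf Lasso solver (which optimizes a closely related squared-error objective) as a stand-in for the solver of [22]; the patch-wise processing, the function names, and the default parameter values are assumptions.

```python
import numpy as np
from scipy.fftpack import idct
from sklearn.linear_model import Lasso

def idct2_dictionary(n):
    # Columns are 2-D inverse-DCT atoms for an n x n patch (dictionary D).
    D = np.zeros((n * n, n * n))
    for i in range(n * n):
        coeffs = np.zeros((n, n))
        coeffs[np.unravel_index(i, (n, n))] = 1.0
        atom = idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")
        D[:, i] = atom.ravel()
    return D

def cs_fingerprint_patch(patch, keep_ratio=0.5, lam=0.01, seed=0):
    # patch: (n, n) float array taken from one channel of the attacked image x'.
    n = patch.shape[0]
    D = idct2_dictionary(n)
    rng = np.random.default_rng(seed)
    idx = rng.choice(n * n, size=int(keep_ratio * n * n), replace=False)
    S = np.eye(n * n)[idx]                   # Bernoulli-style sampling matrix
    b = patch.ravel()[idx]                   # k/n of the pixels
    lasso = Lasso(alpha=lam, max_iter=5000)  # l1-regularized least squares on chi
    lasso.fit(S @ D, b)
    x_cs = (D @ lasso.coef_).reshape(n, n)   # reconstruction, Eq. (2)
    return patch - x_cs                      # fingerprint for this patch, Eq. (3)
```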
### _Neural network training_
#### Ii-E1 Dataset preparation
We built training and test sets from our database of fingerprints to train and test neural network models \(F_{\theta}^{\delta}\) that assign labels to adversarial examples according to attack class. So that we could avoid contaminating the test set with the training set, we maintained the original splits from Imagenette. That is, we built our training sets using fingerprints extracted from attacked images generated with the Imagenette training set, and likewise built our test sets with fingerprints extracted from attacked images generated with the Imagenette test set. Both our training and test sets contained fingerprints taken from 1000 unique images from the original Imagenette dataset. These unique images were sampled randomly when creating the splits from the larger pools of fingerprints generated as described above. The training set was further split into training and validation sets, with 90 percent of the samples used for training, and the other 10 percent used to validate performance during training. Before creating splits as just described, we filtered the total set of adversarial examples to keep only successful attacks. For the 17-class dataset used for our main result, this gave us a training set size of 124156 samples and a test set size of 52283 samples, for each fingerprint or other input used to train networks (the true perturbation \(\delta\) or the adversarial example itself). (I.e., there was a training set of 124156 adversarial examples, and a separate training set of 124156 JPEG reconstructions, etc.)
#### Ii-E2 Model, optimizer, hyperparameters
For all experiments using fingerprints or comparison training data, we used the ResNet50 architecture [23] as the neural network model \(F_{\theta}^{\delta}\). As our loss function \(L\) we used standard cross-entropy loss, and we optimized parameters with the Adam optimizer [24] with the learning rate \(\alpha=0.01\), and a batch size of 128. We configured training such that networks would train for a maximum of 50 epochs (where each epoch is an iteration through the entire training set), but used an early stopping scheme. Early stopping depended on accuracy as measured on the validation set every 400 steps (i.e., every 400 batches). If four validation checkpoints elapsed without accuracy increasing beyond the maximum recorded, then training stopped. This meant that in practice the optimization rarely ran for the full 50 epochs. Visually inspecting the training histories showed that this scheme enabled sufficient training for the optimization to converge while preventing networks from overfitting on the training set. For all experiments, we trained four replicates of ResNet50, where each replicate had its weights randomly initialized before training.
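
A compact sketch of this training setup is shown below. The helper names and the use of torchvision's ResNet50 constructor are assumptions; the loss, optimizer, learning rate, validation interval, and early-stopping patience follow the values stated above.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

def build_attack_classifier(num_classes=17, lr=0.01):
    model = resnet50(weights=None)  # randomly initialized weights
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model, nn.CrossEntropyLoss(), torch.optim.Adam(model.parameters(), lr=lr)

@torch.no_grad()
def evaluate(model, loader):
    model.eval()
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

def train(model, criterion, optimizer, train_loader, val_loader,
          max_epochs=50, val_every=400, patience=4):
    best_acc, bad_checks, step = 0.0, 0, 0
    for _ in range(max_epochs):
        for x, y in train_loader:      # x: fingerprints or perturbations, y: attack class
            model.train()
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            step += 1
            if step % val_every == 0:  # validation checkpoint every 400 batches
                acc = evaluate(model, val_loader)
                if acc > best_acc:
                    best_acc, bad_checks = acc, 0
                else:
                    bad_checks += 1
                    if bad_checks >= patience:
                        return best_acc  # early stop
    return best_acc
```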
## III Results
### _Classification of adversarial examples by attack algorithm_
We began by asking whether it is even possible to classify adversarial examples by the attack algorithm and parameters used. As stated in Section I, we start here because we formulate reverse engineering attacks as a supervised learning problem, and because to our knowledge this remains an unaddressed question in the literature. To answer this question, we began by training a ResNet50 on the perturbations added to images from the 10-class Imagenette dataset, by attacking a separate, undefended ResNet50 model pre-trained on all of ImageNet. We generated adversarial examples with 5 different attacks, 4 of them bounded, with 4 epsilon values per attack (see Section II), giving us a 17-class dataset. Results are shown in Figure 2.
We found that, yes, we were able to assign labels to perturbations corresponding to the attack algorithm and size of the epsilon bound on attacks, as shown in the rightmost column of Figure 1(a). The ResNet50 trained on the perturbations alone was able to perform this task with near-perfect accuracy: 99.4% \(\pm\) 0.15% (mean \(\pm\) standard deviation) across 4 training replicates (instances of a model trained from randomly initialized weights). In contrast, when we asked the same ResNet50 model to perform the same task given the attacked images
themselves (i.e., the benign image \(\mathbf{x}\) + the perturbation \(\delta\) we classified before), we were only able to achieve 54.84% \(\pm\) 15.62 % accuracy on the held-out test set (Figure 1(a), second column from left). We analyze these two results further below, but note that taken together they indicate that it is possible to classify perturbations by attack algorithm and parameters, given the true perturbation, and additionally suggest that it will not be sufficient to simply classify the attacked images themselves.
### _Obtaining and classifying estimates of perturbations by fingerprinting with signal-processing methods_
Given this initial evidence that it is possible to classify perturbations by attack and parameters used, we next asked whether we would be able to classify attacks even if we did not have access to the true perturbations. This would be the case if we detected that the image was attacked, by inspecting the image and comparing the human label with the incorrect outputs of an image-classification model, but we did not have knowledge of the attack used. In this situation, we would somehow need to obtain an estimate of the perturbation added by an attacker.
Motivated by previous work on defenses showing signal processing algorithms for image compression and reconstruction can remove perturbations added by non-adaptive adversarial attacks, we chose to test two of these algorithms as methods for obtaining estimates of the perturbation \(\delta\) added to attacked images, as detailed in Section II.
We began by testing with the JPEG algorithm. We trained ResNet50 models on "fingerprints" produced by first compressing and then decompressing the attacked images with JPEG, treating this as an estimate of the benign image before attack \(\mathbf{\hat{x}}\) that we subtracted from the attacked image \(\mathbf{x}\) to give us an estimate of the perturbation added by the attack, \(\tilde{\delta}\). To test for any effect of JPEG parameters, we generated these "fingerprints" with a quality parameter of 75, as used in the original paper proposing JPEG as a defense [17], and also with quality=25. ResNet50 models trained on JPEG quality=75 achieved an accuracy of 85.05% \(\pm\) 0.83 % and those trained on JPEG quality=25 achieved an accuracy of 84.49% \(\pm\) 1.46 %. By comparison, models trained on compressed sensing (CS) fingerprints achieved 55.39% \(\pm\) 1.87% accuracy. These results are also shown in the middle columns of Figure 1(a).
To better understand these results, we generated confusion matrices for each of the models, and visually inspected these to see if they provided additional insights. These are shown in Figure 1(b). The first thing we noticed when inspecting these plots was the inverse relationship between the size of the epsilon bound and the size of the error. I.e., attacks generated with smaller epsilon bounds were more difficult to classify. Additionally we observed that attacks with an \(L^{\infty}\) bound appeared to be easier to classify; results were consistent for these attacks except for the smallest epsilon values, whereas
Fig. 2: Results of training deep neural networks to assign attack classes to adversarial examples
the networks tended to make more mistakes for the PGD \(L^{2}\) attacks.
### _Performance when adding more, smaller \(\epsilon\) values_
Because we noted that our purely supervised classification approach was challenged by smaller values of \(\epsilon\), as can be seen in the confusion matrices in 2b, and because PGD \(L^{2}\) attacks in particular appeared to be challenging to classify, we chose to push on this result further. We generated additional PGD-\(L^{2}\) attacks with another set of values: 0.1, 0.2, 0.3 and 0.4, and then used this expanded dataset to repeat the experiments where we trained ResNet50 models to classify the true perturbation \(\delta\) and the JPEG-based fingerprints.
Again we saw that when given the true perturbation \(\delta\), networks could classify this expanded dataset quite well, achieving 97.18% \(\pm\) 1.96% accuracy. In contrast, we saw a drop in accuracy for models trained on JPEG fingerprints (quality=75), from 85.05% \(\pm\) 0.83% we saw before to 72.11% \(\pm\) 1.65% on this expanded dataset with more \(\epsilon\) values that were "closer" to each other. We generated confusion matrices for these models and indeed saw that for the PGD-\(L^{2}\) class, the models trained on JPEG fingerprints tended to misclassify all but those from attacks generated using the largest \(\epsilon\) values. These confusion matrices are shown in Figure 3.
Taken together, our results provide positive evidence that it is possible to classify adversarial examples according to attack algorithm, but that a purely supervised approach may be challenged when faced with the task of estimating continuous parameters like the value of \(\epsilon\) bound used with an attack.
### _Additional analysis_
Finally, we carried out additional analyses to test possible alternate explanations for our results.
#### Iii-D1 Image quality
First we asked whether different attacks might have different effects on image quality. If so, this could serve as a form of data leakage, where the network simply learns to classify an image by the amount of noise in it. To test for this possibility, we generated 2-D plots of the mean square error (MSE) versus the structural similarity index metric (SSIM) for each attacked image, compared to the benign image before attack, as shown in Figure 4a. SSIM declined exponentially as MSE increased, which is perhaps not surprising, but we note that these metrics are not necessarily tightly linked; SSIM was specifically designed as a sensitive perceptual measure that detects changes in image quality that MSE does not take into account, while MSE can increase greatly even due to changes that the human eye would not detect (e.g., shifting the entire image one pixel in one direction) [25]. The important thing to notice here is that we did not see any obvious clustering of attacks according to these two values. While this does not let us rule out the possibility of data leakage via image quality, we felt such an explanation was less likely given this result, and so we moved on to consider another possible explanation.
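
The per-image measurements behind this analysis can be computed with scikit-image, for example; the sketch below assumes a recent scikit-image version (for the channel_axis argument) and uint8 RGB inputs.

```python
from skimage.metrics import mean_squared_error, structural_similarity

def quality_point(benign, attacked):
    # Returns the (MSE, SSIM) pair for one benign/attacked image pair,
    # both given as (H, W, 3) uint8 arrays; one point in the 2-D plot.
    mse = mean_squared_error(benign, attacked)
    ssim = structural_similarity(benign, attacked, channel_axis=-1)
    return mse, ssim
```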
#### Iii-D2 Distribution of class labels produced by untargeted attacks
A second alternative explanation we considered was that different attacks might consistently generate specific labels for untargeted attacks, and this could provide a shortcut that the network would learn. I.e., an untargeted PGD-\(L^{\infty}\) attack with \(\epsilon=4\) might tend to convert the "fish" class into "airplane", whereas the untargeted Square-\(L^{\infty}\) attack might tend to convert the same "fish" class into "truck". To test for this possibility, we plotted the distribution of targeted labels produced by each attack for PGD-\(L^{\infty}\) and Square-\(L^{\infty}\), for all classes and all epsilon sizes. In Figure 4b we show these results. For readability, the distributions for only three classes are shown. This analysis did not produce evidence suggesting that different attacks produce different distributions of labels that a model might be able to learn. In fact, we observed the opposite: the distributions appear to be quite similar across attack types and across \(\epsilon\) values. For example, in the upper left panel, the PGD-\(L^{\infty}\) attack was most likely to convert images with ground truth label 0 ("tench") to label 389. This was the case in all other panels as well, and for all other classes, although it was also clear that the label distributions produced by attacks on some classes were higher entropy than others (compare for example the distributions produced for class "0" with the distributions for class "701" shown in Figure 4b). This result suggests that the distributions of labels generated by attacks are largely a function of the decision boundaries learned by the model under attack, consistent with previous work [2, 6], and strengthens our claim that deep neural networks trained on the perturbations or fingerprints are learning to classify the attacks and epsilon sizes, not the targeted label, since the latter can vary highly within a ground truth class.
## IV Discussion
We investigated whether it is possible to reverse engineer attacks, in part because of the difficulties faced in developing defenses against them. More specifically, we formulated reverse engineering as a problem of classification with supervised learning methods. We found that we were able to classify adversarial examples according to the attack algorithm and the size of the epsilon bound used, given the true perturbation added by the attack. Classifying the attacked images themselves was not sufficient, although we observed that it was possible for attacks with large perturbations (e.g., large values of the bound \(\epsilon\) for bounded attacks). Additionally, we tested whether we could "fingerprint" attacked images to obtain an estimate of the perturbations added by attack algorithms. We showed that the well established and widely available JPEG algorithm can be used to provide a good estimate of the perturbation added by a wide range of attacks, and that neural network models trained on these fingerprints achieved \(\sim\)85% accuracy.

Fig. 3: Confusion matrices for ResNet50 models trained on dataset with more \(\epsilon\) values for PGD-\(L^{2}\) attack. As in Figure 2b, rows are ground truth labels, columns are predicted labels, and grayscale intensity in each square indicates probability of predicting each label, normalized within rows. Colored rectangles indicate attack algorithms. \(\epsilon\) values are sorted within attack algorithm to increase from top to bottom and from left to right. Each plot is generated from predictions of one training replicate of a ResNet50 model. In general, results were similar across replicates and so a representative example is shown.
These results are not without caveats. Our study focuses on reducing the problem to its simplest form, and so we have not for example tested our approach on adaptive adversarial attacks, and we have not tested whether we can modify adaptive attacks to render the task of classifying adversarial examples difficult. In spite of these limitations, the results presented here are still important, directly demonstrating for the first time that classifying perturbations according to attack is possible, and providing a strong baseline for future work for more sophisticated methods of reverse engineering attacks.
### _Future work_
We identify several directions that future studies can take based on our results. The first would be to combine classification with regression to better estimate parameters such as the value of \(\epsilon\) used as a bound. As we observed, deep neural network models are quite capable of classifying attack algorithms given the true perturbation, but it is more difficult to classify the values of parameters such as \(\epsilon\). A method to achieve both might be to combine classification with a regression loss, in the same way that object detection models apply classification loss to the labels predicted for bounding boxes and regression loss to the coordinates of the bounding box. This would allow for prediction of the continuous variable of the epsilon bound and could be extended to other parameters such as the number of steps.
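
A sketch of what such a combined head could look like is shown below. This is not part of the present study; the module name, the smooth-L1 choice for the regression term, and the loss weighting are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttackHead(nn.Module):
    # Joint head: classify the attack algorithm and regress the epsilon bound,
    # analogous to the class/box losses used in object detection.
    def __init__(self, in_features, num_algorithms):
        super().__init__()
        self.cls = nn.Linear(in_features, num_algorithms)
        self.eps = nn.Linear(in_features, 1)

    def forward(self, features):
        return self.cls(features), self.eps(features).squeeze(-1)

def joint_loss(cls_logits, eps_pred, cls_target, eps_target, reg_weight=1.0):
    cls_loss = F.cross_entropy(cls_logits, cls_target)
    reg_loss = F.smooth_l1_loss(eps_pred, eps_target)
    return cls_loss + reg_weight * reg_loss
```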
Our results on fingerprints also make a strong case for a learnable, attack-agnostic method for estimating the perturbation \(\delta\). This data-driven approach may seem obvious to researchers with a deep learning mindset, but the clear differences we observed between signal processing algorithms suggest that careful study may incorporate appropriate biases or constraints into fingerprinting models that improve performance. In particular the superior performance of JPEG compared to compressed sensing suggests that a neural network model for estimating fingerprints may benefit from integrating the perceptual components of the JPEG algorithm into its architecture. Previous work has shown that the JPEG algorithm can be implemented as a neural network [26], and recent studies of adversarial attacks have suggested that algorithms tend to place energy in specific channels of the color space used by the JPEG algorithm [27, 28].

Fig. 4: Analysis of classification results
A last consideration for future studies will be how to deal with unknown attacks that are not contained within the dataset. This would need to be studied with open-world and open-set classification problem formulations. A natural starting point would be modified loss functions such as the entropic loss proposed by [29]. Note that the derivation of the entropic loss assumes a neural network with a fully connected layer without a bias term just before the final output layer, which may limit its applicability with state-of-the-art neural network models for single-label image classification such as ResNet, although practically speaking one can just add such a layer to a ResNet model as [29] do in their experiments.
## V Conclusion
We have shown that the exquisite sensitivity of deep neural networks to adversarial examples can be converted from a bug into a feature. Our results are consistent with the idea that deep neural network models can classify adversarial examples according to attack algorithm. These findings set the stage for future work on reverse engineering adversarial attacks.
|
2308.01966 | DCTM: Dilated Convolutional Transformer Model for Multimodal Engagement
Estimation in Conversation | Conversational engagement estimation is posed as a regression problem,
entailing the identification of the favorable attention and involvement of the
participants in the conversation. This task arises as a crucial pursuit to gain
insights into human's interaction dynamics and behavior patterns within a
conversation. In this research, we introduce a dilated convolutional
Transformer for modeling and estimating human engagement in the MULTIMEDIATE
2023 competition. Our proposed system surpasses the baseline models, exhibiting
a noteworthy $7$\% improvement on test set and $4$\% on validation set.
Moreover, we employ different modality fusion mechanism and show that for this
type of data, a simple concatenated method with self-attention fusion gains the
best performance. | Vu Ngoc Tu, Van Thong Huynh, Hyung-Jeong Yang, M. Zaigham Zaheer, Shah Nawaz, Karthik Nandakumar, Soo-Hyung Kim | 2023-07-31T06:02:35Z | http://arxiv.org/abs/2308.01966v1 | # DCTM: Dilated Convolutional Transformer Model for Multimodal Engagement Estimation in Conversation
###### Abstract.
Conversational engagement estimation is posed as a regression problem, entailing the identification of the favorable attention and involvement of the participants in the conversation. This task arises as a crucial pursuit to gain insights into human's interaction dynamics and behavior patterns within a conversation. In this research, we introduce a dilated convolutional Transformer for modeling and estimating human engagement in the MULTIMEDIATE 2023 competition. Our proposed system surpasses the baseline models, exhibiting a noteworthy 7% improvement on test set and 4% on validation set. Moreover, we employ different modality fusion mechanism and show that for this type of data, a simple concatenated method with self-attention fusion gains the best performance.
engagement estimation, transformer, multimodal

## 1. Introduction
us to utilize all of these three attributes for developing an effective solution.
In recent years, the success of the Transformer (Wang et al., 2018) model and its successors (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) in fields such as natural language processing (Wang et al., 2018; Wang et al., 2019), time series analysis (Wang et al., 2019), and computer vision (Wang et al., 2019) has earned the sequence-model family huge popularity and made it the model of choice for many problems. For the multimodal engagement estimation task, although some studies have already been carried out with non-attention-based models like LSTM (Liu et al., 2019) and RNN (Wang et al., 2019), attention-based models have not been thoroughly investigated. Moreover, we hypothesize that, since the problem is multimodal and temporal information is critical to the final prediction, an attention-based model such as the Transformer can be extremely effective.
To this end, we propose an architecture for engagement estimation that combines dilated convolutions and a Transformer. We treat the three modalities described previously as signals and use them as time-series data.
## 2. Methodology
In this section, we introduce our proposed method for the estimation of continuous engagement.
### Problem statement
The objective of engagement estimation is to predict, frame by frame, the degree of engagement of a participant on a continuous scale ranging from 0 (lowest) to 1 (highest), given a multimodal signal as input. We formulate engagement estimation as a regression problem on time-series data.
### Dilated Convolutional Transformer model
Overall, our approach consists of three main components: the long-sequence feature extractor, the modality combination module, and the frame-wise regressor. The input to the model is a sequence of time-series data obtained with a sliding window. The architecture is shown in Figure 1.
#### 2.2.1. Long Sequence Feature extraction
During conversations, when a participant reaches a specific engagement state, that state tends to persist for a prolonged duration with minimal changes in the engagement score. Therefore, it becomes crucial for the model to cover a wide temporal context that captures the overall trend and extracts global information from the sequence. However, the use of large convolutional filters can lead to overfitting, particularly due to the limited size of the available data.
To address this issue, we propose the utilization of dilated convolution, which allows us to enlarge the model's receptive field while preserving the input resolution throughout the network. Introduced by Holschneider et al. in 1990 (Holschneider et al., 1990), dilated convolution has become a prominent method for signal processing. Since its first use in deep learning (Wang et al., 2019), it has become one of the most popular convolution techniques (Chen et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019).
The dilated convolution expands the kernel by introducing gaps between its elements, effectively "inflating" it. The dilation rate, an additional parameter, determines the extent of expansion or widening of the kernel. From the formulation for traditional convolution:
\[(F*k)(p)=\sum_{s+t=p}F(s)k(t). \tag{1}\]
The dilated convolution is determined as:
\[(F*_{l}k)(p)=\sum_{s+lt=p}F(s)k(t). \tag{2}\]
where \(F\) is the input, \(k\) is the kernel, \(s\) and \(t\) are the positions of the considered elements, and \(l\) is the dilation rate.
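
In PyTorch, such a feature extractor can be sketched as a stack of dilated 1-D convolutions. The kernel sizes (5, 5, 3), the dilation rate of 4, and the 128-dimensional output follow the settings reported in Section 3.2, while the 64-channel input is an assumed placeholder for the per-frame feature dimension.

```python
import torch
import torch.nn as nn

# Stacking dilated 1-D convolutions enlarges the receptive field while
# keeping stride 1, so the 64-frame temporal resolution is preserved.
dilated_extractor = nn.Sequential(
    nn.Conv1d(64, 128, kernel_size=5, dilation=4, padding="same"), nn.ReLU(),
    nn.Conv1d(128, 128, kernel_size=5, dilation=4, padding="same"), nn.ReLU(),
    nn.Conv1d(128, 128, kernel_size=3, dilation=4, padding="same"), nn.ReLU(),
)

x = torch.randn(8, 64, 64)         # (batch, feature channels, 64 frames)
features = dilated_extractor(x)    # (8, 128, 64)
```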
#### 2.2.2. Frame-wise regressor
Regression on time-series data requires a sequence model. Due to the effectiveness of the Transformer in time-series processing tasks (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), we decide to use it for our regression module. The transformer layers model pairwise interactions among temporal tokens within each layer. This design enables the transformer layer to effectively capture long-range dependencies throughout the entire time series sequence, starting from the initial layer. Given a time-series embedding extracted by the convolution layers, we add a position embedding to these features to encode the order of the sequence of tokens. Subsequently, these tokens are input into Transformer layers, consisting of Multi-Headed Self-Attention (MSA) (Wang et al., 2018), layer normalization (LN) (Chen et al., 2018), and MLP blocks.
Given that the Transformer was originally designed for the translation problem, we make slight modifications to adapt the model to our frame-wise regression task. Specifically, we treat the transformer as an auto-encoder that generates a sequence of meaningful information. A fully-connected layer then receives this information and returns the engagement score for each frame.
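
A minimal sketch of this regressor is given below. The layer counts, number of heads, hidden size, and 64-frame window follow Section 3.2; the learned position embedding, feeding the same tokens as source and target for the auto-encoder-style decoding, and the sigmoid used to keep scores in [0, 1] are our assumptions.

```python
import torch
import torch.nn as nn

class FrameWiseRegressor(nn.Module):
    def __init__(self, d_model=128, num_layers=4, nhead=8, seq_len=64):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))  # position embedding
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.head = nn.Linear(d_model, 1)  # fully connected layer -> score per frame

    def forward(self, tokens):                       # tokens: (batch, 64, d_model)
        tokens = tokens + self.pos
        decoded = self.transformer(tokens, tokens)   # auto-encoder style: src == tgt
        return torch.sigmoid(self.head(decoded)).squeeze(-1)  # (batch, 64) in [0, 1]
```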
### Modalities fusion
We employ two different fusion methods to find a good strategy for combining the modality information. These fusion methods are demonstrated in Figure 2.
Figure 1. An overview of the engagement estimation model. The model leverages 3 modalities as input, passing them through convolutional layers before merging and sending them to the transformer for the desired outcome.
#### 2.3.1. Self-attention fusion
From the features extracted by the convolution layers, we use naive channel-wise concatenation to merge the modalities (Wang et al., 2016; Wang et al., 2017). Each frame feature is considered as a token, and the modalities are concatenated right before being passed to the transformer model. Despite being a simple strategy, this method has already been proven effective in different types of multimodal models. Feeding the whole feature without fusions or alterations allows the attention layers to fuse the information themselves, and hence better utilizes the robustness of attention layers in mixing and finding the most informative components.
#### 2.3.2. Multimodal Gated Fusion
The Gated Multimodal Unit (GMU) (Wang et al., 2016; Wang et al., 2017; Wang et al., 2018; Wang et al., 2018) is a model that draws inspiration from flow control mechanisms found in recurrent architectures such as GRU or LSTM. The GMU is designed to serve as an internal unit within a neural network architecture, aiming to generate an intermediate representation by combining data from different modalities.
In the GMU, the feature vectors associated with each modality, denoted \(x_{1}\) and \(x_{2}\), are fed into neurons with a tanh activation function, which encode an internal representation of their respective modalities. For every input modality there is a gate neuron (multiplication node) responsible for controlling how much the feature derived from that input vector contributes to the overall output of the unit. This gate neuron serves as an attention layer, analyzing inter-modality relationships to determine the relevance of each modality in encoding a specific input sample. To fuse all three modalities, a hierarchical architecture is constructed.
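The sketch below illustrates one possible bimodal GMU, with two units stacked hierarchically to fuse the three modalities; the feature dimensions and the pairing order (head with pose, then voice) are assumptions made for the example rather than the exact configuration used in our model.

```python
import torch
import torch.nn as nn

class GatedMultimodalUnit(nn.Module):
    """Sketch of a bimodal GMU: tanh encoders per modality plus a sigmoid gate."""
    def __init__(self, dim1, dim2, dim_out):
        super().__init__()
        self.enc1 = nn.Linear(dim1, dim_out)
        self.enc2 = nn.Linear(dim2, dim_out)
        self.gate = nn.Linear(dim1 + dim2, dim_out)

    def forward(self, x1, x2):
        h1 = torch.tanh(self.enc1(x1))
        h2 = torch.tanh(self.enc2(x2))
        # The gate weighs how much each modality contributes to the fused output.
        z = torch.sigmoid(self.gate(torch.cat([x1, x2], dim=-1)))
        return z * h1 + (1.0 - z) * h2

# Hierarchical fusion of the three modalities (the pairing order is an assumption).
gmu_head_pose = GatedMultimodalUnit(128, 128, 128)
gmu_with_voice = GatedMultimodalUnit(128, 128, 128)
head, pose, voice = (torch.randn(2, 64, 128) for _ in range(3))
fused = gmu_with_voice(gmu_head_pose(head, pose), voice)
print(fused.shape)  # torch.Size([2, 64, 128])
```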
## 3. Experiments and Results
### Datasets
The Engagement Estimation task of the MULTIMEDIATE challenge employs the Novice-Expert Interaction (NoXi) dataset (Beng et al., 2016) as the benchmark. The dataset contains 128 video and audio recordings (76 training, 20 validation, and 32 testing) covering 64 conversation sessions between two participants. In each conversation, one participant acts as an expert and the other plays the role of a novice. In total, there are 2,502,433 annotated frames in the training and validation sets. The NoXi dataset also provides signals recorded from those sessions for three modalities: head, pose, and voice.
### Experiments Settings
All experiments were conducted on an RTX 3090 GPU using PyTorch Lightning for the implementation of the entire pipeline. The model was trained with the Adam optimizer, employing a learning rate of 1e-6, over 30 epochs. The input sequence size was set to 64 frames. For the dilated convolution, we utilized three 1D convolution layers with kernel sizes of 5, 5, and 3, respectively, and a dilation rate of 4. In terms of the Transformer architecture, we employed a full model with 4 encoder layers and 4 decoder layers. The self-attention layers had 8 heads, with a hidden size of 128.
The evaluation metric used for the Engagement Estimation task on the NoXi dataset is the Concordance Correlation Coefficient (CCC). We also use this metric to define the loss function. CCC is formulated as:
\[\rho_{\rm c}=\frac{2\rho\,\sigma_{\rm x}\sigma_{y}}{\sigma_{\rm x}^{2}+\sigma_{y}^{2}+(\mu_{\rm x}-\mu_{y})^{2}} \tag{3}\]
where \(\mu_{\rm x}\) and \(\mu_{y}\) are the means for the two variables and \(\sigma_{\rm x}^{2}\) and \(\sigma_{y}^{2}\) are the corresponding variances. \(\rho\) is the correlation coefficient between the two variables.
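In practice, the CCC of Eq. (3) can be computed directly from the predicted and ground-truth engagement sequences; the sketch below also shows the common choice of minimizing \(1-\)CCC as the training objective, which is an assumption about how the loss is derived from this metric.

```python
import torch

def concordance_cc(pred, target, eps=1e-8):
    """CCC of Eq. (3): 2*cov(x, y) / (var_x + var_y + (mu_x - mu_y)^2)."""
    mu_x, mu_y = pred.mean(), target.mean()
    var_x, var_y = pred.var(unbiased=False), target.var(unbiased=False)
    cov = ((pred - mu_x) * (target - mu_y)).mean()  # equals rho * sigma_x * sigma_y
    return 2.0 * cov / (var_x + var_y + (mu_x - mu_y) ** 2 + eps)

def ccc_loss(pred, target):
    # Assumed loss definition: maximize agreement by minimizing 1 - CCC.
    return 1.0 - concordance_cc(pred, target)

pred, target = torch.rand(2048), torch.rand(2048)
print(ccc_loss(pred, target).item())
```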
### Experimental Results
#### 3.3.1. Fusion strategy, Sequence model and loss function comparison
Table 1 presents the results of our comprehensive experiments conducted with various configurations and settings. The reported scores on the validation set represent the best validation score achieved by the model.
Based on the analysis of the table, it is evident that the combination of Dilated Convolution and Self-attention fusion achieves the highest performance on the test set, with a score of 0.66. However, the Gated Fusion with Transformer model, despite obtaining the best score on the validation set (0.77), experiences a significant drop in performance on the test set (0.6), showing the occurrence of overfitting. Furthermore, the unsatisfactory results obtained when training and validating each role independently (Expert: 0.58, Novice: 0.61) suggest that employing separate pipelines for each
Figure 2. Two types of fusion strategies considered in our work: (a) Self-attention fusion, (b) Multimodal Gated Fusion.
participant's role may not be an effective choice for this model, despite its initial intuitiveness.
#### 3.3.2. Comparison with the baseline and competitors on Leaderboard
The leaderboard presented in Table 2 demonstrates the robustness of our model compared to the baseline, showcasing a 7% improvement on the test set and a 4% improvement on the validation set. However, our results still lag behind the top-ranked team in the challenge by a difference of 5% on the test set. It is worth noting that although there were other teams submitting to the leaderboard, we only consider the team that indicated their intention to submit a paper.
#### 3.3.3. Ablation Study: Validating modalities contribution on Engagement in Conversation
In this subsection, we analyze the impact of the different modalities on conversation engagement by directly comparing the engagement score correlations with each modality. Using the CCC score and the feature magnitude, we assess how strongly each modality correlates with engagement, which provides insight into its relative importance. The results are presented in Table 3 and reveal an interesting contrast. While speech remains crucial in predicting engagement, there is confusion regarding the importance of the Head and Pose modalities. Initially, the baseline scores suggested that the Pose modality is more important than the Head modality. However, considering the correlation between the feature magnitude and our model scores, the opposite trend emerges.
To gain further insights, we visualize some sample data in Figure 3, demonstrating the complexities arising from variations in human pose and facial expressions in different contexts. Smiling while listening to opponents increases the engagement score, while smiling during a call does not. Similarly, actions like leaning the head and waving the hand to touch the beard have no impact on engagement. However, waving the hand to point indicates full engagement in the conversation. These context-dependent variations pose challenges for accurate engagement estimation.
## 4. Conclusion
Our paper presents a dilated-convolution-based Transformer model for Engagement Estimation in the MULTIMEDIATE competition. It outperforms the baseline by incorporating dilated convolution and Transformer layers, achieving a better ability to capture long-term dependencies. We also find that the self-attention fusion strategy yields the best results among the two multimodal fusion approaches. However, our method shows signs of overfitting, with strong validation results but lower test-set performance, which needs to be addressed for better generalization.
#### Acknowledgments
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2023-00219107). This work was also supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) (No.2021-0-02068, Artificial Intelligence Innovation Hub), and the Artificial Intelligence Convergence Innovation Human Resources Development (IITP-2023-RS-2023-00256629) grant funded by the Korea government (MSIT).
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
Convolution & Fusion & Regressor & Subject & Val & Test \\ \hline
Dilated & SA & Transformer & Ex & 0.62 & – \\
Dilated & SA & Transformer & No & 0.65 & – \\
No & SA & Transformer & Ex+No & 0.67 & – \\
Traditional & SA & Transformer & Ex+No & 0.70 & – \\
Dilated & SA & Cross-val LSTM & Ex+No & 0.73 & 0.53 \\
Dilated & GF & LSTM & Ex+No & 0.72 & 0.53 \\
Dilated & SA & LSTM & Ex+No & 0.75 & 0.63 \\
Dilated & GF & Transformer & Ex+No & **0.77** & 0.60 \\
Dilated & SA & Transformer & Ex+No & 0.75 & **0.66** \\ \hline
\multicolumn{6}{|l|}{(SA: Self-attention; GF: Gated Fusion) (Ex: Expert; No: Novice)} \\
\end{tabular}
\end{table}
Table 1. Experimental results with different settings: different types of modules and participants.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline & Feature & Val & Test \\ \hline Head (Head et al., 2017) & AUs & 0.31 & 0.22 \\ Body (Head et al., 2017) & Openpose & 0.54 & 0.44 \\ Voice (Head et al., 2017) & gemaps & 0.58 & 0.55 \\ Baseline (Head et al., 2017) & All features & 0.71 & 0.59 \\ USTC-IAT-United & – & – & 0.71 \\ Our & All features & 0.75 & 0.66 \\ \hline \end{tabular}
\end{table}
Table 2. Results of our method compared with the challenge baseline and other competitors.
Figure 3. Samples from the novice video in session 69. The variation in pose and facial expression depending on the participants’ context poses a major challenge for engagement estimation in conversation.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline & Feature magnitude & Baseline Score & Our Score \\ \hline Head & 0.0031 & 0.31 & 0.435 \\ \hline Pose & 0.0003 & 0.54 & 0.192 \\ \hline Voice & 0.0069 & 0.58 & 0.59 \\ \hline \end{tabular}
\end{table}
Table 3. Comparing the CCC between ground truth and feature magnitude with the models’ scores. |
2309.05829 | Mobile Vision Transformer-based Visual Object Tracking | The introduction of robust backbones, such as Vision Transformers, has
improved the performance of object tracking algorithms in recent years.
However, these state-of-the-art trackers are computationally expensive since
they have a large number of model parameters and rely on specialized hardware
(e.g., GPU) for faster inference. On the other hand, recent lightweight
trackers are fast but are less accurate, especially on large-scale datasets. We
propose a lightweight, accurate, and fast tracking algorithm using Mobile
Vision Transformers (MobileViT) as the backbone for the first time. We also
present a novel approach of fusing the template and search region
representations in the MobileViT backbone, thereby generating superior feature
encoding for target localization. The experimental results show that our
MobileViT-based Tracker, MVT, surpasses the performance of recent lightweight
trackers on the large-scale datasets GOT10k and TrackingNet, and with a high
inference speed. In addition, our method outperforms the popular DiMP-50
tracker despite having 4.7 times fewer model parameters and running at 2.8
times its speed on a GPU. The tracker code and models are available at
https://github.com/goutamyg/MVT | Goutam Yelluru Gopal, Maria A. Amer | 2023-09-11T21:16:41Z | http://arxiv.org/abs/2309.05829v1 | # Mobile Vision Transformer-based Visual Object Tracking
###### Abstract
The introduction of robust backbones, such as Vision Transformers, has improved the performance of object tracking algorithms in recent years. However, these state-of-the-art trackers are computationally expensive since they have a large number of model parameters and rely on specialized hardware (_e.g._, GPU) for faster inference. On the other hand, recent lightweight trackers are fast but are less accurate, especially on large-scale datasets. We propose a lightweight, accurate, and fast tracking algorithm using Mobile Vision Transformers (MobileViT) as the backbone for the first time. We also present a novel approach of fusing the template and search region representations in the MobileViT backbone, thereby generating superior feature encoding for target localization. The experimental results show that our MobileViT-based Tracker, _MVT_, surpasses the performance of recent lightweight trackers on the large-scale datasets GOT10k and TrackingNet, and with a high inference speed. In addition, our method outperforms the popular DiMP-50 tracker despite having 4.7\(\times\) fewer model parameters and running at 2.8\(\times\) its speed on a GPU. The tracker code and models are available at [https://github.com/goutamyg/MVT](https://github.com/goutamyg/MVT).
## 1 Introduction
The two prominent paradigms of visual object tracking algorithms are Discriminative Correlation Filters (DCFs) and deep Siamese Networks (SNs) []. The DCF-based trackers localize the target object based on the filter response generated by convolving the features extracted from the search region with the filter coefficients learned from the target template. The SN-based trackers perform the cross-correlation (or similar) operation between features extracted from the template and search regions to generate the response map for target localization and bounding box estimation. The explicit learning of target-specific filter coefficients in DCF tracking increases their robustness against semantic background regions compared to SN trackers; however, SNs are faster due to their simpler model architecture supporting end-to-end evaluation on a GPU. With the adoption of powerful backbones and effective feature fusion techniques, SN trackers have shown state-of-the-art performance on various benchmarks [].
Feature representation of the target object plays a crucial role in tracker performance [11]. Most SN trackers use the ResNet [12] backbone for feature extraction, with ResNet-50 and ResNet-101 being the popular choices. The more recent trackers use pre-trained Vision Transformer (ViT) models [13, 14] as their backbone, surpassing the performance of ResNet-based SN trackers. However, a notable disadvantage of ViT-based trackers is the complexity of their backbone, both in terms of memory (a large number of model parameters) and latency (low inference speed). By deploying these models, achieving high tracking speed on an ordinary CPU or mobile device is challenging. This limitation severely restricts the usage of such tracking algorithms for several resource-constrained applications.
On the other hand, most lightweight tracking algorithms deploy compact convolutional neural network (CNN) backbones to minimize the model latency. The inductive biases of convolutional blocks effectively model the spatially local information related to the target object but fail to capture the global relations essential for accurate target state estimation in tracking [13]. Such a lack of global association between the template and search region in the backbone increases the burden on the feature fusion module (or the neck) to generate the fused encoding favorable for accurate and robust tracking. The self-attention-based Transformers [13] as the backbone is effective at global contextual modeling and have been excellent for tracking [14, 15]; however, they are computationally expensive.
In this paper, we are the first to investigate the usefulness of Mobile Vision Transformers (MobileViTs) as the backbone for single object tracking to present a lightweight but high-performance tracking algorithm, _MVT_. The recent MobileViTs [13] for image classification are known for their low latency, lightweight architecture, and adaptability to downstream tasks, \(e\)._g_., object detection and semantic segmentation. In addition, while all the related lightweight trackers independently compute the template and search region features in their respective backbone, our _MVT_ algorithm employs a hybrid feature extraction method where template and search regions are blended in the backbone by our novel Siamese Mobile Vision Transformer (Siam-MoViT) block.
## 2 Related Work
Multiple SN-based lightweight trackers have been presented in the last few years. LightTrack [13] employed Neural Architecture Search [14] to present an efficient tracking pipeline. It designed a search space of lightweight building blocks to find the optimal backbone and head architectures with pre-set constraints on the number of model parameters. E.T.Track [13] incorporated Exemplar Transformers for tracking to achieve real-time speed on a CPU. It used a stack of lightweight transformer blocks in the head module to perform target classification and bounding box regression. FEAR [14] tracker deployed a dual-template representation to incorporate temporal information during tracking. With a compact backbone, FEAR achieved over 200 frames-per-second (_fps_) speed on iPhone 11 with negligible impact on battery level. Stark-Lightning [13] used a RepVGG [13] backbone and a transformer-based encoder-decoder architecture in the neck module to model spatio-temporal feature dependencies between the target template and search regions. HiFT [13] proposed a hierarchical feature transformer-based approach for aerial tracking. It generated hierarchical similarity maps from the multi-level convolutional layers in the backbone network to perform a transformer-based fusion of shallow and deep features. SiamHFFT [14] extended the hierarchical feature fusion approach by [13] to model the inter-dependencies within the multi-level features and achieve high tracking speed on a CPU.
Among the related lightweight trackers, LightTrack is closest to our work, having similar neck and head modules but a different backbone. Stark-Lightning uses a transformer-based neck module to fuse features from the template and search regions. In contrast, the proposed _MVT_ uses a simple, parameter-free cross-correlation operation in its neck module. E.T.Track uses a transformer-based head module, while our _MVT_'s head module is built using a fully convolutional network. As a post-processing step, the related trackers LightTrack and E.T.Track refine their predicted bounding boxes by penalizing significant changes in bounding box size and aspect ratio between consecutive frames. Unlike these trackers, the proposed _MVT_ does not perform such heuristic-based bounding box refinements.
Most importantly, all the related lightweight trackers use a two-stream approach during feature extraction, i.e., the backbone features from the template and search region are computed independently. Such a two-stream computation limits the interaction between the template and search regions to the neck module only, resulting in inferior tracking performance. To alleviate this problem, we propose a hybrid feature extraction method where template and search regions are blended in the backbone by our novel Siam-MoViT block, as shown in Figure 1. The resulting entangled feature representation generated using our Siam-MoViT block improves the tracker performance while maintaining high inference speed. Efficient transformer architectures are an emerging research topic [] and have been unexplored by previously proposed lightweight trackers. To our knowledge, we are the first to use MobileViT as the backbone for object tracking. We are also the first to propose a tracking pipeline with a joint feature extraction and fusion approach in the tracker backbone.
Our contributions in this paper are thus:
* A novel lightweight tracking algorithm using MobileViTs. We show that the proposed MobileViT-based tracker performs better than related lightweight trackers.
* A hybrid feature extraction approach, intertwining the template and search regions using our Siam-MoViT block, producing better features for target state estimation.
Figure 1: The pipeline of the proposed _MVT_ tracker and our Siam-MoViT block. The backbone consists of MobileNetV2 [] and Siam-MoViT blocks for feature extraction; the marked operations indicate spatial downsampling by a factor of 2. Details of our Siam-MoViT block can be found in Section 3.
## 3 Proposed Mobile Vision Transformer-based Tracker
In this section, we discuss the pipeline of our _MVT_ algorithm for single object tracking (shown in Figure 1) and information related to model training.
### Proposed _Mvt_ Backbone and the Siam-MoViT block
The input to our _MVT_ backbone is a pair of target template and search region image patches, \(Z_{in}\in R^{W_{z}\times H_{z}\times 3}\) and \(X_{in}\in R^{W_{x}\times H_{x}\times 3}\), respectively. The tracker backbone consists of cascaded MobileNetV2 [] and the proposed Siam-MoViT blocks, as shown in Figure 1. These modules process the input image patches sequentially, with recurrent spatial down-sampling operations to reduce the feature dimensionality. The proposed Siam-MoViT block uses a modified MobileViT block [], especially around the transformer encoder, to accommodate features from both the template and search regions.
Our Siam-MoViT block receives a pair of intermediate feature maps \(Z\) and \(X\), belonging to the template and search regions, respectively. We assume that \(Z\) and \(X\) have \(C\) channels. Inside the Siam-MoViT block, first, we apply a \(3\times 3\) convolutional filter to learn spatially local feature representations. It is followed by a \(1\times 1\) convolutional filter, projecting the features onto a \(D\)-dimensional space as a linear combination of \(C\) input channels. Next, we perform the _unfold and concatenate_ operation (_cf._ Figure 1), where we divide the feature maps \(X\) and \(Z\) into \(N\) non-overlapping patches of size \(w\times h\). We then flatten these patches to generate tokens of size \(P\times N\times D\), where \(P=w\cdot h\) and \(N=\frac{W\cdot H}{P}\). These tokens are concatenated and passed through a series of \(L\) transformer blocks to encode the global relationship between the template and search regions. Our implementation uses the standard multi-headed self-attention transformer encoder blocks []. This operation of learning self-attention on the concatenated features facilitates the exchange of information between template and search regions, thereby generating high-quality encodings for robust target localization. To restore the spatial ordering of feature maps, we split the output tokens from the transformer and re-arrange them to obtain feature maps of size \(H_{z}\times W_{z}\times D\) and \(H_{x}\times W_{x}\times D\), shown as the _split and unfold_ operation in Figure 1. Then, we re-map the number of channels from \(D\) to \(C\) by applying a \(1\times 1\) convolutional filter and concatenate the resulting feature maps with the inputs to the Siam-MoViT block, i.e., \(Z\) and \(X\). Finally, we apply a \(3\times 3\) convolutional filter on the concatenated feature maps to generate the output of our Siam-MoViT block, denoted as \(\hat{Z}\) and \(\hat{X}\), having the same size as \(Z\) and \(X\), respectively. Note that all the MobileNetV2 blocks in the backbone and the CNN blocks within the Siam-MoViT block are applied separately to template and search regions, as shown in Figure 1, with shared weights.
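To make the unfold-concatenate-attend-split-fold step more concrete, the sketch below reproduces only that core operation, assuming \(w=h=2\) patches, square feature maps, and an off-the-shelf transformer encoder; the surrounding \(3\times 3\)/\(1\times 1\) convolutions, the residual concatenation, and the exact transformer configuration are omitted or assumed for brevity.

```python
import torch
import torch.nn as nn

def joint_global_attention(z, x, transformer, w=2, h=2):
    """Unfold both feature maps into patch tokens, attend over their concatenation,
    then split and fold the tokens back to the original spatial layouts."""
    def unfold(f):
        B, D, H, W = f.shape
        patches = f.unfold(2, h, h).unfold(3, w, w)            # (B, D, H/h, W/w, h, w)
        return patches.reshape(B, D, -1, h * w).permute(0, 3, 2, 1), (H, W)

    def fold(tokens, shape, B, D):
        H, W = shape
        patches = tokens.permute(0, 3, 2, 1).reshape(B, D, H // h, W // w, h, w)
        return patches.permute(0, 1, 2, 4, 3, 5).reshape(B, D, H, W)

    (tz, shape_z), (tx, shape_x) = unfold(z), unfold(x)
    nz = tz.shape[2]
    t = torch.cat([tz, tx], dim=2)                             # joint template + search tokens
    B, P, N, D = t.shape
    t = transformer(t.reshape(B * P, N, D)).reshape(B, P, N, D)
    return fold(t[:, :, :nz], shape_z, B, D), fold(t[:, :, nz:], shape_x, B, D)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=144, nhead=4, batch_first=True), num_layers=2)
z, x = torch.randn(1, 144, 8, 8), torch.randn(1, 144, 16, 16)
z_out, x_out = joint_global_attention(z, x, encoder)
print(z_out.shape, x_out.shape)  # (1, 144, 8, 8) (1, 144, 16, 16)
```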
### Neck and Head Modules
The output from the last layer of the _MVT_ backbone has feature maps corresponding to the template and search region. We fuse these features in the neck module to generate an encoded feature representation \(f_{zx}\in R^{\frac{H_{z}W_{z}}{16^{2}}\times\frac{H_{x}}{16}\times\frac{W_{x}}{16}}\) for target state estimation. For this, we use a simple pointwise cross-correlation operator [] in the neck module, the same as LightTrack []. We use a layer of batch-normalization (BN) [] before performing cross-correlation. We then apply a \(1\times 1\) convolutional _channel-adjust_ layer on \(f_{zx}\) to match the number of channels between \(f_{zx}\) and the head module.
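A sketch of such a parameter-free pointwise cross-correlation is shown below: every spatial position of the template feature map acts as a \(1\times 1\) kernel applied to the search-region features, producing one response channel per template location. The tensor sizes follow the \(8\times 8\) and \(16\times 16\) feature maps mentioned in Section 4.1, while the batch-normalization and channel-adjust layers are omitted.

```python
import torch

def pointwise_xcorr(z, x):
    """Each spatial position of the template features acts as a 1x1 kernel on the search features."""
    B, C, Hz, Wz = z.shape
    _, _, Hx, Wx = x.shape
    kernels = z.reshape(B, C, Hz * Wz)                      # (B, C, Hz*Wz) template "kernels"
    feats = x.reshape(B, C, Hx * Wx)                        # (B, C, Hx*Wx) search features
    corr = torch.einsum('bck,bcn->bkn', kernels, feats)     # (B, Hz*Wz, Hx*Wx)
    return corr.reshape(B, Hz * Wz, Hx, Wx)                 # one channel per template location

f_zx = pointwise_xcorr(torch.randn(1, 128, 8, 8), torch.randn(1, 128, 16, 16))
print(f_zx.shape)  # torch.Size([1, 64, 16, 16])
```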
For classification and regression, we adopt the head module from [], which uses a fully convolutional network (FCN) to perform target classification and bounding box regression.
The FCN consists of a stack of five Conv-BN-ReLU blocks. The classification network predicts a score map \(\mathcal{R}\in R^{\frac{H_{x}}{16}\times\frac{W_{x}}{16}}\), and the position of the maximum value in \(\mathcal{R}\) is considered as the target location. The regressor network predicts the normalized bounding box size (i.e., target width and height) and corresponding local offset values.
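The sketch below outlines a fully convolutional head of this kind, with a classification branch producing the score map \(\mathcal{R}\) and a regression branch producing the normalized box size and local offsets; the channel widths and the final activations are illustrative assumptions rather than the exact head configuration.

```python
import torch
import torch.nn as nn

def conv_bn_relu(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class FCNHead(nn.Module):
    """Sketch of the FCN head: a classification branch and a box-regression branch."""
    def __init__(self, c_in=256, c_mid=128, depth=5):
        super().__init__()
        widths = [c_in] + [c_mid] * depth
        self.cls = nn.Sequential(*[conv_bn_relu(a, b) for a, b in zip(widths, widths[1:])],
                                 nn.Conv2d(c_mid, 1, 1))   # score map R
        self.reg = nn.Sequential(*[conv_bn_relu(a, b) for a, b in zip(widths, widths[1:])],
                                 nn.Conv2d(c_mid, 4, 1))   # box size (2) + local offsets (2)

    def forward(self, f):
        # f: fused features from the neck, e.g. (B, 256, 16, 16)
        return self.cls(f).squeeze(1), self.reg(f).sigmoid()

score_map, boxes = FCNHead()(torch.randn(1, 256, 16, 16))
print(score_map.shape, boxes.shape)  # (1, 16, 16) (1, 4, 16, 16)
```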
### Loss Function for Training
During training, we apply loss functions to the classification and regression outputs of the head module of our _MVT_ tracker. As in [], we use the weighted focal loss \(L_{cls}\) to handle the imbalance between positive and negative training examples for target classification. For bounding box regression, same as [], we use the \(\ell_{1}\) and generalized IoU loss functions, denoted by \(L_{1}\) and \(L_{giou}\), respectively. As in [], we define the overall loss function as,
\[L_{total}=L_{cls}+\lambda_{1}\cdot L_{1}+\lambda_{2}\cdot L_{giou}, \tag{1}\]
where \(\lambda_{1}\) and \(\lambda_{2}\) are the hyperparameters controlling the relative impact of \(L_{1}\) and \(L_{giou}\) on the overall training loss.
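A sketch of the combined objective of Eq. 1 is given below, using \(\lambda_{1}=5\) and \(\lambda_{2}=2\) as reported later in Section 4.1; the focal classification loss is abstracted as a precomputed scalar, and the generalized IoU term is written out explicitly for axis-aligned boxes.

```python
import torch
import torch.nn.functional as F

def giou_loss(pred, target, eps=1e-7):
    """Generalized IoU loss for boxes in (x1, y1, x2, y2) format."""
    lt = torch.max(pred[:, :2], target[:, :2])      # top-left of the intersection
    rb = torch.min(pred[:, 2:], target[:, 2:])      # bottom-right of the intersection
    inter = (rb - lt).clamp(min=0).prod(dim=1)
    area_p = (pred[:, 2:] - pred[:, :2]).prod(dim=1)
    area_t = (target[:, 2:] - target[:, :2]).prod(dim=1)
    union = area_p + area_t - inter
    iou = inter / (union + eps)
    clt = torch.min(pred[:, :2], target[:, :2])     # smallest enclosing box corners
    crb = torch.max(pred[:, 2:], target[:, 2:])
    enclosure = (crb - clt).prod(dim=1)
    giou = iou - (enclosure - union) / (enclosure + eps)
    return (1.0 - giou).mean()

def total_loss(l_cls, pred_boxes, gt_boxes, lambda1=5.0, lambda2=2.0):
    # Eq. 1: L_total = L_cls + lambda1 * L1 + lambda2 * L_giou
    return l_cls + lambda1 * F.l1_loss(pred_boxes, gt_boxes) + lambda2 * giou_loss(pred_boxes, gt_boxes)

pred = torch.tensor([[0.10, 0.10, 0.50, 0.60]])
gt = torch.tensor([[0.15, 0.10, 0.55, 0.65]])
print(total_loss(torch.tensor(0.3), pred, gt).item())
```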
## 4 Implementation Details and Experimental Results
This section discusses the implementation details of our _MVT_ tracker and compares its results with related lightweight and state-of-the-art heavy trackers. We also discuss the ablation study results for the proposed feature fusion and an attribute-based robustness analysis.
### Implementation Details
We set the dimensions of the inputs to our _MVT_ backbone, i.e., \(Z_{in}\) and \(X_{in}\) from Section 3.1, to \(128\times 128\) and \(256\times 256\), respectively. We divide our _MVT_ backbone into five layers with _layer-id_s for notation convenience, as shown in Figure 1. The number of channels in the feature maps increases along these five layers as \(\{3\to 16,16\to 32,32\to 64,64\to 96,96\to 128\}\). We set the number of transformer encoders for the proposed Siam-MoViT block in _layer-_3 and _layer-_4 to 2 and 4, respectively. We set the parameters \(w=h=2\) for folding and unfolding operations within our Siam-MoViT block. The number of upscaled channels \(D\) in the Siam-MoViT block is set to 144 and 192 for _layer-_3 and _layer-_4, respectively. The backbone has a stride of 16 (i.e., four downsampling operations, each by a factor of two), resulting in feature maps of size \(8\times 8\) and \(16\times 16\) for the template and search regions, respectively. The _channel-adjust_ layer in the neck module, described in Section 3.2, upscales the number of channels from 64 to 256.
We use the training split of the GOT10k dataset [] to train our model. We use _Adam-W_[] as the optimizer with a weight decay of \(10^{-4}\). We trained our model for 100 epochs with 60000 image pairs per epoch, sampled from the training dataset. We use the validation split of GOT10k to compute the values of \(L_{cls}\), \(L_{1}\), and \(L_{giou}\) from Eq. 1 during training to examine the possibility of overfitting. We set the initial learning rate \(lr\) to \(4\times 10^{-4}\) and use cosine annealing [] as the learning rate scheduler (without the warm restarts). We keep the \(lr\) for the backbone module 0.1 times the \(lr\) for the rest of the network throughout training. We use the data augmentation techniques horizontal flip and brightness jitter during training. We initialize the backbone using the weights of the pre-trained MobileViT model provided by its authors []. Like [], we do not use positional embeddings for the transformer
blocks in our _MVT_ backbone. We set the hyperparameters \(\lambda_{1}\) and \(\lambda_{2}\) in Eq. 1 to 5 and 2, respectively, as in [5]. We use a single Nvidia Tesla V100 GPU (32GB) for training and set the batch size to 128.
Our choice of optimizer and hyperparameters is based on the training settings typically used by the related trackers. We set our batch size based on the maximum number of images that can be loaded onto the GPU used for training the model. We experimented using the Ray-Tuner package in Pytorch [5] to search for the best set of hyperparameters jointly. First, the hyperparameter search was time-consuming due to the sheer volume. Second, due to a strong inter-dependency between some of the hyperparameters (_e.g._, batch size and learning rate), it was challenging to find the optimal set using random search-based methods.
During inference, we define the search space at frame \(t\) by extracting an image patch around the estimated target location at frame \(t-1\), four times the area of the target template. We apply a Hanning window on the classification score map \(\mathcal{R}\) as the post-processing step. After this multiplication, we determine the location of the highest value in \(\mathcal{R}\) as the target location, and we choose the corresponding bounding box as the tracker output. We define the target annotation from the first frame as the template and do not perform any model update. We generate the GPU-based inference results using an Nvidia RTX 3090 GPU.
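The inference-time localization step can be sketched as follows: the classification score map is multiplied by a Hanning window and the location of the maximum is taken as the target position. The \(16\times 16\) map size follows from the backbone stride of 16 with a \(256\times 256\) search region; the choice of a separable Hanning window is an assumption made for the example.

```python
import torch

def localize(score_map):
    """Multiply the score map by a Hanning window and return the argmax location."""
    H, W = score_map.shape
    window = torch.outer(torch.hann_window(H, periodic=False),
                         torch.hann_window(W, periodic=False))
    idx = torch.argmax(score_map * window)
    return int(idx // W), int(idx % W)   # (row, col) on the response map

row, col = localize(torch.rand(16, 16))
print(row, col)
```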
### Results and Comparison to Related Work
To demonstrate the effectiveness of the proposed _MVT_, we evaluate it using GOT10k-test [5], TrackingNet-test [5], LaSOT-test [5], and NfS30 [5] datasets. GOT10k has 180 test videos, with non-overlapping target classes from their training videos, to promote generalization during tracker development. TrackingNet has 511 challenging test videos with 15 attributes. GOT10k and TrackingNet datasets sequester the test set annotations and provide an online evaluation server to submit the tracker results to ensure a fair evaluation. LaSOT dataset has 280 test videos, with an average length of 2500 frames per video. NfS dataset has 100 videos captured at 240 and 30 _fps_; we use the 30 _fps_ videos. GOT10k provides a train, validation, and test split for its annotated videos, whereas TrackingNet and LaSOT provide the train and test splits. NfS30 has only test videos in the dataset.
GOT10k uses Overlap Ratio (_OR_) and Success Rate (_SR_) at a threshold of 0.5 and 0.75 (i.e., \(SR_{0.50}\) and \(SR_{0.75}\)) to quantify the tracker performance. Metric _OR_ is equivalent to Area Under the Curve (_AUC_) [5]. _SR_ measures the fraction of frames where the Intersection-over-Union (_IoU_) between groundtruth and predicted boxes is higher than a threshold. TrackingNet uses _AUC_, Precision (_P_), and Normalized-Precison (_Pnorm_) for tracker performance. Precision \(P\) measures the distance between centers of groundtruth and predicted bounding boxes, whereas _Pnorm_ computes the same metric using normalized bounding boxes. For LaSOT and NfS30, we use the _AUC_ and Failure Rate (_FR_) as the performance metrics. _FR_ calculates the fraction of frames where the tracker has drifted away, i.e., its bounding box prediction has no overlap with the groundtruth (i.e., _IoU_ score is zero).
We compare the results of the proposed _MVT_ with the related lightweight trackers: LightTrack [5], Stark-Lightning [5], FEAR-XS [5], and E.T.Track [5], evaluated using the pre-trained models provided by their authors. From Table 1, we can see that our _MVT_ outperforms all other lightweight trackers on the server-based test set of GOT10k and TrackingNet. No related tracker scores second best constantly for these datasets. On GOT10k-test, our tracker is better by at least 3.7%, 4.6%, and 7.3%, than the second best tracker in terms of _OR_, \(SR_{0.50}\), and \(SR_{0.75}\), respectively. Recall that GOT10k-test has unseen object classes; this indicates a higher generalization ability of _MVT_ towards tracking novel object classes than
the related trackers. It also highlights the impact of feature fusion in our tracker backbone compared to other two-stream-based lightweight trackers. We observe a similar behavior using the TrackingNet dataset, where our _MVT_ performs better by approximately 2% in AUC, \(P\), and \(P_{norm}\) than its competitor, LightTrack. No single tracker constantly performs better in \(AUC\) or \(FR\) for the NfS30 and LaSOT datasets with groundtruth available for the test sets. For NfS30, our tracker is better by 2.6% in \(FR\) than the second-best Stark-Lightning while lower by 1.6% in AUC. For LaSOT, our tracker is lower by 2.1% than the second best LightTrack in \(FR\) and by 4.4% than the best E.T.Track in \(AUC\).
Across all the datasets and performance metrics, we can see that our tracker scores the best in most cases (7/10) while being second best in 2/10 cases. Our closest competitor, Stark-Lightning, scores the second-best 5/10 times and the best only once. Regarding speed, our _MVT_ runs 175 _fps_ during GPU-based evaluation, that is, 15% slower than its competitor Stark-Lightning, as shown in Table 1. It is because Stark-Lightning computes the template region features only once during inference due to its two-stream tracking pipeline. In contrast, our _MVT_ requires evaluation of the template features at every frame due to the entanglement of the template and search regions in its backbone, which impacts tracking speed.
### Comparison to State-of-the-art trackers
In Table 2, we compare the proposed _MVT_ to state-of-the-art (SOTA) heavyweight trackers on server-based GOT10k and TrackingNet test datasets. We take the values of evaluation metrics for these trackers from the respective papers; however, we compute their _fps_ values on a GPU (i.e., Nvidia RTX 3090) and a CPU (i.e., 12th Gen Intel(R) Core-i9 processor), as shown in the last column of Table 2. As we can see, in comparison to the popular DCF-based DiMP-50 [], the deployment of transformers for feature fusion [], [] and as the backbone [] has improved the tracker performance, but at the cost of increased computational complexity and lowered tracking speed due to higher number of model parameters. In contrast, proposed _MVT_ surpasses the performance of the popular DiMP-50 on GOT10k and TrackingNet datasets with 4.7\(\times\) fewer parameters while running at 2.8\(\times\) and 2\(\times\) its speed on a GPU and CPU, respectively. Compared to the best-performing SOTA tracker MixFormer-L [] in Table 2, proposed _MVT_ has 33.43\(\times\) fewer model parameters and higher _fps_, i.e., 3.87\(\times\) on GPU and 5.88\(\times\) on CPU, but has a lower \(AUC\) of 10.7% on average across the two datasets. Our tracker provides a tradeoff between accuracy and complexity for real-time applications with resource constraints.
### Ablation Study
To analyze the effectiveness of the proposed feature fusion technique deployed in our _MVT_ backbone, we evaluate the performance of our tracker trained without the concatenation of
\begin{table}
\begin{tabular}{c|c c c|c c c|c c|c c c} \hline \hline Tracker & \multicolumn{2}{c|}{**GOT10k** (**E**) (server)} & \multicolumn{2}{c|}{**TrackingNet (**E**) (server)} & \multicolumn{2}{c|}{**NFS30 (**E**)} & \multicolumn{2}{c|}{**LaSOT (**E**)} & \multicolumn{2}{c|}{_fps_} \\ \cline{2-11} \multicolumn{1}{c|}{} & \(OR\uparrow\) & \(SR_{0.50}\uparrow\) & \(SR_{0.75}\uparrow\) & \(AUC\uparrow\) & \(P_{norm}\uparrow\) & \(P\uparrow\) & \(AUC\uparrow\) & \(FR\downarrow\) & \(AUC\uparrow\) & \(FR\downarrow\) & (GPU) \\ \hline LightTrack [] (CVPR 21) & 0.582 & 0.668 & 0.442 & 72.9 & 79.3 & 69.9 & 0.582 & 0.146 & 0.524 & **0.116** & 99 \\ Stark-Lightning [] (ICCV\({}^{2}\))1 & 0.596 & 0.696 & 0.479 & 72.7 & 77.9 & 67.4 & **0.619** & 0.111 & 0.585 & 0.151 & 205 \\ EFAX8 [] (ECCV\({}^{2}\))2 & 0.573 & 0.681 & 0.455 & 71.5 & 80.5 & 69.9 & 0.487 & 0.207 & 0.508 & 0.273 & 275 \\ E.T.Track [] (WACV\({}^{2}\))2 & 0.566 & 0.646 & 0.425 & 74.0 & 79.8 & 69.8 & 0.589 & 0.172 & **0.997** & 0.162 & 53 \\ MVT (ours) & **0.633** & **0.742** & **0.551** & **74.8** & **81.5** & **71.9** & 0.603 & **0.085** & 0.553 & 0.137 & 175 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of related lightweight SN trackers with our _MVT_ on server-based GOT10k-test and TrackingNet-test, and groundtruth available NfS30 and LaSOT-test datasets. The best and second-best results are highlighted in red and blue, respectively.
the template and search region features inside the proposed Siam-MoViT block (_cf._ Figure 1). Table 3 summarizes the ablation results on the four datasets discussed in Section 4.2. We can see that the proposed feature fusion improves the _OR_ (or the equivalent metric _AUC_) by 1.9% on average across all the datasets. It also increases the robustness of our _MVT_ tracker by reducing the _FR_ on NfS30 and LaSOT datasets by 3.7% and 2.6%, respectively. Learning self-attention on the concatenated features using the transformer blocks in our _MVT_ backbone facilitates the global relational modeling _within_ and _between_ the template and search regions, thereby generating superior features for accurate target localization and robust tracking.
### Robustness Analysis
To analyze the robustness of the proposed _MVT_ tracker against various challenging factors (or attributes), we compute its _FR_ for attributes annotated under the LaSOT dataset, namely Aspect Ratio Change (_ARC_), Background Clutter (_BC_), Camera Motion (_CM_), Deformation (_DEF_), Fast Motion (_FM_), Full Occlusion (_FOC_), Illumination Variation (_IV_), Low Resolution (_LR_), Motion Blur (_MB_), Out-of-View (_OV_), Partial Occlusion (_POC_), Rotation (_ROT_), Scale Variation (_SV_), and Viewpoint Change (_VC_). From Figure 2, we can see that our _MVT_ is most robust to target deformation (_DEF_) and appearance changes (_VC_). It is least robust to attribute _FM_ since we use a Hanning window on the classification score map during target localization. However, not using the Hanning window deteriorates the robustness of our tracker against _BC_ and increases the overall _FR_, as we observed from our experiments. Also, our _MVT_ has a higher _FR_ for videos under the attribute _LR_. These videos contain small, texture-less target objects such as _volleyball_ and _yo-yo_, which are generally fast-moving (i.e., _FM_) and are sensitive to _BC_. SOTA trackers address the challenges of _FM_, _LR_, and _BC_ with deep features and a larger search area to avoid target loss, but these improvements come at the expense of higher model complexity and memory footprint, as shown in Table 2.
\begin{table}
\begin{tabular}{c|c c|c c||c c|c c} \hline \hline \multirow{2}{*}{Tracker} & \multicolumn{2}{c|}{GOT10k} & \multicolumn{2}{c||}{TrackingNet} & \multicolumn{2}{c|}{\#params \(\downarrow\)} & \multicolumn{2}{c}{_fps_} \\ & \(OR\uparrow\) & \(SR_{0.50}\uparrow\) & \(AUC\uparrow\) & \(P_{norm}\uparrow\) & (in millions) & GPU \(\uparrow\) & CPU \(\uparrow\) \\ \hline DiMP-50 [**D**] & 0.611 & 0.717 & 74.0 & 80.1 & 26.1 & 61.5 & 15.0 \\ TransT [**D**] & 0.671 & 0.768 & 81.2 & 85.4 & 23.0 & 87.7 & 2.3 \\ STARK-ST101 [**D**] & 0.688 & 0.781 & 82.0 & 86.9 & 47.2 & 80 & 7.8 \\ OSTrack-384 [**E**3] & 0.740 & 0.835 & **83.9** & 88.5 & 92.1 & 74.4 & 4.4 \\ MixFormer-L [**D**] & **0.756** & **0.857** & **83.9** & **88.9** & 183.9 & 45.2 & \(<\) 5 \\ \hline \hline MVT (ours) & 0.633 & 0.742 & 74.8 & 81.5 & **5.5** & **175.0** & **29.4** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of our _MVT_ with the state-of-the-art heavyweight trackers on server-based GOT10k and TrackingNet test datasets. Best and second best results in accuracy and complexity (i.e., # of parameters and _fps_) are highlighted in red and blue, respectively.
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c} \hline \hline feature fusion & \multicolumn{2}{c|}{GOT10k} & \multicolumn{2}{c|}{TrackingNet} & \multicolumn{2}{c|}{NFS30} & \multicolumn{2}{c}{LaSOT} \\ in backbone & \(OR\uparrow\) & \(SR_{0.50}\uparrow\) & \(AUC\uparrow\) & \(P_{norm}\uparrow\) & \(AUC\uparrow\) & \(FR\downarrow\) & \(AUC\uparrow\) & \(FR\downarrow\) \\ \hline ✗ & 0.600 & 0.703 & **74.9** & 80.0 & 0.566 & 0.122 & 0.544 & 0.163 \\ ✓(ours) & **0.633** & **0.742** & 74.8 & **81.5** & **0.603** & **0.085** & **0.553** & **0.137** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study results related to the proposed feature fusion in our _MVT_ backbone. Best results are highlighted in red.
## 5 Conclusion and Future Work
In this paper, we proposed _MVT_, our visual object tracking algorithm that uses, for the first time, the Mobile Vision Transformers as the backbone. We also proposed the Siam-MoViT block to model the global interactions between template and search regions in the tracker backbone, thereby enhancing the quality of feature encodings for target localization. Our simulation results showed that the proposed tracker performed better than the related lightweight trackers on the large-scale GOT10k and TrackingNet datasets, showcasing the effectiveness of the proposed tracking method. Despite having \(4.7\times\) fewer model parameters, our _MVT_ performs better than the popular DCF-based DiMP-50 tracker, while running at least \(2\times\) its speed during CPU and GPU-based evaluation. Our ablation studies highlighted the importance of the proposed feature fusion on our tracker performance.
In our future work, we plan to explore lightweight vision transformer backbone architectures to enhance the quality of encoded features further. Effective feature fusion in the backbone can make the neck module redundant for lightweight tracking, simplifying the tracking pipeline. We also plan to deploy and test our models on low-memory embedded devices, such as smartphones.
|
2309.03280 | BASS-XL: X-ray variability properties of unobscured Active Galactic
Nuclei | We investigate the X-ray variability properties of Seyfert1 Galaxies
belonging to the BAT AGN Spectroscopic Survey (BASS). The sample includes 151
unobscured (N$_{\rm H}<10^{22}$ cm$^{-2}$) AGNs observed with XMM-Newton for a
total exposure time of ~27 Ms, representing the deepest variability study done
so far with high signal-to-noise XMM-Newton observations, almost doubling the
number of observations analysed in previous works. We constrain the relation
between the normalised excess variance and the 2-10 keV AGN luminosities, black
hole masses and Eddington ratios. We find a highly significant correlation
between $\sigma^{2}_{NXS}$ and $M_{\rm BH}$, with a scatter of ~0.85 dex. For
sources with high $L_{2-10}$ this correlation has a lower normalization,
confirming that more luminous (higher mass) AGNs show less variability. We
explored the $\sigma^{2}_{NXS}$ vs $M_{\rm BH}$ relation for the sub-sample of
sources with $M_{\rm BH}$ estimated via the "reverberation mapping" technique,
finding a tighter anti-correlation, with a scatter of ~ 0.65 dex. We examine
how the $\sigma^{2}_{NXS}$ changes with energy by studying the relation between
the variability in the hard (3-10 keV) and the soft (0.2-1 keV)/medium (1-3
keV) energy bands, finding that the spectral components dominating the hard
energy band are more variable than the spectral components dominating in softer
energy bands, on timescales shorter than 10 ks. | Alessia Tortosa, Claudio Ricci, Patricia Arévalo, Michael J. Koss, Franz E. Bauer, Benny Trakhtenbrot, Richard Mushotzky, Matthew J. Temple, Federica Ricci, Alejandra Rojas Lilayu, Taiki Kawamuro, Turgay Caglar, Tingting Liu, Fiona Harrison, Kyuseok Oh, Meredith Clark Powell, Daniel Stern, Claudia Megan Urry | 2023-09-06T18:00:13Z | http://arxiv.org/abs/2309.03280v2 | # BASS-XL: X-ray variability properties of unobscured Active Galactic Nuclei
###### Abstract
We investigate the X-ray variability properties of Seyfert 1 Galaxies belonging to the BAT AGN Spectroscopic Survey (BASS). The sample includes 151 unobscured (\(\rm{N_{H}}<10^{22}\,cm^{-2}\)) AGNs observed with _XMM-Newton_ for a total exposure time of \(\sim 27\,\rm{Ms}\), representing the deepest variability study done so far with high signal-to-noise _XMM-Newton_ observations, almost doubling the number of observations analysed in previous works. We constrain the relation between the normalised excess variance and the \(2-10\,\rm{keV}\) AGN luminosities, black hole masses and Eddington ratios. We find a highly significant correlation between \(\sigma_{\rm{NXS}}^{2}\) and \(M_{\rm{BH}}\), with a scatter of \(\sim 0.85\,\rm{dex}\). For sources with high \(L_{2-10}\) this correlation has a lower normalization, confirming that more luminous (higher mass) AGNs show less variability. We explored the \(\sigma_{\rm{NXS}}^{2}\) vs \(M_{\rm{BH}}\) relation for the sub-sample of sources with \(M_{\rm{BH}}\) estimated via the "reverberation mapping" technique, finding a tighter anti-correlation, with a scatter of \(\sim 0.65\,\rm{dex}\). We examine how the \(\sigma_{\rm{NXS}}^{2}\) changes with energy by studying the relation between the variability in the hard (\(3-10\,\rm{keV}\)) and the soft (\(0.2-1\,\rm{keV}\))/medium (\(1-3\,\rm{keV}\)) energy bands, finding that the spectral components dominating the hard energy band are more variable than the spectral components dominating in softer energy bands, on timescales shorter than \(10\,\rm{ks}\).
keywords: Supermassive Black Hole - Active galaxies - Seyfert galaxies - X-rays
## 1 Introduction
Supermassive Black Holes (SMBHs, \(M_{\rm BH}>10^{6}M_{\odot}\)) are ubiquitously found at the center of massive galaxies. Mass accretion onto SMBHs is the mechanism that powers Active Galactic Nuclei (AGNs, Salpeter, 1964) which are very powerful sources of X-ray radiation, emitting through the entire electromagnetic spectrum. Variability is a distinctive feature shared by all classes of AGN, occurring over a wide range of timescales and amplitudes across all the wavelengths (e.g., Ulrich et al., 1997; McHardy et al., 2004). These flux variations can also be accompanied by prominent spectroscopic changes (e.g., Ricci & Trakhtenbrot, 2022). In the X-ray band, variability is observed on both short (e.g., \(<10^{3}\) s; Uttley & McHardy, 2005; McHardy et al., 2004) and long timescales (e.g., years; McHardy, 2001; Ishibashi & Courvoisier, 2009; Sartori et al., 2018) giving insight into the innermost regions of the AGN. Thus, its study can help us to understand the emission properties of AGNs (e.g., Mushotzky et al., 1993; Ulrich et al., 1997; Uttley et al., 2014; Cackett et al., 2021; De Marco et al., 2022) and better characterize the growing population of extremely variable AGNs identified in the optical (e.g., Lawrence et al., 2016; Rumbaugh et al., 2018; Trakhtenbrot et al., 2019; Shen, 2021; Zeltryn et al., 2022; Temple et al., 2023) and X-rays (e.g., Timlin et al., 2020; Ricci et al., 2020, 2021; Masterson et al., 2022).
One method used to study the temporal structure of the variations is the power spectral density (PSD) analysis. If the temporal frequency is \(\nu=1/t\), where \(t\) is the time, the observed power spectrum is generally modeled as a power-law of the form: \(P_{\nu}\propto\nu^{\alpha}\). For short timescales (high frequencies) \(\alpha\sim-2\), while for long timescales (low frequencies) \(\alpha\sim-1\)(Papadakis & McHardy, 1995). The PSD break timescales, \(T_{B}\), can be obtained by fitting a broken power law to the observed PSD. This parameter has been found to be positively correlated with the black hole mass (\(M_{\rm BH}\); e.g., Lu & Yu, 2001; Bian & Zhao, 2002; Uttley et al., 2002; Markowitz et al., 2003; Papadakis, 2004). However, Narrow Line Seyfert 1 (NLS1) galaxies, which typically accrete at very high Eddington ratios (\(L_{\rm bol}/L_{\rm Edd}=\lambda_{\rm Edd}\); McHardy et al., 2004), display a different behaviour, with their break timescales being shorter for a given \(M_{\rm BH}\). To explain this, Uttley & McHardy (2005) suggested that the break timescales could depend also on a second parameter, such as the accretion rate or the black hole spin.
Accurately determining the AGNs power spectra can be difficult, since it requires high-quality data, long exposures and sometimes monitoring campaigns, to extend time coverage that adequately covers relevant PSD frequency ranges that include potential breaks. Given such difficulties, it is common practice to quantify the X-ray variability of AGNs in terms of the so-called normalised excess variance (\(\sigma^{2}_{\rm NXS}\), Nandra et al., 1997). Although it does not contain the same amount of information as the PSD, the normalised excess variance can be used to confirm the PSD results in large samples of AGN, and it also allows the discovery of new correlations between the X-ray variability amplitude and other AGNs physical parameters. The normalised excess variance of AGNs has been widely studied in the past decades, finding that \(\sigma^{2}_{\rm NXS}\) has a strong dependence on \(M_{\rm BH}\). Using the data from the _Advanced Satellite for Cosmology and Astrophysics_ (_ASCA_), Lu & Yu (2001) and Bian & Zhao (2003) found an anti-correlation between the excess variance (on a timescales of \(\sim\)1 day) and \(M_{\rm BH}\). Papadakis (2004), using _Rossi X-Ray Timing Explorer_ (_RXTE_) data on much longer timescales (\(\sim\)300 days), also found an anti-correlation between these two parameters. Ponti et al. (2012) investigated this relation using high quality _XMM-Newton_ data on timescales of 10 ks. They found that the \(\sigma^{2}_{\rm NXS}\)\(\sim\)\(M_{\rm BH}\) relation flattens for masses below \(\sim 10^{6}M_{\odot}\), as confirmed later also by Ludlam et al. (2015) studying a sample of low mass AGNs observed by _XMM-Newton_. Akylas et al. (2022), using light curves of local Seyfert from the Nuclear Spectroscopic Telescope Array hard X-ray mission (_NuSTAR_), extended the \(\sigma^{2}_{\rm NXS}\) vs \(M_{\rm BH}\) relation to energy band higher than 10 keV, finding that it is possible to accurately measure the \(M_{\rm BH}\) in AGN using the above-mentioned correlation in the \(3-10\) and the \(10-20\) keV bands. However, the minimum necessary S/N is \(\sim 3\) and duration of the light curves should be \(\sim 80-100\) ks. Several works suggested that the excess variance is related to other source properties, such as the X-ray luminosity, \(L_{2-10}\), (Barr & Mushotzky, 1986; Nandra et al., 1997; Turner et al., 1999). However, studying a sample of 46 AGNs observed by _ASCA_, Papadakis (2004) found that once the dependence of \(\sigma^{2}_{\rm NXS}\) from \(M_{\rm BH}\) is removed, the correlation between \(\sigma^{2}_{\rm NXS}\) and \(L_{2-10}\) is no longer present, implying that the correlation with \(L_{2-10}\) was associated to the \(\sigma^{2}_{\rm NXS}\)\(-\)\(M_{\rm BH}\) relation. The same effect was recovered by O'Neill et al. (2005).
Past studies of hard X-ray selected AGNs with _Swift_/BAT data, focusing on long-term light curves, show that most of these AGNs display significant variability on months-to-years timescales. In general this variation is not related to changes of the absorption column density but to variations of the power-law continuum (Soldi et al., 2014). Moreover, unlike previous studies, no correlation was found between hard X-ray variability and different properties of the AGNs, including luminosity and black hole mass (Shimizu & Mushotzky, 2013). Phillipson et al. (2023), studying the hard X-ray variability properties of _Swift_/BAT AGNs, also show that type 1 AGNs in the 14-150 keV band are less prone to harboring deterministic variability than type 2 AGNs on timescales of \(\sim 15\) years.
In this paper, we present the results from an excess variance analysis of a sample of 151 hard X-ray selected, unobscured (N\({}_{H}<10^{22}\) cm\({}^{-2}\)) AGNs using \(\sim\)500 high signal-to-noise _XMM-Newton_ observations, almost double the number of observations analysed in previous works (e.g., Ponti et al., 2012), for a total of \(\sim\)27 Ms exposure time.
The paper is organized as follows. Section 2 presents the selected sample and the data reduction of the sources of our sample. Section 3 describes the timing analysis of the data and the extrapolation of the \(\sigma^{2}_{\rm NXS}\) together with the analysis of the correlation between \(\sigma^{2}_{\rm NXS}\) and several physical parameters of the sources. We summarize and conclude the results of our analysis in Section 4. Standard cosmological parameters (H=70 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\Lambda}\)=0.73 and \(\Omega_{m}\)=0.27) are adopted throughout the paper.
## 2 The sample and data reduction
### The Bass Sample
Since its launch in 2004, the Burst Alert Telescope (BAT; Barthelmy et al., 2005) on board the _Neil Gehrels Swift observatory_(Gehrels et al., 2004) has been carrying out an all-sky survey in the \(14-195\) keV band. Our sample consists of all the unobscured (\(N_{H}<10^{22}\)cm\({}^{-2}\)), radio quiet, type 1 AGNs belonging to the Swift/BAT AGN Spectroscopic Survey (BASS1) which have public _XMM-Newton_ observations by December 2022.
Being unbiased by obscuration up to Compton-thick levels (N\({}_{\rm H}>10^{24}\) cm\({}^{-2}\), Ricci et al., 2015) and not affected by dust obscuration or star formation, BASS provides an important census of AGNs. It gives
a full picture of the bright AGNs in the local Universe, providing the largest available spectroscopic sample of _Swift_/BAT ultra-hard X-ray (\(14-195\) keV) detected AGNs (Oh et al., 2018), complementary with _Swift_, _Chandra_, and _XMM-Newton_ for X-ray broad-band (\(0.5-200\) keV) spectral measurements (Ricci et al., 2017). It also includes extensive multi-wavelength follow-up data, from optical emission (Oh et al., 2022), high spatial resolution near-IR (Lamperti et al., 2017; Koss et al., 2018), mid- and far-IR emission from _WISE_, _IRAS_, _Spitzer_, _Akari_, and _Herschel_(Ichikawa et al., 2017; Shimizu et al., 2017; Ichikawa et al., 2019) and mm/radio emission (Kawamuro et al., 2022; Koss et al., 2021; Ricci et al., 2023), giving insight on the sample over the broadest possible spectral range.
The first BASS data release (DR1, Koss et al., 2017) reported \(M_{\rm BH}\) and X-ray properties for all the 838 AGNs from the _Swift_/BAT 70-month catalogue (Ricci et al., 2017), while the second BASS data release (DR2, Koss et al., 2022) reports more secure and uniformly assessed \(M_{\rm BH}\) for 780 unbeamed AGNs from the 70-month catalogue. The masses are estimated from broad Balmer lines and/or "reverberation mapping" technique (RM) for type 1s and masers, and from dynamics and/or velocity dispersions for type 2s (Koss et al., 2022; Mejia-Restrepo et al., 2022; Ricci et al., 2022). Moreover, in the DR2, \(\lambda_{\rm Edd}\) (\(L_{\rm bol}\)/\(L_{\rm Edd}\)) are computed using the bolometric luminosities calculated from the intrinsic 14-150 keV luminosities as shown in Ricci et al. (2017) with a bolometric correction of 8 (Koss et al., 2022). In this work, we considered \(L_{\rm 2-10}\), \(\lambda_{\rm Edd}\), and \(M_{\rm BH}\) values from BASS DR2 (Koss et al., 2022).
### Data Reduction
The high statistics, low background, and uninterrupted light curves obtained with the X-ray Multi-Mirror Mission (_XMM-Newton_, Jansen et al., 2001) for the sources in this sample are crucial to compute the \(\sigma^{2}_{\rm NXS}\) in large samples of AGNs.
The sample of BASS unobscured AGNs is composed of 365 sources, with a median redshift of \(z_{\rm med}=0.035\) (lower than the parent sample median redshift). Of these, 153 had public _XMM-Newton_ observations as of December 2022. We downloaded all the observations from the _XMM-Newton_ Science Archive, and extracted the EPIC-pn (Struder et al., 2001) light curves using the Science Analysis System (SAS) software package (v.18.0.0) (Gabriel et al., 2004) and the calibration database CALDB 20221102. The MOS detectors (Turner et al., 2001) and the Reflection Grating Spectrometer (RGS, den Herder et al., 2001) were not considered because their lower statistics would not significantly improve the quality of the light curves. The _XMM-Newton_ EPIC-pn raw data have been processed using the _epixma_ tool of SAS to obtain calibrated and concatenated event lists. The extraction radii and the optimal time cuts to exclude periods of high flaring particle background were computed via an iterative process which maximizes the signal-to-noise ratio (SNR), as described in Piconcelli et al. (2004), filtering out those time intervals for which the count rate of the background reaches values so high that the SNR of the source does not improve (or even worsens) when including such time intervals in the analysis. Since the pn camera has a full-frame time resolution of 73.3 ms per CCD, the observations generally do not suffer significantly from pile-up, making them suitable for variability analysis. Nonetheless, the light curves were extracted after confirming that the data were not affected by pile-up, as indicated by the SAS task _epatplot_. The resulting optimal extraction radius was \(\sim 30-40\arcsec\) and the background spectra were extracted from source-free circular regions with radii of \(\sim 50-60\arcsec\) for all the observations analyzed in this work. With these regions we extracted the EPIC-pn source and background light curves using the command _evselect_ and we corrected the source light curve for the background using the command _epiclccorr_. We extracted the light curves using several different time and spectral binning strategies: 100 s and 1000 s in the \(0.2-10\) keV energy band, and 100 s in the \(0.2-1\) keV (soft), \(1-3\) keV (medium) and \(3-10\) keV (hard) energy bands.
Following Ponti et al. (2012) we selected the observations which had cleaned exposure times larger than \(10\) ks, and which had at least 10 counts in those \(10\) ks chunks and in each (rest-frame) energy band used in this analysis, i.e. \(0.2-1\) keV, \(1-3\) keV and \(3-10\) keV, and for each time bin of 100 s and 1000 s. We applied this selection to ensure that each \(10\) ks independent light curve had enough counts to constrain the \(\sigma^{2}_{\rm NXS}\). A total of 151 sources (\(\sim 500\) observations) fulfill these criteria. The distributions of \(M_{\rm BH}\), \(L_{\rm 2-10}\), \(\lambda_{\rm Edd}\) and \(\rm N_{\rm H}\) of our sample are shown in Fig. 1.
We show in Appendix A the _XMM-Newton_ EPIC-pn background subtracted light curves of a sub-sample of representative sources for different \(M_{\rm BH}\) values (see Fig. 11).
## 3 Analysis
The excess variance (\(\sigma^{2}_{\rm NXS}\) ) is a quantity used to describe the variability amplitude. It is the difference between the total variance of a light curve and the mean squared measurement error, normalised by the square of the mean of the \(N\) flux measurements (e.g. Nandra et al., 1997; Turner et al., 1999). Here \(N\) is the number of good time intervals in a light curve, and \(x_{i}\) and \(\sigma_{i}\) are the flux and error in each interval, respectively. The excess variance is defined (Vaughan et al., 2003) as follows:
\[\sigma^{2}_{\rm NXS}=\frac{S^{2}-\overline{\sigma^{2}}}{\overline{x_{i}^{2}}} \tag{1}\]
Where \(\overline{\sigma^{2}}\) is the mean square error:
\[\overline{\sigma^{2}}=\frac{1}{N}\sum_{i=1}^{N}[\sigma_{i}^{2}] \tag{2}\]
\begin{table}
\begin{tabular}{l l l} \hline \hline ID & _Swift_/ID & OBSID \\ \hline
6 & SWIFTJ0006.2+2012 & 0101040701 \\
6 & SWIFTJ0006.2+2012 & 0510010701 \\... & \\
16 & SWIFTJ0029.2+1319 & 0783270201 \\
34 & SWIFTJ0051.6+2928 & 0903040301 \\
36 & SWIFTJ0051.9+1724 & 0801890301 \\
39 & SWIFTJ0054.9+2524 & 0301450401 \\
39 & SWIFTJ0054.9+2524 & 0841480101 \\
43 & SWIFTJ0059.4+3150 & 0312190101 \\
61 & SWIFTJ0113.8+1450 & 0147920101 \\
73 & SWIFTJ0123.9-5846 & 0101040201 \\
73 & SWIFTJ0123.9-5846 & 0721110201 \\... & \\
77 & SWIFTJ0127.5+1910 & 0112600601 \\
77 & SWIFTJ0127.5+1910 & 0830551001 \\... & \\
106 & SWIFTJ0206.2-0019 & 0201090201 \\
106 & SWIFTJ0206.2-0019 & 0554920301 \\... & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the _XMM-Newton_ observations of the sources (OBSID) of our sample together with the _Swift_ identification name (_Swift_ ID) and the _Swift_ identification number (ID). This table is available in its entirety in a machine-readable form in the online journal. A part is shown as guidance for the reader regarding its content.
and \(S^{2}\) is the sample variance:
\[S^{2}=\frac{1}{N-1}\sum_{i=1}^{N}[(x_{i}-\overline{x_{i}})^{2}] \tag{3}\]
The expectation value of \(S^{2}\) corresponds to the integral of the PSD between two frequencies (\(\nu_{1}\) and \(\nu_{2}\)), i.e. the contribution to the expectation value of the variance due to variations between the corresponding timescales (\(1/\nu_{1}\) and \(1/\nu_{2}\)):
\[\langle S^{2}\rangle=\int_{\nu_{1}}^{\nu_{2}}P(\nu)d\nu \tag{4}\]
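In practice, Eqs. (1)-(3) amount to a few array operations per light-curve segment. The following is a minimal sketch (the function name and the numpy-based implementation are illustrative, not part of the SAS pipeline), assuming `x` and `dx` hold the background-subtracted count rates and their errors in the \(N\) good time bins of one segment:

```python
import numpy as np

def excess_variance(x, dx):
    """Normalised excess variance sigma^2_NXS of one light-curve segment (Eqs. 1-3)."""
    x = np.asarray(x, dtype=float)
    dx = np.asarray(dx, dtype=float)
    n = x.size
    s2 = np.sum((x - x.mean()) ** 2) / (n - 1)   # sample variance S^2, Eq. (3)
    mse = np.mean(dx ** 2)                       # mean square error, Eq. (2)
    return (s2 - mse) / x.mean() ** 2            # Eq. (1)

# The sigma^2_NXS quoted in this work is the median of the values computed on
# consecutive, independent 10 ks segments of each observation (see Sect. 3).
```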
To study the correlations between \(\sigma_{\rm NXS}^{2}\) and \(M_{\rm BH}\), \(L_{2-10}\) and \(\lambda_{\rm Edd}\), we calculated \(\sigma_{\rm NXS}^{2}\) from the _XMM-Newton_ light curves. The values are listed in Tab. 2.
\(\sigma_{\rm NXS}^{2}\) is a good estimator of the intrinsic variance of a source, but it has some biases. It is related to the integral of the PSD between two frequencies and thus depends on the length of the monitoring time interval, on the red-noise character of the X-ray variability and also, due to the effect of cosmological time dilation, on the redshift (Lawrence & Papadakis, 1993; Green et al., 1993; Papadakis et al., 2008; Vagnetti et al., 2011; Vagnetti et al., 2016). Since our sample of 151 type 1 AGNs is composed mainly of local AGNs (\(z_{\rm med}=0.035\)), the impact of redshift is negligible. However, we need to avoid biases related to the different exposure times of our observations and the red-noise character of the light curves. Therefore, we computed the \(\sigma_{\rm NXS}^{2}\) from 10 ks-long independent light curve sections and, for the sources whose cleaned exposure time spanned a multiple of 10 ks, we took the median of the excess variances of all these independent sections in each energy band. For the sources with more than one observation, we used the median value of the \(\sigma_{\rm NXS}^{2}\), in both the cases of the light curves with 100 s and 1000 s time bins, in each energy band. We applied this procedure also to the 7 sources of our sample which are classified as 'changing-look' (CL) AGNs (i.e. Mrk 1018, Fairall 9, Mrk 590, NGC 3516, NGC 1566, 3C 390.3, NGC 7603, Jin et al. 2022; Temple et al. 2023), since the \(\sigma_{\rm NXS}^{2}\) computed for these sources are consistent within the errors among the different observations.
The X-ray spectrum of AGNs in different energy bands is strongly impacted by different components: the primary power-law component and the reflection component are dominant in the hard energy band (\(3-10\) keV, Haardt & Maraschi, 1991, 1993; Haardt & Matt, 1993) while soft-excess and warm-absorbers (WA) can impact the soft (\(0.2-1\) keV, Bianchi et al., 2009) and medium (\(1-3\) keV, Blustin et al., 2005; Tombesi et al., 2013) energy bands. Variations of these different components will lead to distinct spectral variability in different energy bands.
\begin{table}
\begin{tabular}{r l l c c c c c c c} \hline ID & _Swift_ID & Counterpart & z & log(\(\mathbf{N_{H}}\)) & log(\(M_{\rm BH}\)) & log(\(L_{\rm 2-10}\)) & log(\(L_{\rm bol}\)) & \(\lambda_{\rm Edd}\) & log(\(\sigma_{\rm NXS}^{2}\) 100s) & log(\(\sigma_{\rm NXS}^{2}\) 1000s) \\ & & & & [cm\({}^{-2}\)] & [M\({}_{\odot}\)] & [erg s\({}^{-1}\)] & [erg s\({}^{-1}\)] & & & \\ \hline
6 & SWIFTJ0006.2+2012 & Mrk 335 & 0.025 & 20.48 & 7.23 & 43.23 & 44.36 & 0.068 & -2.30 & -2.30 \\
16 & SWIFTJ0029.2+1319 & PG 0026+129 & 0.142 & 20.01 & 8.48 & 44.39 & 45.72 & 0.104 & -3.63 & -3.03 \\
34 & SWIFTJ0051.6+2928 & UGC 524 & 0.036 & 20.02 & 7.62 & 42.99 & 44.08 & 0.032 & -3.20 & -3.20 \\
36 & SWIFTJ0051.9+1724 & Mrk 1148 & 0.064 & 20.3 & 7.75 & 44.12 & 45.31 & 0.234 & -4.54 & -5.14 \\
39 & SWIFTJ0054.9+2524 & PG 0052+251 & 0.155 & 20.02 & 8.46 & 44.62 & 45.89 & 0.135 & -3.36 & -3.99 \\
43 & SWIFTJ0059.4+3150 & Mrk 352 & 0.014 & 20.01 & 7.55 & 42.72 & 44.09 & 0.016 & -3.00 & -3.00 \\
61 & SWIFTJ0113.8+1450 & Mrk 1152 & 0.052 & 20.01 & 8.32 & 43.47 & 45.16 & 0.037 & -3.13 & -4.43 \\
73 & SWIFTJ0123.9-5846 & Fairall 9 & 0.047 & 20.02 & 8.29 & 44.13 & 45.29 & 0.058 & -2.22 & -2.30 \\
77 & SWIFTJ0127.5+1910 & Mrk 359 & 0.017 & 20.61 & 6.04 & 42.66 & 43.83 & 0.339 & -2.69 & -2.69 \\
106 & SWIFTJ0206.2-0019 & Mrk 1018 & 0.042 & 20.01 & 7.81 & 43.61 & 45.08 & 0.094 & -3.61 & -4.03 \\... & & & & & & & & & & \\ \hline \end{tabular} Note: The values of the \(M_{\rm BH}\) are estimated from RM or broad lines (Koss et al., 2022c; Mejia-Restrepo et al., 2022; Ricci et al., 2022). \(\lambda_{\rm Edd}\) are computed using the bolometric luminosities calculated from the intrinsic 14–150 keV luminosities as shown in Ricci et al. 2017 with a bolometric correction of 8 (Koss et al., 2022a). \(\rm N_{H}\) and \(L_{2-10}\) from Ricci et al. 2017
\end{table}
Table 2: List of the _Swift_ identification number (ID), the _Swift_ name (_Swift_ID), the counterpart name, the redshift (z), the hydrogen column density (N\({}_{H}\) [cm\({}^{-2}\)]), the mass (\(\mathbf{M}_{\rm BH}\) in solar masses), the 2–10 keV luminosity (\(L_{\rm 2-10}\) [erg s\({}^{-1}\)]), the bolometric luminosity (\(L_{\rm bol}\) [erg s\({}^{-1}\)]), the Eddington ratio (\(\lambda_{\rm Edd}\) ), and the normalised excess variance for the light curves binned with 100 s (\(\sigma_{\rm NXS}^{2}\) 100s) and 1000 s (\(\sigma_{\rm NXS}^{2}\) 1000s) of time binning. This table is available in its entirety in a machine-readable form in the online journal, where also the errors are reported. A part is shown as guidance for the reader regarding its content.
Figure 1: Distributions of black hole masses (\(\mathbf{M}_{\rm BH}\) ; left panel), 2–10 keV luminosities (\(L_{\rm 2-10}\) ; middle panel), Eddington ratios (\(\lambda_{\rm Edd}\) ; right panel) of the sources analysed in this work, compared with the parent BASS sample of unobscured AGNs (empty bars).
We therefore calculated \(\sigma^{2}_{\rm NXS}\) from the \(0.2-1\) keV (soft), \(1-3\) keV (medium) and \(3-10\) keV (hard) light curves to get a fuller picture of the AGN X-ray variability.
### Correlations between the normalised excess variance and the physical parameters
To investigate the physical parameters driving X-ray variability in our sample of unobscured AGNs we looked for correlations between the _XMM-Newton_ broad-band (\(0.2-10\) keV) \(\sigma^{2}_{\rm NXS}\) and several key AGN parameters (i.e. \(M_{\rm BH}\), \(L_{2-10}\), \(\lambda_{\rm Edd}\)) by fitting a linear model to the data in the log-log space (see Fig. 2 and Fig. 3) using the following fitting relation:
\[\log(\sigma^{2}_{\rm NXS})={\rm A}+{\rm B}\log({\rm x}) \tag{5}\]
where x is the value of the physical parameter. Among the 151 sources of our sample, we found 46 objects with an intrinsic \(\sigma^{2}_{\rm NXS}\) lower than the respective error. In this case, we define the measurement as a "non-detection", and we consider it as an upper limit. To include the upper limits in our analysis, we used the survival analysis method (SA; e.g., Feigelson & Nelson, 1985; Shimizu et al., 2017) through the scikit-survival package (Pölsterl, 2020), which applies the principles of SA to astronomical data. SA is a statistical technique used to analyze time-to-event data and it is particularly well-suited for analyzing data that include upper/lower limits. Specifically, scikit-survival calculates the non-parametric Kaplan-Meier product-limit (KMPL) estimator for a sample distribution. The KMPL estimator is an estimate of the survival function, which is simply 1-CDF (cumulative distribution function). Using the KMPL, we calculated, for each bin of \(M_{\rm BH}\), \(L_{2-10}\) and \(\lambda_{\rm Edd}\), the median \(\sigma^{2}_{\rm NXS}\), and estimated their uncertainties. Since the KMPL estimator is non-parametric, it does not assume any specific distribution for the data. We divided \(M_{\rm BH}\) and \(\lambda_{\rm Edd}\) into 6 bins and \(L_{2-10}\) into 7 bins. These bins are not symmetrical, since we required each bin to contain at least 15 values. We fitted the median values obtained with the SA method using linmix, a hierarchical Bayesian model for fitting a straight line to data with errors in both the x and y directions (Kelly, 2007). In the analysis of the correlation between \(\sigma^{2}_{\rm NXS}\) and \(M_{\rm BH}\), sources with \(M_{\rm BH}>10^{9}M_{\odot}\) are excluded: for these sources we found mostly upper limits on \(\sigma^{2}_{\rm NXS}\), so the last bin would be populated only by upper limits and the SA method could not reliably compute its median.
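A minimal sketch of this step is shown below; the variable names and the binning interface are illustrative. It assumes `log_sig2`, `detected` (True for detections, False for upper limits) and `log_mbh` are arrays for the sources, and that upper limits, which are censored from above in \(\log\sigma^{2}_{\rm NXS}\), are sign-flipped so that they become right-censored, the convention expected by `kaplan_meier_estimator` in scikit-survival:

```python
import numpy as np
from sksurv.nonparametric import kaplan_meier_estimator

def binned_km_medians(log_sig2, detected, log_mbh, bin_edges):
    """Median log(sigma^2_NXS) per M_BH bin from the Kaplan-Meier product-limit estimator."""
    medians = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (log_mbh >= lo) & (log_mbh < hi)
        # Flip the sign so that upper limits become right-censored "survival times";
        # for these data -log(sigma^2_NXS) is positive (log sigma^2_NXS ~ -5 ... -2).
        t = -log_sig2[sel]
        e = detected[sel].astype(bool)
        time, surv = kaplan_meier_estimator(e, t)
        # The median is where the survival curve first drops to 0.5 (sign flipped back).
        below = surv <= 0.5
        medians.append(-time[below][0] if below.any() else np.nan)
    return np.array(medians)
```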
We report in Tab. 3 the intercepts and the slopes of the linear regression line together with Pearson's correlation coefficients and the correlation probabilities for all the relations analysed in this work and for light curves with 100 s and 1000 s bins. As shown in Tab. 3 the fitting parameters for the 100 s and 1000 s binned light curves are consistent within the errors in each analysed relation.
Fig. 2 shows the \(\sigma^{2}_{\rm NXS}\) vs \(M_{\rm BH}\) relation obtained from the \(0.2-10\) keV light curves binned with 100 s and 1000 s. We also report the SA results for each bin. Since the SA results are the median \(\sigma^{2}_{\rm NXS}\) calculated using the KMPL estimator, they are representative of the \(\sigma^{2}_{\rm NXS}\) value in each \(M_{\rm BH}\) bin. We report also the linear regressions obtained from the fitting over the SA results. As expected from previous studies (Nandra et al., 1997; Papadakis, 2004; O'Neill et al., 2005; Ponti et al., 2012), we found a strong anti-correlation between \(\sigma^{2}_{\rm NXS}\) and \(M_{\rm BH}\) (see Fig. 2). We computed the \(1\sigma\) scatter of the data around the
Figure 2: \(\sigma^{2}_{\rm NXS}\) vs \(M_{\rm BH}\) relation obtained from the \(0.2-10\) keV light curves binned with 100 s (left panels) and 1000 s (right panels). Yellow stars with error bars correspond to the SA results for each bin. The dashed lines are the linear regressions obtained from the fitting over the SA results, while the shaded region represents the combined \(1\sigma\) error on the slope and normalisation. Each colored data point with error bars represents one source of our sample. Colorbars represent the \(L_{2-10}\) (top panels) and \(\lambda_{\rm Edd}\) (bottom panels).
best-fit line using the following equation:
\[\sigma_{\rm scatter}=\sqrt{\sum_{i=1}^{N}[\log(\sigma_{\rm NXS,i}^{2})-f(M_{\rm BH,i})]^{2}/N} \tag{6}\]
where \(f(M_{\rm BH})\) is the logarithmic value of the \(\sigma_{\rm NXS}^{2}\) extrapolated using the best fitting relation (see Tab. 3). We found a scatter for this relation of \(\sim 0.85\) dex in both cases of the light curves binned with 100 s and 1000 s.
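Equation (6) is simply the root-mean-square residual of the data around the best-fit line of Eq. (5); a minimal illustrative sketch (variable names are our own):

```python
import numpy as np

def one_sigma_scatter(log_sig2, log_mbh, intercept, slope):
    """1-sigma scatter (dex) of log(sigma^2_NXS) around log(sigma^2) = A + B*log(M_BH), Eq. (6)."""
    model = intercept + slope * np.asarray(log_mbh)
    return np.sqrt(np.mean((np.asarray(log_sig2) - model) ** 2))
```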
We also found a strong anti-correlation between the \(\sigma_{\rm NXS}^{2}\) and the \(L_{2-10}\) (see upper panels of Fig. 3). Given the strong dependence between the \(\sigma_{\rm NXS}^{2}\) and \(M_{\rm BH}\), we decided to correct the \(\sigma_{\rm NXS}^{2}\) for \(M_{\rm BH}\) to check if, after the correction, the relation between \(\sigma_{\rm NXS}^{2}\) and \(L_{2-10}\) is still present. Previous works have shown that \(\sigma_{\rm NXS}^{2}\propto M_{\rm BH}^{-1}\) (e.g., Ponti et al., 2012). In this work we found a slope for the \(\sigma_{\rm NXS}^{2}\) vs \(M_{\rm BH}\) relation of \(-0.77\pm 0.23\) and \(-1.12\pm 0.18\) when considering light curves with 100 s and 1000 s bins, respectively.
Figure 3: Upper panels: \(\sigma_{\rm NXS}^{2}\) vs \(L_{2-10}\) relation. Lower panels: \(\sigma_{\rm NXS}^{2}\) vs \(\lambda_{\rm Edd}\) relation. The relations are obtained from the \(0.2-10\) keV light curves binned with 100 s (left panels) and 1000 s (right panels). Yellow stars with error bars correspond to the SA results for each bin. The dashed lines are the linear regressions obtained from the fitting over the SA results, while the shaded region represents the combined \(1\sigma\) error on the slope and normalisation. Each colored data point with error bars represents one source of our sample. Colorbars represent the \(M_{\rm BH}\).
Figure 4: \(\sigma_{\rm NXS}^{2}\times M_{\rm BH}\) vs \(L_{2-10}\) relation. The relations are obtained from the \(0.2-10\) keV light curves binned with 100 s (left panels) and 1000 s (right panels). Yellow stars with error bars correspond to the SA results for each bin. The dashed lines are the linear regressions obtained from the fitting over the SA results. Colorbars represent the \(M_{\rm BH}\).
Since \(-1\) is still consistent within the errors with our results, and for consistency with past works, we checked whether the \(\sigma^{2}_{\rm NXS}\) vs \(L_{2-10}\) correlation still exists when the primary dependence is removed by analysing the correlation between \(\sigma^{2}_{\rm NXS}\times M_{\rm BH}\) and \(L_{2-10}\). Removing the dependence of \(\sigma^{2}_{\rm NXS}\) on \(M_{\rm BH}\), the strong correlation with \(L_{2-10}\), that was present before, is not significant anymore (see Fig. 4), as reported by previous studies (Papadakis, 2004; O'Neill et al., 2005). In fact we found, in both cases of \(\sigma^{2}_{\rm NXS}\) obtained from the \(0.2-10\) keV light curves binned with 100 s and 1000 s, Pearson correlation coefficients of -0.19 and -0.20, corresponding to a \(1-p_{\rm value}\) of 0.30 (see Tab. 3). Thus, the dependence between the \(\sigma^{2}_{\rm NXS}\) and \(L_{2-10\,{\rm keV}}\) is actually related to the dependence between \(L_{2-10\,{\rm keV}}\) and \(M_{\rm BH}\). From our analysis an anti-correlation between \(\sigma^{2}_{\rm NXS}\) and \(\lambda_{\rm Edd}\) is present (see lower panels of Fig. 3), but it is not significant, according to the Pearson test (see Tab. 3).
From Fig. 3 it is clear that in both relations a gradient of \(M_{\rm BH}\) is present. Thus, to check if the \(\sigma^{2}_{\rm NXS}-M_{\rm BH}\) relation is somehow affected by \(L_{2-10}\) and/or \(\lambda_{\rm Edd}\), we first computed the median values of \(L_{2-10}\) and \(\lambda_{\rm Edd}\) of the sample, which are \(L_{2-10,{\rm med}}=2.79\times 10^{43}\) erg s\({}^{-1}\) and \(\lambda_{\rm Edd,med}=0.06\), respectively. We then used these values as thresholds to divide the sample into two sub-samples depending on their \(L_{2-10}\) and \(\lambda_{\rm Edd}\) (see Fig. 5). The best-fitting values of the correlations we found are also reported in Tab. 3. We found that the correlation between \(\sigma^{2}_{\rm NXS}\) and \(M_{\rm BH}\) has slightly different normalizations in the two sub-samples defined by \(L_{2-10}\) or \(\lambda_{\rm Edd}\) but the same slope within the errors, which is also in agreement with the slope of the \(\sigma^{2}_{\rm NXS}\) - \(M_{\rm BH}\) relation found for the total sample. In particular, in the sub-samples defined by \(L_{2-10}\) we found that the slopes of the correlations are similar to the one found for the total sample, while the normalization for the sources with \(L_{2-10}>L_{2-10,{\rm med}}\) is lower. This is not really surprising since sources with high luminosity (high \(M_{\rm BH}\) ) show lower variability. Also, the majority of the sources with \(L_{2-10}>L_{2-10,{\rm med}}\) show an upper limit on the \(\sigma^{2}_{\rm NXS}\).
### Reverberation mapping sub-sample
We checked the relation between the \(\sigma^{2}_{\rm NXS}\) and \(M_{\rm BH}\) in the sub-sample of sources in which \(M_{\rm BH}\) is obtained via RM (35 sources). Since this sub-sample is smaller, we did not use the SA method. Instead we used the 'censored fitting' (CF) method (Guainazzi et al., 2006; Bianchi et al., 2009) to account for upper limits. This was done by performing a large number of least square fits, using the linmix code, on a set of Monte-Carlo simulated data derived from the observed data points. Each detection was substituted by a value randomly drawn from a Gaussian distribution, whose mean is the best-fit measurement and whose standard deviation is its statistical uncertainty. Each upper limit \(U\) was substituted by a value randomly drawn from a uniform distribution in the interval \([A,U]\), where \(A\) was arbitrarily set to \(A\ll U\). We chose \(A=10^{-6}\). We found an anticorrelation in both cases of \(\sigma^{2}_{\rm NXS}\) obtained from the \(0.2-10\) keV light curves binned with 100 s and 1000 s (see Fig. 6). Using Equation 6 we found \(\sigma_{\rm scatter,100}=0.65\) and \(\sigma_{\rm scatter,1000}=0.69\), smaller than the scatters of the \(\sigma^{2}_{\rm NXS}\) vs \(M_{\rm BH}\) relations obtained from the total sample. This is because on average the sources with \(M_{\rm BH}\) estimated via RM are brighter and they show a higher count rate on the same timescales. Using the \(\sigma^{2}_{\rm NXS}\) vs \(M_{\rm BH}\) relations obtained from the RM sample, it is possible to measure \(M_{\rm BH}\) for a total of 87 AGNs (out of 151 AGNs in our sample) and provide an upper/lower limit for the remaining AGN. Thus, even if the X-ray variability is not the
most accurate tool to measure \(M_{\rm BH}\), the relation obtained for the RM sub-sample gives a good \(M_{\rm BH}\) estimation, with a scatter <1 dex.
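The CF procedure can be summarised by the following sketch, which resamples the data and refits many times. For simplicity it uses an ordinary least-squares fit in place of linmix (which additionally accounts for errors on \(M_{\rm BH}\)), so the function and variable names are illustrative only:

```python
import numpy as np

def censored_fit(log_mbh, log_sig2, err, is_limit, n_mc=10000, a_floor=1e-6, seed=0):
    """'Censored fitting': Monte-Carlo resampling of detections (Gaussian) and of
    upper limits U (uniform in [A, U] in linear sigma^2), followed by repeated linear fits."""
    rng = np.random.default_rng(seed)
    intercepts, slopes = [], []
    for _ in range(n_mc):
        gauss = rng.normal(log_sig2, err)                          # detections
        unif = np.log10(rng.uniform(a_floor, 10.0 ** log_sig2))    # upper limits
        y = np.where(is_limit, unif, gauss)
        slope, intercept = np.polyfit(log_mbh, y, 1)
        intercepts.append(intercept)
        slopes.append(slope)
    return (np.mean(intercepts), np.std(intercepts)), (np.mean(slopes), np.std(slopes))
```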
### The normalised excess variance in the soft, medium and hard energy bands.
It is interesting to compare \(\sigma_{\rm NXS}^{2}\) in various energy bands to check whether one or more components of the AGN X-ray spectrum contribute the most to the variability. In order to verify this we calculated \(\sigma_{\rm NXS}^{2}\) in the soft (\(0.2-1\) keV), medium (\(1-3\) keV) and hard (\(3-10\) keV) bands. We then looked for a correlation between the \(\sigma_{\rm NXS,hard}^{2}\) and both \(\sigma_{\rm NXS,soft}^{2}\) and \(\sigma_{\rm NXS,med}^{2}\). In the left panel of Fig. 7 we show \(\sigma_{\rm NXS}^{2}\) in the soft energy band versus the same parameter calculated in the hard band, while in the right panel of Fig. 7 we illustrate the excess variance in the medium energy band versus that in the hard band. The best-fitting relations, reported in Tab. 3, are obtained using the CF method (see Sect. 3.2).
We also fitted the data with the bisector method, which provides a more balanced and symmetric estimate of the true regression line between two variables that could in principle be independent.
Figure 6: \(\sigma_{\rm NXS}^{2}\) vs \(M_{\rm BH}\) for the RM sub-sample. The dashed black lines are the linear regressions obtained from the fitting process. Shaded region represents the combined \(1\sigma\) error on the slope and normalisation. The \(\sigma_{\rm NXS}^{2}\) is obtained from the \(0.2-10\) keV light curves binned with \(100\) s (left panel) and \(1000\) s (right panels). Lower panels show the difference between the real \(M_{\rm BH}\) and \(M_{\rm BH}\) extrapolated from the relation (\(\Delta M\)). For details about the coefficients, see Tab. 3.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Relation & \(\Delta t\)(s) & Intercept (A) & Slope (B) & Pearson & 1-P\({}_{\rm value}\) \\ \hline \hline \(\sigma_{\rm NXS}^{2}\) vs \(M_{\rm BH}\) & 100 & \(2.99\pm 1.79\) & \(-0.77\pm 0.23\) & -0.94 & 0.99 \\ \(\sigma_{\rm NXS}^{2}\) vs \(M_{\rm BH}\) & 1000 & \(5.54\pm 2.11\) & \(-1.12\pm 0.18\) & -0.96 & 0.99 \\ \(\sigma_{\rm NXS}^{2}\) vs \(L_{2-10}\) & 100 & \(26.29\pm 25.29\) & \(-0.67\pm 0.08\) & -0.84 & 0.95 \\ \(\sigma_{\rm NXS}^{2}\) vs \(L_{2-10}\) & 1000 & \(29.84\pm 36.41\) & \(-0.76\pm 0.06\) & -0.85 & 0.96 \\ \(\sigma_{\rm NXS}^{2}\) vs \(\lambda_{\rm Edd}\) & 100 & \(-3.55\pm 2.27\) & \(-0.36\pm 0.61\) & -0.10 & 0.25 \\ \(\sigma_{\rm NXS}^{2}\) vs \(\lambda_{\rm Edd}\) & 1000 & \(-3.11\pm 3.19\) & \(-0.24\pm 0.59\) & -0.12 & 0.25 \\ \hline \(\sigma_{\rm NXS}^{2}\) vs \(M_{\rm BH,\,RM}\) & 100 & \(2.94\pm 0.29\) & \(-0.75\pm 0.20\) & -0.66 & 0.75 \\ \(\sigma_{\rm NXS}^{2}\) vs \(M_{\rm BH,\,RM}\) & 1000 & \(2.91\pm 0.34\) & \(-0.76\pm 0.41\) & -0.67 & 0.79 \\ \hline \(\sigma_{\rm NXS}^{2}\) vs \(M_{\rm BH}\) (\(L_{2-10}<L_{2-10,\rm med}\)) & 100 & \(0.36\pm 0.51\) & \(-0.43\pm 0.06\) & -0.89 & 0.97 \\ \(\sigma_{\rm NXS}^{2}\) vs \(M_{\rm BH}\) (\(L_{2-10}<L_{2-10,\rm med}\)) & 1000 & \(0.16\pm 0.38\) & \(-0.37\pm 0.04\) & -0.99 & 0.99 \\ \(\sigma_{\rm NXS}^{2}\) vs \(M_{\rm BH}\) (\(L_{2-10}>L_{2-10,\rm med}\)) & 100 & \(-2.45\pm 0.48\) & \(-0.14\pm 0.16\) & -0.30 & 0.38 \\ \(\sigma_{\rm NXS}^{2}\) vs \(M_{\rm BH}\) (\(L_{2-10}>L_{2-10,\rm med}\)) & 1000 & \(-1.71\pm 0.41\) & \(-0.16\pm 0.15\) & -0.35 & 0.41 \\ \hline \(\sigma_{\rm NXS}^{2}\) vs \(M_{\rm BH}\) (\(\lambda_{\rm Edd}<\lambda_{\rm Edd,med}\)) & 100 & \(0.69\pm 0.76\) & \(-0.48\pm 0.09\) & -0.89 & 0.97 \\ \(\sigma_{\rm NXS}^{2}\) vs \(M_{\rm BH}\) (\(\lambda_{\rm Edd}<\lambda_{\rm Edd,med}\)) & 1000 & \(1.46\pm 1.32\) & \(-0.55\pm 0.16\) & -0.89 & 0.97 \\ \(\sigma_{\rm NXS}^{2}\) vs \(M_{\rm BH}\) (\(\lambda_{\rm Edd}>\lambda_{\rm Edd,med}\)) & 100 & \(0.16\pm 0.83\) & \(-0.42\pm 0.13\) & -0.89 & 0.97 \\ \(\sigma_{\rm NXS}^{2}\) vs \(M_{\rm BH}\) (\(\lambda_{\rm Edd}>\lambda_{\rm Edd,med}\)) & 1000 & \(2.35\pm 0.88\) & \(-0.71\pm 0.11\) & -0.89 & 0.97 \\ \hline \(\sigma_{\rm NXS}^{2}\times M_{\rm BH}\) vs \(L_{2-10}\) & 100 & \(14.35\pm 9.75\) & \(-0.26\pm 0.27\) & -0.19 & 0.30 \\ \(\sigma_{\rm NXS}^{2}\times M_{\rm BH}\) vs \(L_{2-10}\) & 1000 & \(12.05\pm 7.78\) & \(-0.21\pm 0.22\) & -0.20 & 0.30 \\ \hline \(\sigma_{\rm NXS,hard}^{2}\) vs \(\sigma_{\rm NXS,soft}^{2}\) & 100 & \(0.62\pm 0.26\) & \(0.87\pm 0.10\) & 0.71 & 0.99 \\ \(\sigma_{\rm NXS,hard}^{2}\) vs \(\sigma_{\rm NXS,med}^{2}\) & 100 & \(0.47\pm 0.22\) & \(0.93\pm 0.08\) & 0.79 & 0.99 \\ \hline \hline \end{tabular}
\end{table}
Table 3: List of the best-fit relations together with their p-values. The fits are performed in the log-log space using Eq. 5.
The values of the slope and intercept we found with this method are consistent within the errors with those found using the CF method. The values of \(\sigma^{2}_{\rm NXS}\) in the soft and medium energy bands appear to be well correlated with \(\sigma^{2}_{\rm NXS}\) in the hard energy band, although with a slope flatter than the one-to-one relation (green dashed line in Fig. 7). This may imply that for most of our sources, on timescales shorter than \(10\) ks, the spectral components which dominate in the hard band are increasingly more variable than the ones dominating the soft and medium energy bands, for higher values of the variance. In the soft energy band the dominant component is usually the soft-excess, which can be variable. Furthermore, in the soft-medium energy bands, the presence of absorbing material, either neutral or ionised, can be variable or can absorb the continuum emission responding to the continuum variations. If these variations happened on timescales shorter than \(\sim 10\) ks we would expect to measure a larger variability amplitude in the soft/medium energy bands compared to the hard band. We observe the opposite, i.e. weaker variations of the soft-excess and/or warm absorbers than those of the primary continuum and/or reflection component on this timescale, in agreement with previous studies (Ponti et al., 2012; Simm et al., 2016).
For completeness we checked the relations between the \(\sigma^{2}_{\rm NXS}\) in the soft, medium and hard energy bands and the physical properties of the AGN (\(M_{\rm BH}\), \(L_{2-10}\) and \(\lambda_{\rm Edd}\) ) to see whether these relations support the results found in this work. The best fitting results are reported in Appendix B. We found that the \(\sigma^{2}_{\rm NXS}\) vs \(M_{\rm BH}\) relation in the soft energy band is slightly less significant than in the harder energy bands (see Appendix B), but in general the best-fitting relations in each energy band are consistent with those from the broad _XMM-Newton_ energy band (\(0.2-10\) keV).
## 4 Conclusions
We analysed the variability properties of \(\sim 500\)_XMM-Newton_ observations of a sample of 151 nearby (\(z_{\rm med}=0.035\)) unobscured (\(N_{H}<10^{22}\mathrm{cm}^{-2}\)) AGNs from the BASS survey, studying the correlations of the excess variance with the physical properties of the sources and also checking for correlations between the excess variance computed in different energy bands. The timescale used to compute the \(\sigma^{2}_{\rm NXS}\) is \(10\) ks, to avoid biases related to the differences in the exposure times of the sources of our sample and to take into account the red-noise character of the light curves. We analysed the relations of \(\sigma^{2}_{\rm NXS}\) with \(M_{\rm BH}\), \(L_{2-10}\) and \(\lambda_{\rm Edd}\). The correlation between \(\sigma^{2}_{\rm NXS}\) and \(M_{\rm BH}\) is a well-known property of AGNs (Lu & Yu, 2001; Papadakis, 2004; O'Neill et al., 2005; Nikolajuk et al., 2006; Nikolajuk et al., 2007; Miniutti et al., 2009; Zhou et al., 2010; Ponti et al., 2012). In agreement with this, in our sample we found a very strong and highly significant correlation between these two quantities.
We do not find a significant correlation between \(\sigma^{2}_{\rm NXS}\) and \(\lambda_{\rm Edd}\), consistently with previous results (O'Neill et al., 2005; Gierlinski et al., 2008; Zhou et al., 2010; Ponti et al., 2012; Lanzuisi et al., 2014). However, according to McHardy et al. (2006) the break timescale decreases as \(M_{\rm BH}\) decreases (\(\lambda_{\rm Edd}\) increases). Thus, if we assume a universal PSD with a single break frequency depending on the \(M_{\rm BH}\) and an equal long-timescale normalization, the strength of the relations \(\sigma^{2}_{\rm NXS}\) vs \(M_{\rm BH}\) and \(\sigma^{2}_{\rm NXS}\) vs \(\lambda_{\rm Edd}\) would be the same, with higher short-timescale variability for low \(M_{\rm BH}\) (high \(\lambda_{\rm Edd}\) ). Our result could suggest that there might be no correlation between the break timescale and \(\lambda_{\rm Edd}\), as proposed by Gonzalez-Martin & Vaughan (2012) analysing a larger sample of shorter light curves. Alternatively, following the results of McHardy et al. (2006) and Paolillo et al. (2017), \(\lambda_{\rm Edd}\) could be dependent on the break time scale but, since on short timescales \(\sigma^{2}_{\rm NXS}\) seems mostly independent of this parameter, the normalization of the power spectrum may be anti-correlated with \(\lambda_{\rm Edd}\).
We found a tight anti-correlation between \(\sigma^{2}_{\rm NXS}\) and \(L_{2-10}\). To remove the \(M_{\rm BH}\) dependence from this correlation, we explored the relation between the \(\sigma^{2}_{\rm NXS}\times M_{\rm BH}\) versus \(L_{2-10}\), finding that, in this case, the \(\sigma^{2}_{\rm NXS}\) vs \(L_{2-10}\) correlation disappears, confirming that the correlation with \(L_{2-10}\) is secondary, while the primary correlation is in fact with the mass, in agreement with what has been found by previous works (e.g. Papadakis, 2004; O'Neill et al., 2005; Lanzuisi et al., 2014).
We explored the \(\sigma^{2}_{\rm NXS}\) vs \(M_{\rm BH}\) relation in the sub-sample of sources with \(M_{\rm BH}\) estimated via RM, finding that the correlation between these quantities in this sub-sample has an intrinsic scatter of \(\sim 0.65-0.69\,\mathrm{dex}\). With this relation we were able to measure \(M_{\rm BH}\) for 87 AGNs and estimate upper/lower limits for the remaining 64 AGNs of our sample. Thus, one could in principle use X-ray variability to measure \(M_{\rm BH}\)(e.g. Nikolajuk et al., 2004; Ponti et al., 2012; Akylas et al., 2022). With the advent of future planned or proposed missions (e.g. _Athena_, _AXIS_, etc) that will provide higher count rates, the accuracy of this relation for mass measurement will improve significantly.
Figure 7: Left panel: soft (\(0.2-1\,\mathrm{k}\mathrm{e}\mathrm{V}\)) vs hard (\(3-10\,\mathrm{k}\mathrm{e}\mathrm{V}\)) \(\sigma^{2}_{\rm NXS}\). Right panel: medium (\(1-3\,\mathrm{k}\mathrm{e}\mathrm{V}\)) vs hard (\(3-10\,\mathrm{k}\mathrm{e}\mathrm{V}\)) \(\sigma^{2}_{\rm NXS}\). The best fit curves are plotted with solid black lines while the shaded region represents the combined \(3\sigma\) error on the slope and intercept. The green dashed lines represent the one-to-one relation. Black dashed lines represent the regression lines found using the bisector method.
Dividing the sample into two bins of \(L_{2-10}\), the normalization of the anti-correlation between \(\sigma^{2}_{\rm NXS}\) and \(M_{\rm BH}\) is lower for the sources with higher luminosity. This is possibly related to the fact that for sources with high luminosity (high mass), we detected lower variability and mostly only an upper limit on \(\sigma^{2}_{\rm NXS}\) was obtained. When dividing the sample into two bins of \(\lambda_{\rm Edd}\) the slope and the normalisation are slightly different in the two sub-samples but still consistent within the errors.
X-ray spectra could be dominated by different components depending on the energy band one is analysing. Therefore we explored the relation of the \(\sigma^{2}_{\rm NXS}\) in the hard X-ray band (\(3-10\) keV) with the \(\sigma^{2}_{\rm NXS}\) in the soft X-ray band (\(0.2-1\) keV) and in the medium X-ray band (\(1-3\) keV), finding that \(\sigma^{2}_{\rm NXS}\) calculated in various energy bands are highly correlated, in agreement with previous studies (Ponti et al., 2012; Simm et al., 2016). In particular we found that, in most sources, the primary continuum and/or the reflection component are increasingly more variable than the spectral components dominating softer energy bands (\(0.2-1\) keV and \(1-3\) keV) on timescales shorter than \(10\) ks. In fact, if WA components were varying, they would show more variability in the medium energy band, while the variance in that band is lower than in the hard energy band. Thus WA variability cannot be generally the cause of fast (shorter than \(10\) ks) variations. Moreover, we found that the soft energy band is less variable than the hard band. This implies that the soft-excess, or at least part of it, is a less variable component (on timescales less than \(10\) ks) which dilutes the \(\sigma^{2}_{\rm NXS}\) by adding to the constant flux in the denominator and not to the variable flux in the numerator, as it was found for the Seyfert 1.5 galaxy NGC 3227 (Arevalo & Markowitz, 2014). Finally, the hard continuum might be intrinsically more variable than the continuum in softer bands because the break timescale of the PDS moves to shorter timescales for higher energy X-ray photons (McHardy et al., 2004; Markowitz et al., 2007; McHardy et al., 2007; Arevalo et al., 2008).
We examined the relation between \(\sigma^{2}_{\rm NXS}\) (calculated in the soft, medium and hard X-ray bands) and several important AGN physical parameters, such as \(M_{\rm BH}\), \(L_{2-10}\) and \(\lambda_{\rm Edd}\). Our analysis revealed that the best-fitting relations in each energy band align with those from the broad _XMM-Newton_ energy band (\(0.2-10\) keV). Notably, in the soft energy band, the \(\sigma^{2}_{\rm NXS}\) vs \(M_{\rm BH}\) anti-correlation appears to be slightly less significant. This lends support to another key finding of this study, i.e. that, on timescales shorter than \(10\) ks, the primary continuum and/or the reflection component exhibit stronger variability compared to the spectral components dominating softer energy bands.
Compared with previous results (e.g., Ponti et al., 2012) we found a less steep correlation between \(\sigma^{2}_{\rm NXS}\) and \(M_{\rm BH}\). The difference could be attributed to the larger number of black hole masses from reverberation mapping, to the higher quality optical measurements and fitting for the other mass measurement techniques, and also to the larger number of observations used (a factor \(\sim 2\) larger than Ponti et al., 2012), which helped to refine the computation of \(\sigma^{2}_{\rm NXS}\). Our results are consistent with the common picture in which, as a general rule, nearby AGNs display similar patterns of variability once they are rescaled for \(M_{\rm BH}\) and \(\lambda_{\rm Edd}\).
## Acknowledgments
This work was funded by ANID programs FONDECYT Postdoctorado - 3190213 (AT), 3220516 (MT), 3210157 (ARL); FONDECYT Regular - 1230345 (CR) and 1200495 (FEB); Millennium Science Initiative Program - ICN12_009 (FEB); CATA-BASAL - ACE210002 (FEB) and FB210003 (CR, FEB). TL acknowledges support from the NANOGrav NSF Physics Frontiers Center No. 2020265. BT acknowledges support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement number 950533) and from the Israel Science Foundation (grant number 1849/19). TK is supported by JSPS KAKENHI grant No. 23K13153 and acknowledges support by the Special Postdoctoral Researchers Program at RIKEN. KO acknowledges support from the National Research Foundation of Korea (NRF-2020R1C1C1005462) and the Korea Astronomy and Space Science Institute under the R&D program (Project No. 2023-1-868-03) supervised by the Ministry of Science and ICT. This work is based on observations obtained with the ESA science mission _XMM-Newton_, with instruments and contributions directly funded by ESA Member States and the USA (NASA). The authors thank the anonymous referee for constructive comments that have helped in improving the quality of the paper.
## Data Availability
All the data utilized in this paper are publicly available in the _XMM-Newton_ data archive at [https://nxsa.esac.esa.int/nxsa-web/#search](https://nxsa.esac.esa.int/nxsa-web/#search). More details of the observations are listed in Tab. 1.
|
2309.15384 | Standard Monomials for Positroid Varieties | We give an explicit characterization of the standard monomials for positroid
varieties with respect to the Hodge degeneration and give a Gr\"obner basis.
Furthermore, we show that promotion and evacuation biject standard monomials of
a positroid variety with those of its cyclic shifts and $w_0$-reflection,
respectively. The connection to promotion allows us to identify standard
monomials of a positroid variety with Lam's cyclic Demazure crystal. Using a
recurrence on the Hilbert series, we give an inductive formula for the
character of cyclic Demazure modules. | Ayah Almousa, Shiliang Gao, Daoji Huang | 2023-09-27T03:31:14Z | http://arxiv.org/abs/2309.15384v2 | # Standard monomials and Grobner bases for positroid varieties
###### Abstract.
We give an explicit characterization of the standard monomials of positroid varieties with respect to the Hodge degeneration, and give a Grobner basis for positroid varieties. As an application, we show that promotion on rectangular-shaped semistandard tableaux gives a bijection between standard monomials of a positroid variety and its cyclic shifts.
###### Contents
* 1 Introduction
* 2 Positroid Varieties
* 2.1 Bounded affine permutations
* 2.2 Cyclic rank matrices
* 2.3 \(k\)-Bruhat orders and Grassmann intervals
* 2.4 Basic positroid varieties
* 3 Hodge algebra structure of Plucker coordinates
* 3.1 The Plucker poset
* 3.2 \(\operatorname{Gr}(k,n)\) and \(\operatorname{Gr}(n-k,n)\)
* 4 Initial ideals and standard monomials for positroid varieties
* 4.1 The initial ideal of a basic positroid variety
* 4.2 Standard monomials of arbitrary positroid varieties
* 5 Grobner bases for positroid varieties
* 6 Promotion on standard monomials
* Acknowledgments
## 1. Introduction
Positroid varieties are subvarieties of the Grassmannian \(\operatorname{Gr}(k,n)\) introduced by Knutson-Lam-Speyer [11, 12], motivated by the study of total positivity [13, 14]. These varieties also arise in Poisson geometry [1] and in the study of scattering amplitudes [1]. Recent work of Galashin-Lam [15, 16] studied mixed Hodge structure on the cohomology of open positroid varieties and its relation
1. _classical quadratic Plucker relations, with leading terms of the form_ \([\mathbf{a}]\cdot[\mathbf{b}]\) _where_ \([\mathbf{a}]\) _and_ \([\mathbf{b}]\) _are incomparable elements of the Plucker poset and neither_ \(\mathbf{a}\) _nor_ \(\mathbf{b}\) _contains_ \(r+1\) _elements in the interval_ \(S\)_;_
2. _fix a monomial with a generalized antidiagonal as in Theorem_ 1_, and sum all of the monomials where the elements of the generalized antidiagonal have been permuted (weighted with the appropriate sign)._
**Example 1.1**.: We illustrate Theorems A and B with a small example. Let \(n=5\), \(k=3\), \(S=[2,4]\), \(r=2\), and consider the interval positroid variety \(X_{[2,4]\leq 2}\). The generators of the Grobner basis of \(\mathcal{J}_{[2,4]\leq 2}\) that are not Plucker relations for \(\operatorname{Gr}(3,5)\) consist of the following:
\[\text{(display of the additional Gr\"obner basis generators, written as signed sums of tableau-pair monomials)}\]
This allows us to identify standard monomials for a positroid variety with Lam's cyclic Demazure crystals [19], which is a priori not apparent.
### Acknowledgments
We thank Allen Knutson, Thomas Lam and David Speyer for inspiring conversations and helpful comments. We also thank Anders Buch, Sean Griffin, Matt Larson, Leonardo Mihalcea, Vic Reiner, Brendon Rhoades, Melissa Sherman-Bennett, Jessica Striker, Keller VandeBogert, Anna Weigandt and Alex Yong for helpful conversations. DH would like to thank ICERM for the Combinatorial Algebraic Geometry Reunion Event, during which many fruitful conversations happened. SG was supported by NSF Graduate Research Fellowship under grant No. DGE-1746047. DH was supported by NSF-DMS2202900.
## 2. Positroid Varieties
A comprehensive introduction to positroid varieties can be found in [11]. We highly recommend that readers unfamiliar with positroid varieties refer to it as a supplement.
Let \(\mathbb{k}\) be an algebraically closed field of characteristic \(0\). Let \(\operatorname{Gr}(k,n)\) be the Grassmannian consisting of \(k\)-dimensional subspaces of \(\mathbb{k}^{n}\). We will fix \(k\) and \(n\) throughout the paper. The positroid varieties are subvarieties of \(\operatorname{Gr}(k,n)\) that can be bijectively associated with many different combinatorial objects. Here we will focus on the following:
1. bounded affine permutations,
2. cyclic rank matrices, and
3. Grassmann intervals \([v,u]\).
### Bounded affine permutations
Let \(\widetilde{S}_{n}\) be the group of bijections \(f:\mathbb{Z}\to\mathbb{Z}\) such that
\[f(i+n)=f(i)+n,\]
where the group operation is composition. We refer to these bijections as **affine permutations**. Indeed, let \(\widetilde{f}(i)\in[n]\) to be the remainder of \(f(i)\) divided by \(n\); then \(\widetilde{f}(1)\cdots\widetilde{f}(n)\) is a permutation of \(\{1,\cdots,n\}\). We will sometimes write an affine permutation as \(f=[\cdots f(1)\ f(2)\cdots f(n)\cdots]\). Define
\[\widetilde{S}_{n}^{k}=\{f\in\widetilde{S}_{n}:\sum_{i=1}^{n}(f(i)-i)=kn\}.\]
In particular, \(\widetilde{S}_{n}^{0}\) is the Coxeter group \(\widetilde{A_{n-1}}\). The Bruhat order on \(\widetilde{S}_{n}^{0}\) induces a partial order on \(\widetilde{S}_{n}^{k}\) for all \(k\in\mathbb{Z}\).
Define the set of **bounded affine permutations** to be
\[\operatorname{Bound}(k,n):=\{f\in\widetilde{S}_{n}^{k}:i\leq f(i)\leq i+n\text { for all }i\in\mathbb{Z}\}. \tag{1}\]
The partial order on \(\widetilde{S}_{n}^{k}\) then restricts to a partial order on \(\operatorname{Bound}(k,n)\).
Given \(V\in\operatorname{Gr}(k,n)\), let \(\widetilde{V}\) denote a choice of \(k\times n\) matrix such that \(\operatorname{rowspan}(\widetilde{V})=V\). We write
\[\widetilde{V}=\begin{bmatrix}|&|&&|\\ v_{1}&v_{2}&\cdots&v_{n}\\ |&|&&|\end{bmatrix}.\]
We extend the sequence \(v_{1}\cdots v_{n}\) by setting
\[v_{i}=v_{i+n}\text{ for all }i\in[n],\]
and denote by \(\widetilde{V}_{[i,j]}\) the matrix with column vectors \(v_{i},\cdots,v_{j}\). Consider the affine permutation \(f_{\widetilde{V}}:\mathbb{Z}\to\mathbb{Z}\) given by
\[f_{\widetilde{V}}(i)=\min\{j\geq i:v_{i}\in\operatorname{span}(v_{i+1},\cdots,v_{j})\}. \tag{2}\]
It is known that \(f_{V}\in\operatorname{Bound}(k,n)\) and that all bounded affine permutations can arise this way. Moreover, \(f_{\widetilde{V}}\) only depends on the \(V:=\operatorname{rowspan}(\widetilde{V})\), so we may define \(f_{V}\) for \(V\in\operatorname{Gr}(k,n)\).
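As a concrete illustration of (2), \(f_{V}\) can be computed by successive rank tests on cyclically consecutive columns. The following minimal sketch is illustrative only (the function name and the numpy-based rank test are our own choices); for a generic full-rank \(k\times n\) matrix it returns \(f(i)=i+k\) for every \(i\in[n]\), consistent with \(\sum_{i}(f(i)-i)=kn\).

```python
import numpy as np

def bounded_affine_permutation(V):
    """Return [f_V(1), ..., f_V(n)] for a k x n matrix V, following (2)."""
    k, n = V.shape
    f = []
    for i in range(n):                                   # column i (0-based) is v_{i+1}
        for j in range(i, i + n + 1):
            span_cols = [t % n for t in range(i + 1, j + 1)]
            if not span_cols:                            # empty span contains only 0
                in_span = not np.any(V[:, i])
            else:
                A = V[:, span_cols]
                B = np.column_stack([A, V[:, i]])
                in_span = np.linalg.matrix_rank(B) == np.linalg.matrix_rank(A)
            if in_span:
                f.append(j + 1)                          # back to 1-based values
                break
    return f
```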
**Definition 2.1**.: The **open positroid variety** associated to a bounded affine permutation \(f\) is
\[\Pi_{f}^{\circ}\coloneqq\{V\in\operatorname{Gr}(k,n):f_{V}=f\},\]
and the **Positroid variety** is its Zariski closure \(\Pi_{f}=\overline{\Pi_{f}^{\circ}}\).
In fact, we have
\[\Pi_{f}=\bigsqcup_{f^{\prime}\geq f}\Pi_{f^{\prime}}^{\circ}.\]
### Cyclic rank matrices
An equivalent way to define positroid varieties is through **cyclic rank matrices** as defined in [13, Corollary 3.12]. For any \(f\in\operatorname{Bound}(k,n)\), write \(f\) as the \(\infty\times\infty\) matrix with \(1\)'s at positions \((i,f(i))\) and \(0\)'s everywhere else. Let \(r(f)\) be the infinite periodic matrix defined for all \(i\in\mathbb{Z}\), \(i\leq j\leq i+n\),
\[r(f)_{i,j}=|[i,j]|-\#\{\text{number of $1$s in $f$'s matrix weakly southwest of $(i,j)$}\}.\]
Let \(V\in\operatorname{Gr}(k,n)\) so that \(f_{V}=f\), then \(r(f)_{i,j}=\operatorname{rank}(\operatorname{span}(v_{i},\cdots,v_{j}))\). Furthermore,
\[\Pi_{f}^{\circ}=\{U\in\operatorname{Gr}(k,n):\operatorname{rank}(\widetilde{ U}_{[i,j]})=r(f)_{i,j}\text{ for all }i\in\mathbb{Z},j\in[i,i+n]\}\]
and \(\Pi_{f}\) is obtained by replacing "\(=\)" with "\(\leq\)".
Define also the **essential set of \(f\)** as follows:
\[ess(f):=\{(i,j):i\in\mathbb{Z},j\in[i,i+n],f(i-1)>j,f^{-1}(j+1)<i,f(i)\leq j,f^ {-1}(j)\geq i\}.\]
A diagrammatic description and illustration for \(ess(f)\) is given in [14, Section 2.1]. We omit this description, but strongly encourage the reader to refer to [14, Section 2.1]. We have
\[\Pi_{f}^{\circ}=\{U\in\operatorname{Gr}(k,n):\operatorname{rank}(\widetilde{ U}_{[i,j]})=r(f)_{i,j}\text{ for all }(i,j)\in ess(f)\} \tag{3}\]
and similarly for \(\Pi_{f}\). The following statement, which follows from [14, Theorem 5.1] is crucial for our main results.
**Proposition 2.2**.: _Every positroid variety is defined by imposing finitely many cyclic rank conditions, and these intersections are scheme-theoretic._
**Example 2.3**.: Set \(k=3\), \(n=6\). Let \(f\) be the bounded affine permutation
\[\cdots[5,2,4,7,9,12]\cdots\]
where \(f(1)=5\). Then
\[\Pi_{f}=\{U\in\operatorname{Gr}(3,6):\operatorname{rank}(\widetilde{U}_{[2]} )\leq 0,\operatorname{rank}(\widetilde{U}_{[2,4]})\leq 1,\operatorname{rank}( \widetilde{U}_{[1,5]})\leq 2\}.\]
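Since \(f(i^{\prime})\geq i^{\prime}\), only \(i^{\prime}\in[i,j]\) can contribute a \(1\) weakly southwest of \((i,j)\), so \(r(f)_{i,j}\) reduces to a simple count. A minimal illustrative sketch (our own code, not from the references) reproducing the three rank bounds of Example 2.3:

```python
def cyclic_rank(f, i, j):
    """r(f)_{i,j} = |[i,j]| - #{i' in [i,j] : f(i') <= j}, with f(a+n) = f(a)+n."""
    n = len(f)
    def f_ext(a):                       # periodic extension of the window [f(1),...,f(n)]
        q, r = divmod(a - 1, n)
        return f[r] + q * n
    return (j - i + 1) - sum(1 for a in range(i, j + 1) if f_ext(a) <= j)

f = [5, 2, 4, 7, 9, 12]                 # Example 2.3, with f(1) = 5
print(cyclic_rank(f, 2, 2), cyclic_rank(f, 2, 4), cyclic_rank(f, 1, 5))   # 0 1 2
```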
### \(k\)-Bruhat orders and Grassmann intervals
For permutations \(u,v\in\mathfrak{S}_{n}\), we say \(u\)\(k\)**-covers**\(v\), denoted \(u\gtrdot_{k}v\), if \(u\gtrdot v\) in strong Bruhat order and \(\{u(1),\cdots,u(k)\}\neq\{v(1),\cdots,v(k)\}\). The \(k\)**-Bruhat order** is the partial order on \(\mathfrak{S}_{n}\) generated by taking the transitive closure of the \(k\)-covering relation \(\gtrdot_{k}\). Let \([v,u]_{k}\subset\mathfrak{S}_{n}\) be the \(k\)-Bruhat interval. This is a graded poset of rank \(\ell(u)-\ell(v)\), where \(\ell(u)\) denotes the Coxeter length of the permutation \(u\).
Consider the equivalence relation on the set of \(k\)-Bruhat intervals where \([v,u]_{k}\sim[v^{\prime},u^{\prime}]_{k}\) if there exists \(z\in\mathfrak{S}_{k}\times\mathfrak{S}_{n-k}\) (where \(\mathfrak{S}_{k}\times\mathfrak{S}_{n-k}\) is identified as a subgroup of \(\mathfrak{S}_{n}\)) such that \(v^{\prime}=vz,u^{\prime}=uz\) where \(\ell(v^{\prime})=\ell(v)+\ell(z)\) and \(\ell(u^{\prime})=\ell(u)+\ell(z)\). Let \(\mathcal{Q}(k,n)\) be the equivalence classes of \(k\)-Bruhat intervals.
Let \(G=\operatorname{GL}_{n}(\Bbbk)\) and \(B,B_{-}\) be the Borel and opposite Borel subgroup of \(G\) consisting of invertible upper and lower triangular matrices, respectively. The **flag variety** is \(\operatorname{Fl}(n)=G/B\). Upon choosing a basis of \(\Bbbk^{n}\), a point \(gB\in G/B\) can be identified with a complete flag
\[F_{\bullet}=0\subsetneq F_{1}\subsetneq F_{2}\subsetneq\cdots\subsetneq F_{ n-1}\subsetneq F_{n}=\Bbbk^{n},\]
where each \(F_{i}\) is the span of the first \(i\) columns of any representative of \(gB\).
The flag variety admits a Bruhat decomposition
\[G/B=\bigsqcup_{w\in\mathfrak{S}_{n}}BwB/B.\]
Define the **Schubert variety**\(X_{w}\) and the **opposite Schubert variety**\(X^{w}\) to be the Zariski closure of the **Schubert cell**\(X^{\circ}_{w}:=B_{-}wB/B\) and **opposite Schubert cell**\(X^{w}_{\circ}:=BwB/B\) respectively. Define the **Richardson variety**\(X^{u}_{v}\) to be the intersection \(X^{u}\cap X_{v}\). In particular, \(X^{u}_{v}\) is non-empty if and only if \(v\leq u\).
Let \(\pi:\operatorname{Fl}(n)\to\operatorname{Gr}(k,n)\) be the natural projection where \(\pi(F_{\bullet})=F_{k}\). It follows from [14, Proposition 3.3] that if \([v,u]_{k}\sim[v^{\prime},u^{\prime}]_{k}\), then \(\pi(X^{u}_{v})=\pi(X^{u^{\prime}}_{v^{\prime}})\). The following definition is then well-defined.
**Definition 2.4**.: For \(u\geq_{k}v\), define \(\Pi_{[v,u]}:=\pi(X^{u}_{v})\) to be the positroid variety associated to the \(k\)-Bruhat interval \([v,u]_{k}\in\mathcal{Q}(k,n)\).
For a permutation \(w\in\mathfrak{S}_{n}\), define the descent set of \(w\) to be
\[\operatorname{Des}(w):=\{i\in[n-1]:w(i)>w(i+1)\}.\]
We say a permutation \(w\) is \(k\)-Grassmannian for some \(k<n\) if \(\operatorname{Des}(w)\subseteq\{k\}\).
**Lemma/Definition 2.5** ([14], Proposition 2.3).: _For every equivalence class in \(\mathcal{Q}(k,n)\), there is a unique representative \([v,u]_{k}\) such that \(u\) is a \(k\)-Grassmannian permutation. We will call such interval a **Grassmann interval**._
Since \(\Pi_{[v,u]}\) is independent of the choice of representative in each equivalence class in \(\mathcal{Q}(k,n)\), we will assume \([v,u]\) is the Grassmann interval unless specified otherwise.
For \(I=\{i_{1}<i_{2}<\cdots<i_{k}\}\subset[n]\), write \(I^{\vee}=[n]\setminus I=\{i_{1}^{\vee}<\cdots<i_{n-k}^{\vee}\}\). Define the Grassmannian permutation \(w_{I}\) as the following:
\[w_{I}(j)=\left\{\begin{aligned} i_{j}&&\text{ if }j\leq k\\ i_{j-k}^{\vee}&&\text{ otherwise}\end{aligned}\right..\]
**Lemma 2.6** ([13], Proposition 3.15).: _Let \(f\in\mathrm{Bound}(k,n)\). Let \(I\subset[n]\) be the set of indices such that \(f(i)>n\) for \(i\in I\). Then \(\Pi_{f}=\Pi_{[v,u]}\) where_
\[u=w_{I}\text{ and }v=\widetilde{f}^{-1}u. \tag{4}\]
_In particular, \(u\) is a \(k\)-Grassmannian permutation._
### Basic positroid varieties
For any \(\alpha\in[0,n-1]\) and \(m<n\), define the cyclic interval
\[[\alpha+1,\alpha+m]^{\circ}=\left\{\begin{aligned} &\{i:\alpha+1\leq i\leq \alpha+m\}&&\text{ if }\alpha+m\leq n\\ &\{i:\alpha+1\leq i\leq n\text{ or }1\leq i\leq(\alpha+m)\bmod n\}&& \text{ if }\alpha+m>n\end{aligned}\right..\]
For any \(S\subset[n]\) and any \(r\in\mathbb{N}\), define
\[X_{S\leq r}=\{M\in\mathrm{Gr}(k,n):\mathrm{rank}(M_{S})\leq r\},\]
where \(\mathrm{rank}(M_{S})\) is the rank of the submatrix of \(M\) with column index \(S\).
Let \(\chi:\mathrm{Gr}(k,n)\to\mathrm{Gr}(k,n)\) be the cyclic shifting such that for \(V\in\mathrm{Gr}(k,n)\),
\[\widetilde{V}=\begin{bmatrix}|&|&&|\\ v_{1}&v_{2}&\cdots&v_{n}\\ |&|&&|\end{bmatrix},\quad\chi(\widetilde{V}):=\begin{bmatrix}|&|&&|\\ v_{n}&v_{1}&\cdots&v_{n-1}\\ |&|&&|\end{bmatrix},\quad\chi(V):=\mathrm{rowspan}(\chi(\widetilde{V})). \tag{5}\]
We abuse the notation and define the corresponding cyclic shift on \(\mathrm{Bound}(k,n)\) as
\[\chi(f)(i)=f(i-1)+1.\]
**Lemma 2.7**.: _For \(f\in\mathrm{Bound}(k,n)\), \(\chi(\Pi_{f})=\Pi_{\chi(f)}\)._
Proof.: Pick any \(V\in\Pi_{f}^{\circ}\), we have \(f=f_{V}\) as in (2). Therefore \(\chi(f)=f_{\chi(V)}\) and thus \(\chi(\Pi_{f}^{\circ})=\Pi_{\chi(f)}^{\circ}\) for all \(f\in\mathrm{Bound}(k,n)\). Since \(\chi\) preserves the partial ordering on \(\mathrm{Bound}(k,n)\), we get
\[\chi(\Pi_{f})=\bigsqcup_{f^{\prime}\geq f}\chi(\Pi_{f^{\prime}}^{\circ})= \bigsqcup_{f^{\prime}\geq f}\Pi_{\chi(f^{\prime})}^{\circ}=\Pi_{\chi(f)}.\]
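As a quick sanity check of the formula \(\chi(f)(i)=f(i-1)+1\), one can work directly with the window \((f(1),\ldots,f(n))\) of a bounded affine permutation, extended by the periodicity \(f(i+n)=f(i)+n\). The sketch below is ours and only illustrates this bookkeeping.

```python
def shift_window(window):
    """chi(f)(i) = f(i-1) + 1, computed on the window (f(1), ..., f(n))."""
    n = len(window)
    # f(0) = f(n) - n by the periodicity f(i+n) = f(i) + n
    prev = [window[-1] - n] + list(window[:-1])
    return [v + 1 for v in prev]

f = [1, 3, 5]            # window of a bounded affine permutation with n = 3, k = 1
print(shift_window(f))   # [3, 2, 4]
```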
**Lemma/Definition 2.8**.: _If \(S\) is a cyclic interval, we say \(X_{S\leq r}\) is a **basic positroid variety**. This is indeed an instance of a positroid variety._
Proof.: Let \(S=[\alpha+1,\alpha+m]^{\circ}\). In the case where \(\alpha=0\), this is an interval positroid variety in the sense of [10]. It then follows that this is indeed a positroid variety. Since \(X_{S\leq r}=\chi^{\alpha}(X_{[1,m]\leq r})\), all basic positroid varieties are indeed positroid varieties.
**Proposition 2.9**.: _Let \(0\leq r<k\) and \(S=[\alpha+1,\alpha+m]^{\circ}\) for some \(\alpha\in[0,n-1]\), \(r<m<n\), and \(n-m+r\geq k\). \(X_{S\leq r}\) is the positroid variety \(\Pi_{[v,u]}\) where_
1. _if_ \(\alpha<n-m-(k-r)\)_, then_ \(u=w_{[n-k+1,n]}\) _and_ \(v=w_{[k+m-r+\alpha]\setminus[r+\alpha+1,m+\alpha]}\)_._
2. _if_ \(n-m-(k-r)\leq\alpha\leq n-m\)_, then_ \(u=w_{[n-m-(k-r)+1,\alpha]\cup[m-r+\alpha+1,n]}\) _and_ \(v=w_{[n]\setminus[r+\alpha+1,m+\alpha]}\)_._
3. _if_ \(n-m<\alpha\leq n-r\)_, then_ \(u=w_{[\alpha-(k-r)+1,\alpha]\cup[n-r+1,n]}\) _and_ \(v=w_{[\alpha-(n-m)+k-r]\setminus[\alpha-(n-m)]}\)_._
4. _if_ \(n-r<\alpha\leq n-1\)_, then_ \(u=w_{[n-k+1,n]}\) _and_ \(v=w_{[\alpha-(n-m)+k-r]\setminus[\alpha-n+r+1,\alpha-n+m]}\)_._
_(We note that when \(m\leq r\) the rank condition is trivial, and \(n-m+r\geq k\) is a consequence of the underlying \(k\times n\) matrix being full rank.)_
The following statement is immediate from Proposition 2.9.
**Corollary 2.10**.: _The permutation \(u\) has a unique descent at \(k\) and \(v\) has at most one descent. The descent of \(v\) is at \(k\) if and only if \(\alpha=0\), and \(v=\mathrm{id}\) if and only if \(\alpha+m=n\). If \(v\) has a descent at \(l\neq k\), then \(l>k\) if \(\alpha+m<n\) and \(l<k\) if \(\alpha+m>n\)._
**Example 2.11**.: Fix \(n=11\), \(k=5\), \(m=6\), \(r=3\). Then \(n-m-(k-r)=3\), \(n-m=5\), \(n-r=8\).
1. If \(\alpha=2\), then \(u=789\,\underline{10}\,\underline{11}\,123456\) and \(v=123459\,\underline{10}\,678\,\underline{11}\).
2. If \(\alpha=5\), then \(u=459\,\underline{10}\,\underline{11}\,123678\) and \(v=\mathrm{id}\).
3. If \(\alpha=7\), then \(u=679\,\underline{10}\,\underline{11}\,123458\) and \(v=341256789\,\underline{10}\,\underline{11}\).
4. If \(\alpha=9\), then \(u=789\,\underline{10}\,\underline{11}\,123456\) and \(v=156234789\,\underline{10}\,\underline{11}\).
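The case analysis of Proposition 2.9 is mechanical, so it is convenient to have it in executable form. The sketch below is ours (the names `w_of` and `grassmann_interval` are our choices); it evaluates the four cases and reproduces the pairs \((u,v)\) of Example 2.11.

```python
def w_of(I, n):
    """One-line notation of w_I: list I in increasing order, then its complement."""
    I = sorted(I)
    return I + [j for j in range(1, n + 1) if j not in I]

def grassmann_interval(n, k, alpha, m, r):
    """The pair (u, v) of Proposition 2.9 for X_{S<=r}, S = [alpha+1, alpha+m]^o."""
    iv = lambda a, b: list(range(a, b + 1))          # closed integer interval
    if alpha < n - m - (k - r):                      # case (1)
        u = iv(n - k + 1, n)
        v = [j for j in iv(1, k + m - r + alpha) if j not in iv(r + alpha + 1, m + alpha)]
    elif alpha <= n - m:                             # case (2)
        u = iv(n - m - (k - r) + 1, alpha) + iv(m - r + alpha + 1, n)
        v = [j for j in iv(1, n) if j not in iv(r + alpha + 1, m + alpha)]
    elif alpha <= n - r:                             # case (3)
        u = iv(alpha - (k - r) + 1, alpha) + iv(n - r + 1, n)
        v = [j for j in iv(1, alpha - (n - m) + k - r) if j not in iv(1, alpha - (n - m))]
    else:                                            # case (4)
        u = iv(n - k + 1, n)
        v = [j for j in iv(1, alpha - (n - m) + k - r)
             if j not in iv(alpha - n + r + 1, alpha - n + m)]
    return w_of(u, n), w_of(v, n)

n, k, m, r = 11, 5, 6, 3
for alpha in (2, 5, 7, 9):                           # the four items of Example 2.11
    print(alpha, grassmann_interval(n, k, alpha, m, r))
```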
Proof of Proposition 2.9:.: We start with the case where \(\alpha=0\). It follows from [10, Proposition 2.1] that \(X_{[m]\leq r}=\Pi_{f_{0}}\) where \(f_{0}\) is the bounded affine permutation such that
\[f_{0}(i)=\left\{\begin{array}{ll}r+i&\text{ if }1\leq i\leq m-r\\ k+i&\text{ if }m-r<i\leq n-k+r\\ i+m+k-r&\text{ if }n-k+r<i\leq n\end{array}\right.. \tag{6}\]
More generally, by Lemma 2.7, for \(S=[\alpha+1,\alpha+m]^{\circ}\), we have \(X_{S\leq r}=\Pi_{f_{\alpha}}\) where
\[f_{\alpha}(i)=f_{0}(i-\alpha)+\alpha. \tag{7}\]
Set \(I_{\alpha}=\{i\in[n]:f_{\alpha}(i)>n\}\). Combining (6) and (7), we get
\[I_{\alpha}=\left\{\begin{array}{ll}[n-k+1,n]&\text{ if }0\leq\alpha\leq n-m-(k-r) \\ [n-m-(k-r)+1,\alpha]\cup[m-r+\alpha+1,n]&\text{ if }n-m-(k-r)<\alpha\leq n-m\\ [\alpha-(k-r)+1,\alpha]\cup[n-r+1,n]&\text{ if }n-m<\alpha\leq n-r\\ [n-k+1,n]&\text{ if }n-r<\alpha\leq n-1\end{array}\right.. \tag{8}\]
By Lemma 2.6, we get the corresponding \((u,v)\) as desired in Proposition 2.9.
## 3. Hodge algebra structure of Plucker coordinates
Much of the material in this section can be found in more detail in [1, Chapter 7] and [1, Chapter 3]. For this section, we may assume \(\Bbbk\) to be an arbitrary commutative unital ring.
**Notation 3.1**.: Let \(R=\Bbbk[x_{ij}\mid 1\leq i\leq k,\;1\leq j\leq n]\) be a polynomial ring over \(\Bbbk\) and let \(X=(x_{ij})_{1\leq i\leq k,1\leq j\leq n}\) denote a generic \(k\times n\) matrix where \(k\leq n\). For indices \(\mathbf{a}=\{a_{1},\ldots,a_{k}\}\), set
\[[\mathbf{a}]=[a_{1},\ldots,a_{k}]=\det\left(\begin{array}{ccc}x_{1,a_{1}}& \cdots&x_{1,a_{k}}\\ \vdots&\ddots&\vdots\\ x_{k,a_{1}}&\cdots&x_{k,a_{k}}\end{array}\right)\]
The ideal generated by the maximal minors of \(X\) is denoted \(I_{k}(X)\subset R\).
**Remark 3.2**.: Notice that each maximal minor \([a_{1},\ldots,a_{k}]\) of \(X\) is an alternating map, i.e. for every choice of columns \(a_{1},\ldots,a_{k}\), if there exists some \(i,j\) such that \(a_{i}=a_{j}\), then
\[[a_{1},\ldots,a_{k}]=0.\]
It follows that for every permutation \(\sigma\in\mathfrak{S}_{k}\), we have
\[[a_{\sigma(1)},a_{\sigma(2)},\ldots,a_{\sigma(k)}]=\operatorname{sign}(\sigma )\cdot[a_{1},\ldots,a_{k}].\]
Define the **Plucker algebra**\(\Bbbk[\operatorname{Gr}(k,n)]\) to be the homogeneous coordinate ring of \(\operatorname{Gr}(k,n)\). This is the subring of \(R\) generated by the \(k\times k\) (maximal size) minors of the matrix \(X\).
The Plucker algebra has a nice presentation in terms of generators and relations that can be described in terms of a certain poset called the **Plucker poset**, which will be described in the next subsection. The expressions \([c_{1},\ldots,c_{k}]\) with \(1\leq c_{1}<\cdots<c_{k}\leq n\) generate \(\Bbbk[\operatorname{Gr}(k,n)]\) as a \(\Bbbk\)-algebra; the polynomial relations between these generators are called the **Plucker relations**. If \(\mathcal{J}\) is the ideal of Plucker relations, we may write
\[\Bbbk[\operatorname{Gr}(k,n)]\cong\frac{R(k,n)}{\mathcal{J}}\]
where \(R(k,n)\coloneqq\Bbbk[[\mathbf{a}]:\mathbf{a}\in{[n]\choose k}]\).
Fix \(j\in\{1,\ldots,k-1\}\) and consider elements \(c_{1},\ldots,c_{j},d_{j+2},\ldots,d_{k},a_{1},\ldots,a_{k+1}\in[n]\). Then
\[\sum_{\sigma\in\mathfrak{S}_{k+1}}\operatorname{sign}(\sigma)\cdot[c_{1}, \ldots,c_{j},a_{\sigma(1)},\ldots,a_{\sigma(k-j)}]\cdot[a_{\sigma(k-j+1)}, \ldots,a_{\sigma(k+1)},d_{j+2},\ldots,d_{k}]=0.\]
For instance,
\[\Bbbk[\operatorname{Gr}(2,4)]\cong\frac{R(2,4)}{\mathcal{J}}=\frac{\Bbbk[[1,2],[1,3],[1,4],[2,3],[2,4],[3,4]]}{\langle[1,4][2,3]-[1,3][2,4]+[1,2][3,4] \rangle}.\]
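The quadratic generator above is an identity among the \(2\times 2\) minors of any \(2\times 4\) matrix, which is easy to confirm numerically. The following minimal sketch (ours, assuming NumPy) checks it on a random matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 4))                 # a random 2x4 matrix

def p(i, j):
    """Plucker coordinate [i,j]: the 2x2 minor on columns i and j (1-based)."""
    return np.linalg.det(X[:, [i - 1, j - 1]])

# the single defining relation of Gr(2,4)
print(np.isclose(p(1, 4) * p(2, 3) - p(1, 3) * p(2, 4) + p(1, 2) * p(3, 4), 0.0))
```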
We will often represent a product of \(d\) maximal minors as a \(k\times d\) tableau, where a column with entries \(c_{1},\ldots,c_{k}\) corresponds to the minor \([c_{1},\ldots,c_{k}]\).
**Example 3.3**.: The Plucker relation giving the single generator of the defining ideal for \(\operatorname{Gr}(2,4)\) could be represented as
\[\begin{array}{|c|c|}\hline 1&2\\ \hline 4&3\\ \hline\end{array}\;-\;\begin{array}{|c|c|}\hline 1&2\\ \hline 3&4\\ \hline\end{array}\;+\;\begin{array}{|c|c|}\hline 1&3\\ \hline 2&4\\ \hline\end{array}\]
where in this case, \(c_{1}=1\), \((a_{1},a_{2},a_{3})=(4,2,3)\), and there are no \(d_{i}\). Another example of a Plucker relation in \(\operatorname{Gr}(4,8)\) would arise from the product of minors \([1,2,6,7]\cdot[3,4,5,8]\). In this context \((c_{1},c_{2})=(1,2)\), \((a_{1},\ldots,a_{5})=(6,7,3,4,5)\), and \(d_{4}=8\):
\[\sum_{\sigma\in\mathfrak{S}_{5}}\operatorname{sign}(\sigma)\cdot[1,2,a_{\sigma(1)},a_{\sigma(2)}]\cdot[a_{\sigma(3)},a_{\sigma(4)},a_{\sigma(5)},8]=0.\]
**Definition 3.4**.: A tableau is called **semistandard** if it is strictly increasing along columns and weakly increasing along rows.
It is not difficult to show that semistandard tableaux all correspond to linearly independent monomials; even better, the following is true; see, for instance, [1, Lemma 7.2.3]
**Theorem 3.5**.: _Every \(k\times d\) tableau \(T\) can be expressed modulo the Plucker relations as a linear combination of semistandard tableaux._
The following lemma will be central to computations in Section 5.
**Lemma 3.6** ([17, Exercise 1.1(8)]).: _For every \(1\leq j\leq k\), one has_
\[[c_{1},\ldots,c_{k}]\cdot[d_{1},\ldots,d_{k}]\] \[=\sum_{1\leq i_{1}<\cdots<i_{k-j}\leq k}[c_{1},\ldots,c_{j},d_{i_{ 1}},\ldots,d_{i_{k-j}}]\cdot[d_{1},\ldots,c_{j+1},\ldots,c_{k},\ldots,d_{k}]\]
_where \([d_{1},\ldots,c_{j+1},\ldots,c_{k},\ldots,d_{k}]\) denotes the tuple obtained from \([d_{1},\ldots,d_{k}]\) by replacing the elements \(d_{i_{1}},\ldots,d_{i_{k-j}}\) with \(c_{j+1},\ldots,c_{k}\) (in this order)._
### The Plucker poset
Let \(1\leq k\leq n\), consider \(\mathcal{P}=\{[c_{1},\ldots,c_{k}]\mid c\in{[n]\choose k}\}\subset\Bbbk[ \operatorname{Gr}(k,n)]\) with the partial order induced by
\[[c_{1},\ldots,c_{k}]\leq[d_{1},\ldots,d_{k}]\text{ if and only if }c_{i}\leq d_{i}\text{ for all }i=1,\ldots,k.\]
Call \(\mathcal{P}\) the **Plucker poset**.
**Remark 3.7**.: With respect to "\(<\)", the element \([1,\ldots,k]\) is the unique minimal element of the Plucker poset and the element \([n-k+1,\ldots,n]\) is the unique maximal element of the Plucker poset. Notice also that the Plucker poset is self-dual; that is, \(\mathcal{P}\cong\mathcal{P}^{\mathrm{op}}\).
Using the Plucker relations described in the previous subsection, one can show that the Plucker algebra is a **Hodge algebra**:
**Definition 3.8**.: Let \(A\) be a \(\Bbbk\)-algebra for some ring \(\Bbbk\) and let \(H\subset A\) be a poset such that the elements of \(H\) generate \(A\) as a \(\Bbbk\)-algebra. Then \(A\) is a **Hodge algebra** or an **algebra with straightening law (ASL)** if it satisfies the following axioms:
1. The algebra \(A\) is a free \(\Bbbk\)-module with basis given by the set of _standard monomials_ (a product of totally ordered elements in \(H\));
2. If \(\mathbf{e},\mathbf{f}\in H\) are incomparable and if \[\mathbf{e}\cdot\mathbf{f}=\sum_{i}c_{i}\cdot(\mathbf{g}_{i_{1}}\cdot\mathbf{g} _{i_{2}}\cdots)\] is the unique expression of \(\mathbf{e}\cdot\mathbf{f}\) as a linear combination of standard monomials (here \(c_{i}\neq 0\) and \(\mathbf{g}_{i_{1}}\leq\mathbf{g}_{i_{2}}\leq\dots\)) then \(\mathbf{g}_{i_{1}}<\mathbf{e},\mathbf{f}\) for all \(i\).
**Remark 3.9**.: The Plucker algebra is also a Hodge algebra with respect to the poset \(\mathcal{P}^{\mathrm{op}}\), with the exact same relations. This follows because the quadratic Plucker relations imply that for any incomparable \([\mathbf{c}]\) and \([\mathbf{d}]\) in the Plucker poset, one may write
\[[\mathbf{c}]\cdot[\mathbf{d}]=\sum\pm[\mathbf{e}]\cdot[\mathbf{f}]\]
where \(\mathbf{e}\leq\mathbf{c}\wedge\mathbf{d}\) and \(\mathbf{f}\geq\mathbf{c}\vee\mathbf{d}\), where \(\wedge\) and \(\vee\) are the usual meet and join poset operations, respectively. This follows from the fact that maximal minors form a SAGBI (**s**ubalgebra **a**nalogue of **G**robner **b**ases for **i**deals) basis for the subalgebra they generate, and the relations of the initial subalgebra with respect to a certain class of term orders are known to have the form \([\mathbf{c}]\cdot[\mathbf{d}]-[\mathbf{c}\vee\mathbf{d}]\cdot[\mathbf{c}\wedge\mathbf{d}]\); we refer the reader to [1, Section 6.2] for a more in-depth discussion.
**Remark 3.10**.: Given a poset \(H\) which defines \(A\) as a \(\Bbbk\)-algebra, there is a different Hodge algebra structure on the same poset \(H\) given by taking all relations to be of the form \(\alpha\cdot\beta=0\) for any \(\alpha,\beta\) incomparable in \(H\). This algebra is often referred to as the **discrete ASL** associated to the poset \(H\).
One may obtain the discrete ASL as a Grobner degeneration of any other ASL structure on \(H\) with respect to a weighted degree revlex ordering \(<_{\omega}\) associated to some ordering \(\prec\) giving a linear extension of the poset \(H\). This follows from the description of the relations in (ASL-2) in Definition 3.8. This particular Grobner degeneration is often referred to as a **Hodge degeneration**.
**Example 3.11**.: There are three natural Hodge algebra structures on the Plucker poset \(\mathcal{P}\). One is the usual Plucker algebra described above. Another comes from considering the algebra
\[\Bbbk[\mathrm{In}_{w}[a_{1},\dots,a_{k}]:1\leq a_{1}<a_{2}<\dots<a_{k}\leq n] \tag{10}\]
where \(w\) is any diagonal term order; that is, for any maximal minor \([a_{1},\dots,a_{k}]\), the monomial ordering \(w\) selects the product of the diagonal entries \(x_{1,a_{1}}x_{2,a_{2}}\dots x_{k,a_{k}}\) as the leading term. This algebra has relations given by \(\alpha\cdot\beta=(\alpha\vee\beta)\cdot(\alpha\wedge\beta)\) for \(\alpha,\beta\) incomparable in \(\mathcal{P}\). Notice that in the Plucker poset,
\[[a_{1},\dots,a_{k}]\vee[b_{1},\dots,b_{k}] =[\max(a_{1},b_{1}),\max(a_{2},b_{2}),\dots],\] \[[a_{1},\dots,a_{k}]\wedge[b_{1},\dots,b_{k}] =[\min(a_{1},b_{1}),\min(a_{2},b_{2}),\dots].\]
For instance, for \(k=2\) and \(n=4\), the single relation is given by
\[\begin{array}{|c|c|}\hline 1&2\\ \hline 4&3\\ \hline\end{array}-\begin{array}{|c|c|}\hline 1&2\\ \hline 3&4\\ \hline\end{array}=0\]
corresponding to the relation on monomials
\[(x_{11}x_{24})\cdot(x_{12}x_{23})-(x_{11}x_{23})\cdot(x_{12}x_{24})=0.\]
Finally, the simplest possible Hodge algebra one can take on the Plucker poset is the discrete ASL, given by taking the relations \(\alpha\cdot\beta=0\) for \(\alpha,\beta\) incomparable. This means the monomial relation \([1,4]\cdot[2,3]=0\) is the sole relation for the discrete ASL on the Plucker poset when \(k=2\) and \(n=4\). Notice that if one takes a reverse lexicographic ordering with respect to the monomial order \([34]\prec[24]\prec[23]\prec[14]\prec[13]\prec[12]\), the discrete ASL is an initial algebra for both the Plucker algebra and the algebra (10).
**Remark 3.12**.: The standard monomials for the Plucker algebra correspond exactly to those monomials in \(R(k,n)\) that can be written as semistandard rectangular tableaux once the indices are ordered from left-to-right lexicographically. When we refer to "standard monomials" for \(\Bbbk[\operatorname{Gr}(k,n)]\), these are the ones we are referring to. Moreover, by the Borel-Weil theorem, the vector space of global sections \(\Gamma(Gr(k,n),\mathcal{O}(d))\cong\Bbbk[\operatorname{Gr}(k,n)]_{d}\) of a line bundle on the Grassmannian is isomorphic to the dual of the highest weight representation \(V(\lambda)\) with \(\lambda\) rectangular of shape \(k\times d\), whose basis elements are well known to be indexed by semistandard tableaux of shape \(\lambda\).
### \(\operatorname{Gr}(k,n)\) and \(\operatorname{Gr}(n-k,n)\)
For a positroid variety \(\Pi_{f}\subset\operatorname{Gr}(k,n)\), let \(\mathcal{J}_{f}\subset R(k,n)\coloneqq\Bbbk[[\mathbf{a}]:\mathbf{a}\in \binom{[n]}{k}]\) be the defining ideal of \(\Pi_{f}\) under the Plucker embedding, so \(\Bbbk[\Pi_{f}]=\dfrac{R(k,n)}{\mathcal{J}_{f}}\) is the homogeneous coordinate ring of \(\Pi_{f}\).
**Proposition 3.13**.: _[_13_, Theorem 5.15]_ _The defining ideal for \(\Bbbk[\Pi_{f}]\) is generated by the set of Plucker relations defining \(\operatorname{Gr}(k,n)\) plus the set of Plucker coordinates \([\mathbf{a}]\) that vanish on \(\Pi_{f}\)._
In particular, for a basic positroid variety \(X_{S\leq r}\), the ideal \(\mathcal{J}_{S\leq r}\) is generated by the set of Plucker relations and \(\{[\mathbf{a}]:|\mathbf{a}\cap S|\geq r+1\}\).
**Construction 3.14**.: Fix \(\operatorname{Gr}(k,n)\) and let \(S=[\alpha+1,\alpha+m]^{\circ}\) be a cyclic interval and \(r<m\), so that \(\mathcal{J}_{S\leq r}\) is the defining ideal for \(\Bbbk[\Pi_{S\leq r}]\) in the polynomial ring \(R(k,n)\). Define the following:
* \(S^{\vee}\coloneqq[\alpha+m+1,\alpha]^{\circ}\).
* \(r^{\vee}\coloneqq n-k-m+r\)
* For any monomial \(\mathbf{m}=\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\in R(k,n)\), define \(\mathbf{m}^{\vee}\) to be \(\prod_{i=1}^{d}[\mathbf{a}^{(i)\vee}]\in R(n-k,n)\), where \(\mathbf{a}^{(i)\vee}\coloneqq[n]\setminus\mathbf{a}^{(i)}\) for all \(i\in[d]\).
* For any polynomial \(g\in R(k,n)\), define \(g^{\vee}\in R(n-k,n)\) to be the polynomial obtained from \(g\) by applying \(\vee\) to each monomial summand.
There is an isomorphism \(\varphi\) of varieties between \(\operatorname{Gr}(k,n)\) and \(\operatorname{Gr}(n-k,n)\) which preserves the positroid stratification. In coordinates, the isomorphism is written as \(\varphi:[\mathbf{a}]\mapsto[\mathbf{a}^{\vee}]\). It is easy to check that the Plucker relations are preserved under this mapping of coordinates; in particular, if \(\mathcal{J}_{S\leq r}\subset R(k,n)\) is the defining ideal for a positroid variety \(\Pi_{S\leq r}\), then one may define the ideal \(\mathcal{J}_{S\leq r}^{\vee}\coloneqq\mathcal{J}_{S^{\vee}\leq r^{\vee}}\subset R(n-k,n)\), and it is exactly the defining ideal for \(\Bbbk[\Pi_{S^{\vee}\leq r^{\vee}}]\) in \(\operatorname{Gr}(n-k,n)\).
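The complement operation of Construction 3.14 is straightforward to compute columnwise. The sketch below is ours; applied to the degree-four monomial \([1,3][2,4][3,5][4,6]\in R(2,6)\), which reappears in Example 4.7 below, it returns the corresponding monomial in \(R(4,6)\).

```python
def complement(a, n):
    """a^vee = [n] \\ a, for a subset a of [n], returned in increasing order."""
    return tuple(sorted(set(range(1, n + 1)) - set(a)))

def dual_monomial(columns, n):
    """m^vee: complement every Plucker coordinate dividing m, then re-sort the columns."""
    return sorted(complement(a, n) for a in columns)

m = [(1, 3), (2, 4), (3, 5), (4, 6)]     # a monomial in R(2,6)
print(dual_monomial(m, 6))
# [(1, 2, 3, 5), (1, 2, 4, 6), (1, 3, 5, 6), (2, 4, 5, 6)]
```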
**Proposition 3.15**.: _Adopt notation and hypotheses as in Construction 3.14. Then_
\[\varphi(X_{S\leq r})=X_{S^{\vee}\leq r^{\vee}}.\]
Proof.: The ideal \(\mathcal{I}(X_{S\leq r})\) is generated by the Plucker relations for \(\operatorname{Gr}(k,n)\) as well as vanishing of coordinates \([\mathbf{a}]\) such that \(|\mathbf{a}\cap S|\geq r+1\). Therefore,
\[n-|\mathbf{a}\cap S|\leq n-r-1\] \[\implies |(\mathbf{a}\cap S)^{\vee}|\leq n-r-1\] \[\implies |\mathbf{a}^{\vee}\cup S^{\vee}|\leq n-r-1\] \[\implies |\mathbf{a}^{\vee}|+|S^{\vee}|-|\mathbf{a}^{\vee}\cap S^{\vee}| \leq n-r-1\] \[\implies n-k-m+r+1\leq|\mathbf{a}^{\vee}\cap S^{\vee}|.\]
Since \(n-k-m+r+1=r^{\vee}+1\), it follows that \(\varphi(X_{S\leq r})=X_{S^{\vee}\leq r^{\vee}}\).
## 4. Initial ideals and standard monomials for positroid varieties
### The initial ideal of a basic positroid variety
The goal of this section is to describe the monomials that appear in the initial ideal \(\operatorname{In}(\mathcal{J}_{f})\coloneqq\operatorname{In}_{\omega}( \mathcal{J}_{f})\) for the defining ideal \(\mathcal{J}_{f}\) of any positroid variety \(\Pi_{f}\) with respect to the monomial order \(\omega\) discussed in Remark 3.10.
**Definition 4.1**.: Fix an interval \(S=[\alpha+1,\alpha+m]\) in \([n]\) and some \(r<m\). For a standard monomial \(\mathbf{m}=\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\) of \(\Bbbk[\operatorname{Gr}(k,n)]\) such that the Plucker variables \(\mathbf{a}^{(1)}\leq\mathbf{a}^{(2)}\leq\cdots\leq\mathbf{a}^{(d)}\) are sorted lexicographically, a **generalized antidiagonal** of \(\mathbf{m}\) is a sequence \(a_{\rho_{1}}^{(\sigma_{\ell})}\dots a_{\rho_{\ell}}^{(\sigma_{1})}\) such that
1. \(\alpha+1\leq a_{\rho_{1}}^{(\sigma_{\ell})}<\cdots<a_{\rho_{\ell}}^{(\sigma_{ 1})}\leq\alpha+m\),
2. \(1\leq\sigma_{1}\leq\sigma_{2}\leq\cdots\leq\sigma_{\ell}\leq d\), and
3. \(1\leq\rho_{1}<\rho_{2}<\cdots<\rho_{\ell}\leq k\).
The main result of this section is the following theorem.
**Theorem 4.2**.: _A monomial \(\mathbf{m}=\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\in R(k,n)\) is in \(\operatorname{In}(\mathcal{J}_{S\leq r})\) if and only if_
1. _For some_ \(i,j\in[d]\)_, the Plucker coordinates_ \(\mathbf{a}^{(i)}\) _and_ \(\mathbf{a}^{(j)}\) _are incomparable in the Plucker poset, or_
2. \(\mathbf{m}\) _is a standard monomial for_ \(\Bbbk[\operatorname{Gr}(k,n)]\) _containing a generalized antidiagonal of size_ \(r+1\) _with entries in_ \(S\)_._
_In particular, a monomial \(\mathbf{m}=\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\) is a minimal generator of \(\operatorname{In}(\mathcal{J}_{S\leq r})\) if and only if \(\mathbf{m}\) satisfies condition (1) or (2) while every monomial strictly dividing \(\mathbf{m}\) satisfies neither._
When we write a standard monomial \(\mathbf{m}=\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\) of \(\Bbbk[\operatorname{Gr}(k,n)]\) as a semistandard tableau \(T\), a generalized antidiagonal consists of \(r+1\) strictly increasing entries in the interval \(S\), occupying the positions \((\rho_{i},\sigma_{\ell-i+1})\) and running from northeast to southwest in the tableau: the row indices strictly increase while the column indices weakly decrease.
**Example 4.3**.: Let \(S=[4,7]\) and \(r=2\) in \(\operatorname{Gr}(5,10)\). Some minimal generators of \(\operatorname{In}(\mathcal{J}_{S\leq r})\) include:
\[\begin{array}{|c|}\hline 1\\ \hline 4\\ \hline 5\\ \hline 7\\ \hline 9\\ \hline\end{array}\qquad\begin{array}{|c|c|}\hline 1&2\\ \hline 2&4\\ \hline 3&8\\ \hline 6&9\\ \hline 7&10\\ \hline\end{array}\qquad\begin{array}{|c|c|c|}\hline 1&2&2\\ \hline 2&3&5\\ \hline 3&6&7\\ \hline 7&8&9\\ \hline 8&10&10\\ \hline\end{array}\]
In the second tableau above, the unique sequence forming a generalized antidiagonal of length \(r+1=3\) is
\[(a_{2}^{(2)},a_{4}^{(1)},a_{5}^{(1)})=(4,6,7).\]
The following monomials do _not_ contain any generalized antidiagonal of size \(3\) with entries in the interval \(S\):
\[\begin{array}{|c|c|}\hline 1&6\\ \hline 2&7\\ \hline 3&8\\ \hline 4&9\\ \hline 5&10\\ \hline\end{array}\qquad\begin{array}{|c|c|c|c|}\hline 1&1&1&1\\ \hline 2&2&2&2\\ \hline 4&5&6&7\\ \hline 7&7&7&8\\ \hline 8&8&8&9\\ \hline\end{array}\]
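Condition (2) of Theorem 4.2 is mechanical to test on a given tableau. The following sketch (ours; a simple quadratic dynamic program, adequate for small tableaux) computes the length of a longest generalized antidiagonal with entries in \(S\); it is run on the first tableau above, for which the answer is \(2<r+1\).

```python
def longest_antidiagonal(columns, S):
    """
    columns: list of columns of a semistandard tableau, each a tuple of row entries.
    S:       set of allowed entry values.
    Returns the length of a longest sequence of entries with strictly increasing
    values, strictly increasing row indices, and weakly decreasing column indices.
    """
    cells = sorted((val, row, col)
                   for col, column in enumerate(columns, start=1)
                   for row, val in enumerate(column, start=1)
                   if val in S)
    best, length = 0, []
    for i, (v, r, c) in enumerate(cells):
        li = 1
        for j in range(i):
            v2, r2, c2 = cells[j]
            if v2 < v and r2 < r and c2 >= c:
                li = max(li, length[j] + 1)
        length.append(li)
        best = max(best, li)
    return best

S = set(range(4, 8))                       # S = [4,7] in Gr(5,10), r = 2
cols = [(1, 2, 3, 4, 5), (6, 7, 8, 9, 10)]
print(longest_antidiagonal(cols, S))       # 2, so no antidiagonal of size r+1 = 3
```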
Our main tool for proving Theorem 4.2 will be the following result from [14], which allows us to understand standard monomials of \(\Bbbk[\Pi_{f}]\) in terms of chains of permutations in Bruhat order.
**Notation 4.4**.: For a permutation \(v\in\mathfrak{S}_{n}\) we will sometimes write \(v\) in one-line notation \(v=(v(1)\ v(2)\cdots v(n))\). For any positive integer \(p\leq n\), set \(v([p]):=\{v(1),\cdots,v(p)\}\).
**Proposition 4.5**.: _[_14_, Theorem 7.1]_ _Fix \(\mathbf{a}^{(1)},\cdots,\mathbf{a}^{(d)}\subset[n]\) such that \(|\mathbf{a}^{(i)}|=k\) for all \(i\in[d]\) and \(\mathbf{a}^{(i)}\leq\mathbf{a}^{(i+1)}\) for all \(i\in[d-1]\). Then the monomial \(\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\) is not in \(\operatorname{In}(\mathcal{J}_{[v,u]})\) if and only if there exists a chain of \(\mathfrak{S}_{n}\)-permutations \(\{v^{(1)},\cdots,v^{(d)}\}\) such that_
\[v\leq v^{(1)}\leq\cdots\leq v^{(d)}\leq u\text{ and }v^{(i)}([k])=\mathbf{a}^{(i) }\text{ for all }i\in[d]. \tag{11}\]
The following proposition justifies why we restrict our attention to basic positroid varieties with \(S=[\alpha+1,\alpha+m]\) with \(r<m\); that is, we do not need to consider separately the case where the interval \(S\) "wraps around" when we prove Theorem 4.2.
**Proposition 4.6**.: _A monomial \(\mathbf{m}\) is in the ideal \(\operatorname{In}(\mathcal{J}_{S\leq r})\) if and only if the monomial \(\mathbf{m}^{\vee}\) as defined in Construction 3.14 is in the ideal \(\operatorname{In}(\mathcal{J}_{S^{\vee}\leq r^{\vee}})\)._
**Example 4.7**.: Fix \(\operatorname{Gr}(2,6)\) and let \(S=[6,1]\) and \(r=1\). By Construction 3.14, we have \(S^{\vee}=[6]\setminus S=[2,5]\) and \(r^{\vee}=6-2-2+1=3\). Given a monomial \(\mathbf{m}\) in \(\operatorname{In}(\mathcal{J}_{S\leq r})\subset R(2,6)\), we may obtain the corresponding monomial in \(\operatorname{In}(\mathcal{J}_{S^{\vee}\leq r^{\vee}})\subset R(4,6)\) by taking complements of each of the Plucker coordinates dividing \(\mathbf{m}\). For instance,
\[\begin{array}{|c|c|}\hline 2&3\\ \hline 6&5\\ \hline\end{array}\xrightarrow{\text{ complement }}\begin{array}{|c|c|}\hline 1&1\\ \hline 3&2\\ \hline 4&4\\ \hline 5&6\\ \hline\end{array}=\begin{array}{|c|c|}\hline 1&1\\ \hline 2&3\\ \hline 4&4\\ \hline 6&5\\ \hline\end{array},\]
\[\begin{array}{|c|c|c|c|}\hline 1&2&3&4\\ \hline 3&4&5&6\\ \hline\end{array}\xrightarrow{\text{ complement }}\begin{array}{|c|c|c|c|}\hline 2&1&1&1\\ \hline 4&3&2&2\\ \hline 5&5&4&3\\ \hline 6&6&6&5\\ \hline\end{array}=\begin{array}{|c|c|c|c|}\hline 1&1&1&2\\ \hline 2&2&3&4\\ \hline 3&4&5&5\\ \hline 5&6&6&6\\ \hline\end{array}.\]
Proof of Proposition 4.6.: Let \([v,u]\) be the Grassmann interval such that \(X_{S\leq r}=\Pi_{[v,u]}\) as in Proposition 2.9. Note that since \(\alpha+1\leq n<\alpha+m\), we are in case (3) and (4) of Proposition 2.9. Let \(z\in\mathfrak{S}_{k}\times\mathfrak{S}_{n-k}\subset\mathfrak{S}_{n}\) be such that \(vz\) is the maximal representative in the coset \(v\cdot\mathfrak{S}_{k}\times\mathfrak{S}_{n-k}\). Then \([vz,uz]_{k}\sim[v,u]_{k}\) and thus
\[\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\in\operatorname{In}(\mathcal{J}_{[v,u]}) \iff\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\in\operatorname{In}(\mathcal{J}_{[vz,uz ]}).\]
Let \(w_{0}\) be the longest permutation (\(n\)\(n-1\cdots 1\)). Since multiplying by \(w_{0}\) is an order reversing automorphism on \(\mathfrak{S}_{n}\) with respect to strong Bruhat order, by (11), we also have
\[\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\in\operatorname{In}(\mathcal{J}_{[vz,uz]}) \iff\prod_{i=1}^{d}[\mathbf{a}^{(i)\vee}]\in\operatorname{In}(\mathcal{J}_{[ uzw_{0},vzw_{0}]}).\]
In particular, \([uzw_{0},vzw_{0}]\) is a Grassmann interval. Therefore we only need to verify that \(\Pi_{[uzw_{0},vzw_{0}]}=X_{S^{\vee}\leq r^{\vee}}\) through Proposition 2.9. Here we set \([\alpha^{\prime}+1,\alpha^{\prime}+m^{\prime}]:=S^{\vee}\) and \(X_{S^{\vee}\leq r^{\vee}}\subset\operatorname{Gr}(k^{\prime},n)\), then
\[\alpha^{\prime}=\alpha+m-n,r^{\vee}=n-k-m+r,m^{\prime}=n-m\text{ and }k^{ \prime}=n-k. \tag{12}\]
We now divide into four cases:
**Case I**\((n-m<\alpha\leq n-r\) and \(r\leq\alpha-n+m)\): Here we are in case (3) of Proposition 2.9 and thus \(u=w_{[\alpha-(k-r)+1,\alpha]\cup[n-r+1,n]}\) and \(v=w_{[\alpha-(n-m)+k-r]\setminus[\alpha-n+m]}\). Since \(r\leq\alpha-n+m\), we have
\[v(1)\cdots v(k)=(\alpha-n+m+1)\cdots(\alpha-n+m+k-r)\ 1\cdots r.\]
and
\[z=((k-r)\cdots 1\ k\cdots(k-r+1)\ n\cdots(k+1)).\]
Since \(r\leq\alpha-n+m<m\), we have \(n-m^{\prime}-k^{\prime}+r^{\vee}\leq\alpha^{\prime}<n-m^{\prime}\) by (12) and \(X_{S^{\vee}\leq r^{\vee}}\) falls into case (2) of Proposition 2.9. We then have
\[vzw_{0}=w_{[r+1,n]\setminus[\alpha-n+m+1,\alpha-n+m+k-r]}=w_{[n-m^{\prime}-k^{ \prime}+r^{\vee}+1,\alpha^{\prime}]\cup[m^{\prime}-r^{\vee}+\alpha^{\prime}+1, n]}\]
and
\[uzw_{0}=w_{[n]\setminus[\alpha-k+r+1,\alpha]}=w_{[n]\setminus[r^{\vee}+\alpha^ {\prime}+1,m^{\prime}+\alpha^{\prime}]}\]
as desired.
**Case II.**\((n-m<\alpha\leq n-r\) and \(r>\alpha-n+m)\): Here we are still in case (3) of Proposition 2.9 as in Case I. Since \(r>\alpha-n+m\), we have
\[v(1)\cdots v(k)=(\alpha-n+m+1)\cdots(\alpha-n+m+k-r)\;1\cdots(\alpha-n+m)\;( \alpha-n+m+k-r+1)\cdots k,\]
and
\[z=(k\cdots(\alpha-n+m+k-r+1)\;(k-r)\cdots 1\;(\alpha-n+m+k-r)\cdots(k-r+1)\;n \cdots(k+1)).\]
Since \(r>\alpha-n+m\), we have \(0\leq\alpha^{\prime}<n-m^{\prime}-k^{\prime}+r^{\vee}\) and \(X_{S^{\vee}\leq r^{\vee}}\) falls into case (1) of Proposition 2.9. We have
\[vzw_{0}=w_{[k+1,n]}=w_{[n-k^{\prime}+1,n]}\]
and
\[uzw_{0}=w_{[\alpha+m-r]\setminus[\alpha-k+r+1,\alpha]}=w_{[k^{\prime}+m^{\prime}-r^{\vee}+\alpha^{\prime}]\setminus[r^{\vee}+\alpha^{\prime}+1,m^{\prime}+\alpha^{\prime}]}\]
as desired.
**Case III.**\((n-r<\alpha\leq n-1\) and \(r\leq\alpha-n+m)\): Here we are in case (4) of Proposition 2.9 and thus \(u=w_{[n-k+1,n]}\) and \(v=w_{[\alpha-n+m+k-r]\setminus[\alpha-n+r+1,\alpha-n+m]}\). Since \(r\leq\alpha-n+m\), \(X_{S^{\vee}\leq r^{\vee}}\) falls into case (2) of Proposition 2.9. A similar computation as in the previous two cases gives us
\[vzw_{0}=w_{[r+1,\alpha-n+m]\cup[\alpha-n+m+k-r+1,n]}=w_{[n-m^{\prime}-k^{ \prime}+r^{\vee}+1,\alpha^{\prime}]\cup[m^{\prime}-r^{\vee}+\alpha^{\prime}+1,n]}\]
and
\[uzw_{0}=w_{[n]\setminus[\alpha-k+r+1,\alpha]}=w_{[n]\setminus[r^{\vee}+\alpha^ {\prime}+1,m^{\prime}+\alpha^{\prime}]}\]
as desired.
**Case IV.**\((n-r<\alpha\leq n-1\) and \(r>\alpha-n+m)\): Here \(X_{S\leq r}\) is in case (4) of Proposition 2.9 and \(X_{S^{\vee}\leq r^{\vee}}\) is in case (1). A similar computation yields
\[vzw_{0}=w_{[k+1,n]}=w_{[n-k^{\prime}+1,n]}\]
and
\[uzw_{0}=w_{[\alpha+m-r]\setminus[\alpha-k+r+1,\alpha]}=w_{[k^{\prime}+m^{\prime}-r^{\vee}+\alpha^{\prime}]\setminus[r^{\vee}+\alpha^{\prime}+1,m^{\prime}+\alpha^{\prime}]}\]
as desired.
Since \(n-m<\alpha\leq n-1\), we exhaust all possibilities and thus complete the proof.
**Notation 4.8**.: For ordered sets \(I=\{i_{1}<i_{2}<\cdots<i_{p}\}\) and \(J=\{j_{1}<j_{2}<\cdots<j_{p}\}\subset[n]\), we write \(I\leq J\) if \(i_{a}\leq j_{a}\) for all \(a\in[p]\).
The following classical result on strong Bruhat order is a key ingredient for proving Theorem 4.2.
**Lemma 4.9** ([1], Corollary 2.6.2).: _For \(u,v\in\mathfrak{S}_{n}\), the permutation \(v\) is less than or equal to \(u\) in Bruhat order if and only if for all \(\ell\in\operatorname{Des}(v)\),_
\[\{v(1),\cdots,v(\ell)\}\leq\{u(1),\cdots,u(\ell)\}.\]
_In particular, if \(u=w_{I},v=w_{J}\) are both \(k-\)Grassmannian permutations, then_
\[u\leq v\iff I\leq J.\]
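The criterion of Lemma 4.9 is easy to implement for permutations in one-line notation. The sketch below is ours and illustrates the Grassmannian case \(w_{I}\leq w_{J}\iff I\leq J\) on a small example.

```python
def leq_sets(I, J):
    """Notation 4.8: I <= J componentwise after sorting (|I| = |J|)."""
    return all(i <= j for i, j in zip(sorted(I), sorted(J)))

def descents(w):
    """Descent positions of a permutation given in one-line notation."""
    return [i for i in range(1, len(w)) if w[i - 1] > w[i]]

def bruhat_leq(v, u):
    """v <= u in strong Bruhat order, via the descent criterion of Lemma 4.9."""
    return all(leq_sets(v[:l], u[:l]) for l in descents(v))

v = [1, 3, 2, 4]        # w_{{1,3}} in S_4
u = [2, 4, 1, 3]        # w_{{2,4}} in S_4
print(bruhat_leq(v, u), bruhat_leq(u, v))   # True False
```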
We are now ready to prove the main theorem of this section.
_Proof of Theorem 4.2:_\((\Leftarrow):\) We first show that if the monomial \(\mathbf{m}=\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\) satisfies either (1) or (2) and no monomial strictly dividing \(\mathbf{m}\) also satisfies one of the conditions, then \(\mathbf{m}\) does not lift to a chain in Bruhat order as in Proposition 4.5 and therefore is a minimal generator of \(\operatorname{In}(\mathcal{J}_{S\leq r}))\). Case (1) consists of monomials that are divisible by leading terms of the defining equations of the Grassmannian, so we focus on case (2). Assume \(\mathbf{m}\) satisfies condition (2) and no monomial dividing it satisfies either (1) or (2). Let
\[\mathbf{c}=(a_{\rho_{1}}^{(\sigma_{r+1})},a_{\rho_{2}}^{(\sigma_{r})},\ldots, a_{\rho_{r+1}}^{(\sigma_{1})})=(c_{1},\ldots,c_{r+1})\]
be some generalized antidiagonal of length \(r+1\) in \(\mathbf{m}\).
Write \(R=\bigsqcup_{i=1}^{d}R_{i}\) where
\[R_{i}=\{\rho_{h}:h\in[r+1]\text{ and }\sigma_{r+2-h}=i\}.\]
In words, \(R_{i}\) is the set of row indices of entries in the generalized antidiagonal whose column indices are \(i\).
Suppose, seeking contradiction, that there exist permutations \(v^{(1)},\cdots,v^{(d)}\) satisfying Equation (11). Set \(v^{(0)}=v\) and \(v^{(d+1)}=u\). We first consider the case where \(\alpha\leq n-m-(k-r)\). By Proposition 2.9,
\[u=w_{[n-k+1,n]}\text{ and }v=w_{[k+m-r+\alpha]\setminus[r+\alpha+1,m+\alpha]}.\]
Here \(v\) has a unique descent at \(k+\alpha\).
Set \(\mathbf{b}^{(i)}=v^{(i)}[k+\alpha]=\{v^{(i)}(1),\cdots,v^{(i)}(k+\alpha)\}\) for \(0\leq i\leq d+1\). Then \(u=\mathbf{b}^{(d+1)}=[\alpha]\cup[n-k+1,n]\) and \(v=\mathbf{b}^{(0)}=[r+\alpha]\cup[m+\alpha+1,k+m-r+\alpha]\). Notice that \(b_{r+\alpha+1}^{(0)}=m+\alpha+1\) and \(b_{\alpha}^{(d+1)}=\alpha\). Furthermore, \(\mathbf{a}^{(i)}\subset\mathbf{b}^{(i)}\) for all \(1\leq i\leq d\) and \(\mathbf{b}^{(i)}\leq\mathbf{b}^{(i+1)}\).
Set \(s_{i}=\sum_{j=i}^{d}|R_{j}|\). Since \(\mathbf{b}^{(d)}\leq\mathbf{b}^{(d+1)}\), we have \(\alpha\leq b_{\alpha}^{(d)}\leq b_{\alpha}^{(d+1)}=\alpha\). Thus \(b_{\alpha}^{(d)}=\alpha\). Since \(b^{(d)}\supset\{a_{\rho}^{(d)}:\rho\in R_{d}\}\) and \(a_{\rho}^{(d)}>\alpha=b_{\alpha}^{(d)}\) for all \(\rho\in R_{d}\), we have
\[b_{\alpha+s_{d}}^{(d)}\leq a_{\rho_{s_{d}}}^{(d)}.\]
Similarly, since \(\mathbf{b}^{(d-1)}\leq\mathbf{b}^{(d)}\), we have \(b_{\alpha+s_{d}}^{(d-1)}\leq b_{\alpha+s_{d}}^{(d)}\leq a_{\rho_{s_{d}}}^{(d) }<a_{\rho_{s_{d}+1}}^{(d-1)}\), and thus \(b_{\alpha+s_{d}}^{(d-1)}<a_{\rho}^{(d-1)}\) for all \(\rho\in R_{d-1}\). Since \(\mathbf{b}^{(d-1)}\supset\{a_{j}^{(d-1)}:j\in R_{d-1}\}\), we get
\[b_{\alpha+s_{d-1}}^{(d-1)}\leq a_{\rho_{s_{d-1}}}^{(d-1)}.\]
Continuing this pattern, we eventually have
\[b_{\alpha+r+1}^{(1)}=b_{\alpha+s_{1}}^{(1)}\leq a_{\rho_{s_{1}}}^{(1)}=a_{ \rho_{r+1}}^{(1)}\leq\alpha+m.\]
This contradicts \(b^{(0)}_{\alpha+r+1}=\alpha+m+1\) as \(b^{(0)}_{\alpha+r+1}\leq b^{(1)}_{\alpha+r+1}\).
Now consider the case where \(n-m-(k-r)<\alpha\leq n-m\). By Proposition 2.9,
\[u=w_{[n-m-(k-r)+1,\alpha]\cup[m-r+\alpha+1,n]}\text{ and }v=w_{[n]\setminus[r+ \alpha+1,m+\alpha]}.\]
When \(\alpha+m<n\), \(v\) has a unique descent at \(r+n-m\).
Set \(\mathbf{b}^{(i)}=\{v^{(i)}(1),\cdots,v^{(i)}(r+n-m)\}\) for \(0\leq i\leq d+1\). Then \(\mathbf{b}^{(d+1)}=[\alpha]\cup[m-r+\alpha+1,n]\) and \(\mathbf{b}^{(0)}=[r+\alpha]\cup[m+\alpha+1,n]\). The rest of the argument is identical to the previous case.
\((\implies):\) We now show that if the monomial \(\mathbf{m}=\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\) satisfies neither condition (1) nor (2), then \(\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\notin\operatorname{In}(\mathcal{J}_{S\leq r})\). Since \(\mathbf{m}\) does not satisfy condition (1), \(\mathbf{a}^{(i)}\) and \(\mathbf{a}^{(j)}\) are always comparable, so we may order the Plucker coordinates dividing \(\mathbf{m}\) lexicographically so that \(\mathbf{a}^{(i)}\leq\mathbf{a}^{(i+1)}\). Moreover, since \(\mathbf{m}\) does not satisfy condition (2), each individual Plucker variable dividing \(\mathbf{m}\) does not contain a generalized antidiagonal; that is, \(|\mathbf{a}^{(i)}\cap S|\leq r\) for all \(i\in[d]\). In particular, \([\mathbf{a}^{(i)}]\notin\operatorname{In}(\mathcal{J}_{S\leq r})\) and, by Proposition 4.5, \(v([k])\leq\mathbf{a}^{(i)}\leq u([k])\) for all \(i\in[d]\).
Suppose first that \(\alpha\leq n-m-(k-r)\); that is, we are in case (1) of Proposition 2.9. Here we have \(\operatorname{Des}(u)=\{k\}\) and \(\operatorname{Des}(v)=\{k+\alpha\}\). By Proposition 4.5, it is enough to show that there exist permutations \(v^{(1)},\dots,v^{(d)}\) satisfying the conditions in (11). By Lemma 4.9, it suffices to construct \(\{\mathbf{b}^{(i)}:i\in[d]\}\) such that
\[\mathbf{a}^{(i)}\subset\mathbf{b}^{(i)}\subset[n],\ |\mathbf{b}^{(i)}|=k+\alpha \text{ and } \tag{13}\] \[[k+m-r+\alpha]\setminus[r+\alpha+1,m+\alpha]\leq\mathbf{b}^{(1)}\leq \dots\leq\mathbf{b}^{(d)}\leq[\alpha]\cup[n-k+1,n].\]
Indeed, for each \(1\leq i\leq d\), we can construct \(v^{(i)}\) from \(\mathbf{a}^{(i)}\) and \(\mathbf{b}^{(i)}\) by setting
\[\{v^{(i)}(1)<\dots<v^{(i)}(k)\}=\mathbf{a}^{(i)},\]
\[\{v^{(i)}(k+1)<\dots<v^{(i)}(k+\alpha)\}=\mathbf{b}^{(i)}\setminus\mathbf{a}^{(i)},\text{ and }\{v^{(i)}(k+\alpha+1)<\dots<v^{(i)}(n)\}=[n]\setminus\mathbf{b}^{(i)}.\]
Set \(\mathbf{b}^{(d+1)}:=u=[\alpha]\cup[n-k+1,n]\). For each \(1\leq i\leq d\) we iteratively construct \(\mathbf{b}^{(i)}\) from \(\mathbf{a}^{(i)}\) and \(\mathbf{b}^{(i+1)}\) as follows:
1. Initialize \(p=1\);
2. For each \(1\leq j\leq k+\alpha\): * If \(p>k\) or \(b^{(i+1)}_{j}<a^{(i)}_{p}\), set \(b^{(i)}_{j}:=b^{(i+1)}_{j}\); * Otherwise, set \(b^{(i)}_{j}:=a^{(i)}_{p}\), and increment \(p\) by \(1\).
Finally, set \(\mathbf{b}^{(0)}:=[r+\alpha]\cup[\alpha+m+1,k+m-r+\alpha]\).
It is easy to see by construction that for all \(1\leq i\leq d\) we have \(\mathbf{b}^{(i)}\leq\mathbf{b}^{(i+1)}\) and \(\mathbf{a}^{(i)}\subset\mathbf{b}^{(i)}\). By (13), we are left to show that \(\mathbf{b}^{(1)}\geq\mathbf{b}^{(0)}\coloneqq v\). Notice that since \([\alpha]\subset\mathbf{b}^{(d+1)}\), by construction, we have \([\alpha]\subset\mathbf{b}^{(0)},\mathbf{b}^{(1)}\). Since \(\mathbf{b}^{(0)}=[r+\alpha]\cup[\alpha+m+1,k+m-r+\alpha]\), it is enough to show that
\[|\mathbf{b}^{(1)}\cap S|\leq r.\]
For \(1\leq i\leq d,1\leq j\leq k\), define
\[\text{antidiag}(i,j):=\left\{((\rho_{1},\sigma_{h}),\cdots,(\rho_{h},\sigma_{1})):\begin{array}{c}1\leq\rho_{1}<\dots<\rho_{h}\leq j,\\ i\leq\sigma_{1}\leq\dots\leq\sigma_{h}\leq d,\\ \alpha+1\leq a_{\rho_{1}}^{(\sigma_{h})}<\dots<a_{\rho_{h}}^{(\sigma_{1})}\leq\alpha+m\end{array}\right\}\]
to be the set of generalized antidiagonals contained in the first \(j\) rows and the last \(d-i+1\) columns. For any column index \(i\), define \(\operatorname{lad}(i)\) to be the length of a longest generalized antidiagonal in \(\operatorname{antidiag}(i,k)\), that is,
\[\operatorname{lad}(i)\coloneqq\max\{\operatorname{length}(D):D\in \operatorname{antidiag}(i,k)\}.\]
**Claim 4.10**.: _The following holds:_
\[|\mathbf{b}^{(1)}\cap S|\leq\max\left\{|\mathbf{b}^{(d+1)}\cap S|\,\ \operatorname{ lad}(1)\right\}. \tag{14}\]
Proof.: Fix \(h\) to be the largest number such that \(b^{(1)}_{\alpha+h}\in S\). By definition,
\[|\mathbf{b}^{(1)}\cap S|=h. \tag{15}\]
Observe that \(b^{(d+1)}_{\alpha+h}\geq\alpha+m\), since otherwise we must have \(b^{(1)}_{\alpha+h+1}\leq b^{(d+1)}_{\alpha+h+1}=b^{(d+1)}_{\alpha+h}+1\leq \alpha+m\), contradicting the choice of \(h\). We must check two cases:
**Case I.** Suppose \(b^{(d+1)}_{\alpha+h}=\alpha+m\). Then we argue that
\[|\mathbf{b}^{(1)}\cap S|=|\mathbf{b}^{(d+1)}\cap S|, \tag{16}\]
and thus (14) holds. Indeed, since \([\alpha]\subset\mathbf{b}^{(d+1)}\) and \(b^{(d+1)}_{\alpha+h}=\alpha+m\), we have
\[\mathbf{b}^{(d+1)}\cap S=\{b^{(d+1)}_{\alpha+1},\cdots,b^{(d+1)}_{\alpha+h}\}.\]
Since \([\alpha]\subset\mathbf{b}^{(1)}\), by the definition of \(h\), \(\mathbf{b}^{(1)}\cap S=\{b^{(1)}_{\alpha+1},\cdots,b^{(1)}_{\alpha+h}\}\). Therefore both sides of (16) equal \(h\).
**Case II.** Suppose \(b^{(d+1)}_{\alpha+h}>\alpha+m\). Then for each \(0\leq j<h\), \(b^{(d+1)}_{\alpha+h-j}>\alpha+m-j\). There exists a largest \(i_{h}\geq 1\) such that \(b^{(i_{h})}_{\alpha+h}=b^{(1)}_{\alpha+h}\). Since \(b^{(d+1)}_{\alpha+h}>\alpha+m\geq b^{(i_{h})}_{\alpha+h}\), it must be the case that \(i_{h}\leq d\). By construction, \(b^{(i_{h})}_{\alpha+h}=a^{(i_{h})}_{j_{h}}\) for some \(j_{h}\). We now inductively construct \((i_{y},j_{y})\) for \(y=h-1,\cdots,1\).
Suppose \((i_{y},j_{y})\) has been constructed with \(y\geq 2\). Let \(i_{y-1}\) be the largest index such that \(b^{(i_{y-1})}_{\alpha+y-1}=b^{(i_{y})}_{\alpha+y-1}\). In particular, we have
\[i_{y}\leq i_{y-1}\ \text{and}\ b^{(i_{y-1})}_{\alpha+y-1}<b^{(i_{y})}_{\alpha+y}.\]
Since \(b^{(i_{h})}_{\alpha+h}=\mathbf{b}^{(1)}_{\alpha+h}\leq\alpha+m\), we have \(b^{(i_{y-1})}_{\alpha+y-1}\leq\alpha+m-h+y-1\). Since \(b^{(d+1)}_{\alpha+y-1}>\alpha+m-h+y-1\), it must be the case that \(i_{y-1}\leq d\).
Since \(b^{(i_{y-1}+1)}_{\alpha+y-1}>b^{(i_{y-1})}_{\alpha+y-1}\), by construction, \(b^{(i_{y-1})}_{\alpha+y-1}=a^{(i_{y-1})}_{j_{y-1}}\) for some \(j_{y-1}\). Thus we have
\[a^{(i_{y-1})}_{j_{y-1}}=b^{(i_{y-1})}_{\alpha+y-1}=b^{(i_{y})}_{\alpha+y-1}<b^ {(i_{y})}_{\alpha+y}=a^{(i_{y})}_{j_{y}}.\]
Since \(i_{y-1}\geq i_{y}\), by semistandardness, we must have \(j_{y-1}<j_{y}\). Therefore the pair \((i_{y-1},j_{y-1})\) we constructed satisfy
\[i_{y}\leq i_{y-1}\leq d,\ j_{y-1}<j_{y},\ \text{and}\ b^{(i_{y-1})}_{\alpha+y-1}=a^ {(i_{y-1})}_{j_{y-1}}<a^{(i_{y})}_{j_{y}}=b^{(i_{y})}_{\alpha+y}. \tag{17}\]
Let \((i_{1},j_{1}),\cdots,(i_{h},j_{h})\) be pairs as inductively constructed above. By (17), \(a^{(i_{1})}_{j_{1}}\cdots a^{(i_{h})}_{j_{h}}\) is a generalized antidiagonal of length \(h\). By (15) and the definition of lad,
\[|\mathbf{b}^{(1)}\cap S|=h\leq\operatorname{lad}(1), \tag{18}\]
which implies (14).
Since (14) holds in both cases, our proof is complete.
Returning to the proof of our main theorem in this section, since \((b^{(d+1)}\cap S)=[n-k+1,n]\cap[\alpha+1,\alpha+m]\), we have
\[|\mathbf{b}^{(d+1)}\cap S|=\left\{\begin{aligned} & 0&\text{ if }n-k+1>\alpha+m\\ &\alpha+m-n+k&\text{ otherwise}\end{aligned}\right.,\]
and therefore \(|\mathbf{b}^{(d+1)}\cap S|\leq r\) by the assumption that \(\alpha\leq n-m-(k-r)\). Since \(\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\) does not satisfy condition (2), we get \(\operatorname{\mathrm{lad}}(1)\leq r\). By Claim 4.10, we conclude \(|\mathbf{b}^{(1)}\cap S|\leq r\).
We are left with case (2) of Proposition 2.9, where \(n-m-(k-r)\leq\alpha\leq n-m\),
\[u=w_{[n-m-(k-r)+1,\alpha]\cup[m-r+\alpha+1,n]}\text{ and }v=w_{[n]\setminus[r+ \alpha+1,m+\alpha]}.\]
The only difference is that we now set
\[\mathbf{b}^{(d+1)}:=[\alpha]\cup[m-r+\alpha+1,n]\text{ and }\mathbf{b}^{(0)}=[r+ \alpha]\cup[m+\alpha+1,n].\]
We construct \(\mathbf{b}^{(i)}\) for \(1\leq i\leq d\) using the same algorithm as before; now \(\mathbf{b}^{(i)}\) has length \(r+n-m\geq k\). The statement and proof of Claim 4.10 go through the same as before. The rest of the argument also follows since now \(|\mathbf{b}^{(d+1)}\cap S|=r\).
**Example 4.11**.: Let \(n=11\), \(k=5\), \(\alpha=2\), \(m=6\), \(r=3\) as in Example 2.11(1), so the basic interval positroid variety in consideration is \(X_{[3,8]\leq 3}\). Recall that \(u=789\,\underline{10}\,\underline{11}\,123456\) and \(v=123459\,\underline{10}\,678\,\underline{11}\). The monomial given by the following tableau
\[\begin{array}{|c|c|c|c|}\hline 1&1&2&\mathbf{4}\\ \hline 2&2&\mathbf{5}&7\\ \hline 3&4&8&8\\ \hline 5&\mathbf{6}&9&9\\ \hline\mathbf{8}&9&10&10\\ \hline\end{array}\]
contains a generalized antidiagonal with entries \(4,5,6,8\) shown in boldface, and hence is an element of \(\operatorname{In}(\mathcal{J}_{[3,8]\leq 3})\).
If we remove the second column the tableau becomes
\[\begin{array}{|c|c|c|}\hline 1&2&4\\ \hline 2&5&7\\ \hline 3&8&8\\ \hline 5&9&9\\ \hline 8&10&10\\ \hline\end{array}\]
which no longer contains a generalized antidiagonal of length 4. Let \(\prod_{i=1}^{3}\mathbf{a}^{(i)}\) be the monomial given by this tableau where \(\mathbf{a}^{(i)}\) corresponds to the \(i\)-th column. We also show \(\mathbf{b}^{(0)},\cdots,\mathbf{b}^{(4)}\) in the following tableau by following the algorithm given in the
( \(\implies\) ) direction of the proof of Theorem 4.2, where \(\mathbf{b}^{(i)}\) corresponds to the \(i+1\)-th column
\[\begin{array}{|c|c|c|c|c|}\hline 1&1&1&1&1\\ \hline 2&2&2&2&2\\ \hline 3&3&4&4&7\\ \hline 4&5&5&7&8\\ \hline 5&8&8&8&9\\ \hline 9&9&9&9&10\\ \hline 10&10&10&10&11\\ \hline\end{array}\]
Then \(v^{(3)}=4789\underline{10}12356\underline{11}\), \(v^{(2)}=2589\underline{10}14367\underline{11}\), and \(v^{(1)}=123589\underline{10}467\underline{11}\).
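For concreteness, the lifting procedure from the \((\implies)\) direction of the proof of Theorem 4.2 is summarized in the following sketch (ours). It constructs \(\mathbf{b}^{(d)},\ldots,\mathbf{b}^{(1)}\) from \(\mathbf{b}^{(d+1)}\) and the columns \(\mathbf{a}^{(i)}\), and reproduces the table above.

```python
def lift(a_cols, b_top):
    """Build b^(d), ..., b^(1) from b^(d+1) = b_top and the columns a^(i)."""
    chains = [sorted(b_top)]
    for a in reversed(a_cols):                 # i = d, d-1, ..., 1
        prev, b, p = chains[0], [], 0
        for bj in prev:
            if p < len(a) and bj >= a[p]:      # place the next entry of a^(i)
                b.append(a[p])
                p += 1
            else:                              # otherwise copy the entry of b^(i+1)
                b.append(bj)
        chains.insert(0, b)
    return chains                              # [b^(1), ..., b^(d), b^(d+1)]

# Example 4.11: the monomial [1,2,3,5,8].[2,5,8,9,10].[4,7,8,9,10] in Gr(5,11)
a_cols = [(1, 2, 3, 5, 8), (2, 5, 8, 9, 10), (4, 7, 8, 9, 10)]
b_top = [1, 2, 7, 8, 9, 10, 11]                # b^(d+1) = [alpha] U [n-k+1, n]
for b in lift(a_cols, b_top):
    print(b)
# [1, 2, 3, 5, 8, 9, 10]
# [1, 2, 4, 5, 8, 9, 10]
# [1, 2, 4, 7, 8, 9, 10]
# [1, 2, 7, 8, 9, 10, 11]
```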
### Standard monomials of arbitrary positroid varieties
Let \(B(n,k,d)\) denote the set of rectangular semistandard tableaux of shape \(k\times d\) with entries \(\leq n\). As mentioned in Remark 3.12, the set \(B(n,k,d)\) can be identified with standard monomials for \(\operatorname{Gr}(k,n)\) under the Hodge degeneration. We write \(\mathbf{m}\in B(n,k,d)\) both for the monomial and its tableau. For a positroid variety \(\Pi_{f}\), let
\[B_{f}(n,k,d):=\{\mathbf{m}\in B(n,k,d):\mathbf{m}\not\in\operatorname{In}( \mathcal{J}_{f})\}.\]
Then \(B_{f}(n,k,d)\) forms a basis for \(\Bbbk[\Pi_{f}]_{d}\) and this is the set of degree-\(d\) standard monomials for \(\Pi_{f}\).
**Proposition 4.12**.: _Let \(\mathbf{m}:=\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\in B(n,k,d)\), where \(\mathbf{a}^{(1)}\leq\cdots\leq\mathbf{a}^{(d)}\). Then there exists a unique minimal positroid variety \(\Pi_{f}\) such that \(\mathbf{m}\in B_{f}(n,k,d)\). In other words, if \(\Pi_{f^{\prime}}\) is a positroid variety such that \(\mathbf{m}\in B_{f^{\prime}}(n,k,d)\), then \(\Pi_{f}\subseteq\Pi_{f^{\prime}}\)._
Proof.: Let \(v^{(1)}\) be the \(k\)-_anti_-Grassmannian permutation (meaning \(v^{(1)}\) has a unique ascent at position \(k\)) such that \(\{v^{(1)}(1),\cdots v^{(1)}(k)\}=\{\mathbf{a}^{(1)}_{1},\cdots,\mathbf{a}^{(1 )}_{k}\}\). By [14, Proposition 2.1] for \(2\leq i\leq d\), we can let \(v^{(i)}\) to be the unique minimum permutation such that
* \(\{v^{(i)}(1),\cdots v^{(i)}(k)\}=\{\mathbf{a}^{(i)}_{1},\cdots,\mathbf{a}^{(i )}_{k}\}\), and
* \(v^{(i)}\geq v^{(i-1)}\).
Let \(\Pi_{f}\) be the positroid variety \(\Pi_{[v^{(1)},v^{(d)}]}\).
Now suppose \(\mathbf{m}\in B_{f^{\prime}}(n,k,d)\) and \(\Pi_{f^{\prime}}=\Pi_{[v^{\prime},u^{\prime}]}\). By Proposition 4.5, there exist permutations \(w^{(1)},\cdots,w^{(d)}\) such that \(v^{\prime}\leq w^{(1)}\leq\cdots\leq w^{(d)}\leq u^{\prime}\) and \(w^{(i)}([k])=\{\mathbf{a}^{(i)}_{1},\cdots,\mathbf{a}^{(i)}_{k}\}\) for all \(1\leq i\leq d\). Naturally, \(\mathbf{m}\) is a standard monomial for \(\Pi_{[w^{(1)},w^{(d)}]}\subseteq\Pi_{f^{\prime}}\). By [14, Proposition 3.3], we may without loss of generality assume \(w^{(1)}\) is anti-Grassmannian; namely \(w^{(1)}=v^{(1)}\). We now argue by induction that \(w^{(i)}\geq v^{(i)}\) for each \(1\leq i\leq d\). Since \(w^{(i)}\geq w^{(i-1)}\geq v^{(i-1)}\) by the induction hypothesis, and \(v^{(i)}([k])=w^{(i)}([k])=\{\mathbf{a}^{(i)}_{1},\cdots,\mathbf{a}^{(i)}_{k}\}\), we must have \(v^{(i)}\leq w^{(i)}\) by the construction of \(v^{(i)}\). Therefore, \(\Pi_{f^{\prime}}\supseteq\Pi_{[w^{(1)},w^{(d)}]}\supseteq\Pi_{f}\).
**Theorem 4.13**.: _Suppose \(\Pi_{f}=\bigcap_{i}\Pi_{f_{i}}\) where each \(\Pi_{f_{i}}\) is a basic positroid variety. Then \(B_{f}(n,k,d)=\bigcap_{i}B_{f_{i}}(n,k,d)\)._
Proof.: First notice that whenever \(\Pi_{a}\subseteq\Pi_{b}\), we have \(B_{a}(n,k,d)\subseteq B_{b}(n,k,d)\). Indeed, Since \(\mathcal{J}_{a}\supseteq\mathcal{J}_{b}\), we have \(\mathrm{In}(\mathcal{J}_{a})\supseteq\mathrm{In}(\mathcal{J}_{b})\), so \(B_{a}(n,k,d)\subseteq B_{b}(n,k,d)\). It follows that \(B_{f}(n,k,d)\subseteq\bigcap_{i}B_{f_{i}}(n,k,d)\).
We now show that if \(\mathbf{m}\in\bigcap_{i}B_{f_{i}}(n,k,d)\), then \(\mathbf{m}\in B_{f}(n,k,d)\). By Proposition 4.12, there is a unique minimum positroid variety \(\Pi_{g}\) such that \(\mathbf{m}\in B_{g}(n,k,d)\) and \(\Pi_{g}\subseteq\Pi_{f_{i}}\) for each \(i\). Therefore \(\Pi_{g}\subseteq\Pi_{f}\). It follows that \(\mathbf{m}\in B_{f}(n,k,d)\).
**Example 4.14** (Standard monomials of a positroid variety).: Let \(\Pi_{f}\) be the positroid variety defined in Example 2.3. Then \(\Pi_{f}=X_{[2]\leq 0}\cap X_{[2,4]\leq 1}\cap X_{[1,5]\leq 2}\). Equipped with Theorem 4.2, we are ready to describe the Stanley-Reisner complex, denoted \(\Delta(\Pi_{f})\), of the Hodge degeneration of this positroid variety directly, instead of considering it as a projection of an order complex in the Bruhat order. The vertices of \(\Delta(\Pi_{f})\) are labeled by the Plucker coordinates \([136],[146],[156],[356],[456]\). Notice that these form a chain in the Plucker poset, but they do not form a face of \(\Delta(\Pi_{f})\) since \(\begin{array}{|c|c|c|c|}\hline 1&3\\ \hline 4&5\\ \hline 6&6\\ \hline\end{array}\) contains a generalized antidiagonal for \(X_{[2,4]\leq 1}\). The facets of \(\Delta(\Pi_{f})\) correspond to the monomials
\[\begin{array}{|c|c|c|c|c|}\hline 1&1&1&4\\ \hline 3&4&5&5\\ \hline 6&6&6&6\\ \hline\end{array}\quad\text{and}\quad\begin{array}{|c|c|c|c|c|}\hline 1&1&3&4\\ \hline 3&5&5&5\\ \hline 6&6&6&6\\ \hline\end{array},\]
each of dimension \(3\).
## 5. Grobner bases for positroid varieties
The goal of this section is to give explicit equations for the Grobner basis of a positroid variety.
**Proposition 5.1**.: _Let \(S=[a+1,a+m]\) where \(a+m\leq n\) and consider the positroid variety \(X_{S\leq r}\). Let \(c_{1}<\cdots<c_{r+1}\) be elements of \([a+1,a+m]\), and let \(\lambda=(\lambda_{1},\cdots,\lambda_{d})\) be a composition of \(r+1\). Denote the partial sums of \(\lambda\) by \(p_{j}=\sum_{i=1}^{j}\lambda_{i}\). Then in \(\mathcal{I}(X_{S\leq r})\) we have the relation_
\[\sum_{\sigma\in\mathfrak{S}_{r+1}}\text{sign}(\sigma)\prod_{j=1}^{d}[a_{j,1} \cdots a_{j,\alpha_{j}}c_{\sigma(p_{j-1}+1)}\cdots c_{\sigma(p_{j})}b_{j,1} \cdots b_{j,\beta_{j}}]=0\]
_where \(\alpha_{j}\geq\alpha_{j-1}+\lambda_{j-1}\) and \(\beta_{j}=k-\lambda_{j}-\alpha_{j}\)._
**Example 5.2**.: Let \(S=[3,7]\) and let \(r=2\). Consider \(\mathbf{c}=\{3,5,7\}\subset S\). If \(\lambda=(1,2)\), then the following degree \(2\) relation appears in \(\mathcal{I}(X_{S\leq r})\) when \(k=4\) and \(n=11\):
\[\begin{array}{|c|c|c|c|}\hline 1&3\\ \hline 5&8\\ \hline 7&9\\ \hline 10&11\\ \hline\end{array}\quad\text{-}\quad\begin{array}{|c|c|c|c|}\hline 1&5\\ \hline 3&8\\ \hline 7&9\\ \hline 10&11\\ \hline\end{array}\quad\text{+}\quad\begin{array}{|c|c|c|c|}\hline 1&7\\ \hline 3&8\\ \hline 5&9\\ \hline 10&11\\ \hline\end{array}\quad\text{=}\quad 0. \tag{19}\]
If \(\lambda=(1,1,1)\), then the following degree 3 relation appears in \(X_{S\leq r}\):
\[[3,8,9,10]\cdot[2,5,8,9]\cdot[1,2,7,8]-[3,8,9,10]\cdot[2,7,8,9]\cdot[1,2,5,8]-[5,8,9,10]\cdot[2,3,8,9]\cdot[1,2,7,8]\]
\[+[5,8,9,10]\cdot[2,7,8,9]\cdot[1,2,3,8]+[7,8,9,10]\cdot[2,3,8,9]\cdot[1,2,5,8]-[7,8,9,10]\cdot[2,5,8,9]\cdot[1,2,3,8]=0.\]
Notice that the last two terms in each grouping are equal to zero by the definition of a positroid variety. Collecting like terms, we see that the righthand side is equal to \((-2)\) times the lefthand side, so the desired relation is zero. One could then analogously use the relation (21) to prove relation (19) from Example 5.2, and so on.
We are now ready to give explicit equations for the Grobner basis of a basic interval positroid variety.
**Theorem 5.4**.: _Let \(S=[\alpha+1,\alpha+m]\) where \(\alpha+m\leq n\) and consider the positroid variety \(X_{S\leq r}\). Let \(<_{\omega}\) be a degree revlex monomial order with respect to some linear extension of the Plucker poset as described in Remark 3.10. Then the defining ideal of \(X_{S\leq r}\), denoted \(\mathcal{J}_{S\leq r}\), has a minimal Grobner basis with two classes of generators:_
1. _degree_ \(2\) _generators coming from the classcal Plucker relations. The leading terms of these generators are of the form_ \([\mathbf{a}]\cdot[\mathbf{b}]\) _where_ \([\mathbf{a}]\) _and_ \([\mathbf{b}]\) _are incomparable elements of the Plucker poset and neither_ \(\mathbf{a}\) _nor_ \(\mathbf{b}\) _contains_ \(r+1\) _elements of_ \(S\)_;_
2. _generators of the form_ \[\sum_{\sigma\in\mathfrak{S}_{r+1}}\text{sign}(\sigma)\prod_{j=1}^{d}[a_{j,1} \cdots a_{j,\alpha_{j}}c_{\sigma(p_{j-1}+1)}\cdots c_{\sigma(p_{j})}b_{j,1}, \cdots,b_{j,\beta_{j}}]\] (22) _where the following conditions are satisfied:_ * \(\mathbf{c}=\{c_{1}<c_{2}<\cdots<c_{r+1}\}\subseteq S\)_,_ * \(\lambda=(\lambda_{1},\cdots,\lambda_{d})\) _is a composition of_ \(r+1\) _with_ \(\lambda_{i}>0\) _for all_ \(i\)_,_ * \(a_{j,1}<\cdots<a_{j,\alpha_{j}}<c_{p_{j-1}+1}<\cdots<c_{p_{j}}<b_{j,1}\cdots<b_ {j,\beta_{j}}\)_,_ * \(\alpha_{j}\geq\alpha_{j-1}+\lambda_{j-1}\) _and_ \(\beta_{j}=k-\lambda_{j}-\alpha_{j}\)_._ _The leading terms of the generators of the form (_22_) contain the generalized antidiagonal_ \(\mathbf{c}\) _as described in Theorem_ 4.2_._
Proof.: The first class of relations is obviously in the ideal, and relations of the form (22) are in the ideal by Proposition 5.1. Moreover, each of these equations has exactly one term of the form described in Theorem 4.2.
It remains to check that \(<_{\omega}\) picks out the desired leading terms. This is clear for the usual Plucker relations. To see this for the relations of the form (22), observe that swapping any \(c_{i}\) from the bottom row to a higher row will automatically force the bottom row to contain a larger number, so it will have higher rank in the Plucker poset \(\mathcal{P}\) than the original smallest variable did before swapping. Since \(<_{\omega}\) is a revlex monomial ordering on a linear extension of \(\mathcal{P}\), this implies that the leading term must be of the form described in Theorem 4.2.
**Proposition 5.5**.: _Let \(S=[\alpha+1,\alpha+m]\) where \(\alpha\geq 0\), \(\alpha+m\leq n\), \(r<m\), and define \(S^{\vee}\) and \(r^{\vee}\) as in Construction 3.14. Suppose \(\operatorname{gb}(\mathcal{J}_{S\leq r}))\) is the minimal Grobner basis of \(\mathcal{J}_{S\leq r}\) given in Theorem 5.4. Then \(\{g^{\vee}:g\in\operatorname{gb}(\mathcal{J}_{S\leq r})\}\) (where \(g^{\vee}\) is defined in Construction 3.14) is a minimal Grobner basis for \(\mathcal{J}_{S^{\vee}\leq r^{\vee}}\)._
Proof.: Let \(g\in\operatorname{gb}(\mathcal{J}_{S\leq r}))\). Then \(g^{\vee}\in\mathcal{J}_{S^{\vee}\leq r^{\vee}}\) by Proposition 3.15. We first show that the leading term of \(g^{\vee}\), denoted \(\operatorname{In}(g^{\vee})\), is indeed in the ideal \(\operatorname{In}(\mathcal{J}_{S^{\vee}\leq r^{\vee}})\). Suppose the initial term of \(g\) is \(\mathbf{a}=\Pi_{i=1}^{d}[\mathbf{a}^{(i)}]\) and let \(\mathbf{b}=\Pi_{i=1}^{d}[\mathbf{b}^{(i)}]\) be an arbitrary term in \(g\) that is not the initial term. Assume also \(\mathbf{a}^{(i)}<_{\omega}\mathbf{a}^{(i+1)}\) and \(\mathbf{b}^{(i)}<_{\omega}\mathbf{b}^{(i+1)}\) for each \(i\). Let \(\mathbf{a}^{\vee},\mathbf{b}^{\vee}\) be defined as in Construction 3.14. For \(i\in[d]\), write \((\mathbf{a}^{\vee})^{(i)}=\mathbf{a}^{(d+1-i)\vee}\) and similarly \((\mathbf{b}^{\vee})^{(i)}=\mathbf{b}^{(d+1-i)\vee}\). Let \(j\in[d]\) be the largest index such that \(\mathbf{a}^{(j)}\neq\mathbf{b}^{(j)}\). By the description of the Grobner basis in Theorem 5.4 we have that \(\mathbf{a}^{(j)}<_{\omega}\mathbf{b}^{(j)}\), so
\[(\mathbf{a}^{\vee})^{(d+1-j)}=\mathbf{a}^{(j)\vee}>_{\omega}\mathbf{b}^{(j) \vee}=(\mathbf{b}^{\vee})^{(d+1-j)},\]
and \((\mathbf{a}^{\vee})^{(j^{\prime})}=(\mathbf{b}^{\vee})^{(j^{\prime})}\) for all \(j^{\prime}<d+1-j\). By the definition of revlex we see that \(\mathbf{a}^{\vee}=\Pi_{i=1}^{d}[\mathbf{a}^{(i)\vee}]>_{\omega}\Pi_{i=1}^{d}[ \mathbf{b}^{(i)\vee}]=\mathbf{b}^{\vee}\), so \(\mathbf{a}^{\vee}\) is the leadterm of \(g^{\vee}\). By Proposition 4.6 we have that \(\operatorname{In}(g^{\vee})\in\operatorname{In}(\mathcal{J}_{S^{\vee}\leq r^ {\vee}})\). Moreover, it is straightforward to verify that every element in \(\operatorname{In}(\mathcal{J}_{S^{\vee}\leq r^{\vee}})\) is the initial term of some \(g^{\vee}\) where \(g\in\operatorname{gb}(\mathcal{J}_{S\leq r})\). The result then follows.
**Example 5.6**.: Suppose \(n=5\), \(k=2\), and consider \(X_{[5,1]^{\circ}\leq 1}\). Then \(\varphi(X_{[5,1]^{\circ}\leq 1})=X_{[2,4]\leq 2}\subset\operatorname{Gr}(3,5)\). There is a single degree \(3\) element of \(\operatorname{gb}(\mathcal{J}_{[2,4]\leq 2})\):
\[\begin{array}{|c|c|c|}\hline 1&1&2\\ \hline 2&3&4\\ \hline 4&5&5\\ \hline\end{array}-\begin{array}{|c|c|c|}\hline 1&1&2\\ \hline 2&4&4\\ \hline 3&5&5\\ \hline\end{array}-\begin{array}{|c|c|c|}\hline 1&1&3\\ \hline 2&2&4\\ \hline 4&5&5\\ \hline\end{array}\]
Therefore, there is a single degree \(3\) generator of \(\operatorname{gb}(\mathcal{J}_{[5,1]^{\circ}\leq 1})\):
\[\begin{array}{|c|c|c|}\hline 1&2&3\\ \hline 3&4&5\\ \hline\end{array}-\begin{array}{|c|c|c|}\hline 1&2&4\\ \hline 3&3&5\\ \hline\end{array}-\begin{array}{|c|c|c|}\hline 1&3&3\\ \hline 2&4&5\\ \hline\end{array}.\]
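Generators of the form (22) can be expanded mechanically: one sums over \(\sigma\), discards terms in which some factor repeats an index, and sorts each surviving factor while tracking signs. The sketch below is ours; the data \(\mathbf{c}=(2,3,4)\), \(\lambda=(1,1,1)\) and the fixed entries are our reading of the example above, and with them the code recovers the three displayed terms of the degree-3 generator.

```python
from itertools import permutations

def expand_generator(c, frames):
    """
    Expand sum_{sigma} sign(sigma) * prod_j [a_j, c-block, b_j]  (cf. (22)).
    frames[j] = (a_j, lambda_j, b_j).  Terms with a repeated index are dropped,
    and each surviving factor is sorted, picking up the sign of the sort.
    """
    r1, terms = len(c), {}
    for sigma in permutations(range(r1)):
        inv = sum(1 for i in range(r1) for j in range(i + 1, r1) if sigma[i] > sigma[j])
        sign, pos, factors = (-1) ** inv, 0, []
        for a, lam, b in frames:
            entries = list(a) + [c[sigma[pos + t]] for t in range(lam)] + list(b)
            pos += lam
            if len(set(entries)) < len(entries):
                sign = 0
                break
            for i in range(len(entries)):            # bubble sort, tracking the sign
                for j in range(len(entries) - 1 - i):
                    if entries[j] > entries[j + 1]:
                        entries[j], entries[j + 1] = entries[j + 1], entries[j]
                        sign = -sign
            factors.append(tuple(entries))
        if sign:
            key = tuple(sorted(factors))
            terms[key] = terms.get(key, 0) + sign
    return {k: v for k, v in terms.items() if v}

c = (2, 3, 4)
frames = [((), 1, (4, 5)), ((1,), 1, (5,)), ((1, 2), 1, ())]
for term, coeff in expand_generator(c, frames).items():
    print(coeff, term)
# 1 ((1, 2, 4), (1, 3, 5), (2, 4, 5))
# -1 ((1, 2, 3), (1, 4, 5), (2, 4, 5))
# -1 ((1, 2, 4), (1, 2, 5), (3, 4, 5))
```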
Finally, we have the following statement for the Grobner basis of an arbitrary positroid variety.
**Theorem 5.7**.: _Suppose \(\Pi_{f}=\bigcap_{i}\Pi_{f_{i}}\) where each \(\Pi_{f_{i}}\) is a basic positroid variety. Then \(\bigcup_{i}\operatorname{gb}(\mathcal{J}_{f_{i}})\) is a minimal Grobner basis for \(\mathcal{J}_{f}\)._
Proof.: It suffices to show \(\mathrm{In}(\mathcal{J}_{f})=\sum_{i}\mathrm{In}(\mathcal{J}_{f_{i}})\), and this follows directly from Theorem 4.13.
**Remark 5.8**.: In [10] there is a general result proven about concatenating Grobner bases, based on degeneration of splittings, where both splittings are on the same affine space. Based on discussions with Knutson, we expect that there should be a generalization in which the ambient space degenerates too (as in the Hodge degeneration of the Grassmannian) that would produce our Theorem 5.7 about concatenation.
## 6. Promotion on standard monomials
Let **promotion** be the map \(\mathrm{prom}:B(n,k,d)\longrightarrow B(n,k,d)\) defined as follows:
* If \(\mathbf{m}\in B(n,k,d)\) does not contain \(n\), then increase each entry of \(\mathbf{m}\) by \(1\).
* If \(\mathbf{m}\) contains \(n\), replace each \(n\) with \(\bullet\) and perform the following **jeu de taquin (jdt)** slides: \[\begin{array}{|c|c|}\hline a&c\\ \hline b&\bullet\\ \hline\end{array}\longrightarrow\left\{\begin{array}{ll}\begin{array}{|c|c|}\hline a&\bullet\\ \hline b&c\\ \hline\end{array}&\text{ if }b\leq c\text{ or }a,b\text{ do not exist},\\[3ex]\begin{array}{|c|c|}\hline a&c\\ \hline\bullet&b\\ \hline\end{array}&\text{ if }b>c\text{ or }a,c\text{ do not exist},\end{array}\right.\] until the \(\bullet\)'s occupy a straight shape in \(\mathbf{m}\). Replace each \(\bullet\) with \(0\) and increase all entries by \(1\).
See [1, Section 2] for an in-depth description of promotion on semistandard tableaux. Our main theorem of this section is the following. Recall the notation for \(B(n,k,d)\) and \(B_{f}(n,k,d)\) introduced at the start of Subsection 4.2.
**Theorem 6.1**.: _Let \(f\in\mathrm{Bound}(k,n)\) and \(\mathbf{m}=\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\in B(n,k,d)\). Then \(\mathbf{m}\in B_{f}(n,k,d)\) if and only if \(\mathrm{prom}(\mathbf{m})\in B_{\chi(f)}(n,k,d)\). In particular, promotion gives a bijection between \(B_{f}(n,k,d)\) and \(B_{\chi(f)}(n,k,d)\) for all \(k,n,d\) and all \(f\in\mathrm{Bound}(k,n)\)._
**Example 6.2**.: Consider the basic positroid variety \(\Pi_{f}=X_{[2,4]\leq 2}\subset\mathrm{Gr}(3,5)\). We have \(\mathbf{m}=[1,2,4]\cdot[2,3,5]\in\mathrm{In}(\mathcal{J}_{f})\) and \(\mathrm{prom}(\mathbf{m})=[1,2,3]\cdot[3,4,5]\in\mathrm{In}(\mathcal{J}_{ \chi(f)})\). Note that not only does promotion take a standard monomial of \(\Pi_{f}\) to a standard monomial of \(\Pi_{\chi(f)}\), but it also takes semistandard tableaux in the initial ideal \(\mathrm{In}(\mathcal{J}_{f})\) to monomials in the ideal \(\mathrm{In}(\mathcal{J}_{\chi(f)})\). We demonstrate this process below:
\[\begin{array}{|c|c|}\hline 1&2\\ \hline 2&3\\ \hline 4&5\\ \hline\end{array}\rightarrow\begin{array}{|c|c|}\hline 1&2\\ \hline 2&3\\ \hline 4&\bullet\\ \hline\end{array}\rightarrow\begin{array}{|c|c|}\hline 1&2\\ \hline 2&3\\ \hline\bullet&4\\ \hline\end{array}\rightarrow\begin{array}{|c|c|}\hline 1&2\\ \hline\bullet&3\\ \hline 2&4\\ \hline\end{array}\rightarrow\begin{array}{|c|c|}\hline\bullet&2\\ \hline 1&3\\ \hline 2&4\\ \hline\end{array}\rightarrow\begin{array}{|c|c|}\hline 0&2\\ \hline 1&3\\ \hline 2&4\\ \hline\end{array}\rightarrow\begin{array}{|c|c|}\hline 1&3\\ \hline 2&4\\ \hline 3&5\\ \hline\end{array}\]
**Lemma 6.3**.: _Let \(\varphi:\mathbb{K}[\mathrm{Gr}(k,n)]\rightarrow\mathbb{K}[\mathrm{Gr}(n-k,n)]\) be defined as in Construction 3.14 and let \(\mathbf{m}=\prod_{i=1}^{d}[\mathbf{a}^{(i)}]\in B(k,n,d)\) be any semistandard tableau. Then_
\[\varphi\circ\mathrm{prom}(\mathbf{m})=\mathrm{prom}\circ\varphi(\mathbf{m}).\]
Proof.: Let \(T\) and \(T^{\vee}\) be the semistandard tableaux corresponding to \(\mathbf{m}\) and \(\mathbf{m}^{\vee}\), with each entry \(n\) replaced by "\(\bullet\)". Let \(\operatorname{\mathsf{infusion}}(T)\) and \(\operatorname{\mathsf{infusion}}(T^{\vee})\) be the resulting tableaux after moving all "\(\bullet\)" entries to the top left corner using jdt slides.
**Case I.** (No "\(\bullet\)" in \(T\)): In this case, all entries in the bottom row of \(T^{\vee}\) are "\(\bullet\)". Therefore all jdt slides occurring in \(T^{\vee}\) move "\(\bullet\)" up but not to the left. Promotion simply adds one to all entries in \(\mathbf{m}\). For \(\mathbf{m}^{\vee}\), the effect of promotion is equivalent to adding one cyclically to each entry (so \(n\) becomes \(1\)) and then sorting within each column. As a result, \(\varphi\circ\operatorname{\mathsf{prom}}(\mathbf{m})=\operatorname{\mathsf{ prom}}(\mathbf{m}^{\vee})\) in this case.
**Case II.** (No "\(\bullet\)" in \(T^{\vee}\)): This follows from the same reasoning as in Case I.
**Case III.** ("\(\bullet\)" in both \(T\) and \(T^{\vee}\)): Let \(\delta\) be the number of "\(\bullet\)" in \(T\). By assumption, \(1\leq\delta\leq d-1\) and there are \((d-\delta)\) "\(\bullet\)" in \(T^{\vee}\). By definition, the set of (non-"\(\bullet\)") entries in column \(i\) of \(T\) is disjoint from the set of entries in column \(d+1-i\) of \(T^{\vee}\). We will show that this property still holds for \(\operatorname{\mathsf{infusion}}(T)\) and \(\operatorname{\mathsf{infusion}}(T^{\vee})\).
We use the following two different orders of jdt slides for \(T\) and \(T^{\vee}\):
* For \(T\), we always slide into the \(\bullet\) that is in the leftmost column. Equivalently, we first move the \(\bullet\) in column \(d-\delta+1\) to position \((1,1)\), then the \(\bullet\) in column \(d-\delta+2\) to position \((1,2)\) and so on.
* For \(T^{\vee}\), we move each \(\bullet\), from left to right, until it changes column. When all \(\bullet\) have changed columns, we start with the leftmost \(\bullet\) again and repeat the process until all \(\bullet\) end up in the first row, in columns \(1\) through \(\delta\).
Notice that there are exactly \((\delta d-\delta^{2})\) jdt slides that change the column index of \(\bullet\) in both \(T\) and \(T^{\vee}\). Moreover, for any \(i\in[\delta d-\delta^{2}]\), the \(\bullet\) in \(T\) moved from column \(j+1\) to \(j\) in the \(i\)-th slide if and only if the \(\bullet\) in \(T^{\vee}\) moved from column \(d+1-j\) to \(d-j\).
Let \(T_{i}\) and \(T_{i}^{\vee}\) be the tableaux after the \(i\)-th such jdt slides.
**Claim 6.4**.: _For all \(j\in[d]\), the set of entries in column \(j\) of \(T_{i}\) is disjoint from the set of entries in column \(d+1-j\) of \(T_{i}^{\vee}\)._
Proof.: We proceed by induction on \(i\). For \(i=0\), \(T_{0}=T\) and \(T_{0}^{\vee}=T^{\vee}\). The sets of entries are disjoint by definition. Suppose the statement holds for \(i-1\). Let \(j\) be the column index such that \(\bullet\) moved from column \(j+1\) to \(j\) in \(T\). Let \(\alpha\) and \(\beta\) be the entries that are switched with \(\bullet\) in \(T_{i-1}\) and \(T_{i-1}^{\vee}\), respectively. Let \(C_{1},C_{2}\) be the sets of entries in columns \(j\) and \(j+1\) of \(T_{i-1}\), respectively. Then
\[\alpha=\max\{a\in[n]:|C_{1}\cap[a,n]|>|C_{2}\cap[a,n]|\}. \tag{23}\]
Similarly, since \(C_{1}^{\vee}:=[n]\setminus C_{1}\) and \(C_{2}^{\vee}:=[n]\setminus C_{2}\) are the sets of entries in column \(d-j\) and \(d+1-j\) in \(T_{i-1}^{\vee}\), we have
\[\beta=\max\{b\in[n]:|C_{2}^{\vee}\cap[b,n]|>|C_{1}^{\vee}\cap[b,n]|\}. \tag{24}\]
Since
\[|C_{1}\cap[a,n]|+|C_{1}^{\vee}\cap[a,n]|=|C_{2}\cap[a,n]|+|C_{2}^{\vee}\cap[a, n]|,\]
it is easy to see that the right-hand sides of (23) and (24) agree. Therefore \(\alpha=\beta\), and thus the Claim holds for \(i\).
By Claim 6.4, the set of entries in column \(j\) of \(T_{\delta d-\delta^{2}}\) is disjoint from entries in column \(d+1-j\) of \(T_{\delta d-\delta^{2}}^{\vee}\). Since \(T_{\delta d-\delta^{2}}\) and \(\operatorname{infusion}(T)\) have the same set of entries for each column and so do \(T_{\delta d-\delta^{2}}^{\vee}\) and \(\operatorname{infusion}(T^{\vee})\), we are done.
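The identity used above forces the two maxima to coincide; the following few lines of Python check this agreement between (23) and (24) on one hypothetical pair of column contents (the sets are illustrative and not taken from a specific tableau).

```python
def switched_entry(A, B, n):
    """max{ a in [n] : |A ∩ [a,n]| > |B ∩ [a,n]| }, the entry that trades
    places with the bullet (cf. Eqs. (23) and (24))."""
    return max(a for a in range(1, n + 1)
               if len(A & set(range(a, n + 1))) > len(B & set(range(a, n + 1))))

n = 5
C1, C2 = {2, 4}, {1, 3}                        # hypothetical column contents
C1v, C2v = set(range(1, n + 1)) - C1, set(range(1, n + 1)) - C2
alpha = switched_entry(C1, C2, n)              # Eq. (23)
beta = switched_entry(C2v, C1v, n)             # Eq. (24)
assert alpha == beta == 4
```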
**Example 6.5**.: We demonstrate Cases I and III of Lemma 6.3 in \(\operatorname{Gr}(3,5)\). The following two diagrams commute:
[Commuting diagrams for Case I and Case III omitted.]
\(A^{\prime}\) is a generalized antidiagonal in \(T_{j-1}\) with \(b\in A^{\prime}\) being the top entry in column \(\sigma\). Therefore sliding \(b\) to the right does not violate any condition for \(A^{\prime}\) to be a generalized antidiagonal.
**Case IV.**\((a,b\in A)\): Since \(a,b\in A\), we have \(b>a\) and \(A\) is still a generalized antidiagonal after sliding \(b\) to the right.
By Claim 6.7, there is a generalized antidiagonal with entries in \(S\) after all jdt slides. Since all entries will increase by \(1\) after jeu de taquin, \(\operatorname{prom}(\mathbf{m})\) contains a generalized antidiagonal of size \(r+1\) with entries in \(\chi(S)\). By Theorem 4.2, \(\operatorname{prom}(\mathbf{m})\in\operatorname{In}(\mathcal{J}_{\chi(S) \leq r})\).
( \(\Longleftarrow\)): This direction follows from the same reasoning as above, with the slight change of replacing \(\operatorname{prom}\) with \(\operatorname{prom}^{-1}\).
**Proposition 6.8**.: _For any cyclic interval \(S\subset[n]\) and any \(\mathbf{m}\in B(k,n,d)\), we have \(\mathbf{m}\in\operatorname{In}(\mathcal{J}_{S\leq r})\) if and only if \(\operatorname{prom}(\mathbf{m})\in\operatorname{In}(\mathcal{J}_{\chi(S) \leq r})\)._
Proof.: Since \(S\subset[n]\) is a cyclic interval, either \(S\subset[n-1]\) or \(S^{\vee}\subset[n-1]\). If \(S\subset[n-1]\), this is the content of Lemma 6.6. If \(S^{\vee}\subset[n-1]\), by Proposition 4.6,
\[\varphi(\mathbf{m})=\mathbf{m}^{\vee}\in\operatorname{In}(\mathcal{J}_{S^{ \vee}\leq r^{\vee}}).\]
By Lemma 6.3 and Lemma 6.6,
\[\varphi\circ\operatorname{prom}(\mathbf{m})=\operatorname{prom}\circ\varphi(\mathbf{m})\in\operatorname{In}(\mathcal{J}_{\chi(S^{\vee})\leq r^{\vee}}).\]
Since \(\chi(S^{\vee})=\chi(S)^{\vee}\), by Proposition 4.6,
\[\operatorname{prom}(\mathbf{m})\in\operatorname{In}(\mathcal{J}_{\chi(S) \leq r}).\qed\]
Proof of Theorem 6.1.: Let \(\{S_{i}\leq r_{i}\}\) be the set of interval rank conditions associated to \(f\). By Theorem 4.13,
\[\sum_{i}\operatorname{In}(\mathcal{J}_{S_{i}\leq r_{i}})=\operatorname{In}( \mathcal{J}_{f}),\]
where both sides are square free monomial ideals. Therefore for \(\mathbf{m}\in B(k,n,d)\),
\[\mathbf{m}\in\operatorname{In}(\mathcal{J}_{f})\iff\mathbf{m}\in \operatorname{In}(\mathcal{J}_{S_{i}\leq r_{i}})\]
for some \(i\). Since \(\sum_{i}\operatorname{In}(\mathcal{J}_{\chi(S_{i})\leq r_{i}})=\operatorname{ In}(\mathcal{J}_{\chi(f)})\), by Proposition 6.8,
\[\mathbf{m}\in\operatorname{In}(\mathcal{J}_{f})\iff\operatorname{prom}( \mathbf{m})\in\operatorname{In}(\mathcal{J}_{\chi(f)})\]
which is equivalent to
\[\mathbf{m}\in B_{f}(k,n,d)\iff\operatorname{prom}(\mathbf{m})\in B_{\chi(f)}(k,n,d).\qed\]
|
2309.05197 | Learning Sequential Acquisition Policies for Robot-Assisted Feeding | A robot providing mealtime assistance must perform specialized maneuvers with
various utensils in order to pick up and feed a range of food items. Beyond
these dexterous low-level skills, an assistive robot must also plan these
strategies in sequence over a long horizon to clear a plate and complete a
meal. Previous methods in robot-assisted feeding introduce highly specialized
primitives for food handling without a means to compose them together.
Meanwhile, existing approaches to long-horizon manipulation lack the
flexibility to embed highly specialized primitives into their frameworks. We
propose Visual Action Planning OveR Sequences (VAPORS), a framework for
long-horizon food acquisition. VAPORS learns a policy for high-level action
selection by leveraging learned latent plate dynamics in simulation. To carry
out sequential plans in the real world, VAPORS delegates action execution to
visually parameterized primitives. We validate our approach on complex
real-world acquisition trials involving noodle acquisition and bimanual
scooping of jelly beans. Across 38 plates, VAPORS acquires much more
efficiently than baselines, generalizes across realistic plate variations such
as toppings and sauces, and qualitatively appeals to user feeding preferences
in a survey conducted across 49 individuals. Code, datasets, videos, and
supplementary materials can be found on our website:
https://sites.google.com/view/vaporsbot. | Priya Sundaresan, Jiajun Wu, Dorsa Sadigh | 2023-09-11T02:20:28Z | http://arxiv.org/abs/2309.05197v2 | # Learning Sequential Acquisition Policies for Robot-Assisted Feeding
###### Abstract
A robot providing mealtime assistance must perform specialized maneuvers with various utensils in order to pick up and feed a range of food items. Beyond these dexterous low-level skills, an assistive robot must also plan these strategies in sequence over a long horizon to clear a plate and complete a meal. Previous methods in robot-assisted feeding introduce highly specialized primitives for food handling without a means to compose them together. Meanwhile, existing approaches to long-horizon manipulation lack the flexibility to embed highly specialized primitives into their frameworks. We propose Visual Action Planning OveR Sequences (**VAPORS**), a framework for long-horizon food acquisition. **VAPORS** learns a policy for high-level action selection by leveraging learned latent plate dynamics in simulation. To carry out sequential plans in the real world, **VAPORS** delegates action execution to visually parameterized primitives. We validate our approach on complex real-world acquisition trials involving noodle acquisition and bimanual scooping of jelly beans. Across 38 plates, **VAPORS** acquires much more efficiently than baselines, generalizes across realistic plate variations such as toppings and sauces, and qualitatively appeals to user feeding preferences in a survey conducted across 49 individuals. Code, datasets, videos, and supplementary materials can be found on our website.
Deformable Manipulation, Dexterous Manipulation
## 1 Introduction
Millions of people are impacted logistically, socially, and physically by the inability to eat independently due to upper mobility impairments or age and health-related changes [1; 2; 3]. Robot-assisted feeding has the potential to greatly improve the quality of life for these individuals while reducing caregiver burden. However, realizing a performant system in practice remains challenging. For instance, humans eat spaghetti noodles as shown in Fig. 1 using nuanced fork-twirling motions. Dishes like ramen require even more diverse strategies like scooping soup or acquiring meat and noodles. Thus, not only must an autonomous feeding system employ various utensils and strategies to handle different foods and quantities, but it must also operate over long horizons to finish a meal.
Figure 1: **Visual Action Planning OveR Sequences (VAPORS) employs a high level policy \(\pi_{H}\) to select amongst discrete manipulation strategies \(h\), such as grouping and twirling, and a low-level vision-parameterized policy \(\pi_{L}\) to execute these actions \(a_{t}\) for long-horizon dexterous food acquisition.**
Prior assistive feeding work has focused on learning individual low-level vision-parameterized primitives for food manipulation. Examples include separate policies for skewering [4; 5; 6], scooping [7], bite transfer [8; 9; 10], cutting [11; 12; 13], and pushing food piles [14]. While highly specialized, these policies cannot reason over an extended horizon or make use of multiple strategies for more effective plate clearance. Humans, on the other hand, interleave _acquisition_ and _rearrangement_ actions with ease--pushing multiple peas together before scooping instead of painstakingly acquiring each individual pea or gathering noodles closer to each other before twirling with a fork. Replicating this long-horizon foresight in robotic feeding has yet to be demonstrated.
Recent work in skill-based reinforcement learning (RL) provides a natural way to model long-horizon manipulation sequences hierarchically. This entails first learning a high-level policy for composing skills [15; 16; 17], and then optionally inferring the parameters of low-level skills separately [18; 19; 20]. These approaches tend to favor learning from simulation to scale data collection [21], but current state-of-the-art simulators lack high-fidelity models for food deformation, visuals, and cutlery interaction. This complicates learning food manipulation policies in simulation and transferring them to real [13]. Existing hierarchical approaches also assume that the low-level skills come from a general-purpose library of primitives such as grasping and path planning [19; 22; 18; 15; 23], limiting their applicability to the food domain which requires highly specialized behaviors. Thus, we seek to find an appropriate layer of abstraction for feeding, which can leverage the benefits of (1) hierarchical planning for long-horizon manipulation and (2) vision-based primitives for fine-grained control. Our key insight is that learning from simulated experience only at a _high-level_, which need not capture the intricacies of food dynamics, and incorporating visual planning to instantiate _low-level_ specialized primitives, yields a powerful approach to dexterous, multi-step food manipulation.
In this work, we present **VAPORS**: Visual Action Planning OveR Sequences, a unified framework for food manipulation. Our approach is decoupled into a high-level planner, which sequentially composes low-level primitives. We first learn a policy in simulation that models latent dynamics of plates from images. Specifically, we use segmented image observations as a representation space, which captures the distribution of food items and is transferable between simulation and reality for high-level plans. We train the policy using model-based RL with a reward that encourages both acquisition and rearrangement. Separately, we instantiate a library of specialized primitives in the real world from learned food pose estimation and segmentation. Finally, we use the learned high-level planner on segmented real food images to plan sequences of primitives for long-horizon acquisition.
We experimentally validate our approach on two real food manipulation tasks: robotic noodle acquisition and bimanual scooping. Across both real-world trials and a comprehensive user study of 49 users, **VAPORS** achieves the highest efficiency, plate clearance, and qualitative user ratings compared to heuristic and single-primitive baselines, all while generalizing to unseen plates.
## 2 Related Work
**Robot-Assisted Feeding.** Recently, a number of devices for mealtime assistance have become available on the market [24; 25], but are limited in functional reach due to reliance on pre-programmed trajectories or teleoperation by users. While _bite transfer_ of a food item to a user's mouth is the eventual goal of autonomous feeding [9; 8; 10], we focus on _bite acquisition_ as a primary initial step for downstream feeding. Prior work in bite acquisition demonstrates the effectiveness of visual planning for precise manipulation. Feng et al. [6], Gordon et al. [5; 26] and Sundaresan et al. [4] leverage bounding box localization, food pose estimation, and visual servoing to geometrically plan precise fork skewering motions. Similarly, Grannen et al. [7] and Suh and Tedrake [14] plan bimanual scooping and grouping actions, respectively, for segmented food piles. These works focus only on developing a specialized individual primitive for food manipulation. In isolation, this does not capture many long-horizon real-world feeding scenarios with multiple utensils and strategies.
**Long-Horizon Planning and Control.** Several recent frameworks tackle long-horizon manipulation by separating motion-level decision-making from sequential plans. Traditionally, task-and-motion-planning (TAMP) approaches tend to assume extensive domain knowledge including after-effects of actions and fixed task plans [27; 28; 29; 30; 31; 28]. In feeding, plate dynamics can be highly
uncertain, and state estimation is notoriously challenging, rendering these approaches ineffective. An alternative approach is model-based planning and control, with recent impressive results on complex tasks like dough manipulation [16; 32; 33]. This family of methods leverage learned environment dynamics over visual states like images [34; 35; 36; 37; 33], keypoints [38], or particle-based representations [16; 32] to sample and plan action sequences that maximize predicted rewards. However, these methods do not scale well to high-dimensional continuous action spaces such as that of food acquisition. To address this, hierarchical RL decouples policies into a high-level planner which selects amongst discrete but parameterized low-level primitives [39]. These works have demonstrated promising results on simulated long-horizon tabletop manipulation [18; 19; 15], but have yet to consider (1) real-world deployment beyond carefully controlled experimental setups, or (2) complex manipulation beyond commonplace primitives like pick-place, path-planning, and grasping. In contrast, we consider highly diverse plates requiring specialized primitives and tools.
**Learning and Control for Manipulation in the Real World.** A large body of robotics research focuses on learning real-world policies for manipulation either through sim-to-real transfer or exclusively from real interactions. With sufficient domain randomization, sim-to-real transfer has proven effective for tasks involving rigid objects or a limited set of deformable items like cloth, which state-of-the-art simulators support [40; 41; 42]. However, adapting these simulators to modeling food appearance and deformation is highly non-trivial. Meanwhile, learning exclusively from real data has been shown to work well in challenging domains such as semantic grasping [43] or cable untangling [44; 45; 46]. These approaches rely on state representations that are scalable to learn, such as descriptors learned from self-supervised interaction [43] or keypoints learned from a small amount of manually annotated images [47; 48; 49; 50]. In our setting, it is difficult to scale real-world data collection across the range of food shapes, appearances, and properties a robot may encounter. Self-supervised learning is also complicated due to resets and utensil interchange. We instead take a hybrid approach which takes advantage of simulation for modeling high-level plate dynamics from large-scale interactions, but leverages visual planning at the low level for precise real manipulation.
## 3 Problem Statement
We formalize the long-horizon food acquisition setting by considering an agent interacting in a finite-horizon Partially Observable Markov Decision Process (POMDP). This is defined by the tuple \((\mathcal{S},\mathcal{O},\mathcal{A},\mathcal{T},\mathcal{R},T,\rho_{0})\). We assume access to plate image observations \(o_{t}\in\mathbb{R}_{+}^{W\times H\times C}=\mathcal{O}\) of unknown plate states \(\mathcal{S}\), with the initial state distribution given by \(\rho_{0}\). Here, \(W\), \(H\), and \(C\) denote the image dimensions. \(\mathcal{A}\) denotes the action space, and \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\) represents the unknown transition function mapping states and actions to future states. The time horizon \(T\) denotes the discrete budget of actions to clear the plate and \(\mathcal{R}(s,a)\) refers to the reward which measures progress towards plate clearance. Our goal is to learn a policy \(\pi(a_{t}|o_{t})\) that maximizes expected total return: \(\mathbb{E}_{\pi,\rho_{0},\mathcal{T}}[\sum_{t}R(s_{t},a_{t})]\), with \(t\leq T\).
To do so, we decouple \(\pi\) into separate high and low-level sub-policies. We assume access to \(K\) discrete manipulation primitives \(h^{k}\), \(k\in\{1,\dots,K\}\), and learn a high-level policy \(\pi_{H}\) which selects amongst these primitives. Additionally, we learn a low-level policy \(\pi_{L}\) which continuously parameterizes a selected primitive according to visual input. The components we aim to learn are summarized below, where \(h^{k}\) denotes a discrete primitive type and \(a_{t}\) denotes its continuous low-level instantiation:
\[\text{High-level policy}:\pi_{H}(h^{k}|o_{\leq t},a_{\leq t-1})\qquad\text{ Low-level policy}:\pi_{L}(a_{t}|o_{t},h^{k})\]
We consider low-level actions \(a_{t}\), parameterized by the position of the tip of a utensil \((x,y,z)\) and utensil roll and pitch \((\gamma,\beta)\). Here, \(\beta=0^{\circ}\) corresponds to an untilted fork handle, for instance, and \(\gamma=180^{\circ}\) corresponds to the fork tines being horizontal when viewed top-down (Fig. 2).
### 3.1 State-Action Representations
In this section, we outline the visual state and action representations which are at the core of our learning approach introduced in Section 4.
**Visual State Space.** Our approach makes use of RGB-D images and segmented plate observations, \(I_{t}\in\mathbb{R}_{+}^{W\times H\times 3}\), \(D_{t}\in\mathbb{R}^{W\times H}\), \(M_{t}\in\mathbb{R}_{+}^{W\times H}\) at different levels of abstraction. We leverage binary
segmentation masks to capture the spread of food items on a plate, informing high-level planning with \(\pi_{H}\), and RGB-D observations as input to \(\pi_{L}\) which better capture fine geometric details of food.
**Action Parameterization.** We consider an agent that may either perform _acquisition_ or _rearrangement_ actions, parameterized below. Acquisition actions attempt to pick up a bite of food, and rearrangement actions consolidate items. For example, as a plate of noodles becomes more empty, the robot may need to employ a rearrangement action by pushing multiple strands together before twirling (acquiring) for a satisfactory bite size.
In _acquisition_, a robot with a utensil-mounted end-effector approaches the position \((x_{d},y_{d},z_{d})\) in the workspace, and executes an acquisition motion parameterized by roll \(\gamma\) and pitch \(\beta\) (i.e. twirling, skewering, scooping, etc.). Here, \((x_{d},y_{d},z_{d})\) denotes the _densest_ location of the plate, where food is most closely packed to encourage a high-volume bite. Specifically, \(a_{t,\mathrm{acquis}}=(x_{d},y_{d},z_{d},\gamma,\beta)\) (1).
The intent of _rearrangement_ is to bring food items from the sparsest plate region to the densest by pushing from \((x_{f},y_{f},z_{f})\) to \((x_{d},y_{d},z_{d})\), while maintaining contact with the plate throughout. As this is a planar push, we simply orient the tool orthogonal to the push direction, such that \(\gamma=\arctan\left(\frac{y_{f}-y_{d}}{x_{f}-x_{d}}\right)\), and untilted (\(\beta=0^{\circ}\)): \(a_{t,\mathrm{rearrange}}=(x_{d},y_{d},z_{d},x_{f},y_{f},z_{f})\) (2).
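To make these two parameterizations concrete, the short sketch below assembles the action vectors from already-sensed plate locations; the function names, the example coordinates, and the use of `atan2` in place of \(\arctan\) (to resolve the quadrant) are our own additions rather than the paper's API.

```python
import numpy as np

def acquisition_action(dense_xyz, roll_gamma, pitch_beta):
    """a_acquis = (x_d, y_d, z_d, gamma, beta): approach the densest
    plate region and run the tool-specific acquisition motion."""
    x_d, y_d, z_d = dense_xyz
    return np.array([x_d, y_d, z_d, roll_gamma, pitch_beta])

def rearrangement_action(dense_xyz, far_xyz):
    """a_rearrange = (x_d, y_d, z_d, x_f, y_f, z_f): planar push from the
    sparsest region toward the densest, with the tool orthogonal to the
    push direction and untilted (beta = 0)."""
    x_d, y_d, z_d = dense_xyz
    x_f, y_f, z_f = far_xyz
    gamma = np.arctan2(y_f - y_d, x_f - x_d)   # push orientation
    return np.array([x_d, y_d, z_d, x_f, y_f, z_f]), gamma

# hypothetical plate locations in the robot frame (meters)
push, gamma = rearrangement_action((0.40, 0.05, 0.02), (0.30, -0.10, 0.02))
print(np.degrees(gamma))
```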
## 4 VAPORS: Visual Action Planning OveR Sequences
Within the visual state and action space outlined in Section 3.1, we present our approach **VAPORS** for tackling long-horizon food acquisition. First, **VAPORS** learns a policy \(\pi_{H}\), detailed in Section 4.1, to select amongst high-level strategies for long-horizon plate clearance via model-based planning. Then, **VAPORS** learns a low-level policy \(\pi_{L}\), detailed in Section 4.2, which leverages visually-parameterized primitives to carry out the generated sequential plans for real-world food acquisition.
### 4.1 Learning High-Level Plans from Simulation
Our goal is to first learn a policy \(\pi_{H}\) for selecting amongst \(K\) discrete acquisition or rearrangement strategies without concern for the low-level action parameters. To do so, we learn a latent dynamics model of the plate from segmented image observations, and instantiate \(\pi_{H}(h^{k}|M_{\leq t},a_{\leq t-1})\), \(k\in\{1,\dots,K\}\) with model-based planning over this learned dynamics model. In this section, \(\tau\) denotes the running counter of high-level primitives executed so far, and \(t\) denotes the current timestep.
**Simulator Overview.** We train \(\pi_{H}\) entirely in simulation, where interactions can be collected at scale as opposed to the real world where manual plate resets and potential food waste are prohibitively expensive. As current simulators lack out-of-the-box support for many feeding scenarios, we develop a custom simulated food manipulation environment in Blender 2.92 [40], visualized in Figure 3 and further detailed in Appendix C.1. The simulator exposes RGB images \(I_{t}\), binary food segmentation masks \(M_{t}\), and food item positional states \(s_{t}=\{(x_{i},y_{i},z_{i})\}_{i\in(1,\dots,N)}\). Using this information, we design rewards for food acquisition in terms of ground truth plate state and collect transitions to train \(\pi_{H}\).
**Reward Design.** With access to a simulated testbed for feeding, we train \(\pi_{H}\) to select amongst strategies via model-based reinforcement learning (RL). Our goal of efficient plate clearance can be
Figure 3: **Simulation vs. Real:** We visualize the task of bimanual scooping of jelly beans. Due to the sim-to-real gap, we merely leverage simulation to learn high-level food dynamics, and leave low-level action planning to real vision-parameterized primitives.
Figure 2: **Action Parameterization:** We parameterize _acquisition_ and _rearrangement_ actions relative to the densest \((x_{d},y_{d},z_{d})\) and furthest \((x_{f},y_{f},z_{f})\) regions on the plate, as well as the utensil roll \(\gamma\) and pitch \(\beta\).
specified with a reward that incentivizes either (1) successfully picking up food, or (2) reducing the spread of items on a plate. Optimizing for the first objective alone might lead to plate clearance, but at a slow pace of taking low-volume bites. The second objective encourages rearrangement when the plate is sparse to aid downstream acquisition. Concretely, we express this as a weighted reward with tunable weight \(\alpha\in[0,1]\): \(r_{t}=\alpha(\texttt{PICKUP GAIN})+(1-\alpha)(\texttt{COVERAGE LOSS})\) (3). Here, PICKUP measures the quantity of food items picked up. COVERAGE measures the spread of items on the plate, illustrated in blue in Fig. 2. We provide the details for computing both in Appendix C.2.
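The exact definitions of PICKUP and COVERAGE live in Appendix C.2 and are not reproduced here; the sketch below substitutes simple proxies (acquired item count and mask area fraction) purely to illustrate how the weighted reward in Eq. (3) combines the two terms.

```python
import numpy as np

def coverage(mask):
    """Fraction of pixels covered by food: a simple proxy for item spread."""
    return float(mask.sum()) / mask.size

def reward(n_before, n_after, mask_before, mask_after, alpha=0.5):
    """r_t = alpha * (pickup gain) + (1 - alpha) * (coverage loss), Eq. (3)."""
    pickup_gain = n_before - n_after
    coverage_loss = coverage(mask_before) - coverage(mask_after)
    return alpha * pickup_gain + (1.0 - alpha) * coverage_loss

m0, m1 = np.zeros((64, 64)), np.zeros((64, 64))
m0[20:40, 20:40] = 1.0          # food spread out before the action
m1[25:35, 25:35] = 1.0          # tighter pile / fewer items afterwards
print(reward(15, 13, m0, m1))
```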
**Learning Latent Plate Dynamics.** With a means of measuring task progress via \(r_{t}\) and access to plate observations \(M_{t}\), we propose a model-based agent that learns plate dynamics from segmented observations and uses the learned model to plan actions that maximize reward. We achieve this by training a multi-headed latent dynamics model with the following (Fig. 4): (1) An _encoder_ \(q(z_{t}|M_{\leq t},a_{\leq t-1})\) compressing high-dimensional segmented images \(M_{t}\) into compact latent states \(z_{t}\), (2) A _transition function_ over the latent states \(p(z_{\tau}|z_{\tau-1},h^{k}_{\tau-1})\) with which to imagine rollouts, and (3) A decoded _reward model_ given by \(p(r_{t}|z_{t})\), such that at test time, we can sample action sequences and determine which maximize predicted rewards. We note that the transition function learns to predict high-level plate state changes between \(\tau-1\) and \(\tau\) as a result of executing a primitive \(h^{k}_{\tau}\), rather than between individual timesteps \(t-1\) and \(t\) due to \(a_{t}\).
During training, we collect simulated transitions consisting of the masked image, high-level primitive, low-level action, and reward \(\{(M_{t},h^{k}_{\tau},a_{t},r_{t})\}\). We train each head of this network using the objectives detailed in Appendix D.1.
We note that this approach is highly related to [37] with several crucial design choices. First, we learn plate dynamics over _segmented_ image observations \(M_{t}\) of food items on a plate, as opposed to raw RGB observations. This allows the dynamics model to attend to food items rather than the whole plate, provides an easily transferable representation between simulation and reality, and eases pressure for latent representations to capture irrelevant details in pixel space. Additionally, we learn a policy within an action space of discrete but continuously parameterized primitives as opposed to a high-dimensional space like joint-motor commands. This encourages actions that induce meaningful and perceptible plate changes likely to be encountered in downstream feeding.
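The paper's model is a stochastic latent state-space model in the spirit of [37], trained with the objectives of Appendix D.1; the deterministic three-head sketch below, with arbitrary layer sizes and a single mask as input, is only meant to illustrate the data flow from segmented mask to latent state to predicted reward.

```python
import torch
import torch.nn as nn

class PlateDynamics(nn.Module):
    """Deterministic sketch of the three heads: an encoder over segmented
    masks, a transition over latent states conditioned on the chosen
    primitive, and a reward head.  Layer sizes are illustrative, and the
    history conditioning of the encoder is dropped for brevity."""

    def __init__(self, z_dim=32, n_primitives=2):
        super().__init__()
        self.encoder = nn.Sequential(                  # q(z_t | M_t)
            nn.Conv2d(1, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(z_dim),
        )
        self.transition = nn.Sequential(               # p(z_tau | z_{tau-1}, h)
            nn.Linear(z_dim + n_primitives, 128), nn.ReLU(),
            nn.Linear(128, z_dim),
        )
        self.reward = nn.Sequential(                   # p(r_t | z_t)
            nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, mask, primitive_onehot):
        z = self.encoder(mask)                         # mask: (B, 1, H, W) binary
        z_next = self.transition(torch.cat([z, primitive_onehot], dim=-1))
        return z_next, self.reward(z_next).squeeze(-1)
```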
**Model-Based Planning.** Once trained, we leverage the learned encoder, transition model, and reward model towards instantiating \(\pi_{H}\) as an MPC-style planner with a receding \(T\)-step horizon. At timestep \(t\), we enumerate all \(K^{T}\) future candidate action sequences for the small library of primitives \(K\). Conditioned on a history of observations \(M_{1:t}\) and actions \(a_{1:t-1}\), we imagine the future latent states \(z_{\tau:\tau+T+1}\) under each action sequence \(h^{k}_{\tau:\tau+T}\) via the transition function. Next, we predict decoded rewards according to the reward model \(p(r_{t}|z_{t})\) for each candidate sequence: \(R=\sum_{i=\tau+1}^{\tau+T+1}\ \mathbb{E}\left[p(r_{i}|z_{i})\right]\). Given the sequence of actions \((\hat{h}^{k}_{\tau},\hat{h}^{k}_{\tau+1},\dots,\hat{h}^{k}_{T})\) which maximizes predicted cumulative reward \(R\), we take \(\pi_{H}(M_{\leq t},a_{\leq t-1})=\hat{h}^{k}_{\tau}\), the first primitive in the predicted sequence. After executing this action, we replan with \(\pi_{H}\), terminating when \(\tau=T\). Details of the full planning pipeline, adapted from [37], are provided in Appendix D.2.
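A minimal version of this receding-horizon search is sketched below; `model` is assumed to follow the three-head sketch above, and the exhaustive enumeration over the \(K^{T}\) sequences mirrors the description in the text.

```python
import itertools
import torch
import torch.nn.functional as F

def plan_next_primitive(model, mask, horizon=3, n_primitives=2):
    """Score every primitive sequence of length `horizon` under the learned
    latent dynamics and return the first primitive of the best sequence.
    (History conditioning of the encoder is omitted in this sketch.)"""
    z = model.encoder(mask)                            # latent plate state
    best_return, best_first = -float("inf"), None
    for seq in itertools.product(range(n_primitives), repeat=horizon):
        z_t, total = z, 0.0
        for h in seq:
            onehot = F.one_hot(torch.tensor([h]), n_primitives).float()
            z_t = model.transition(torch.cat([z_t, onehot], dim=-1))
            total += model.reward(z_t).item()          # predicted reward
        if total > best_return:
            best_return, best_first = total, seq[0]
    return best_first                                  # execute, then replan
```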
### 4.2 Visual Policies for Low-Level Real Manipulation
Our learned simulated task dynamics model from Section 4.1 relies on segmented images \(M_{t}\) as an observation space and parameterized primitives as an action space. In this section, we describe the visual state estimation pipelines we use to instantiate our state-action representations on real data.
**Food Segmentation.** To define acquisition and rearrangement actions relative to the poses of food, we learn to segment food items on a plate as shown in Fig. 1. We learn a binary segmentation model \(f_{\mathrm{seg}}\) that maps an RGB image \(I_{t}\) to a food segmentation mask \(\hat{M}_{t}\).
Figure 4: **Latent Plate Dynamics Model: We learn a latent dynamics model of the plate comprised of an encoder \(q\), transition model \(p(z_{\tau}|z_{\tau-1},h^{k}_{\tau-1})\), and a reward model \(p(r_{t}|z_{t})\). We use this model to select action sequences that maximize future rewards.**
**Food Orientation.** Although segmentation provides a means to sense global _positional_ information about food on the plate, we also care about precisely _orienting_ a utensil with respect to the local geometry of a food item. For instance, using a fork to pick up a group of noodles requires orienting the fork tines opposite the grain of the strands. This is crucial to preventing slippage during twirling (Fig. 12), which tends to occur when the tines and strands run parallel. To address this, we also learn a network \(f_{\mathrm{ori}}:\mathbb{R}_{+}^{W^{\prime}\times H^{\prime}\times 3}\to \mathbb{R}\) mapping a local RGB crop of a food item of dimensions \(W^{\prime}\times H^{\prime}\) to the desired roll orientation of the utensil \(\gamma\). Prior work has shown that acquiring a food item orthogonal to its main principal axis, such as skewering a carrot against its length-wise axis rather than width-wise, can improve acquisition stability [4, 6]. Thus, we implement \(f_{\mathrm{ori}}\) as a fully convolutional network with a ResNet backbone and train it from a small number of real food item crops (\(200\)), manually annotated with keypoints defining the principal food item axis as in [4].
**Action Instantiation.** With the visual state estimation pipelines \(f_{\mathrm{seg}}\) and \(f_{\mathrm{ori}}\) trained offline, we can instantiate \(\pi_{L}(a_{t}|o_{t},h^{k})\) for real-world manipulation. Given an RGBD image observation \(I_{t},D_{t}\), we first infer the segmentation mask \(\hat{M}_{t}=f_{\mathrm{seg}}(I_{t})\). Next, we query \(\pi_{H}\) to obtain a selected primitive \(\hat{h}^{k}=\pi_{H}(\hat{M}_{\leq t},\hat{a}_{\leq t-1}^{(H)})\).
If \(\hat{h}^{k}\) is an acquisition primitive, we instantiate the continuous action \(a_{t,\mathrm{acquis}}\) according to Eq. 1 by estimating the densest plate region \((\hat{x}_{d},\hat{y}_{d},\hat{z}_{d})\) and utensil orientation \(\hat{\gamma}\). To do so, we apply a standard 2D Gaussian kernel over \(\hat{M}_{t}\) yielding \(\hat{M}_{t}^{\prime}\). This blurs the image such that high-density regions in the original segmentation mask remain saturated but sparse regions have lower intensity. From this, we take the 2D argmax \(\hat{u}_{d},\hat{v}_{d}=\arg\max_{(u,v)\in\hat{M}_{t}^{\prime}}\hat{M}_{t}^{\prime}[u,v]\) to be the densest pixel in the image, deprojected to a 3D location \((\hat{x}_{d},\hat{y}_{d},\hat{z}_{d})\) via \(D_{t}\) and known camera intrinsics. Given a food item crop centered at the densest pixel, \(I_{t}^{\prime}\) (Fig. 2), we also infer the utensil orientation with \(\hat{\gamma}=f_{\mathrm{ori}}(I_{t}^{\prime})\). For a rearrangement primitive, we parameterize \(a_{t,\mathrm{rearrange}}\) according to Eq. 2. In addition to sensing the densest plate region, we sense the furthest region \((\hat{x}_{f},\hat{y}_{f},\hat{z}_{f})\) by finding the lowest intensity pixel in \(\hat{M}_{t}^{\prime}\). This yields the following instantiations:
\[\pi_{L}(o_{t},h^{k,\mathrm{acquis}})=(\hat{x}_{d},\hat{y}_{d},\hat{z}_{d},\hat {\gamma})\qquad\left|\qquad\pi_{L}(o_{t},h^{k,\mathrm{rearrange}})=(\hat{x}_{ d},\hat{y}_{d},\hat{z}_{d},\hat{x}_{f},\hat{y}_{f},\hat{z}_{f})\right.\]
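The densest- and sparsest-region estimates above amount to a blur, an argmax/argmin, and a pinhole deprojection; a sketch is given below. The OpenCV call, the kernel width, the restriction of the argmin to near-food pixels, and the intrinsics parameter names are our own choices rather than the paper's implementation.

```python
import numpy as np
import cv2

def densest_and_sparsest(mask, depth, fx, fy, cx, cy, sigma=25):
    """Return 3D estimates of the densest and sparsest plate regions from a
    binary food mask, a depth image, and pinhole intrinsics."""
    blurred = cv2.GaussianBlur(mask.astype(np.float32), (0, 0), sigma)
    v_d, u_d = np.unravel_index(np.argmax(blurred), blurred.shape)
    # restrict the sparsest-region search to pixels near food
    # (a stand-in for an actual plate mask)
    sparse_vals = np.where(blurred > 0, blurred, np.inf)
    v_f, u_f = np.unravel_index(np.argmin(sparse_vals), sparse_vals.shape)

    def deproject(u, v):
        z = depth[v, u]
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

    return deproject(u_d, v_d), deproject(u_f, v_f)
```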
Finally, **VAPORS** operates in a perception-action loop using \(\pi_{H}\) to generate sequential plans and \(\pi_{L}\) to execute them. The full algorithm can be found in Algorithm 1 of the Appendix.
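We do not reproduce Algorithm 1 here, but the outer loop has roughly the following shape; every object in this sketch (`camera`, `robot`, and the policy callables) is a stand-in interface, not code from the paper.

```python
def feeding_loop(camera, robot, f_seg, pi_H, pi_L, budget):
    """Perception-action loop: segment the plate, pick a primitive with the
    high-level policy, instantiate it with the low-level policy, execute."""
    masks, actions = [], []
    for _ in range(budget):
        rgb, depth = camera.capture()
        mask = f_seg(rgb)
        masks.append(mask)
        primitive = pi_H(masks, actions)            # high-level strategy
        params = pi_L(rgb, depth, mask, primitive)  # low-level instantiation
        robot.execute(primitive, params)
        actions.append((primitive, params))
    return actions
```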
## 5 Experiments
We seek to evaluate **VAPORS** ability to clear plates, by effectively leveraging diverse strategies and planning over long horizons. Thus, we compare against a single-strategy baseline with no
Figure 5: Across 10 trials for spaghetti (a) and jelly bean (b) acquisition, we visualize the cumulative amount acquired across individual trials (left) and averaged overall (right). Shading denotes the standard error.
long-horizon reasoning and a multi-strategy approach that plans long-term actions heuristically rather than via learned plate dynamics. We consider two challenging real-world feeding scenarios to test the capabilities of **VAPORS** compared to other approaches: noodle acquisition and bimanual scooping.
**Experimental Setup:** In noodle acquisition (Fig. 10), a Franka robot with a wrist-mounted custom motorized fork and RGBD camera must decide amongst _twirling_ (acquisition) or _grouping_ (rearrangement) to clear a plate of noodles. In bimanual scooping (Fig. 11), two Franka robots operating from overhead RGBD cameras must select amongst _scooping_ (acquisition) or _grouping_ (rearrangement) to clear a plate of jelly beans. For both tasks, we consider a _half-full_ initial plate distribution ( 50 g. noodles, 15 jelly beans) and a hard count of \(\tau=10\) actions for spaghetti and \(\tau=8\) actions for jelly beans, encouraging the acquisition of multiple items at once to finish a plate. For both tasks, we assume access to hand-eye calibration between the RGB-D camera and robot end-effector. In Appendix E we outline the hardware setup and control stack, low-level action instantiations, and training details for each task.
**Baselines:** _Acquire-Only_ is identical to **VAPORS** in terms of \(\pi_{L}\), but does not perform any long-horizon reasoning. Instead, at each timestep, this approach only acquires via twirling or scooping, with no rearrangement in between. _Heuristic_ also utilizes \(\pi_{L}\) in the same manner, but replaces \(\pi_{H}\) with a naive group-then-acquire strategy. This method senses the \(\mathtt{COVERAGE}\), as defined in Eq. (3), over \(\hat{M}_{t}\) to heuristically determine when acquisition or rearrangement is appropriate. When the area exceeds a pre-defined threshold, the policy defaults to rearrangement and otherwise acquires.
**Plate Clearance Results:** We evaluate **VAPORS**, Acquire-Only, and Heuristic on clearing plates across 10 trials for each task (Fig. 5). We see that **VAPORS** achieves the most efficient and highest cumulative plate clearance. As expected, Acquire-Only optimizes only for acquisition in the current instant, without exploiting the benefits of grouping for a more substantial pickup of multiple items at once. Scooping one jelly bean at a time or attempting to twirl just a few strands of noodles repeatedly leads to the observed slow rate of overall clearance. Heuristic's greedy group-then-acquire approach plans based on detected coverage thresholds, which we find to be brittle in practice, especially in the presence of artifacts in the predicted segmentation masks. This naive metric also does not encourage acquiring any bite-sized piles that may form intermittently, but rather aims to amass everything into one large pile before acquiring. This delays acquisition gains and wastes the action budget.
**User Evaluation:** Additionally, we conducted a user study with 49 non-disabled participants (age range \(27.0\pm 9.5\), \(46.9\%\) female and \(53.1\%\) male) to gauge user preferences across methods. Of this pool, \(77.6\%\) reported prior experience interacting with robots before, \(75.5\%\) reported having fed someone before, and \(28.6\%\) reported having been fed as an adult. We hypothesized: **H1.**_Compared to baselines, **VAPORS** use of multiple strategies and long horizon foresight will lead to more preferable feeding in terms of quantitative and qualitative metrics._
We used a within-subjects design where we presented each participant with videos of all 10 plate clearance trials per each of the three methods, for either noodle acquisition or bimanual scooping. For each participant, we randomized the method order, the order of trials per method, and the food group. In the study, we ask participants to rate efficiency, bite size, similarity to human feeding, practicality, likelihood for reuse, safety, and generalization.
Figure 6: **Likert Ratings: We administer a 7-point Likert survey to users after observing 10 trials per method. VAPORS elicits the most positive feedback across all criteria. ‘*’ indicates statistical significance (\(p<0.05\)).**
After watching all trials, we provided users with a 7-point Likert survey to assess these criteria (Fig. 6). **VAPORS** incurs the highest qualitative user ratings across criteria, compared to the Acquire-Only and Heuristic baselines, and with statistical significance for certain categories (\(p<0.05\), denoted '*'). Users noted that **VAPORS** "mimicked natural feeding," and "showed a capacity for clustering as the plate got more and more empty, which felt like a great and efficient approach," while Heuristic and Acquire-Only "seem like extreme policies, where [Acquire-Only] never tries to cluster and [Heuristic] focus too much on making big piles." These results align with the hypothesis that **VAPORS**' use of multiple strategies and ability to reason over long horizons benefits a user's mealtime experience. We provide additional user study findings in Appendix E.
**Generalization Testing:** Finally, we stress-test **VAPORS'** generalization capabilities by experimenting with noodle dishes prepared with sauces and garnishes as well as ordered from DoorDash (Fig. 7). We conduct 18 additional trials of plate clearance on unseen plates, separated into three tiers of difficulty with 6 trials per tier. We summarize our findings in Table 1.
**VAPORS** achieves near full plate clearance for Tier 1 noodles, demonstrating generalization to different noodle shapes and sizes (Table 1). While **VAPORS** is still able to make decent progress towards plate clearance in Tier 2, we observe the occurrence of more slip failures (D) and misplanned actions (A, B) due to the addition of sauce and distractor food items. Somewhat surprisingly, the performance gap between Tier 2 and Tier 3 is minimal, with **VAPORS** being able to clear well over half the noodles for a fully out-of-distribution plate. The main challenges include misperceiving cabbage as noodles in the chow mein, as well as dropping twirled noodles heavily coated in pesto or soy sauce (D) (Fig. 12). Regardless, **VAPORS** demonstrates promising signs of zero-shot generalization.
## 6 Discussion
We present **VAPORS**, which to our knowledge is the first framework to address the multi-step food acquisition problem in robot-assisted feeding. Our hybrid approach leverages simulation to learn to model high-level plate dynamics at scale, and uses visual pose estimation in order to perform dexterous maneuvers for complex low-level food pickup. We experimentally validate **VAPORS** on a complex suite of real-world food acquisition tasks such as noodle acquisition and bimanual scooping of beans. **VAPORS** demonstrates the ability to clear plates efficiently over non-learned baselines while appealing to the feeding preferences of real users.
**Limitations and Future Work.** The largest current limitation is a lack of user testing on individuals with mobility impairments that affect their ability to eat independently, discussed in detail in Appendix A. Additionally, although this work highlights promising initial results toward generalization across food variations such as shape, sauces, and toppings, we acknowledge that our library of low-level primitives is currently limited. One actionable future direction is expanding our library with prior work on skewering, cutting, and even toppling unstable items to tackle a more expansive set of plates. Our initial prototypes for dexterous food acquisition, such as the motorized fork, also open up interesting possibilities for future designs of dexterous interchangeable utensils which would enable rapid strategy switching. Currently, the system also executes primitives in an open-loop fashion, but we hope to use reactive control in the future to adapt online to slippage or imprecision.
| _Tier_ | _Description_ | **% Cleared** | A | B | C | D | Failure Rate |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Plain Noodles | 90% ± 65 | 2 | 7 | 2 | 0 | 18% |
| 2 | Noodles w/ Sauce | 68% ± 16% | 2 | 8 | 1 | 4 | 25% |
| 3 | DoorDash Noodles | 64% ± 13% | 3 | 5 | 2 | 4 | 23% |

Table 1: **OOD Results and Categorization of Failure Modes:** (A) Misperception, (B) Wrong Action, (C) Imprecision, (D) Slip. Columns A–D give the per-category failure counts (_Failure Categorization_).
Figure 7: **Noodle Acquisition Tiers of Difficulty: Tier 1 consists of plain noodle varieties: Dan Dan, Udon, and Pappardelle noodles. Tier 2 includes Tier 1 plates along with soy sauce, marinara sauce, and garnishes such as parsley or cilantro. Tier 3 plates include noodle dishes such as pesto pasta and chow mein ordered from DoorDash.**
#### Acknowledgments
This work is in part supported by funds from NSF Awards 2132847, 2006388, 2218760, as well as Stanford HAI, the Office of Naval Research, AFOSR YIP FA9550-23-1-0127, and Ford. We thank Lorenzo Shaikewitz for designing the motorized fork used in this work which made real-world experimentation possible. We also thank Rajat Kumar Jenamani, Suneel Belkhale, Jennifer Grannen, Yuchen Cui, and Yilin Wu for their helpful feedback and suggestions. Priya Sundaresan is supported by an NSF GRFP.
|
2309.04289 | Programmable Real-Time Magnon Interference in Two Remotely Coupled
Magnonic Resonators | Magnon interference is a signature of coherent magnon interactions for
coherent information processing. In this work, we demonstrate programmable
real-time magnon interference, with examples of nearly perfect constructive and
destructive interference, between two remotely coupled yttrium iron garnet
spheres mediated by a coplanar superconducting resonator. Exciting one of the
coupled resonators by injecting single- and double-microwave pulse leads to the
coherent energy exchange between the remote magnonic resonators and allows us
to realize a programmable magnon interference that can define an arbitrary
state of coupled magnon oscillation. The demonstration of time-domain coherent
control of remotely coupled magnon dynamics offers new avenues for advancing
coherent information processing with circuit-integrated hybrid magnonic
networks. | Moojune Song, Tomas Polakovic, Jinho Lim, Thomas W. Cecil, John Pearson, Ralu Divan, Wai-Kwong Kwok, Ulrich Welp, Axel Hoffmann, Kab-Jin Kim, Valentine Novosad, Yi Li | 2023-09-08T12:23:00Z | http://arxiv.org/abs/2309.04289v1 | # Programmable Real-Time Magnon Interference in Two Remotely Coupled Magnonic Resonators
###### Abstract
Magnon interference is a signature of coherent magnon interactions for coherent information processing. In this work, we demonstrate programmable real-time magnon interference, with examples of nearly perfect constructive and destructive interference, between two remotely coupled yttrium iron garnet spheres mediated by a coplanar superconducting resonator. Exciting one of the coupled resonators by injecting single- and double-microwave pulse leads to the coherent energy exchange between the remote magnonic resonators and allows us to realize a programmable magnon interference that can define an arbitrary state of coupled magnon oscillation. The demonstration of time-domain coherent control of remotely coupled magnon dynamics offers new avenues for advancing coherent information processing with circuit-integrated hybrid magnonic networks.
In condensed matter physics, hybridization describes the formation of new eigenstates by mixing two or more excited states. This process, which has been used to describe the coupling of two atomic orbitals for chemical bonds [1], is now being extensively cultivated in hybrid dynamic systems [2; 3], where dynamic excitations from disparate physical platforms are strongly coupled and form new hybrid modes in order for them to coherently exchange information while preserving the phase correlation. Hybrid dynamic systems have successfully combined diverse physical systems, such as photons, phonons, spins and magnons, and leveraged their individual advantages for implementing novel functionalities such as coherent transduction[4], storage[5], nonreciprocity[6], and sensing [7] in quantum information science.
The use of magnetic excitations, or magnons, in coherent information processing has been well explored in magnonics[8; 9; 10]. Propagating magnons are controlled and engineered for spin wave signal processing, with examples of spin wave logic gate[11; 12; 13], directional coupler [14; 15], and multiplexing operations [16; 17]. These functionalities are based on the interference of two propagating magnons [11; 12; 13; 13; 18; 19; 20; 21], where constructive interference leads to high amplitude or logic "1" and destructive interference leads to low amplitude or logic "0". However, the fundamental disadvantages of short magnon propagation distance and inefficient magnon excitations have limited the development of coherent magnon spintronics. The emerging field of hybrid magnonics [22; 23; 24; 25] provides a solution to the spatial and efficiency limitations for coherent magnonic information processing. By achieving strong coupling between magnons and microwave photons [26; 27; 28; 29; 30; 31; 32; 33; 34; 35], the conversion efficiency between magnons and photons can reach 100% with cavity-enhanced magnon interactions. In addition, using microwave photons as coherent information transducer in a microwave resonator, remote coupling of two magnonic resonators can be achieved with macroscopic separation[36; 37; 38; 39; 40; 41; 42], showing the potential of building spatially distributed coherent magnonic network for interference operations.
In this work, we demonstrate nearly perfect constructive and destructive time-domain interference of magnon excitations between two remote yttrium iron garnet (YIG) spheres, with their strong coupling mediated by a coplanar superconducting resonator [40] (Fig. 1a). Using two vertical antennas (Figs. 1b, c), we can directly excite the magnon mode of one YIG sphere and detect the other with minimal crosstalk to the superconducting resonator. By measuring the real-time signal output from the second YIG sphere, we show that the magnon excitation from the first microwave pulse can be coherently transferred back and forth between the two YIG spheres, and interfere with the second microwave pulse excitation. The independent control of both the time and phase delay allows for an arbitrary final hybrid magnonic state, ranging from constructive (2,2) or destructive (0,0) interference, to the in-phase (2,0) or out-of-phase (0,2) single mode, and to the intermediate states, thus enabling programmable control of hybrid magnonic states. Our results experimentally show that magnons preserve their full coherence by coherently transferring between remote magnonic resonators, which lays a foundation for coherent information processing with hybrid magnonics and developing functionalities in quantum magnonics.
Figure 1(d) shows the continuous-wave measurements of microwave transmission between the two vertical antennas (\(S_{21}\)) under a global field of \(\mu_{0}H_{B}=0.2\) T. A NbTi superconducting coil applies a local magnetic field to YIG sphere 1 [40] with the field direction parallel to the global field. By sweeping the coil current, clear mode anticrossing is observed between the two magnon modes, with \(I_{\rm coil}=-0.4\) A marking the frequency degeneracy condition for the two magnon modes (\(\omega_{m1}/2\pi=\omega_{m2}/2\pi=\omega_{0}/2\pi=5.405\) GHz). The magnon-magnon avoided crossing yields a coupling strength of \(g_{mm}/2\pi=14.8\) MHz. This is mediated by the dispersive coupling of both YIG spheres to the nearest superconducting resonator mode (\(\omega_{r}/2\pi=5.27\) GHz) [43; 44; 45; 46], with a magnon-photon coupling of \(g_{mr}/2\pi=46\) MHz. From the frequency detuning, the theoretical magnon-magnon coupling strength is [40]\(g_{mr}^{2}/|\omega_{m}-\omega_{r}|2\pi=16.3\) MHz, which is close to \(g_{mm}\) extracted from the experiments (14.8 MHz). Shown in Fig. 1(e), we also highlight that the microwave transmission is significantly enhanced when the two magnon modes are degenerate and strongly coupled (\(I_{\rm coil}=-0.4\) A), as compared with the case when they are decoupled (\(I_{\rm coil}=0.4\) A). This shows that additional microwave signal is transmitted from YIG sphere 1 to 2 via the superconducting resonator, as compared to the decoupled case where the transmitted signal is dominated by the free-space radiation between the two vertical antennas. The large signal-to-noise ratio of magnon excitations is crucial for observing the real-time evolution of magnon states.
To investigate the real-time dynamics of the two coupled magnonic resonators, we excite the magnon mode of YIG sphere 1 with a microwave pulse at \(\mu_{0}H_{B}=0.2\) T, and measured the signal output of YIG sphere 2 by a fast oscilloscope [Fig. 2(a)]. The microwave pulse frequency is set to \(\omega_{0}\) for optimal magnon excitation. The pulse duration is 10 ns and is much shorter than the magnon-magnon transduction period \(\pi/g_{mm}=35\) ns, so that the pulse can be treated as an instantaneous excitation of magnons in YIG sphere 1. Shown in Fig. 2(b), a clear Rabi-like oscillation of magnon amplitude in YIG sphere 2 is measured at port 2. At \(I_{\rm coil}=-0.4\) A where the two magnon modes are degenerate [Fig. 2(d)], we find that the magnon population oscillates with a period of \(T_{R}=34\) ns, agreeing well with \(\pi/g_{mm}\). The relaxation time \(T_{1}=161.50\) ns corresponds to a magnon damping rate of \(\kappa_{m}/2\pi=1/(2\pi T_{1})=0.98\) MHz. When the magnon frequencies are detuned, e.g. at \(I_{\rm coil}=0\) A, the magnon oscillation period becomes shorter [Fig. 2(c)] due to the larger frequency difference between the two hybrid modes, which is also reflected by the increasing magnon mode difference in Fig. 1(d). The quantitative analysis of the Rabi-like oscillation between the two YIG spheres is important for the study of magnon interference below.
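The Rabi-like envelope described above follows directly from two degenerate coupled modes with equal damping; the few lines below evaluate the resulting analytic form in the rotating frame, using the measured \(g_{mm}/2\pi=14.8\) MHz and \(\kappa_{m}/2\pi=0.98\) MHz. The coupled-mode equations and the instantaneous single-pulse initial condition (\(m_{1}(0)=1\), \(m_{2}(0)=0\)) are an idealization on our part.

```python
import numpy as np

# Rotating-frame coupled-mode sketch for two degenerate magnon modes:
#   dm1/dt = -kappa*m1 - 1j*g*m2,   dm2/dt = -kappa*m2 - 1j*g*m1,
# with m1(0) = 1, m2(0) = 0, giving |m2(t)| = exp(-kappa*t) * |sin(g*t)|.
g = 2 * np.pi * 14.8e6          # magnon-magnon coupling, rad/s
kappa = 2 * np.pi * 0.98e6      # magnon damping rate, rad/s

t = np.linspace(0.0, 300e-9, 3001)
m2 = np.exp(-kappa * t) * np.abs(np.sin(g * t))   # envelope measured at sphere 2

print("Rabi period pi/g_mm = %.1f ns" % (np.pi / g * 1e9))        # ~33.8 ns
print("first maximum near t = %.1f ns" % (t[np.argmax(m2)] * 1e9))
```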
Figure 1: (a) Illustration of magnon-magnon coupling (\(\omega_{m1}\), \(\omega_{m2}\)) via dispersive coupling to a microwave resonator (\(\omega_{r}\)). (b) Experimental schematics for the transmission measurement, with two vertical antennas adjacent to the two distant YIG spheres for selective microwave excitation and detection. (c) Pictures of the circuit setup and the superconducting resonator chip with two embedded YIG spheres (\(d=250\)\(\mu\)m) separated by 12 mm. (d) VNA power transmission spectra from one vertical antenna to the other vertical antenna as a function of \(I_{\rm coil}\). (e) Power spectral traces for \(I_{\rm coil}=-0.4\) A and \(0.4\) A. The amplitude is plotted as \(V_{2}/V_{1}=10^{(S_{21}/10)}\).
Figure 2: (a) A schematic illustration of the pulse microwave measurements, where the pulse input is generated by mixing a continuous microwave with a square function with a width of 10 ns, and the output is measured by a fast oscilloscope without any averaging (single-shot measurement). (b) Time traces of the output signals for different \(I_{\rm coil}\) under a global field of \(\mu_{0}H_{B}=0.2\) T. (c), (d) Time traces measured at \(I_{\rm coil}=0\) A and \(I_{\rm coil}=-0.4\) A, respectively.
In order to investigate magnon interference between the two remotely coupled YIG spheres, we inject two consecutive microwave pulses with the same amplitude to excite YIG sphere 1 [Fig. 3(a)] under the condition of \(I_{\rm coil}=-0.4\) A and \(\mu_{0}H_{B}=0.2\) T (\(\omega_{m1}=\omega_{m2}=\omega_{0}\)). Because the microwave pulses are generated by mixing a continuous-wave microwave signal with square waves, they maintain strict phase coherence, which provides a good phase reference for studying coherent magnon interactions. In the first example, we set the pulse microwave frequency to be equal to the magnon frequency as \(\omega=\omega_{0}\). Fig. 3(b) shows the evolution of two-pulse interference as a function of time for different \(\Delta\tau\). The diagonal boundary defines where the excitation of the second pulse starts. Before the second pulse, the magnon excitation in YIG sphere 2 shows Rabi-like oscillation defined by \(T_{R}\), which is the same as measured in Fig. 2(b). However, after the second pulse, an interference pattern emerged, showing near-perfect construction or destruction of the Rabi-like oscillation. In particular, the time trace shows maximal amplitude which is twice the amplitude of single-pulse excitation at \(\Delta\tau=2nT_{R}\), and near-zero amplitude at \(\Delta\tau=(2n+1)T_{R}\) [Fig. 3(c)]. This shows that the magnon excitation maintains its coherence after being fully transduced back and forth between two spatially separated YIG spheres, and can interfere with the second microwave pulse with its full amplitude. By slightly changing the frequency of the microwave pulses from \(\omega_{0}\) to \(\omega_{0}+g_{mm}\), as shown in Fig. 3(d), the interference pattern has completely changed. The period of the interference pattern is reduced in half, with strong and weak magnon outputs at \(\Delta\tau=2n(T_{R}/2)\) and \(\Delta\tau=(2n+1)(T_{R}/2)\), respectively [Fig. 3(e)]. The strong output still shows Rabi-like oscillation with twice the maximal amplitude as the single-pulse excitation. However, for the weak output, the Rabi-like oscillation disappears and the magnon excitation shows monotonous relaxation with time, with an exponential relaxation time of \(T_{1}^{*}=161.50\) ns, same as \(T_{1}\) measured above. The same feature is observed in the case of \(\omega=\omega_{0}-g_{mm}\); see the Supplemental Information for more details [47].
Figure 3: (a) Schematic of two-pulse excitations of coherent magnonic states. (b) Time traces of the output signals for different \(\Delta\tau\) with two-pulse excitations, measured at \(\omega=\omega_{0}\). (c) Individual time traces of (b) at \(\Delta\tau=0\), 34, 68 and 102 ns, with the time \(t=0\) starting right after the second pulse. (d) Same as (b) measured at \(\omega=\omega_{0}+g_{mm}\). (e) Individual time traces of (d) at \(\Delta\tau=0\), 17, 34 and 51 ns, with the time shifted by \(\Delta\tau\), starting right after the second pulse. (f-g) FFT spectra of the time traces from (c) and (e), showing alternating final states (f) between (0,0) and (2,2), and (g) between (2,0) and (2,2). Orange shades indicate the range for \(m_{\pm}=2\). Dashed vertical line at \(\omega_{0}/2\pi=5.405\) GHz splits the regimes of \(m_{+}\) and \(m_{-}\). (h) Table of (\(m_{+}\), \(m_{-}\)) states for different \(\delta\) and \(\Delta\tau\).

The magnon interference can be analytically described by the time evolution of two coupled magnon resonators \(\vec{m_{1}}\) and \(\vec{m_{2}}\). Their hybridized eigenmodes can be formulated as:
\[\vec{m_{\pm}}(t)=\frac{1}{\sqrt{2}}(\vec{m_{1}}\pm\vec{m_{2}})e^{-i(\omega_{0}\pm g_{mm})t} \tag{1}\]
where \(\vec{m_{+}}\) and \(\vec{m_{-}}\) denote the in-phase and out-of-phase modes, respectively, with eigenvalues of \(\omega_{0}\pm g_{mm}\). The magnon state can be labelled by the amplitudes of the two eigenmodes, (\(m_{+}\), \(m_{-}\)). After the first microwave pulse, which excites only the dynamics in YIG sphere 1, the magnon state evolves from the initial state (0,0) to (1,1), where the amplitudes of the two eigenmodes have the same weight and are renormalized to one. The state changes again after the second pulse. By controlling \(\Delta\tau\) and the frequency detuning \(\delta=\omega-\omega_{0}\), the final magnon state can be driven to an arbitrary combination with amplitudes between 0 and 2, as a result of destructive or constructive interference. Here \(\delta\) introduces a phase shift between the microwave pulse and the magnon dynamics, \(\Delta\phi=\delta\Delta\tau\). The final state after two pulses can be derived as:
\[m_{\pm}(\Delta\tau)=\left|1+e^{-i(\delta\mp g_{mm})\Delta\tau}\right|=2\left| \cos\frac{(\delta\mp g_{mm})\Delta\tau}{2}\right| \tag{2}\]
The table in Fig. 3(h) lists the final magnon states for a few different \(\delta\) and \(\Delta\tau\). In the case of \(\delta=0\) (Fig. 3b), Eq. (2) is reduced to \(m_{\pm}(\Delta\tau)=2|\cos(g_{mm}\Delta\tau/2)|\), which leads to synchronized constructive (2,2) or destructive (0,0) interference at \(g_{mm}\Delta\tau=2n\pi\) or \(g_{mm}\Delta\tau=(2n+1)\pi\). In the case of \(\delta=g_{mm}\), Eq. (2) is reduced to \(m_{+}=2\), \(m_{-}(\Delta\tau)=2|\cos(g_{mm}\Delta\tau)|\), leading to a constant amplitude of \(m_{+}\) and an oscillating \(m_{-}\) with half the oscillation period. The theoretical predictions of Eq. (2) nicely agree with the experiments in Figs. 3(b-e), with a Rabi-like period of \(\pi/g_{mm}=33.8\) ns that matches \(T_{R}=34\) ns. We also conduct the fast-Fourier transform (FFT) of the time traces from Figs. 3 (c,e) and show the peaks in Figs. 3 (f,g), respectively. The peaks are located at \(\omega_{0}\pm g_{mm}\), indicating the magnon states of \(m_{+}\) and \(m_{-}\). At different \(\Delta\tau\), the magnon states oscillate between (0,0) and (2,2) for \(\delta=0\) and between (2,0) and (2,2) for \(\delta=g_{mm}\), which is indicated by the similar FFT peak heights around 0.25 a.u. (orange shade, \(m_{\pm}=2\)) or by vanishing peaks (\(m_{\pm}=0\)) at \(\omega_{0}\pm g_{mm}\). We also note that weak peaks appear when the magnon state is supposed to be zero at longer delay times, e.g. \(\Delta\tau=102\) ns in Fig. 3(f) and \(\Delta\tau=51\) ns in Fig. 3(g). This is due to the magnon relaxation and decoherence with \(\Delta\tau\), as will be discussed later. Similarly, when \(\delta=-g_{mm}\), according to Eq. (2), the magnon states oscillate between (0,2) and (2,2); see the Supplemental Materials for details [47].
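As a quick numerical cross-check of these limiting cases, the following Python sketch (not part of the original analysis; \(g_{mm}/2\pi=14.3\) MHz is the value quoted later for Fig. 4) evaluates Eq. (2) on a grid of pulse delays and reproduces the (0,0)/(2,2) and (2,0)/(2,2) alternations discussed above.

```python
import numpy as np

G_MM = 2 * np.pi * 14.3e6   # magnon-magnon coupling g_mm (rad/s), value quoted for Fig. 4
T_R = np.pi / G_MM          # Rabi-like period (~35 ns for this g_mm)

def m_pm(dtau, delta):
    """Final-state amplitudes (m_+, m_-) after two identical pulses, Eq. (2)."""
    m_plus = np.abs(1 + np.exp(-1j * (delta - G_MM) * dtau))
    m_minus = np.abs(1 + np.exp(-1j * (delta + G_MM) * dtau))
    return m_plus, m_minus

for label, delta in [("delta = 0", 0.0), ("delta = +g_mm", G_MM)]:
    print(label)
    for dtau in np.arange(0.0, 2.01, 0.5) * T_R:   # delays in steps of T_R/2
        mp, mm = m_pm(dtau, delta)
        print(f"  dtau = {dtau * 1e9:5.1f} ns  ->  (m+, m-) = ({mp:.2f}, {mm:.2f})")
```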
To further demonstrate the capability of programming magnon states with two-pulse interference, we show in Figs. 4(a) and (b) the experimentally extracted final state of \(m_{+}\) and \(m_{-}\) as a function of \(\omega\) and \(\Delta\tau\). The value of each pixel corresponds to the FFT amplitude of the time trace after two microwave pulses. Clear interference patterns are observed and nicely match with the calculation of Eq. (2) as shown in Figs. 4(c) and (d). As a characteristic of the mode selection, for \(\omega/2\pi=(\omega_{0}-g_{mm})/2\pi=5.390\) GHz [\(=(\omega_{0}+g_{mm})/2\pi=5.420\) GHz], \(m_{+}\) (\(m_{-}\)) always shows the maximum value of 2 while \(m_{-}\) (\(m_{+}\)) oscillates between 0 and 2, as also shown in Figs. 3(e) and (g). It can be derived that any magnon state (\(m_{+}\), \(m_{-}\)) between (0,0) and (2,2) can be achieved for \(0\leq\Delta\tau\leq\pi/g_{mm}\) and \(0\leq\delta\leq g_{mm}\); see the Supplemental Materials for details [47].
Figure 4: (a-b) Magnon states (a) \(m_{+}\) and (b) \(m_{-}\) extracted from the FFT of the time traces measured at different \(\omega_{c}\) and \(\Delta\tau\). Colorbars denote the FFT peak amplitude with examples shown in Fig. 2(e) and (f). (c-d) Theoretical predictions of (c) \(m_{+}\) and (d) \(m_{-}\) from Eq. (2) using \(g_{mm}/2\pi=14.3\) MHz and \(\omega_{0}/2\pi=5.405\) GHz. (e) Plot of \(V_{max}\) as a function of \(\Delta\tau\) for \(\omega_{c}/2\pi=5.390\), 5.405 and 5.420 GHz. The fit equation is \(V_{max}=V_{0}[m_{+}(\Delta\tau)+m_{-}(\Delta\tau)]\) by taking \(V_{0}\), \(g_{mm}\) and \(T_{2}\) as fit parameters.

The magnon interference also allows us to quantify the magnon decoherence time \(T_{2}\) during the remote magnon transduction process. Different from \(T_{1}\), which characterizes the relaxation time of magnon states, \(T_{2}\) describes how long magnons preserve their phase correlation for interference. The two-pulse experiments set the condition for magnon-magnon interference with a delay time \(\Delta\tau\) for free magnon evolution. The external microwave source, which is used to create the two consecutive pulses, also defines an excellent phase reference for magnon interference analysis. In addition, the hybrid magnonic system
enables a dephasing test of nonlocal magnon excitations, which is directly related to coherent magnon information processing. In Fig. 4(e), we plot the maximal output voltage amplitude after the second pulse, \(V_{max}\), as a function of \(\Delta\tau\) for \(\delta=0\) (red) and \(\delta=\pm g_{mm}\) (black and green). \(V_{max}\) represents \(m_{+}+m_{-}\). The dephasing process can be described by adding a dephasing term to Eq. (2):
\[m_{\pm}(\Delta\tau)=\left|1+e^{-i(\delta\mp g_{mm})\Delta\tau-\Delta\tau/T_{2}}\right| \tag{3}\]
The fitting curves to Eq. (3) nicely reproduce the experimental results for all three frequency detunings. The extracted \(T_{2}=139.0\) ns is slightly smaller than \(T_{1}=161.5\) ns, suggesting that the magnon spin dynamics are highly phase coherent due to their exchange coupling. We note that since the magnon-magnon interference is conducted in the dispersive magnon-photon coupling regime, neither \(T_{1}\) nor \(T_{2}\) are sensitive to the resonator damping rate. The two-pulse interference process is similar to Ramsey interference of a single spin with two \(\pi/2\) pulses [48]. For \(\delta=0\), the spin-up and spin-down states are represented by the (0,0) and (2,2) states, with one pulse setting the magnon state to (1,1) and two pulses driving the magnon states to oscillate between (0,0) and (2,2) depending on \(\Delta\tau\). This oscillating state can be restricted to \(m_{+}\) or \(m_{-}\) only for \(\delta=-g_{mm}\) or \(g_{mm}\), respectively. We note another work that also measures magnon Ramsey interference [49]. The difference is that our work directly characterizes the interference of pure magnon hybrid states whereas in Ref. [49] the dephasing process occurs with magnon-photon hybrid states.
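For illustration, a fit of this kind can be set up as in the sketch below. This is a sketch only: the data are synthetic, the model function is \(V_{max}=V_{0}[m_{+}(\Delta\tau)+m_{-}(\Delta\tau)]\) with the dephasing factor of Eq. (3), and \(V_{0}\), \(g_{mm}\) and \(T_{2}\) are the free parameters, as stated in the caption of Fig. 4.

```python
import numpy as np
from scipy.optimize import curve_fit

def v_max(dtau, v0, g, t2):
    """V0 * (m_+ + m_-) with the dephasing factor of Eq. (3); times in ns, g in rad/ns."""
    decay = np.exp(-dtau / t2)
    m_plus = np.abs(1 + decay * np.exp(-1j * (0.0 - g) * dtau))   # delta = 0 trace
    m_minus = np.abs(1 + decay * np.exp(-1j * (0.0 + g) * dtau))
    return v0 * (m_plus + m_minus)

# Synthetic delta = 0 "measurement" with parameters close to the quoted values.
rng = np.random.default_rng(1)
g_true = 2 * np.pi * 0.0143          # 14.3 MHz expressed in rad/ns
dtau = np.linspace(0.0, 300.0, 61)   # pulse delays in ns
data = v_max(dtau, 1.0, g_true, 139.0) + 0.02 * rng.standard_normal(dtau.size)

popt, _ = curve_fit(v_max, dtau, data, p0=[1.0, 2 * np.pi * 0.014, 100.0])
print(f"fitted: T2 = {popt[2]:.1f} ns, g_mm/2pi = {popt[1] / (2 * np.pi) * 1e3:.1f} MHz")
```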
In conclusion, we have demonstrated real-time interference of two remotely coupled magnon resonators (YIG spheres) that are embedded in and dispersively coupled to a coplanar superconducting resonator. By controlling the frequency and time delay of two microwave pulses, we can obtain an arbitrary combination of the hybrid magnon states (\(m_{+}\), \(m_{-}\)) from constructive or destructive magnon interference excited by the two pulses. From the time-delay dependence of the magnon interference, we also obtain the magnon dephasing time, which is close to the magnon relaxation time. Our results provide a realistic circuit-integrated hybrid magnonic system for implementing coherent magnon operations in the time domain. The control of hybrid magnon states can be used to realize remote magnon-magnon entanglement and distributed magnon quantum gate operations by integrating superconducting qubits on the same superconducting circuit [50; 51; 52]. Additionally, the two-magnon-resonator system can be extended towards coupled magnon networks with distributed magnon oscillators, which is applicable to many magnonic computing ideas with ultra-long magnon coherence time, tunable magnon frequency and coupling strength.
_Acknowledgments._ We thank Wolfgang Pfaff, Juliang Li, Volodymyr G. Yefremenko and Margarita Lisovenko for discussion and support on the experiment. This work was supported by the U.S. DOE, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division under contract No. DE-SC0022060. Work performed at the Center for Nanoscale Materials, a U.S. Department of Energy Office of Science User Facility, was supported by the U.S. DOE, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357. K.-J.K. is supported by KAIST-funded Global Singularity Research Program for 2021 and the National Research Foundation of Korea (NRF) funded by the Korean Government (MSIP) under grant No. 2020R1A2C4001789, 2016R1A5A1008184. M.S. was supported by the education and training program of the Quantum Information Research Support Center, funded through the National research foundation of Korea (NRF) by the Ministry of Science and ICT (MSIT) of the Korean government under grant No. 2021M3H3A103657313.
|
2309.07958 | Characterising the true descendants of the first stars | The metal-poor stars in the Galactic halo are thought to show the imprints of
the first (PopIII) stars, and thus provide a glance at the first episodes of
star formation. In this work, we aim at understanding whether all very
metal-poor stars formed in environments polluted by PopIII supernovae (SNe) and
at what level. With a general parametric model for early metal enrichment, we
study the chemical abundances (from C to Zn) of an environment imprinted by a
single PopIII SN. We investigate how these abundances depend on the initial
mass and internal mixing of PopIII stars, as well as on their SN explosion
energy. We then study how subsequent generations of normal (PopII) SNe affect
the PopIII chemical signatures. By comparing the observed chemical abundances
with our model predictions, we show that stars with [C/Fe]$>+2.5$ form in
environments polluted purely by low-energy PopIII SNe ($E_{\rm SN}<2\times
10^{51}$erg). At lower [C/Fe], stars can be imprinted either by PopIII only, or
also by normal PopII SNe. The probability of being enriched by PopII SNe
increases as [C/Fe] decreases. When PopII stars contribute more to the
pollution, they wash out the diverse chemical peculiarities left by the
different PopIII SNe, and the chemical dispersion between their descendants
decreases. We conclude that C-normal stars ($\rm [C/Fe] \leq +0.7$) have likely
been enriched by PopII SNe at a $\geq 50\%$ level and we identify in the
abundance scatter a key diagnostic to pinpoint the signature of PopIII SNe. | Irene Vanni, Stefania Salvadori, Ása Skúladóttir, Martina Rossi, Ioanna Koutsouridou | 2023-09-14T18:00:02Z | http://arxiv.org/abs/2309.07958v3 | # Characterising the true descendants of the first stars
###### Abstract
The metal-poor stars in the Galactic halo are thought to show the imprints of the first (Pop III) stars, and thus provide a glance at the first episodes of star formation. In this work, we aim at understanding whether all very metal-poor stars formed in environments polluted by Pop III supernovae (SNe) and at what level. With a general parametric model for early metal enrichment, we study the chemical abundances (from C to Zn) of an environment imprinted by a single Pop III SN. We investigate how these abundances depend on the initial mass and internal mixing of Pop III stars, as well as on their SN explosion energy. We then study how subsequent generations of normal (Pop II) SNe affect the Pop III chemical signatures. By comparing the observed chemical abundances with our model predictions, we show that stars with [C/Fe] \(>+2.5\) form in environments polluted purely by low-energy Pop III SNe (\(E_{\rm SN}<2\times 10^{51}\) erg). At lower [C/Fe], stars can be imprinted either by Pop III only, or also by normal Pop II SNe. The probability of being enriched by Pop II SNe increases as [C/Fe] decreases. When Pop II stars contribute more to the pollution, they wash out the diverse chemical peculiarities left by the different Pop III SNe, and the chemical dispersion between their descendants decreases. We conclude that C-normal stars ([C/Fe] \(\leq+0.7\)) have likely been enriched by Pop II SNe at a \(\geq 50\%\) level and we identify in the abundance scatter a key diagnostic to pinpoint the signature of Pop III SNe.
keywords: stars: abundances - ISM: abundances - Galaxy: halo - cosmology: first stars
## 1 Introduction
The first (Pop III) stars are believed to form about 200-400 Myr after the Big Bang (at redshift \(\mathrm{z}\sim 30-20\)) in primordial composition gas clouds dwelling in low-mass (\(M\sim 10^{6}\mathrm{M}_{\odot}\)) dark matter structures called mini-halos (e.g. Tegmark et al., 1997; Abel et al., 2002; Yoshida et al., 2003; Bromm, 2013; Glover, 2013; Greif, 2015). Given the lack of heavy elements (i.e. metals) and the low virial temperature of the mini-halos (\(T_{\rm vir}\leq 10^{4}\) K), the only available channel to cool down the gas is molecular hydrogen (H\({}_{2}\)), which is not an effective coolant and only cools the collapsing gas clouds to higher central temperatures (\(T_{\rm c}\sim 200\) K, see Bromm & Yoshida, 2011) than those seen in present-day star-forming metal-rich clouds (\(T_{\rm c}\sim 10\) K). The higher primordial gas temperature leads to both more massive proto-stellar gas clouds (\(M\sim 1000\mathrm{M}_{\odot}\), e.g. see Abel et al., 2002; Yoshida et al., 2003) and higher gas accretion rates onto the protostars (e.g. Omukai & Palla, 2003; Hosokawa & Omukai, 2009; Ohkubo et al., 2009). These results, which only depend on the lack of heavy elements, are very robust and suggest that stars formed in primordial composition environments (i.e. Pop III stars) are more massive than present-day normal (Pop II/I) stars (e.g. Hosokawa et al., 2011; Susa et al., 2014; Hirano et al., 2014).
Hydrodynamic cosmological simulations (see e.g. Hosokawa et al., 2011; Hirano et al., 2015; Greif, 2015; Skinner & Wise, 2020) and indirect studies (e.g. Tumlinson, 2006; Ballero et al., 2006; Salvadori et al., 2007; Hartwig et al., 2015; Salvadori et al., 2015; Ma et al., 2017; Sarmento et al., 2017; Salvadori et al., 2019; Rossi et al., 2023) have provided some hints on the still debated mass range and distribution of Pop III stars. Many factors play a role in the fragmentation and star-formation processes, like turbulence (e.g. Greif et al., 2011) or radiative feedback from the protostar (e.g. Hosokawa et al., 2011), and, depending on their assumptions, cosmological simulations often find different results. In conclusion, Pop III stars can form either isolated (if massive) or in very small groups (if sub-solar) and their predicted initial masses range between 0.1 and 1000 \(\mathrm{M}_{\odot}\), with a characteristic mass probably \(\gtrsim 10\,\mathrm{M}_{\odot}\)(e.g. Hirano et al., 2014; Susa et al., 2014; Hirano & Bromm, 2017; De Bennassuti et al., 2017; Sarmento et al., 2019; Rossi et al., 2021). Therefore, the majority of Pop III stars are short-lived, with lifetimes of a few Myrs.
Pop III stars with masses in the range [10; 100] \(\mathrm{M}_{\odot}\) end their lives as supernovae (SNe), exploding with a variety of energies (e.g. Kobayashi et al., 2006; Heger & Woosley, 2010); and stars in the range [140; 260] \(\mathrm{M}_{\odot}\) explode as Pair Instability Supernovae (PISN, see Heger & Woosley, 2002), both enriching the interstellar medium (ISM) with newly produced metals. The abundances of the different chemical elements depend on the properties of the primordial star, i.e. the initial mass and the SN explosion energy. When the total metallicity of the ISM overcomes the critical metallicity (\(\mathrm{Z}_{\rm cr}\sim 10^{-5\pm 1}\,\mathrm{Z}_{\odot}\); e.g., Omukai, 2000; Bromm et al., 2001; Maio et al., 2010; Schneider & Omukai, 2010) the gas efficiently cools down and normal Pop II stars form, with lower masses in the range [0.1; 100] \(\mathrm{M}_{\odot}\). Since stars with masses \(<0.8\,\mathrm{M}_{\odot}\) have lifetimes longer than the age of the Universe, _second generation_ (2G) stars might have survived and be observable today as pure descendants of Pop III stars. In principle, their photospheres should reflect the chemical composition of the material ejected from the first SNe. The
study of these old fossils in order to retrieve information about the properties of the first stars is called _stellar archaeology_.
The stellar halo is the most metal-poor component of the Milky Way and it has been intensively studied during the last 20 years (e.g. Beers et al., 1992; Cayrel et al., 2004; Yong et al., 2013; Roederer et al., 2014; Bonifacio et al., 2021): here the most metal-poor (Caffau et al., 2011), iron-poor (Keller et al., 2014) and old (\(\tau\sim 13\times 10^{9}\) yrs, see Cowan et al., 2002) stars have been observed. The stars in the Galactic halo have iron abundances1, [Fe/H], that span more than 6 orders of magnitude, from less than -7 to almost 0 dex. Stars with [Fe/H] \(\leq-1\) are called metal-poor (MP) stars. They can be divided into two categories depending on their carbon-to-iron elemental abundances.
Footnote 1: [A/B]=\(\log\left(\frac{N_{A}}{N_{\rm B}}\right)-\log\left(\frac{N_{A}}{N_{\rm B}} \right)_{\odot}\)
The C-normal stars, [C/Fe]\(\leq+0.7\), have relatively homogeneous chemical abundance patterns (e.g. Cayrel et al., 2004; Yong et al., 2013). The MP stars with [C/Fe]\(>+0.7\) are called Carbon-Enhanced Metal-Poor (CEMP) stars and are again sub-categorized depending on the abundances of the elements produced in slow (s-) or rapid (r-) neutron capture processes (see Beers & Christlieb, 2005). The stars with an overabundance of these elements, like barium, are called CEMP-s/(r) ([Ba/Fe] \(>1\)), while the others are called CEMP-no stars ([Ba/Fe] \(<0\)). The presence of s-process elements is consistent with the scenario, confirmed by observations, where the star is, or has been, in a binary system with an Asymptotic Giant Branch star, which transferred mass (see Abate et al., 2015) and altered the chemical composition of the photosphere. Therefore, the C-enhancement measured in the photospheres of CEMP-s/(r) stars has been acquired during their lifetimes. On the other hand, the observed CEMP-no stars are not primarily in binary systems (see Starkenburg et al., 2014; Arentsen et al., 2019) and their high values of \({}^{12}\)C/\({}^{13}\)C show that their surface composition has not been altered by mass transfer (see Aguado et al., 2022). Hence, evidence suggests that CEMP-no stars most likely inherited their C-enhancement from their birth clouds.
Both the fraction of CEMP-no stars and their [C/Fe] ratio increase towards low [Fe/H], denoting a likely connection to the descendants of the first stars. Indeed, the chemical abundances of the most iron-poor CEMP-no halo stars are consistent with being enriched by a single intermediate-mass (\(M\gtrsim 20\,\rm M_{\odot}\)) Pop III SN exploding with low energy, the so-called faint SNe (\(E_{\rm SN}<10^{51}\) erg), experiencing mixing and fallback during the explosion (e.g. Umeda & Nomoto, 2003; Iwamoto et al., 2005; Marassi et al., 2014). The CEMP-no halo stars at higher [Fe/H], instead, appear to be consistent with being polluted either by only Pop III stars exploding with different energies (see Welsh et al., 2021) or by a combination of Pop III SNe and normal Pop II stars (see De Bennassuti et al., 2017; Koutsouridou et al., 2023). In conclusion, many theoretical works find that CEMP-no halo stars are linked to Pop III stars. However, it is still unclear whether they are all pure descendants of the first stars or if they are also contaminated by Pop II stars. Ultimately, what fraction of the metals in the birth environment of CEMP-no stars has been contributed by Pop II stars?
Only recently the imprints of a single very energetic Pop III SN, a so-called hypernova (\(E_{\rm SN}>5\cdot 10^{51}\) erg), have been detected in C-normal metal-poor stars residing in the Galactic halo (Placco et al., 2021) and in the dwarf galaxy Sculptor (Skúladóttir et al., 2021, 2023). In addition, stars with signs of an enrichment by an extremely energetic Pair Instability Supernova (PISN, \(E_{\rm SN}>10^{52}\) erg) combined with normal Pop II SNe have been found in the Galactic halo (Aoki et al., 2014; Salvadori et al., 2019). Contrary to faint SNe, the true descendants of very energetic Pop III SNe seem extremely rare. Can they only be found at low [C/Fe]? Finally, are all metal-poor halo stars predominantly polluted by Pop III stars?
The aim of this paper is to answer these questions and chemically characterize the descendants of the first stars. In order to achieve these aims we exploit the chemical abundances of _all_ the available elements for CEMP-no and C-normal metal-poor halo stars and interpret their different abundance patterns with new theoretical models. This has never been explored before since previous theoretical studies only focused on specific chemical abundances.
## 2 Very metal-poor stars in the Galactic halo
In this Section we analyse the chemical composition of very metal-poor stars (VMP; [Fe/H] \(<-2\)) observed in the Galactic halo, for which high-resolution observations (\(R=\frac{\lambda}{\Delta\lambda}>30\,000\)) are available. This sample of more than 200 stars includes the 190 and 35 metal-poor stars from Yong et al. (2013) and Cayrel et al. (2004), respectively, and an additional 25 stars with [Fe/H] \(\leq-3\), among which 21 have [Fe/H] \(\leq-4\), from: Christlieb et al. (2004); Norris et al. (2007); Caffau et al. (2011); Hansen et al. (2014); Keller et al. (2014); Frebel et al. (2015); Bonifacio et al. (2015); Li et al. (2015); Bonifacio et al. (2018); Francois et al. (2018); Starkenburg et al. (2018); Aguado et al. (2019); Ezzeddine et al. (2019). The stellar abundances for the majority of these stars are not corrected for 3D and non-LTE effects, and this will be discussed in Sect. 5. We exclude CEMP-s/(r) stars, i.e. with [C/Fe] \(>+0.7\) and [Ba/Fe] \(>0\), whose abundances are not representative of their birth environment (see Sect. 1). We end up with the chemical abundances of 16 elements in total (including iron) for 132 stars: for all of them we have the iron and carbon abundances, but measurements for other elemental abundances are available only for some of them (see Figs. 2 and 3). We have corrected the carbon abundances of all the stars in this sample to account for the effect of evolutionary status by exploiting the online tool2 presented in Placco et al. (2014).
Footnote 2: [http://vplacco.pythonanywhere.com/](http://vplacco.pythonanywhere.com/)
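As a schematic illustration of this selection, the sketch below applies the same cuts to a hypothetical table of literature measurements (the column names and values are invented, and the [C/Fe] values are assumed to be already corrected for the evolutionary status):

```python
import pandas as pd

# Hypothetical literature compilation: one row per star, NaN where no measurement exists.
stars = pd.DataFrame({
    "star": ["A", "B", "C", "D"],
    "FeH":  [-3.2, -4.1, -2.5, -5.6],    # [Fe/H]
    "CFe":  [+0.3, +1.8, +1.2, +3.9],    # [C/Fe], corrected for the evolutionary status
    "BaFe": [-0.5, None, +1.4, None],    # [Ba/Fe] where available
})

vmp = stars[stars["FeH"] < -2.0].copy()                         # very metal-poor stars only
is_cemp_s = (vmp["CFe"] > 0.7) & (vmp["BaFe"].fillna(-9.0) > 0.0)
sample = vmp[~is_cemp_s].copy()                                 # drop CEMP-s/(r) stars
sample["class"] = ["CEMP-no" if c > 0.7 else "C-normal" for c in sample["CFe"]]
print(sample)
```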
### The iron and carbon abundances
The carbon abundance is one of the most important and most extensively exploited diagnostics in stellar archaeology (e.g. Cooke & Madau, 2014; Salvadori et al., 2015; De Bennasuti et al., 2017; Hartwig et al., 2018; Chiaki et al., 2020; Liu et al., 2021; Rossi et al., 2023). In Fig. 1, we present the main properties of our literature sample. The metallicity distribution function (MDF; top panel) peaks at [Fe/H] \(\sim-3\), but extends down to [Fe/H] \(<-7\) (Keller et al., 2014: [Fe/H] \(<-7.1\) with 3D corrections). In the middle panel, we show [C/Fe] with respect to [Fe/H], with observational errors or upper/lower limits, distinguishing between C-normal ([C/Fe] \(\leq+0.7\)) and CEMP-no stars ([C/Fe] \(>+0.7\)). The stars that come from literature sources different from Yong et al. (2013) are color-coded with their sources. Two ultra metal-poor stars ([Fe/H] \(<-4\)) only have upper limits of [C/Fe] \(<+0.7\) and \(<+1.0\) with 3D corrections (Caffau et al., 2011; Starkenburg et al., 2018), we thus consider them as C-normal stars. The [C/Fe] abundance ratios increase as [Fe/H] decreases, and all stars at [Fe/H] \(\lesssim-5\) are C-enhanced. In the bottom panel of Fig. 1 we present the CEMP fraction, with the poissonian error computed as \(\sqrt{N_{\rm CEMP}}/N\), where N is the total number of stars and \(N_{\rm CEMP}\) the number of CEMP-no stars in each [Fe/H] bin. In the cases where we have less than five stars in the bin (i.e. at the
lowest [Fe/H]) we use the values reported in Gehrels (1986), and if the fraction is 0 we don't show the error bars. We see in Fig. 1 that the CEMP fraction increases strongly with decreasing [Fe/H]: it is around 1 for [Fe/H] \(\leq-5\), 0.7 for [Fe/H] \(\sim-4\), and 0.3 for [Fe/H] \(\sim-3\), which is consistent with other studies (e.g. Placco et al., 2014; Arentsen et al., 2022).
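The binned CEMP fraction and its Poissonian error can be computed as in the following sketch (hypothetical input arrays; the small-number bins, treated in the text with the Gehrels 1986 values, are not handled here):

```python
import numpy as np

# Hypothetical inputs: per-star [Fe/H] and a CEMP-no flag ([C/Fe] > +0.7).
feh = np.array([-5.4, -4.2, -4.0, -3.6, -3.3, -3.1, -2.9, -2.7, -2.4, -2.2])
is_cemp = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0], dtype=bool)

edges = np.arange(-6.0, -1.5, 0.5)                 # [Fe/H] bins
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (feh >= lo) & (feh < hi)
    n, n_cemp = in_bin.sum(), (in_bin & is_cemp).sum()
    if n == 0:
        continue
    frac = n_cemp / n
    err = np.sqrt(n_cemp) / n                      # Poissonian error, as in the text
    print(f"[Fe/H] in [{lo:.1f},{hi:.1f}): F_CEMP = {frac:.2f} +/- {err:.2f} (N={n})")
```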
### The complete abundance pattern
In addition to the carbon abundances, we also compare the measured abundances of other chemical elements to those predicted by our model. In Fig. 2 we show how many abundances are reported for each star of our literature sample, color-coded with the source. The average number of chemical abundances provided per star is 11, but the most common number of measured abundances is 12. None of the stars in this literature sample has all 15 chemical abundances, but for all of them we have at least 3: iron, carbon and another element. The most reported chemical abundances, apart from carbon and iron, are magnesium, calcium and titanium. In addition to the abundances shown in Fig. 2, we take O and Zn separately from the stellar sample of Cayrel et al. (2004), but these are not counted in the histogram.
In Fig. 3, we show the mean chemical abundance ratios with respect to iron, [X/Fe], for the CEMP-no (red) and C-normal (grey) stars of our literature sample, distinguishing between [Fe/H] \(\leq-4\) (left) and [Fe/H]\(\in(-4;-2]\) (right). In the case of [Fe/H] \(\leq-4\), we include both abundance averages excluding (filled points, error bars) and including (open symbols, arrows) upper/lower limits on [X/Fe]. When [X/Fe] has more upper than lower limits we put an arrow pointing down (or vice versa). We never include measurements that have upper limits in both [Fe/H] and [X/H]. For [Fe/H] \(\in(-4;-2]\) we have at least 18 measured abundances for both C-normal and CEMP-no stars for all the chemical elements, therefore no upper/lower limits are included. For the Cayrel et al. (2004) sample, we only include stars that have not undergone internal mixing of C, according to Spite et al. (2005). Yong et al. (2013) do not provide the abundances [O/Fe] and [Zn/Fe], thus they are missing for the CEMP-no stars in Fig. 3.
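For concreteness, the mean abundance ratios and the star-to-star scatter of Fig. 3 can be assembled as in the sketch below (hypothetical input table; upper/lower limits are simply flagged and excluded, i.e. only the filled-symbol averages are reproduced):

```python
import pandas as pd

# Hypothetical per-star measurements: [X/Fe] with a flag for upper/lower limits.
rows = [
    # star, group,      element, XFe,  is_limit
    ("S1", "CEMP-no",  "Mg", +0.9, False),
    ("S2", "CEMP-no",  "Mg", +0.2, False),
    ("S3", "C-normal", "Mg", +0.4, False),
    ("S4", "C-normal", "Mg", +0.3, False),
    ("S1", "CEMP-no",  "O",  +2.1, True),    # upper limit: excluded from the mean
    ("S3", "C-normal", "O",  +0.6, False),
]
df = pd.DataFrame(rows, columns=["star", "group", "element", "XFe", "is_limit"])

detections = df[~df["is_limit"]]
summary = (detections.groupby(["group", "element"])["XFe"]
           .agg(mean="mean", scatter="std", n="count"))
print(summary)   # 'scatter' is the star-to-star standard deviation per element and group
```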
We first notice from Fig. 3 that CEMP-no stars have mean C, N and O abundance ratios that are typically much higher than those of C-normal stars. However, we should point out that more than 60% of the measured [N/Fe] and [O/Fe] for the CEMP-no stars are upper limits, hence their real values can be considerably lower than the ones represented here. Furthermore, at [Fe/H]\(<-4\), we see that the mean abundance ratios of many alpha-elements (e.g. Mg, Al, Si) over iron are also higher in CEMP-no stars than in C-normal stars, while heavier elements (Co, Ni, Zn) are comparable between CEMP-no and C-normal stars. Finally, at \(-4<\)[Fe/H]\(<-2\), we see that, with the only exception of C and N, the mean abundance ratios of CEMP-no and C-normal stars are consistent with each other. Still, we notice that CEMP-no stars typically have larger star-to-star scatter3 than C-normal stars, even when the number of observed stars is similar or of the same order of magnitude (e.g. Na, Mg, Si, Al).

Figure 1: Properties of our literature sample (Sect. 2). _Top panel_: Metallicity Distribution Function (MDF). _Middle panel_: [C/Fe] vs [Fe/H] for CEMP-no (red) and C-normal (grey) stars, corrected for the evolutionary status (Placco et al., 2014). The stars have circles color-coded with their literature sources (if they don't have circles, they pertain to Yong et al. 2013's sample): Christlieb et al. (2004, blue); Norris et al. (2007, purple); Caffau et al. (2011, orange); Hansen et al. (2014, brown); Keller et al. (2014, medium green); Frebel et al. (2015, dark green); Bonifacio et al. (2015, darker cyan); Li et al. (2015, pink); Bonifacio et al. (2018, medium cyan); Francois et al. (2018, light green); Starkenburg et al. (2018, yellow); Aguado et al. (2019, light cyan); Ezzeddine et al. (2019, fuchsia). The observational errors are shown, and the upper/lower limits are presented with arrows. _Bottom panel_: CEMP-no fraction, with the poissonian errors.

Figure 2: Number of available chemical abundances for the stars of our literature sample. The histogram is color-coded with the literature source of each star: grey for all the stars from Yong et al. (2013), whose CEMP-no stars are highlighted with the red area, and the other colors are the 21 additional sources as in Fig. 1.
Footnote 3: Here the star-to-star scatter is quantified with the standard deviation among different measurements
We made several checks to ensure that the scatter is not driven by systematic errors of the different literature sources. For [Fe/H] \(\in(-4;-2]\), the stars from Yong et al. (2013) are more numerous than the stars from other literature sources (eight stars). Thus the average abundance ratios and the standard deviations in this case are driven by the Yong et al. (2013) stellar sample. This assures that the scatter between the abundances of different stars is not because of the different literature sources and analysis methods; it is instead an intrinsic property of our stellar sample. We also emphasize that the scatter in abundance ratios at the lowest [Fe/H]\(<-4\) is very large, often exceeding 1 dex, which is more than what is expected from different analysis methods. This is also confirmed by the uniform carbon analysis of Norris and Yong (2019) of the most metal-poor stars known.
In conclusion, the star-to-star scatter in the chemical abundance ratios of CEMP-no stars is large at \(-4<\)[Fe/H]\(<-2\) and huge for [Fe/H]\(<-4\) (see also Fig. 1 in Vanni et al. 2023). Conversely, C-normal stars show a small dispersion in their abundance ratios, as already pointed out by Cayrel et al. (2004). For this reason, the star-to-star scatter can be used as a new path to understand which stars polluted the birth environment of metal-poor halo stars. Indeed, a group of stars which shows a small scatter in the abundance ratios, like the C-normal stars, might have formed in an environment chemically enriched by multiple stellar populations. On the contrary, stars whose chemical abundances differ from one another might have been enriched by one or a few SNe.
## 3 The semi-analytical model
In order to characterise the descendants of Pop III stars, we adopt the parametric model described and developed in Salvadori et al. (2019). The model follows the chemical properties of primordial galaxies in which a Pop III Pair Instability Supernova (PISN) exploded. Since no star polluted only by PISNe has been discovered to date, the authors aim at finding which of these chemical features are maintained after also Pop II stars have contributed up to 50% of the total amount of metals in the ISM. Here we expand this model to study the chemical imprints of Pop III SNe covering a larger range of masses and explosion energies.
### Model recap
We summarize here the principal features of the parametric study presented in Salvadori et al. (2019).
1. _The first stellar generation._ The original model assumes that a single4 Pop III star with initial mass in the range \([150;260]\,\mathrm{M}_{\odot}\) forms in a primordial proto-galaxy. In this mass range the stars end their lives as energetic Pair Instability SNe (PISN), \(\mathrm{E_{SN}}\in[10^{52};10^{53}]\) erg, which completely destroy the star. The yields adopted for these stars are the ones from Heger and Woosley (2002). The uncertainties linked to the first episodes of star formation are enclosed in two parameters: the star-formation efficiency, \(f_{*}\), and the dilution factor, \(f_{\rm dil}\). The star-formation efficiency is the fraction of the gas in the primordial galaxy which is converted into stars. The dilution factor is the ratio between the fraction of the initial gas mass in which the metals are diluted (which can also be \(>1\) if, e.g., primordial gas is infalling in the galaxy) and the fraction of the newly produced metals that are effectively retained by the galaxy. In simple terms, the dilution factor quantifies how much the metals are diluted in the hydrogen gas.

Figure 3: Mean abundance patterns for the stars in our literature sample (see text for the references) for CEMP-no (red points) and C-normal (grey points) stars, for [Fe/H] \(\leq-4\) (left), and \(\in(-4;-2]\) (right). On top are written the number of CEMP-no (red) and C-normal (grey) stars included for each chemical element. Filled points exclude limits, while open points include upper/lower limits. The error bar is the standard deviation of measurements; when there is only one observation we use the observational error. Measurements that present upper limits in both [Fe/H] and [X/H] are not included. The arrows point down if [X/Fe] has more upper than lower limits, and vice versa.
2. _The chemical abundances of the Pop III-polluted gas._ With just one star exploding per primordial proto-galaxy, the yield of a certain element, \(Y_{\rm X}^{\rm III}\), is simply the ratio between the mass of this element X ejected by the star and the initial mass of the star. The abundance of a chemical element X with respect to hydrogen in the ISM of the hosting primordial halo, after the first generation of stars polluted the gas, can be expressed as \[\left[\frac{\rm X}{\rm H}\right]_{\rm ISM}=\log\left[\frac{f_{\star}}{f_{\rm dil}}\,Y_{\rm X}^{\rm III}\right]-\log\left[\frac{M_{\rm X,\odot}}{M_{\rm H,\odot}}\right], \tag{1}\] where \(M_{\rm X,\odot}\) (\(M_{\rm H,\odot}\)) is the mass of the element X (hydrogen) in the photosphere of the Sun.
3. _Normal Pop II stars contribution._ After the pollution from a Pop III star, the ISM metallicity usually overcomes the critical value, \(Z_{\rm cr}\sim 10^{-5\pm 1}\,Z_{\odot}\), and normal Pop II stars form with masses in the range \([0.1;100]\)\(\,\)M\({}_{\odot}\). Here we use the critical metallicity \(Z_{\rm cr}=10^{-4.5}\)\(Z_{\odot}\) (see De Bennassuti et al. 2017). After \(\sim 3\) Myr from their birth, normal Pop II core-collapse SNe start contaminating the ISM with newly synthesized metals, and in \(\approx 30\) Myr all Pop II SNe contribute to the ISM enrichment. The total yields of Pop II stars, \(Y_{\rm X}^{\rm II}\), are thus computed by integrating over a Larson IMF5 with \(m_{\rm ch}=0.35\)\(\,\)M\({}_{\odot}\) in the range \((0.1;100]\)\(\,\)M\({}_{\odot}\) and adopting the yields from Woosley & Weaver (1995). We consider only the stars that pollute the gas on short time scales (\(\lesssim 30\) Myr), which is equivalent to assuming that the yields of Pop II stars with \(M_{*}\lesssim 10\)\(\,\)M\({}_{\odot}\) are zero. By quantifying the amount of metals ejected by Pop III stars with respect to the total amount of metals in the ISM through the free parameter \(f_{\rm PopIII}\), we can compute the chemical abundances of the ISM after the contribution of both Pop III and Pop II SNe as:
\[\left[{\rm X/H}\right]_{\rm ISM}=\log\left[\frac{f_{\star}}{f_{\rm dil}}\left(Y_{\rm X}^{\rm III}+\beta\,\frac{Y_{\rm Z}^{\rm III}\,Y_{\rm X}^{\rm II}}{Y_{\rm Z}^{\rm II}}\right)\right]-\log\left[\frac{M_{\rm X,\odot}}{M_{\rm H,\odot}}\right], \tag{2}\]

where \(\beta=(1-f_{\rm PopIII})/f_{\rm PopIII}\). Note that [X/H] is affected by \(f_{\star}/f_{\rm dil}\), \(f_{\rm PopIII}\), and the yields of Pop III and Pop II stars, while the abundance ratios, [X/Y], only depend on \(f_{\rm PopIII}\) and the yields. A minimal numerical sketch of Eqs. (1) and (2) is given after this list.

Footnote 5: \(\phi(m_{\star})\propto m_{\star}^{-2.35}\times\exp\left(-\frac{m_{\rm ch}}{m_{\star}}\right)\)
4. _The parameter space._ The model has three free parameters: the star-formation efficiency, \(f_{\star}\), the dilution factor, \(f_{\rm dil}\), and the fraction of metals contributed by Pop III stars, \(f_{\rm PopIII}\). From different physical arguments (see Salvadori et al. 2019 for details) the range of \(f_{\star}\) can be assumed6 to be \([10^{-4};10^{-1}]\), while \(f_{\rm dil}\in[0.02;10]\). The star-formation efficiency and the dilution factor are dependent on one another and their ratio, \(f_{\star}/f_{\rm dil}\), ranges in \([10^{-4};10^{-1}]\); we assume that all values within this range are equally probable. To ensure a predominant contribution from Pop III stars to the chemical enrichment the model explores the values \(f_{\rm PopIII}=(0.5-1.0)\).
Footnote 6: Its minimum values are related to the minimum mass required to form stars in the coolest first star-forming mini-halos, while its maximum values correspond to the more star-forming Ly\(\,\alpha\)-cooling halos.
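The two enrichment equations above reduce to a few lines of code. The sketch below is purely illustrative: the yield numbers are placeholders (not taken from Heger & Woosley 2002 or from the Pop II grids), and the solar mass ratios \(M_{\rm X,\odot}/M_{\rm H,\odot}\) are rounded, Asplund-like values.

```python
import numpy as np

# Solar mass ratios M_X_sun / M_H_sun (rounded, Asplund-like placeholder values).
SOLAR = {"C": 3.3e-3, "Fe": 1.8e-3}

def x_over_h(Y3_X, f_ratio, element, Y3_Z=None, Y2_X=None, Y2_Z=None, f_popIII=1.0):
    """[X/H] of the ISM: Eq. (1) for f_popIII = 1 and Eq. (2) otherwise.

    Y3_X, Y2_X : yields of element X from Pop III / Pop II stars
    Y3_Z, Y2_Z : corresponding total metal yields
    f_ratio    : f_star / f_dil
    """
    if f_popIII < 1.0:
        beta = (1.0 - f_popIII) / f_popIII
        Y_eff = Y3_X + beta * Y3_Z * Y2_X / Y2_Z
    else:
        Y_eff = Y3_X
    return np.log10(f_ratio * Y_eff) - np.log10(SOLAR[element])

# Placeholder yields for one illustrative carbon-rich Pop III SN and an IMF-averaged
# Pop II population (hypothetical numbers, NOT the values of the adopted yield tables).
pop3 = {"C": 2e-1, "Fe": 1e-3, "Z": 3e-1}
pop2 = {"C": 1e-2, "Fe": 5e-3, "Z": 5e-2}

for f3 in (1.0, 0.9, 0.5):
    kwargs = dict(f_ratio=1e-3, f_popIII=f3, Y3_Z=pop3["Z"], Y2_Z=pop2["Z"])
    c_h = x_over_h(pop3["C"], element="C", Y2_X=pop2["C"], **kwargs)
    fe_h = x_over_h(pop3["Fe"], element="Fe", Y2_X=pop2["Fe"], **kwargs)
    print(f"f_PopIII = {f3:.1f}:  [Fe/H] = {fe_h:+.2f},  [C/Fe] = {c_h - fe_h:+.2f}")
```

As expected from Eq. (2), for carbon-rich Pop III ejecta a decreasing \(f_{\rm PopIII}\) raises [Fe/H] and lowers [C/Fe] in this sketch, anticipating the behaviour discussed in Sect. 4.2.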
### Model implementation
#### 3.2.1 Pop III mass range and explosion energies
We extend the Pop III mass range to [0.8;1000]\(\,\)M\({}_{\odot}\) (see Rossi et al. 2021, for the constraints on the minimum mass of Pop III stars) in order to account for all the possible (initial) masses, and assume that Pop III stars of different masses have an equal probability of forming. Since we are interested in understanding the chemical properties of the environment primarily polluted by Pop III stars, here we only consider the contribution of stars with masses \(M\geq 10\)\(\,\)M\({}_{\odot}\), which have lifetimes \(\tau<30\) Myr and quickly enrich the interstellar medium if exploding as SNe. Stars with masses up to \(\sim 8\)\(\,\)M\({}_{\odot}\) (AGB stars) typically contribute to the chemical enrichment on timescales longer than hundreds of Myr. Therefore their pollution is negligible during the first \(\sim\)30\(\,\)Myr of star formation.
We use the yields of Heger & Woosley (2010), which provide the metal yields for Pop III stars with initial masses between 10 and \(100\)\(\,\)M\({}_{\odot}\), explosion energies, \(E_{\rm SN}\), between \(0.3\times 10^{51}\) and \(10\times 10^{51}\) erg, and different internal mixing efficiencies, \(f_{\rm mix}\). In Fig. 4 we visualise the relationship between the explosion energy and the initial Pop III stellar masses. In the range \(10-100\)\(\,\)M\({}_{\odot}\), stars can be divided into four classes: _faint SNe_ (blue) with \(E_{\rm SN}=0.3-0.6\times 10^{51}\) erg; _core-collapse SNe_ (CC SN, green) with \(E_{\rm SN}=0.9-1.5\times 10^{51}\) erg; _high-energy SNe_ (HE SN, yellow) with \(E_{\rm SN}=1.8-3.0\times 10^{51}\) erg; and _hypernovae_ (orange) with \(E_{\rm SN}=5.0-10.0\times 10^{51}\) erg. Examples of Pop III yields with different explosion energies are shown in Sect. A.
In our model we adopt four values for the mixing efficiency, \(f_{\rm mix}=0.039,0.0631,0.100,0.158\), and a single representative explosion energy for each class: 0.6 for faint SNe, 1.2 for CC SNe, 3.0 for HE SNe, and 10.0 for hypernovae, in units of \(10^{51}\) erg. All the adopted mixing efficiencies and explosion energies are assumed to be equally probable. With these \(f_{\rm mix}\), the predicted [C/Fe] for a second-generation star imprinted by a single Pop III low-energy SN is consistent with what is proposed by Iwamoto et al. (2005) and Keller et al. (2014). Notice that for these massive Pop III stars the explosion energy does not depend on the mass of the star (Fig. 4).
Figure 4: The extended mass range for Pop III stars and the corresponding energies as used in the implemented model.
The stars which explode with low energies (faint and CC SNe) only succeed in expelling the most external layers, which are mainly composed of the lighter elements C, O and Mg. However, these layers also carry heavier elements, like Fe, which are moved outwards by the mixing acting before the internal layers fall back onto the remnant. Therefore, the mixing is fundamental for low-energy SNe to also expel some amount of heavy metals: if we suppose \(f_{\rm mix}=0\), which is the case where the layers are not mixed at all, the mass of iron ejected by a 25 \(\rm M_{\odot}\) star exploding as a faint SN would be \(\approx 10^{-7}\) \(\rm M_{\odot}\). Conversely, when \(f_{\rm mix}=0.1\) the iron yield is \(Y_{\rm Fe}\approx 10^{-3}\) \(\rm M_{\odot}\). On the other hand, the yields of more energetic SN explosions are much less dependent on the mixing efficiency. See Sect. C in the Appendix for an in-depth study of the effects of the mixing efficiency on the [C/Fe] abundance ratio.
In Fig. 4 we also show the explosion energies of PISNe, i.e., stars with initial masses \([140-260]\) \(\rm M_{\odot}\): in this case the explosion energy is proportional to the initial mass of the star and ranges between \(10^{52}\) and \(10^{53}\) erg. PISNe with initial masses between 150 and 260 \(\rm M_{\odot}\) were already included in the model of Salvadori et al. (2019), but here we also include masses between 140 and 150 \(\rm M_{\odot}\), which have the lowest explosion energies and produce a peculiar abundance pattern, with [C/Fe] \(>\) +2, in the polluted ISM.
#### 3.2.2 Pop II stellar yields
The computed nucleosynthetic yields can change significantly depending on the stellar evolution model, as demonstrated in Romano et al. (2010), thus affecting the results of the chemical evolution models that employ them and generating an intrinsic uncertainty. A direct comparison between the yields adopted in this work is shown in Sect. A.
The model developed in Salvadori et al. (2019) used the yields of Woosley & Weaver (1995) (hereafter WW95) for Pop II stars, which have initial metallicities \(Z=(0,10^{-4},10^{-2},10^{-1},1)Z_{\odot}\), initial masses between 11 and 40 \(\rm M_{\odot}\) and explosion energy of \(\sim 10^{51}\) erg. For Pop II stars with \(m_{\star}>40\,\rm M_{\odot}\), WW95 did not compute the yields since those are supposed to directly collapse into a black hole.
To alleviate the uncertainty due to the choice of the model, we also use the recommended set of yields (set R), without rotation, from Limongi & Chieffi (2018) (hereafter LC18). These yields are computed for Pop II stars with initial metallicities \(Z=(2.31\times 10^{-3},10^{-2},10^{-1},1)Z_{\odot}\) and initial masses between 13 and 120 \(\rm M_{\odot}\). In this case, the stars with initial masses higher than 25 \(\rm M_{\odot}\) end their lives collapsing into black holes. These very massive stars therefore only contribute to the chemical enrichment through stellar winds.
To be consistent when using the yields from the two models and to avoid extrapolation, we only consider the contribution to the chemical enrichment of Pop II stars with masses \(\geq 13\,\rm M_{\odot}\), which pollute the ISM with SN explosions or stellar winds. In other words, we compute the total yields of an element X provided by Pop II stars, \(Y_{\rm X}^{\rm II}\), by integrating over the Larson IMF (see Sect. 3.1) between \(m_{\star}=[13,100]\)\(\rm M_{\odot}\) for both the WW95 and LC18 data sets. Finally, since the minimum metallicity of the LC18 yields, \(Z_{\rm min}=2.31\times 10^{-3}Z_{\odot}\), is larger than the critical value, we use these yields for all Pop II stars with initial metallicity, \(Z_{\rm cr}<Z_{\star}<Z_{\rm min}\).
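A sketch of this IMF-weighted integration is given below. The stellar yield table is a placeholder (not the WW95 or LC18 grids), and the normalisation per unit mass of stars formed over the full \(0.1\)-\(100\,\rm M_{\odot}\) IMF is one possible convention, stated here as an assumption.

```python
import numpy as np
from scipy.integrate import quad

M_CH = 0.35  # characteristic mass of the Larson IMF (M_sun)

def larson_imf(m):
    """Unnormalised Larson IMF: phi(m) ~ m^-2.35 * exp(-m_ch/m)."""
    return m**-2.35 * np.exp(-M_CH / m)

# Placeholder grid of yields of element X (M_sun ejected) vs initial mass (M_sun).
grid_mass = np.array([13., 15., 20., 25., 30., 40., 60., 100.])
grid_yield = np.array([0.10, 0.15, 0.30, 0.50, 0.70, 1.00, 1.50, 2.00])  # hypothetical

def yield_of_m(m):
    return np.interp(m, grid_mass, grid_yield)

# Mass locked in stars per unit of the (unnormalised) IMF, integrated over 0.1-100 M_sun.
mass_norm, _ = quad(lambda m: m * larson_imf(m), 0.1, 100.0)
# Element X ejected by stars in 13-100 M_sun (the fast-evolving polluters).
ejected, _ = quad(lambda m: yield_of_m(m) * larson_imf(m), 13.0, 100.0)

Y_X_popII = ejected / mass_norm   # IMF-averaged yield per unit mass of stars formed
print(f"IMF-averaged Pop II yield of X: {Y_X_popII:.3e}")
```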
Even though the yields of Pop III and Pop II SNe are handled differently, i.e. the yields of Pop II SNe are integrated over the IMF while Pop III SNe are treated individually, the difference between Pop III and Pop II SNe enriched environments is intrinsic to their yields, as shown in Sect. A.
## 4 Results
Our main aim is to study the chemical enrichment of an ISM imprinted by the first stars, i.e. the natal environment of the descendants of Pop III stars. Hereafter, _Pop III star descendants_ will refer to the low-mass long-lived Pop II stars which formed in an ISM where at least 50% of the metals have been contributed by Pop III stars. As we will show, a major Pop II contribution to the metal supply (\(>\)50%) would completely wash out the chemical imprint of Pop III stars.
### Birth environments of second generation stars
We start by inspecting the chemical abundances of an ISM solely imprinted by Pop III stars, i.e. the natal environment of the so-called _second generation stars_. In Figs. 5 and 6, we show the density maps for their chemical abundance ratios of C, O, Mg, Si, Ca, and Zn with respect to iron, as a function of iron abundance, [Fe/H]. Additional abundance ratios of Al, Ti, and Mn are shown in Fig. B1 in Sect. B. The elements C, O, Mg, Si and Ca are particularly informative, because they are sensitive to the mass and explosion energy of Pop III stars. On the other hand, Zn is a peculiar element for both hypernovae (high values) and PISN (low values). Conversely, the measured Al is strongly affected by non-LTE effects (see Sect. 5.2), and Ti is well known to be underestimated by the models of galactic chemical evolution, as it is very sensitive to 3D effects in the explosion of SNe (see Kobayashi et al., 2006). Furthermore, there are no definite measurements available for Mn in CEMP-no stars with \(\rm[Fe/H]<-4\) (see Fig. 3). We completely exclude N from our discussion, because the yields strongly depend on the rotation assumed in the stellar evolutionary model which is highly uncertain. Other elements do not show very strong differences between models, making them less useful than those shown here.
In the first four rows of Figs. 5 and 6, we show all the possible chemical abundances of an ISM imprinted by Pop III stars in the mass range [10;100] \(\rm M_{\odot}\) (faint SN, CC SN, HE SN, and hypernovae) and in the bottom row by PISN, with [140; 260] \(\rm M_{\odot}\) and \(E_{\rm SN}\in[10^{52};10^{53}]\) erg, see Fig. 4. Note that the iron abundance depends on the free parameter \(f_{*}/f_{\rm dil}\), which is varied across the whole plausible parameter space (Sect. 3.1). Conversely, \(\rm[X/Fe]=[X/H]-[Fe/H]\) does not depend on it (see eq. 1) but only on the Pop III yields, \(Y_{\rm X}^{\rm III}\), which clearly depend on both the explosion energy of the Pop III SNe and the mass of the progenitor star. In the first column of Fig. 5 we show the region in which the ISM metallicity is smaller than the critical value (shaded area). The dash-dotted line indicates where the ISM metallicity equals the critical value on the plane [C/Fe]-[Fe/H]. In this case, \(Z_{\rm ISM}\) is computed considering the amounts of C and Fe only, thus obtaining a lower limit for its value. This ensures that an ISM with a [C/Fe] value above the line has a metallicity higher than the critical one, without depending on the assumptions for the abundances of the other metals.
By inspecting Figs. 5 and 6, we notice that high-energy SNe and hypernovae (rows 3 and 4) pollute the ISM with large quantities of Fe. The abundances of their descendants are, therefore, peaked at relatively high \(\rm[Fe/H]>-4\) and [C/Fe] \(\in(-1,0)\). These descendants have very small (but non-zero) chances of being C-enhanced. Thus, high-energy SNe and hypernovae alone cannot reproduce all the chemical abundances observed in metal-poor halo stars.
On the other hand, the second generation stars formed from the ejecta of faint SNe, CC SNe, and PISNe span \(\rm-8\lesssim[Fe/H]\lesssim-1\), and can, therefore, reproduce the iron abundances of the whole literature halo sample. Furthermore, they cover a wide range of \(\rm-1\lesssim[C/Fe]\lesssim+5\). The peaks of the [Fe/H] and [C/Fe] distributions are, however, located at different values for different progenitors. The faint SNe descendants show a prominent peak at [Fe/H] \(\approx-7\) and [C/Fe] \(\approx+3.5\); the CC SNe descendants are more equally distributed in the whole [Fe/H] and [C/Fe] range, showing different peaks at both low and high [C/Fe]; while PISNe descendants have the strongest peak at [Fe/H] \(>-2\) and [C/Fe] \(<0\).

Figure 5: Density maps of the predicted ISM abundances, [X/Fe] vs. [Fe/H], after the explosion of the first generation of stars. Columns are the key chemical elements C, O and Mg; while rows are different Pop III explosions: faint SN, CC SN, HE SN, hypernovae, and PISN. Star symbols are observed chemical abundances of CEMP-no (filled, with sizes proportional to the [C/Fe] values) and C-normal (open) halo stars. The dash-dotted line in the [C/Fe] diagrams is at \(Z_{\rm ISM}=Z_{\rm cr}\). Abundance ratios for the other relevant chemical elements are in Figs. 6 (Si, Ca, Zn) and B1 (Al, Ti, Mn).

Figure 6: The same as Fig. 5, but for Si, Ca and Zn. Star symbols are observed chemical abundances of CEMP-no (filled, with sizes proportional to the [C/Fe] values) and C-normal (open) halo stars. Abundance ratios for the other elements are in Figs. 5 (C, O, Mg) and B1 (Al, Ti, Mn).
To understand the progenitors of CEMP stars, in Fig. 7 we show the predicted [C/Fe], [O/Fe] and [Mg/Fe] values with respect to [Fe/H], color-coded with the initial mass of the Pop III progenitor star, for one selected mixing efficiency, \(f_{\rm mix}=0.063\). The [C/Fe] values of the second generation stars strongly depend on the mass of the Pop III progenitor. For a fixed \(f_{\rm mix}\), when the progenitor star explodes as a faint or CC SN, the descendants of the _most massive_ Pop III stars show the _highest values_ of [C, O, Mg/Fe]. If, on the other hand, it explodes as a PISN, the trend is the opposite: the more massive the progenitor, the lower the [C, O, Mg/Fe] values of the descendants. Yet, if we vary the mixing efficiency in the range \(f_{\rm mix}\in[0.039,0.158]\), this relation between the progenitor masses and the [C/Fe] values of the descendants is not straightforward. Indeed, depending on the mixing efficiency, we find that descendants of progenitors with different masses can have the same [C/Fe] values (see also Fig. 11).
From these figures, we infer that the metal-poor Milky Way halo stars with \([{\rm C/Fe}]>+2.5\) agree with the chemical abundances predicted for the descendants of Pop III low-energy SNe, which are also predicted to imprint the ISM with an over-abundance of other light elements: [O/Fe]\(>+2\), [Mg/Fe]\(>+1.8\), [Si/Fe]\(>+1.8\). The carbon abundances of these CEMP-no halo stars also agree with the descendants of the least massive PISN, \(m_{\star}=140M_{\odot}\). However, this doesn't hold for the abundances of the other chemical elements, see e.g. Mg, Ca and Si in Figs. 5 and 6, thus excluding the possibility for these highly C-enhanced halo stars to be direct descendants of PISN. We can thus conclude that the metal-poor halo stars with [C/Fe] \(>+2.5\) are likely the true descendants of Pop III stars which exploded as low-energy SNe.
Figure 7: Predicted ISM abundances, [X/Fe] vs. [Fe/H], after the explosion of the first generation of stars with fixed \(f_{\rm mix}=0.063\). Columns are the key chemical elements C, O and Mg; while rows are different Pop III explosions: faint SN, CC SN, and PISN. Colors represent the initial stellar masses of the Pop III progenitor. Star symbols are observed chemical abundances of CEMP-no (filled, with sizes proportional to the [C/Fe] values) and C-normal (open) halo stars. Other elements are shown in Figs. B2 (Al, Si, Ca) and B3 (Ti, Mn, Zn).

At \([{\rm C/Fe}]<+2.5\), the abundances of the metal-poor halo stars are
also consistent with being the descendants of Pop III stars exploding as more energetic SNe (\(10-100\) M\({}_{\odot}\)), with \(E_{\rm SN}\) up to \(10^{52}\) erg, but not of PISNe. However, determining the progenitors of moderately C-enhanced and C-normal metal-poor halo stars is complicated by possible contribution of Pop II stars (see the following Section). From the first column of Fig. 5 it is evident that, independent of the explosion energy (and progenitor mass), the metals yielded by individual Pop III SNe typically enable the ISM to reach Z\({}_{\rm ISM}\geq Z_{\rm cr}\), which implies that long-lived Pop II second-generation stars can form in these environments but also that massive Pop II stars can start contributing to the ISM enrichment. Furthermore, none of the observed halo stars have Z\({}_{\star}<Z_{\rm cr}\) (shaded area in Fig. 5), confirming the critical metallicity scenario for the transition between Pop III and Pop II star formation.
### Birth environments of Pop III descendants
Cosmological models and simulations show that true second generation stars are expected to be rare (see e.g. De Bennassuti et al., 2017; Hartwig et al., 2018; Liu et al., 2021; Rossi et al., 2023; Koutsouridou et al., 2023). Hence, here we investigate how the predicted abundance patterns of Pop III descendants change when their birth environments have also been enriched up to 50% by normal Pop II stars, i.e. for \(f_{\rm PopIII}\geq 0.5\).
#### 4.2.1 The [C/Fe] ratio
In Fig. 8 we show the [C/Fe] vs [Fe/H] density maps for Pop III descendants which have also been partially enriched by normal Pop II stars. We overlap the chemical abundances obtained when using the yields from both LC18 and WW95 since they do not show critical differences.
For increasing Pop II contribution to the chemical enrichment, [Fe/H] increases while [C/Fe] decreases, moving towards the abundances of the C-normal stars. As shown in Fig. 8 (first column), a small relative enrichment of 10% from Pop II stars is enough to limit the maximum [C/Fe] to \(\lesssim\) +2.5, even in environments predominantly imprinted by low-energy Pop III SNe. This strongly suggests that halo stars with [C/Fe] \(\gtrsim\) +2.5 have been enriched _only_ by low-energy Pop III stars, and are thus true second generation stars. Indeed, their extreme C-enhancement cannot be reproduced with any contribution (\(\geq\)10%) of Pop II stars, or with higher-energy Pop III SNe; the only exception is the lowest-mass PISN, which, however, does not produce the other observed abundance ratios (see Sect. 4.1).
On the other hand, we can reproduce the halo stars with [C/Fe] \(\lesssim\) +2.5 with products of low-energy Pop III SNe and a \(\geq\)10% pollution from Pop II stars. In particular, from Fig. 8 it is evident that the probability of producing a C-enhanced ISM with the products of low-energy Pop III SNe decreases as the contribution of Pop II stars to the chemical pollution increases. We also see that high-energy Pop III SNe can, with small probability, imprint the ISM up to [C/Fe]= +1.5 if the contribution of Pop II stars is \(\lesssim\) 10%. The same is true for PISN enrichment but in this case it is limited to [Fe/H] \(\geq\) \(-\)4. Finally, we note that gaseous environments predominantly imprinted by Pop III hypernovae cover a broad iron range, [Fe/H] \(\geq\) \(-\)4.5 but always have [C/Fe] \(<\) +0.7, regardless of the Pop II contribution.
Based on the [C/Fe] abundance ratio, we can only conclude that the most C-enhanced halo stars are true second generation stars. On the other hand, the CEMP-no stars with [C/Fe] \(<\) +2.5 could be polluted by Pop III stars exploding with different energies and Pop II stars at different levels. Thus, we need to investigate the imprint of the first stars also with heavier chemical elements.
#### 4.2.2 Elements beyond carbon
In Fig. 9 we show the predicted range of [X/Fe] vs [Fe/H] for the \(\alpha\)-elements O, Mg and Si, with varying Pop III contribution (from 100% down to 50%), and for increasing explosion energy of Pop III SNe (top to bottom rows). The same is shown in Fig. 10 for Al, Ca, and Zn (Fig. B4 for Ti and Mn). In general, the predicted abundance ratios for elements lighter than Ca follow a similar trend to [C/Fe], i.e. the maximum [X/Fe] values decrease as the Pop II contribution increases, moving towards the abundances of C-normal stars.
Most of the highly C-enhanced halo stars ([C/Fe] \(>\) +2.5, largest points in Figs. 9, 10 and B4) also have high values of [O, Mg, Si/Fe] and these enhancements agree only with 100% enrichment from Pop III SNe, either faint, core-collapse or high-energy SNe. On the other hand, we predict too high [Al/Fe] values for the highly C-enhanced stars. However, the 3D and non-LTE corrections for [Al/Fe] are estimated to be \(\gtrsim\) +0.6 dex for these Fe-poor stars (see Nordlander & Lind, 2017; see Sect. 5.2). At the lowest [Fe/H] \(<\) \(-\)5, the NLTE corrected data could therefore agree with an enrichment by Pop III faint and core-collapse SNe, while for the descendants of high-energy SNe we predict [Al/Fe] values which cannot agree, even after the correction.
The highly C-enhanced stars of our literature sample also show high [Zn/Fe] values (\(\gtrsim\) 0.8), though there is only one star with a real [Zn/Fe] measurement (Ezzeddine et al., 2019), while for the others only upper limits exist. Our predictions for faint, CC and HE SNe descendants are therefore consistent with the [Zn/Fe] values of the highly C-enhanced halo stars. The only star with a finite value for [Zn/Fe] is only marginally consistent with our predictions for different chemical elements. To explain its uncommon abundance pattern, peculiar SN explosion mechanisms, such as aspherical SN explosions, have indeed been proposed (e.g. see Ezzeddine et al., 2019).
To conclude, the abundance ratios of the descendants shown in Figs. 9, 10 and B4, confirm that the most C-enhanced and Fe-poor halo stars have been most likely imprinted by a single or few Pop III low-energy SNe (\(E_{\rm SN}<3\times 10^{51}\)erg).
The C-enhanced stars with [C/Fe] \(\leq\) +2.5 have [O, Mg, Si, Al, Ca/Fe] values in agreement with the descendants of Pop III SNe, either faint, core-collapse or high-energy, with a contribution from Pop III stars down to 70%, for the ones with [C/Fe] \(>\) +1.5, and down to 50%, for the ones with [C/Fe] \(\leq\) +1.5. Conversely, the ranges of abundance ratios predicted for the descendants of hypernovae and PISNe do not match the observed ones. As before, the high values of [Zn/Fe] for the CEMP stars with [C/Fe] \(\leq\) +2.5 are mainly based on upper limits and, therefore, are in agreement with all our predictions for Pop III descendants, but only marginally with PISNe.
Finally, the C-normal stars of our literature sample agree with all the abundance ratios predicted by our model (Figs. 9, 10 and B4)\({}^{7}\) for the descendants of Pop III stars with a substantial, \(\leq\) 50%, contribution from Pop II stars. Only the descendants of PISNe are not able to reproduce the abundances of C-normal stars.
Footnote 7: With the exception of [Ti/Fe] about which we will discuss in the next Section.
To conclude, while the progenitors of the most C-enhanced stars are likely single or few massive primordial SNe, the abundances of moderately C-enhanced ([C/Fe] \(\lesssim\) +2.5) and C-normal stars are consistent with both the enrichment from primordial Pop III SNe and/or from a subsequent generation of Pop II stars.
Figure 8: Density map of the predicted ISM [C/Fe] abundance ratios as a function of [Fe/H] with different \(f_{\rm PopIII}\): 90% (left), 70% (middle), 50% (right). Explosion energies of Pop III stars increase from top to bottom. The results obtained with the two sets of Pop II yields are shown together (Woosley & Weaver, 1995; Limongi & Chieffi, 2018). Star symbols are observed chemical abundances of CEMP-no (filled, the sizes are proportional to the [C/Fe] values) and C-normal (open) halo stars.
Figure 9: Predicted ISM abundance ratios [X/Fe] vs. [Fe/H], for the elements O, Mg, and Si, after the explosion of Pop III and Pop II SNe. Colored areas show different Pop III contribution to the chemical enrichment: 100% (yellow), 90% (orange), 70% (magenta) and 50% (purple). The explosion energy of Pop III stars increases from top to bottom rows. Star symbols are observed chemical abundances of CEMP-no (filled, the sizes are proportional to the [C/Fe] values) and C-normal (open) halo star. Other relevant chemical elements are in Figs. 10 (Al, Ca, Zn) and B4 (Ti, Mn, in Sect. B).
Figure 10: The same as Fig. 9 but for Al, Ca and Zn. Star symbols are observed chemical abundances of CEMP-no (filled, the sizes are proportional to the [C/Fe] values) and C-normal (open) halo star. Other relevant chemical elements are shown in Figs. 9 (O, Mg, Si) and B4 (Ti, Mn).
### The complete abundance pattern
In the previous Sections we investigated how a single Pop III star pollutes the ISM in which it explodes with its chemical products, and how the resulting chemical abundances change with the Pop II contribution. Our results show that C-normal environments (or stars) can be either imprinted by a single Pop III SN or predominantly polluted by normal Pop II stars. Here we aim at discriminating between these two possibilities by exploiting all the different chemical elements measured.
In Fig. 11, we show the mean chemical abundance patterns of all predicted Pop III 100% and 50% descendant stars, distinguishing between the CEMP-no ones with [Fe/H] \(\in[-7.5;-4]\) (top left) and \(\in(-4;-2]\) (top right) and the C-normal ones with [Fe/H] \(\in(-4;-2]\) (bottom right), compared with the average abundances of observed stars. We also show the average abundance pattern of second generation descendants which have [Fe/H] \(\in[-7.5;-4]\) and [C/Fe] \(\in[+2.5;+5.5]\) (bottom left), compared with the average abundances of the observed stars (upper) and the abundances of each single star (lower). In computing the mean chemical abundances of the mini-halos, we assume that the five types of primordial SNe and the four mixing efficiencies, already discussed, are equiprobable. Basically, we selected all the models that produce the considered chemical abundances and averaged over their abundance ratios. For instance, to produce the abundances depicted in the top right section of Fig. 11 we averaged over all the models that produce \(\rm[Fe/H]\) \(\in(-4;-2]\) and [C/Fe] \(>+0.7\).
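The selection-and-averaging step described above is simple enough to be written down explicitly. In the schematic below, `models` stands for a hypothetical list of model realisations (one per combination of progenitor mass, explosion energy and mixing efficiency), each carrying its predicted [Fe/H], [C/Fe] and [X/Fe] ratios; equal weights for the SN types and mixing efficiencies are implicit in the plain average.

```python
# Schematic version of the averaging behind Fig. 11: select the models falling in a
# ([Fe/H], [C/Fe]) window and average their [X/Fe] ratios with equal weights.
# `models` is a hypothetical list of dicts, e.g.
#   {"FeH": -3.1, "CFe": 1.2, "XFe": {"O": 1.0, "Mg": 0.6, ...}}
import numpy as np

def mean_pattern(models, feh_range=(-4.0, -2.0), cfe_min=0.7, elements=("O", "Mg", "Si")):
    selected = [m for m in models
                if feh_range[0] < m["FeH"] <= feh_range[1] and m["CFe"] > cfe_min]
    pattern = {}
    for el in elements:
        values = np.array([m["XFe"][el] for m in selected])
        pattern[el] = (values.mean(), values.std())   # mean and model-to-model dispersion
    return pattern
```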
For the CEMP-no stars in the top left of Fig. 11, the predicted abundance ratios, [X/Fe], of elements lighter than Ca decrease with increasing Pop II contribution. The average [Cu/Fe] and [Zn/Fe] increase as the contribution from Pop II stars increases. This effect is mainly due to the fact that the most copper- and zinc-deficient descendants, which are the ones enriched by primordial PISNe, become C-normal when we add the contribution from Pop II stars and do not contribute anymore to the average abundance pattern of CEMP-no descendants. On the contrary, the abundances of C-normal descendant stars (bottom panels of Fig. 11) do not change significantly with an increasing Pop II contribution. In all the cases depicted in Fig. 11, the standard deviation of the predicted chemical abundances is significantly reduced when the Pop II contribution increases.
Our models always underestimate the [N/Fe] ratio. This is partly due to the fact that PISNe ejecta present a strong odd-even effect, always producing \(\rm[N/Fe]\)\(\leq 0\) in the ISM (see Heger & Woosley, 2002; Salvadori et al., 2019), and partly due to the difficulty of modeling the nucleosynthesis of N in the stars with \(\rm M\in[10;100]M_{\odot}\). Indeed, the amount of N synthesized in the stars strongly depends on the mixing between the internal layers, which is usually achieved with stellar rotation (see Iwamoto et al., 2005; Kobayashi et al., 2006; Heger & Woosley, 2010; and Chiappini et al., 2005; Limongi & Chieffi, 2018 for a comparison between the yields for rotating and non-rotating stars). The yields adopted in this work are for non-rotating stars and predominantly present \(\rm[N/Fe]\)\(<0\).
At the lowest \(\rm[Fe/H]\)\(\leq-4\), the measured abundances of CEMP-no stars are only consistent with theoretical predictions of 100% Pop III contribution, as we pointed out in the previous Sections. CEMP-no stars with \(-4<\rm[Fe/H]\)\(<-2\), on the other hand, have chemical abundances that are consistent with either being second generation stars, or having Pop III enrichment at a \(\gtrsim 70\)% level, with a partial Pop II pollution. Our model overestimates the [Mg/Fe] and [Al/Fe] ratios, relative to observations, but when non-LTE corrections are applied, this discrepancy is expected to disappear (see Sect. 5.2). The predicted [Zn/Fe] average is lower than the observed one. As discussed in the previous Section, the yields of normal SNe are not able to reproduce the observed high [Zn/Fe] values and, moreover, the abundances of PISNe descendants, which have \(\rm[Zn/Fe]\)\(\ll 0\), strongly lower the average [Zn/Fe]. However, the average observed [Zn/Fe] is an upper limit and there is only one finite measurement, which is just at the edge of the values predicted by our model (Fig. 10).
The abundances depicted in the bottom left panels of Fig. 11 are a subclass of the abundances in the top left panels. The measured abundances include only the extremely C-enhanced stars ([C/Fe] \(>+2.5\)) which we predict to be the direct descendants of Pop III stars. First of all, we note that the measured abundances are very different from star to star, most of all for C, Mg, Al, Si and Zn. Moreover, a great number of them are only upper or lower limits: for instance, we have no definite measurement for [Mn/Fe] and just one for [Zn/Fe]. Therefore, the average abundances for [C/Fe] \(>+2.5\) shown in the upper panel might not be representative of the real values of [X/Fe]. In this case we show only the 100% Pop III SNe case because with a contribution of Pop II stars \(\geq 10\)% the descendants reach a maximum [C/Fe] \(<+2.5\). The average abundance ratios predicted for C, O, Na, Ca, Cr and Ni are consistent with the measured ones. Our model predicts higher [Mg/Fe] and [Al/Fe] with respect to the observations, but we expect this discrepancy to be relieved if non-LTE corrections are applied (see Sect. 5.2). Conversely, our model cannot reproduce the observed abundances of [Sc/Fe], [Co/Fe] and [Zn/Fe] (see also Kobayashi et al., 2020).
The measured chemical abundances of the C-normal stars in the range [Fe/H] \(\in(-4;-2]\), on the other hand, agree with our predictions for the descendants enriched by Pop III stars at a \(\gtrsim 50\)% level. However, if the birth environments of C-normal descendants are predominantly (\(>50\)%) polluted by Pop III stars, the predicted scatter is higher than what is observed in C-normal stars. Furthermore, the agreement between the average and the observed abundances is better for some elements, C, O, Sc, Co and Zn, with a contribution of Pop II stars at the \(\sim 50\)% level. For C-normal descendants we predict a lower [Na/Fe] average, but the non-LTE corrections (see Sect. 5.2) should lower the observed abundances. Finally, we predict a smaller [Ti/Fe] with respect to the observed one. Ti is only slightly affected by non-LTE effects and in general the stellar evolution models underestimate it (see Kobayashi et al., 2006, and references therein).
### The star-to-star scatter
The last and most conclusive result of our work concerns how the maximum scatter in abundance ratios is dependent on the relative pollution from Pop III stars, \(f_{\rm PopIII}\). In Fig. 12 we show the predicted dispersion of [C/Fe] and [Mg/Fe], with respect to \(f_{\rm PopIII}\), with a fixed mixing efficiency for Pop III stars of \(f_{mix}=0.063\). The pure Pop III descendants, \(f_{\rm PopIII}=100\)%, show a dispersion \(>5\) dex in the abundance ratios [C/Fe] and [Mg/Fe], as well as in the other chemical elements lighter than Ca. These abundance ratios are very dependent on the mass and SN explosion energy of the progenitor and therefore vary over a wide range of values. We point out that the predicted dispersion for the descendants of Pop III stars does not change if we use all the four mixing efficiencies together. This means that the scatter is driven by the different initial masses and SN explosion energies of Pop III stars. As Pop II stars contribute more to the pollution of the ISM, they wash out the diverse chemical peculiarities of the different primordial progenitors and the dispersion between different descendants is reduced. Finally, when the contribution from Pop III stars is negligible (Pop II only case), the abundances of the descendants almost correspond to the solar values.
To conclude, with our model, we predict that the scatter in [C/Fe] and [Mg/Fe] ratios is maximum for Pop III only enriched environments and that it decreases as the contamination from Pop II stars increases.
The _scatter diagnostic_ can also be applied to high-redshift absorption systems for which the measurement of the hydrogen column density is not possible, because this prediction only uses abundance ratios between different metals. It allows us to understand, without the classical comparison with [Fe/H], whether an absorption system has Pop III fingerprints in its gas (see Sodini et al. 2023, in prep.).
## 5 Discussion
### Model's generality and comparison with previous works
The parametric model proposed in this paper is very general, which makes it suitable for applications to a broad range of topics related to early chemical enrichment. Indeed it can interpret the chemical fingerprints left by the first SN explosions in both long-living stellar descendants (e.g. Skúladóttir et al. 2023a) and in more distant gas clouds, which can be observed as absorption systems (e.g. Salvadori et al. 2023, Sodini et al. in prep.). Following the results of cosmological simulations of primordial star formation (e.g. Hirano et al. 2014; Susa et al. 2014; Hirano & Bromm 2017), our model assumes that a single Pop III star forms in each proto-galaxy. This simple but
Figure 11: Mean chemical abundance patterns for our models (blue) with 100% and 50% contribution from Pop III stars that predict: CEMP-no descendants with \(-7.5\leq\mathrm{[Fe/H]}\leq-4\) (top left); CEMP-no descendants with \(-4<\mathrm{[Fe/H]}\leq-2\) (top right); C-normal descendants with \(-4<\mathrm{[Fe/H]}\leq-2\) (bottom right); compared with the mean measured abundances of CEMP-no stars (red) and C-normal stars (grey; see Sect. 2 and Fig. 3). _Bottom left panels_: Mean chemical abundance patterns predicted for second generation stars (blue, 100% Pop III pollution) with \(-7.5\leq\mathrm{[Fe/H]}\leq-4\) and \(+2.5\leq\mathrm{[C/Fe]}\leq+5.5\), compared with the mean measured abundances of extremely C-enhanced stars, excluding the upper/lower limits (red, upper panel) and the abundances of individual stars (lower panel, colors as in Fig. 1). Blue error bars are the standard deviation between models. For \(f_{\mathrm{PopIII}}=50\%\) the abundance patterns are calculated using both PopII yields from WW95 (darker blue) and LC18 (lighter blue). The coloured area between them represents an intrinsic uncertainty due to the choice of the Pop II yields.
physically motivated hypothesis allows us to understand how the chemical abundances of the Pop III star descendants vary with Pop III stellar properties: their initial mass, the mixing efficiency (see Sect. C) and the SN explosion energy. For the first time, we demonstrated the importance (and degeneracy) of these three unknowns in interpreting the entire chemical abundance patterns of ancient stars. This is the key to interpreting the results of more sophisticated semi-analytical models which assume different Pop III IMF and energy distribution functions to follow the formation and evolution of different galaxies (e.g. Rossi et al., 2023; Koutsouridou et al., 2023).
Our findings for an ISM solely enriched by Pop III SNe are in excellent agreement with the results of Cooke & Madau (2014) and Welsh et al. (2021), who studied the abundance of some specific chemical elements (C, Mg, Ca, Ni, Fe) after the pollution of Pop III stars only. However, here we also investigate how the contamination of normal Pop II stars can affect the abundance pattern of the Pop III enriched environments, comparing _all_ the chemical abundances measured for the halo stars with the ones predicted by our model. In particular, we show that all stars with [C/Fe] \(\geq+2.5\) are genuine second-generation stars. Our results show that the probability of a gaseous environment (star) being also imprinted by Pop II stars increases as [C/Fe] (and other abundance ratios such as [Mg/Fe]) decreases. In other words, the peculiar and variegate abundance pattern left in the ISM by Pop III SNe is gradually washed out by the dominant pollution from different generations of normal Pop II stars, which shrinks the range of the abundance ratios. Thus, we suggest that C-normal metal-poor halo stars might be the result of this dominant Pop II contribution, which is consistent with the results of both the metal-enrichment model developed by Liu et al. (2021) and cosmological semi-analytical models for the Milky Way formation (De Bennassuti et al., 2017; Koutsouridou et al., 2023).
But can we really be sure that C-normal stars are not truly second-generation objects? As we pointed out in Sect. 4.1, our model predicts that among C-normal (and [C/Fe] poor) stars there might be some second-generation stars, solely imprinted by Pop III SNe. This result is in line with the one of Welsh et al. (2021), who interpret the origin of C-normal stars with \(\rm[Fe/H]<-2.5\) using a stochastic model for Pop III chemical enrichment. However these authors, by showing that their [Mg/Ca] and [Ni/Fe] are well fitted by multiple Pop III high-energy SNe, concluded that C-normal stars are all second generation stars. Conversely, our analysis of the entire chemical abundance pattern (15 elements in total) seems to suggest that this is not the case, since the star-to-star scatter should have been much larger in the case of a pollution driven by Pop III SNe only.
In Fig. 13 we compare the observed star-to-star scatter with the predicted one for [C/Fe] and [Mg/Fe], separating the CEMP-no (top panels) and the C-normal stars (bottom panels). The shaded areas represent the star-to-star scatter of the stars in our literature sample. The theoretical scatter is computed separately for CEMP-no and C-normal descendants, fixing the mixing efficiency \(f_{\rm mix}=0.063\), by randomly selecting the same number of descendants in our model as available in the literature. We repeated this random procedure 100 times and averaged between the 100 minimum and maximum [X/Fe] values. In Fig. 13, we see that the star-to-star scatter of CEMP-no stars is consistent with the one predicted for the birth environments of second-generation stars. On the other hand, the star-to-star scatter of C-normal stars is consistent with the dispersion predicted for environments imprinted by Pop III stars at a \(\leq\) 50% level.
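The random-sampling step used for this comparison can be summarised as follows; `model_xfe` stands for the array of [X/Fe] values predicted for the relevant class of descendants at fixed \(f_{\rm mix}=0.063\), and the numbers of trials and drawn stars follow the description in the text.

```python
# Sketch of the predicted star-to-star scatter: draw as many model descendants as
# there are observed stars, repeat 100 times, and average the extremes of each draw.
import numpy as np

def predicted_scatter(model_xfe, n_obs, n_trials=100, seed=0):
    rng = np.random.default_rng(seed)
    mins, maxs = [], []
    for _ in range(n_trials):
        draw = rng.choice(model_xfe, size=n_obs, replace=False)
        mins.append(draw.min())
        maxs.append(draw.max())
    return np.mean(mins), np.mean(maxs)   # averaged minimum and maximum [X/Fe]
```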
Recently, Hartwig et al. (2018) proposed a new diagnostic to identify stars (or ISM) mono-enriched by a single Pop III SN. They show that in the [Mg/C] vs [Fe/H] diagram, stars with [Mg/C] \(\approx-1.5\) and \(\approx 0.7\) have the highest probability to be mono-enriched (see Fig. 11 and 15 in Hartwig et al., 2018). Do we find the same results for our environments enriched by a single Pop III star? In Fig. 14 we compare the predictions of Hartwig et al. (2018) with those of our model for the birth environments of second generation mono-enriched stars and with other Pop III descendants (\(50-90\%\) Pop III polluted). The area populated by Pop III mono-enriched stars is significantly wider than what was found in Hartwig et al. (2018). Furthermore, we do not predict second generation stars at [Mg/C] \(>+0.75\). These inconsistencies have a double explanation: firstly, Hartwig et al. (2018) only explored the imprint from low-energy (faint and core-collapse)
Figure 12: Maximum extent of the abundance ratios [C/Fe] (left) and [Mg/Fe] (right) for Pop III descendant stars as a function of relative Pop III pollution. The mixing efficiency is fixed, \(f_{\rm mix}=0.063\). Coloured areas represent different levels of enrichment from Pop III stars, \(f_{\rm PopIII}\): 100% (yellow), 90% (orange), 70% (magenta), 50% (purple), 30% (indigo), and only Pop II stars (grey). The dash-dotted lines correspond to [C/Fe] = +0.7 (left) and [Mg/Fe] = 0 (right).
Pop III SNe with initial masses from 10 to 100 \(\rm M_{\odot}\); secondly, they assumed the yields from Kobayashi (2012); Nomoto et al. (2013); Ishigaki et al. (2014) while we adopt those from Heger and Woosley (2010).
We also note that our predictions for an ISM contaminated by Pop II stars at a level between 10% and 50% populate the [Mg/C] diagram in the same areas identified by Hartwig et al. (2018) for Pop III mono-enriched stars. From Fig. 14 we see that the region occupied _only_ by Pop III mono-enriched stars is very narrow (see also Hansen et al., 2020). We see that only two CEMP-no stars with \(\rm[C/Fe]\geq+1.5\) in our entire literature sample are unambiguously 2G mono-enriched stars, based on Fig. 14, while all the others are consistent with being either 2G stars or partly contaminated by normal Pop II stars. CEMP-no halo stars in Fig. 14 that show \(\rm[Mg/C]<-1.8\) and \(\rm[Fe/H]>-4.5\) might have been imprinted also by Pop III/Pop II AGB stars, which are not considered in this work. Yet, the contribution of Pop III AGB stars has only a minor effect on the gas previously enriched by a SN because the amount of carbon ejected by AGB stars is lower than that ejected by SNe in the early universe (Rossi et al., 2023). The minor variations obtained by including Pop III AGB stars are well within other uncertainties of the model, e.g. in the chemical yields (Vanni et al. in prep). On the contrary, the two C-normal stars with extremely high \(\rm[Mg/C]\geq+1.0\) might have formed after the explosion of the most massive (\(m_{\star}\approx 40\,\rm M_{\odot}\)) Pop II stars, which eject high amounts of Mg (e.g. see Salvadori et al., 2019).
### Can we trust the data?
The predicted [C/Fe] and [Fe/H] ratios for the descendants of Pop III faint and core-collapse SNe only agree with the abundances of the most C-rich CEMP-no halo stars when the parameter \(\frac{f_{*}}{f_{\rm dil}}\) is maximum, which means having either a high Pop III star-formation efficiency and/or metals diluted in a small portion of the primordial gas. However, we have not corrected the chemical abundances of the
Figure 13: Maximum extent of the abundance ratios [C/Fe] (left) and [Mg/Fe] (right) for Pop III descendants as a function of relative Pop III pollution. The mixing efficiency is fixed, \(f_{\rm mix}=0.063\). Coloured areas represent different levels of enrichment from Pop III stars: 100% (yellow), 90% (orange), 70% (magenta), 50% (purple), 30% (indigo), and only Pop II stars (grey). Overlapping shaded areas are the maximum scatter between the measured abundances of the metal-poor halo stars, divided in two categories: CEMP-no stars (red) and C-normal stars (grey) from the literature sample, without considering the upper/lower limit measurements (Sect. 2). The dash-dotted lines correspond to [C/Fe] = +0.7 (left) and [Mg/Fe] = 0 (right).
observed stars for the non-LTE and 3D effects: ideally this should be done for each individual star, but we can estimate the overall corrections.
The C and Fe abundances of our literature sample are not corrected for non-LTE or 3D effects, except for a few stars among the most C-enhanced ones: the Christlieb et al. (2004) (only Fe), Caffau et al. (2011) (both C and Fe), Keller et al. (2014) (both), Bonifacio et al. (2018) (only C), Starkenburg et al. (2018) (only C) and Ezzeddine et al. (2019) (only Fe) stars. However, non-LTE 3D models of stellar atmospheres have become more and more sophisticated recently and the chemical abundances, even if already corrected, might deserve another revision. The 3D and non-LTE corrections of the iron abundance, if done consistently, are opposite (the 3D is negative and the non-LTE is positive), resulting in a total shift for [Fe/H] of the order of -0.05 to +0.15 dex (see Amarsi et al., 2019; Norris and Yong, 2019). The corrections to the carbon abundance are more severe, most of all for low metallicity stars, and can be up to -1.0 dex (see Amarsi et al., 2019; Norris and Yong, 2019), leading to a total correction to [C/Fe] of the order of -0.5 to -1.0 dex. With these corrections, the CEMP-no halo stars with [C/Fe]\(>+2.5\) would still agree with being the descendants of Pop III low-energy SNe, but the parameters, such as the initial mass of the progenitors, would change. Nevertheless, this would still exclude the possibility of being primarily enriched by high-energy Pop III SNe (see Fig. 7) and Pop II stars.
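As a simple illustration of how these shifts combine (using only the ranges quoted above, not star-by-star values), the correction to the abundance ratio is the difference of the corrections to the two elements, \(\Delta{\rm[C/Fe]}=\Delta{\rm[C/H]}-\Delta{\rm[Fe/H]}\): taking, for illustration, \(\Delta{\rm[C/H]}\simeq-0.5\) to \(-1.0\) dex and \(\Delta{\rm[Fe/H]}\simeq-0.05\) to \(+0.15\) dex gives \(\Delta{\rm[C/Fe]}\simeq-0.45\) to \(-1.15\) dex, i.e. of the order of the \(-0.5\) to \(-1.0\) dex quoted above.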
The 3D and non-LTE corrections are also not negligible for the abundances of O, Na, Mg and Al. The [O/Fe] ratios of C-normal stars in our literature sample, at \(-4<\) [Fe/H] \(<-2\), are corrected for 3D effects, as in Cayrel et al. (2004), by \(-0.23\) dex. The O abundances of CEMP-no stars at lower [Fe/H] are not corrected. For them the corrections should be higher with respect to the more Fe-rich ones, up to \(-0.6\) dex (see Amarsi et al., 2019). [Na/Fe], [Mg/Fe] and [Al/Fe] are not corrected for the entire literature sample. Cayrel et al. (2004) estimate a correction for [Na/Fe] of up to \(-0.5\) dex, which can improve the agreement with the models for C-normal stars (see Fig. 11), and for [Mg/Fe] of \(\sim+0.27\) dex (see also Andrievsky et al., 2010). Such an increase in the [Mg/Fe] ratio would make our models underestimate it for C-normal stars. This might be an indication that these stars have been polluted mostly by massive (\(\sim 30\) M\({}_{\odot}\)) Pop II stars, which eject high amounts of Mg with respect to Fe (this peculiar feature is washed out by the integrated contribution of the least massive Pop II stars). Ultimately, the correction to be applied to [Al/Fe] is \(\gtrsim+0.6\) dex (see e.g. Andrievsky et al., 2008; Nordlander and Lind, 2017), but it strongly depends on the metallicity of the star: the lower the metallicity, the higher the correction.
## 6 Conclusions
The metal-poor stars in the Galactic halo offer a unique opportunity to identify the chemical fingerprints of the first stars and hence understand their properties. The most iron-poor and carbon-rich halo stars are commonly thought to be true second generation stars, i.e. stars that have been imprinted solely by Pop III stars. On the other hand, the debate is still open for the more iron-rich halo stars, which are thought to be either imprinted only by Pop III stars or also by normal Pop II stars.
Here we aim at finding the peculiar chemical imprints left by the first stars, and at determining which of the halo stars are real descendants of Pop III stars, i.e. whether \(\gtrsim 50\%\) of their metals were produced by Pop III stars. In order to achieve our objectives, we further improved and extended the parametric model developed by Salvadori et al. (2019), and explored the chemical abundances in the first star-forming structures after the pollution of: (i) one Pop III star only; (ii) one Pop III star plus the Pop II stars which formed subsequently. Comparing the chemical abundances resulting from our model with literature halo stars, we find that:
* The most C-enhanced ([C/Fe] \(>+2.5\)) halo stars have chemical abundances that agree with an imprint from only one primordial Pop III star exploding with low energy (\(<2\times 10^{51}\) erg).
* C-enhanced metal-poor halo stars with \(+0.7<\) [C/Fe] \(<+2.5\) are likely born in environments polluted by both Pop III and Pop II stars where Pop III stars provided \(\geq 50\%\) of the total amount of metals.
* C-normal metal-poor halo stars have probably been imprinted mainly by Pop II SNe, which provided \(\geq 50\%\) of the total amount of metals in their birth places. However, we might also find C-normal metal-poor stars which are pure descendants of the most energetic Pop III SNe (hypernovae and PISNe), with peculiar and outlier abundance patterns.
A key diagnostic employed to understand the origin of C-normal stars is the dispersion between the chemical abundances predicted by different models and its variation with respect to the pollution level from Pop II stars. Indeed, the scatter between the abundances of different descendants decreases as the contribution of Pop II stars to the metal pollution increases (see Sect. 4.4). If compared to the star-to-star dispersion of the chemical abundances of CEMP-no and C-normal halo stars, this supports the scenario where at [C/Fe] \(\sim 0\) the probability for metal-poor halo stars to be predominantly polluted by Pop II stars is extremely high. Very recently, it has been shown that the abundance dispersion of some high-redshift gaseous absorption systems also increases with redshift (see Sodini et al., 2023, in prep.), denoting a possible trace left by Pop III stars at \(z\gtrsim 4\).
Our new model provides a useful tool to analyse the abundances of metal-poor environments (present-day stars and high-redshift gas
Figure 14: Predicted abundance ratios [Mg/C] vs. [Fe/H] for Pop III descendants, compared with the literature sample. Colored areas show different Pop III contribution: 100% (yellow), 90% (orange), 70% (magenta), 50% (purple). Star symbols are observed chemical abundances of CEMP-no (filled, sizes are proportional to their [C/Fe]) and C-normal (open) halo star. Green ellipses show the Pop III mono-enriched areas identified in Fig. 11 of Hartwig et al. (2018).
clouds) and to identify which of them have likely been enriched by the first Pop III SNe and at which level. This will soon become extremely important, when we will be able to exploit the chemical abundances of a huge number of present-day stars provided by the 4MOST surveys (see Feltzing et al., 2018; Christlieb et al., 2019; de Jong et al., 2019; Skúladóttir et al., 2023b) for the stellar halo and the dwarf galaxy satellites of the Milky Way, and by WEAVE (see Dalton et al., 2014; Jin et al., 2023), which will complement the work done by Gaia observing the entire Milky Way. Our model and new diagnostics have already been used to understand the origin of the recently observed high-z absorption systems (see Saccardi et al., 2023; Salvadori et al., 2023; Sodini et al., 2023, in prep.) and will be of fundamental importance to guide future observations of high-z absorbers with ANDES on the ELT (see Marconi et al., 2022), which aim at unveiling the signature of Pop III SNe.
## Acknowledgements
The authors thank Marco Limongi for the inspiring discussions about the stellar yields. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 804240). I.V. and S.S. acknowledge support from the PRIN-MIUR17, prot. n. 2017T4ARJ5.
## Data Availability
The Heger & Woosley (2010) yields and the routines to use them are available at [https://pypi.org/project/starfit/](https://pypi.org/project/starfit/) and [https://2sn.org/starfit/](https://2sn.org/starfit/).
|
2309.05181 | Magnetospheric physics of magnetars | Several aspects of the magnetospheric physics of magnetars are summarized,
including: GeV and hard X-ray emissions of magnetars, timing behaviors during
magnetar outburst (soft X-ray observations), optical/IR observations of
magnetars, radio emission of magnetars, and accreting magnetars. A unified
picture for pulsars and magnetars is adopted, especially wind braking of
magnetars, magnetar+ fallback disk systems, twisted dipole magnetic field, and
accreting low magnetic field magnetars etc. It is pointed out that magnetars
are related to a broad range of astrophysical phenomena. | H. Tong | 2023-09-11T00:55:21Z | http://arxiv.org/abs/2309.05181v1 | # Magnetospheric physics of magnetars
###### Abstract
Several aspects of the magnetospheric physics of magnetars are summarized, including: GeV and hard X-ray emissions of magnetars, timing behaviors during magnetar outburst (soft X-ray observations), optical/IR observations of magnetars, radio emission of magnetars, and accreting magnetars. A unified picture for pulsars and magnetars is adopted, especially wind braking of magnetars, magnetar + fallback disk systems, twisted dipole magnetic fields, and accreting low magnetic field magnetars, etc. It is pointed out that magnetars are related to a broad range of astrophysical phenomena.
magnetars; pulsars
With the discovery of radio, optical/IR and persistent hard X-ray emission from magnetars around 2006 [11; 12; 13], the study of magnetars entered the multiwave era (previously, observations of magnetars were mainly in the soft X-ray band, with hard X-rays only for the bursts). Later on, the research of magnetars became more and more diverse, e.g., the discovery of low magnetic field magnetars in 2010 [14], accreting magnetars in ultra-luminous X-ray pulsars in 2014 [15], and fast radio bursts from magnetars in 2020 [16; 17; 18].
### Overview of magnetars
In the early references, magnetars were grouped into two subclasses: anomalous X-ray pulsars (dubbed AXPs) and soft gamma-ray repeaters (dubbed SGRs). Anomalous X-ray pulsars are "anomalous" because their X-ray luminosities are higher than their rotational energy loss rates and they show no binary signature. Therefore, their energy budget was a puzzle at that time (a time when only rotation-powered pulsars and accretion-powered pulsars were known). Soft gamma-ray repeaters got their name in comparison with typical gamma-ray bursts: the typical photon energy of soft gamma-ray repeaters is lower than that of gamma-ray bursts, and they can show recurrent bursts (i.e., not one-off events). Today (2023), anomalous X-ray pulsars and soft gamma-ray repeaters are believed to belong to the same class observationally. They are believed to be magnetars, i.e. neutron stars powered by their magnetic energy release. In the following, we will mainly use the name "magnetar".
_Quantum critical field_. In magnetar research, the quantum critical magnetic field is often employed. It is defined as the magnetic field at which the electron cyclotron energy equals the electron rest mass energy: \(B_{q}=m_{e}^{2}c^{3}/(e\hbar)=4.4\times 10^{13}\) G. The meaning of the quantum critical field is that, when dealing with microscopic physics (e.g., conduction coefficients) in such strong fields, quantum electrodynamics (QED) should be employed. The non-relativistic Schrödinger equation is no longer valid in such strong magnetic fields. The macroscopic physics is unchanged, e.g., magnetohydrodynamics is still valid in the neutron star crust.
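As a quick numerical check of the quoted value (a back-of-the-envelope evaluation in Gaussian cgs units, independent of any magnetar model):

```python
# Quantum critical field B_q = m_e^2 c^3 / (e * hbar), evaluated in Gaussian cgs units.
m_e  = 9.109e-28   # electron mass [g]
c    = 2.998e10    # speed of light [cm/s]
e    = 4.803e-10   # elementary charge [esu]
hbar = 1.055e-27   # reduced Planck constant [erg s]

B_q = m_e**2 * c**3 / (e * hbar)
print(f"B_q = {B_q:.2e} G")   # -> ~4.4e13 G
```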
_Traditional picture of magnetars_. A traditional picture of magnetars can be found in earlier reviews of magnetars (e.g., [20]). Based on multiwave observations at that time,
Figure 1: Magnetars on the \(P-\dot{P}\) diagram. Blue squares are magnetars, empty blue squares are radio emitting magnetars. Updated from Figure 1 in [19], see the caption there for the meaning of various pulsar-like objects.
a magnetar is thought to be: (1) a young neutron star (since it is found in association with supernova remnants and massive star clusters), (2) with a strong dipole magnetic field, e.g., higher than the quantum critical value, \(B_{\rm dip}>B_{q}=4.4\times 10^{13}\) G (the dipole field is mainly responsible for the braking of magnetars), and (3) with an even stronger multipole field, e.g., \(B_{\rm mul}=10^{14}-10^{15}\) G. It is the multipole field which is responsible for the magnetar's multiwave emissions, e.g., bursts, super-Eddington luminosities and persistent emissions. This traditional picture is very nice. But it may be too simple to represent the complicated and diverse observations of magnetars.
A wonderful example of magnetar observations is their giant flares (only three up to the present, Fig. 7 in [20]). A giant flare of a magnetar shows (1) a spike and a pulsating tail (the tails can last for hundreds of seconds), and (2) a luminosity up to \(10^{4}\) times the Eddington limit during the tail (i.e., \(10^{42}\) erg s\({}^{-1}\)). A magnetic field as high as \(10^{15}\) G may serve as the energy source of the giant flare and may cause the super-Eddington luminosity during the tail phase [8]. In our opinion, this is the strongest evidence for the existence of magnetars (e.g., compared with that for fallback disks). Magnetars are restless compared with normal rotation-powered pulsars. The magnetar SGR 1806-20 not only has a giant flare, but also has many bursts [21; 22]. During these periods, the frequency derivative (i.e., the spin-down torque) can vary by a factor of several, even by an order of magnitude. Therefore, their position on the \(P\)-\(\dot{P}\) diagram (Figure 1) changes with time, which is uncommon for normal pulsars.
The discovery of low magnetic field magnetars clearly shows that the traditional picture is incomplete [14; 23; 24]. Magnetars not only have bursts (including giant flares), but can also have outbursts: an increase of the persistent X-ray flux by two or three orders of magnitude, followed by a decay over the following months or years [25]. During an outburst, magnetars can have variable spectra and timing behaviors, and transient radio emission ([11], the first transient magnetar and the first radio emitting magnetar XTE J1810-197). Up to now, many magnetars show outbursts and their light curves are generally exponentially decaying or of some more complicated form [26]. However, looking into the details, every magnetar has its own peculiarity. For example, the magnetar SGR 1935+2154 is not so outstanding judging by its outbursts. However, it showed Galactic fast radio bursts, accompanied by X-ray bursts [16; 17; 18; 27].
_Magnetospheric physics of magnetars._ The reason why magnetars can have many activities while normal pulsars can't may be that: magnetars have a twisted magnetosphere compared with that of normal pulsars [28]. The magnetic field of normal pulsars may be mainly dipole, with no twist and may be considered as the ground state of the magnetic field configuration. While the magnetic field of magnetars may be a twisted dipole, plus some local twisted multipole field. A twisted magnetic field has free energy compared with a dipole field. The release of this magnetic free energy may responsible for the burst and multiwave emissions of magnetars [29; 30; 31].
This review mainly lists several observational aspects of magnetars and points out the possible magnetospheric physics behind them. It is in no way complete. More can be found in [20](still intriguing at present), [32; 33; 34; 35; 36; 37] etc. A general overview of magnetars has been given in this section. The following sections will provide more details on different aspects and can be read/skipped by relevance.
## 2 Multiwave emissions of magnetars
### GeV emission of magnetars
By applying the outer gap model to magnetars, it is expected that some of them should have gamma-ray emission and may be detected by Fermi [38]. However, Fermi/LAT observation of 4U 0142+61 (the brightest magnetar in X-rays) resulted in non-detection [39]. Fermi/LAT observation of all magnetars also reported non-detection [40]. By applying the outer gap model with updated input to magnetars [41; 42], it is found that three of the magnetars should have been seen by Fermi. And the Fermi upper limits of 4U 0142+61
are already below the theoretical spectral energy distributions. The conflict between the outer gap model in the case of magnetars and the Fermi observations is confirmed by later observations [43].
The Fermi observations of magnetars imply that either (1) the outer gap model is wrong (which is unlikely considering the observations of gamma-ray pulsars [44]), or (2) the traditional magnetar picture is wrong. It cannot be excluded that AXPs/SGRs are actually accreting systems [45; 46; 47; 48]. Another solution is that magnetars may be wind braking [49]. In the wind braking scenario, magnetars do not have a strong dipole field, and the vacuum gaps (e.g., the outer gap) cannot exist in the case of magnetars.
There are also very high energy (e.g., TeV) observations of magnetars [50; 51]. The general upper limit is about 1% Crab unit. Considering that the Crab rotational energy loss rate is about \(10^{38}\) erg s\({}^{-1}\), the typical X-ray luminosity (which is powered by the magnetic energy release and much higher than their rotational energy loss rate) of magnetars is about \(10^{35}\) erg s\({}^{-1}\) which is 1/1000 of the Crab rotational energy loss rate. Therefore, a 1% Crab unit upper limit is not constraining. In the future, GeV and TeV emission of magnetars during bursts (including giant flares) may be one thing that can be expected.
### Hard X-ray emission of magnetars
Magnetars can have persistent hard X-ray emissions [13]. This is rather unexpected, considering that the power-law component of the magnetar soft X-ray spectrum is rather steep. In contrast, the hard X-ray emission of magnetars has a rather flat spectrum. Its total luminosity is comparable with that of the soft X-rays. Therefore, the hard X-ray emission forms a distinct component in addition to the soft X-ray component (the soft X-ray component is mainly composed of a blackbody plus a power law). Combined with the Fermi non-detection in the GeV range (\(>100\) MeV, [40; 52]), the spectra of magnetars are expected to have a cut-off around \(\sim 1\) MeV. The exact cut-off energy is unknown at present.
There are many theoretical modelings of magnetars' hard X-ray emissions ([53] and references therein). Possible candidates include: bremsstrahlung, resonant Compton scattering, the synchrotron process, or bulk motion Comptonization in the accretion model, etc. Both the magnetospheric models and the bulk motion Comptonization model are possible [54; 55]. Insight-HXMT observations may further constrain the cut-off and spectra of magnetars' hard X-ray emissions [53].
### Timing behaviors during outburst (soft-X-ray observations)
The soft X-ray band is one of the two main channels through which we learn about magnetars (the other is radio observations). During soft X-ray outbursts, magnetars show various kinds of timing and spectral variabilities. At the same time, transient hard X-ray emission (see [56] for a recent example) and transient radio emission may also be detected during the outburst. We will focus on magnetars' timing behaviors during outburst.
_How are magnetars spun down?_ There are many timing events in magnetars, including: varying period derivatives, low magnetic field magnetars (or low period-derivative magnetars), anti-glitches, and negative correlations between the X-ray luminosity and the period derivative (the period derivative reflects the spin-down torque of the magnetar). For example, repeated and delayed torque variations are seen several times in the magnetar 1E 1048.1\(-\)5937 [57; 58], a varying spin-down rate is also found using radio observations of the magnetar PSR J1622\(-\)4950 [59], and a possible decreasing spin-down rate is also reported in the low magnetic field magnetar Swift J1822.3\(-\)1606 [60]. This raises the question: "How are magnetars spun down?". A physical model for this question should answer: (1) why are there so many timing events in, and mainly in, magnetars? (2) How can the spin-down mechanisms of pulsars and magnetars be unified?
_Various modelings for magnetars._ For the spin-down mechanism and related physics of magnetars, there are various modelings, employing or not employing the magnetar model. These include [61]: (1) the neutron star + twisted magnetosphere model [28; 29; 30], (2) the wind braking model of magnetars [49], (3) coupled magnetic and thermal evolution [62] (the first three
modelings are in the magnetar domain), (4) fallback disk model [45; 46; 47], (5) Accretion induced star-quake model [63], (6) Quark nova remnant [64], (7) white dwarf model for AXPs and SGRs [65; 66] etc. These modelings may share some common merits. There are also various subsequent developments for every model.
The wind braking model of magnetars focuses on the timing behaviors of magnetars. The general picture of wind braking is similar for both normal pulsars and magnetars: (1) the particles in the magnetosphere will experience acceleration and subsequent radiation. This results in the star's multiwave emissions. (2) When flowing out, this particle component will also take away the rotational energy of the neutron star. This results in the spin-down of the pulsar. The same particles will contribute both to the radiation and to the spin-down of the neutron star. Therefore, a correlation between the emission and timing behaviors is naturally expected in the wind braking model. In the case of normal pulsars, the spin-down is made up of the sum of the dipole radiation and the particle component ([67; 68] and references therein). There are various winds in the universe: the solar wind and stellar winds (Wolf-Rayet stars, which will result in type Ib/Ic supernovae, and high-mass X-ray binaries, which are neutron stars accreting the winds of their binary companions). The particle (and electromagnetic field) outflow in the case of pulsars and magnetars is also named a "wind". The existence of pulsar wind nebulae clearly demonstrates the wind of pulsars [69].
_Wind braking of magnetars._ In the wind braking model of magnetars [49], the rotational energy loss rate is enhanced by the particle wind, \(\dot{E}_{w}=\dot{E}_{d}(L_{p}/\dot{E}_{d})^{1/2}\), where \(\dot{E}_{w}\) and \(\dot{E}_{d}\) are the rotational energy loss rates due to the particle wind and magnetic dipole radiation, respectively, and \(L_{p}\) is the particle wind luminosity. The particle wind is dominated by magnetic energy release in the case of magnetars. It may be comparable with the X-ray luminosity, which can be significantly larger than the magnetar's rotational energy loss rate: \(L_{p}\sim L_{x}\gg\dot{E}_{\rm rot}\). In this case, the required dipole magnetic field will be much smaller than that under the magnetic dipole braking assumption (i.e., the characteristic magnetic field). In the case of normal pulsars, the particle wind is due to the rotational energy loss rate, \(L_{p}\sim\dot{E}_{\rm rot}\). Then the rotational energy loss due to the particle wind is comparable with that of the magnetic dipole radiation. Therefore, wind braking of magnetars can unify the spin-down mechanism of normal pulsars and magnetars. The wind braking model of magnetars had also been proposed earlier [70]. However, when Harding et al. saw that a strong dipole magnetic field is not needed in the wind braking model, they said "the magnetar model must be abandoned" as the penalty of the wind braking model. However, one point they did not realize is that there are two kinds of magnetic field in magnetars (the dipole field and the multipole field). In the wind braking model, a magnetar is a neutron star with a strong multipole field, with or without a strong dipole field (which does not play a significant role). Once this point is realized, many challenging observations of magnetars can be understood [49]. The model also has clear predictions: a magnetism-powered pulsar wind nebula and a braking index smaller than three.
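To make the scaling explicit, the snippet below uses only the relation quoted above, \(\dot{E}_{w}=\dot{E}_{d}(L_{p}/\dot{E}_{d})^{1/2}\): if the observed spin-down power is supplied by the wind, then \(\dot{E}_{d}=\dot{E}_{\rm rot}^{2}/L_{p}\), and since the dipole field scales as \(\dot{E}_{d}^{1/2}\) at fixed period, the required dipole field is reduced by a factor \((\dot{E}_{\rm rot}/L_{p})^{1/2}\) relative to the characteristic (pure dipole braking) value. The input numbers are placeholders for a typical magnetar, not a fit to any particular source.

```python
# Illustrative wind-braking estimate (placeholder numbers, not a fit to a real source).
# If the observed spin-down power E_rot is supplied by the wind,
#   E_rot = E_w = (E_d * L_p)**0.5  =>  E_d = E_rot**2 / L_p,
# and B_dip scales as E_d**0.5 at fixed period, so (valid when L_p > E_rot)
#   B_wind = B_characteristic * (E_rot / L_p)**0.5.
E_rot  = 1.0e33   # observed rotational energy loss rate [erg/s]
L_p    = 1.0e35   # particle wind luminosity ~ X-ray luminosity [erg/s]
B_char = 1.0e14   # field inferred assuming pure magnetic dipole braking [G]

E_d_needed = E_rot**2 / L_p
B_wind = B_char * (E_rot / L_p)**0.5
print(f"dipole-radiation power needed: {E_d_needed:.1e} erg/s")
print(f"dipole field needed: {B_wind:.1e} G (vs {B_char:.0e} G under pure dipole braking)")
```

With these placeholder numbers the inferred dipole field drops by an order of magnitude, which is the sense in which a magnetar in this picture needs a strong multipole field but not necessarily a strong dipole field.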
Several subsequent observations of magnetars can also be explained in the wind braking model. For example: (1) the timing behavior of the low magnetic field magnetar [14; 71]. (2) The possible decreasing period derivative (i.e. spin-down torque) of the low magnetic field magnetar Swift J1822.3\(-\)1606 [60; 72]. (3) The negative correlation between the X-ray luminosity and spin-down torque of the Galactic center magnetar [73; 74]. (4) The anti-glitch in the magnetar 1E 2259+586 can also be understood naturally in the wind braking scenario [75; 76]. (5) The possible magnetar wind nebula around the magnetar Swift J1834.9-0846 [77; 78]. Glitches of normal pulsars have been studied for over 50 years. In the case of magnetars, anti-glitches were discovered. This again demonstrates the peculiarity of magnetars. Magnetars can provide us with many things unexpected in normal pulsars (FRBs, anti-glitches, fallback disks etc.).
The reason why magnetars can have so many timing and radiative events may be that their magnetic field is a twisted dipole field instead of a simple dipole field. The idea of a twisted magnetic field was discussed earlier in the case of the solar magnetic field [79]. Later it was applied to the case of magnetars [28; 29]. From a geometrical point of view, the twist can be westward or eastward. More important is that a twisted magnetic
field carries magnetic free energy. The long-term flux decay of transient magnetars may be due to the untwisting of a globally twisted magnetic field [30]. In a globally twisted magnetic field, the magnetar may have large polar caps. This will have many consequences for the radio and X-ray emissions of magnetars. The calculations for both local and global twists are rather complicated. A simplified model (toy model) was developed for the flux decay, shrinking hot spot and delayed spin-down torque of magnetar outbursts [19]. A toy model is easy to use, especially for observers.
### Optical/IR observations of magnetars: fallback disks
Some of the ejected material of a supernova may fall back and form a disk around the central compact star. This is the idea of fallback disks. In the case of pulsars, the proposal of a fallback disk has a long history [80]. However, no fallback disk had been found observationally. Optical/IR observations of the magnetar 4U 0142+61 revealed the possible existence of a fallback disk [12]. The goal of finding a fallback disk was thus achieved in a magnetar. Both the magnetar model and the fallback disk model claim their success for the fallback disk around 4U 0142+61.
_Magnetar+fallback disk system._ The combination of a magnetar and a fallback disk made a success in explaining the central compact object (dubbed CCO, cyan circles in Figure 1) inside the supernova remnant RCW 103. This compact object has a pulsation period of about 6.6 hours [81]. It is confirmed to be a magnetar (by its magnetar-like burst and outburst, [82; 83]). This makes the magnetar inside RCW 103 a very special magnetar. Compared with other magnetars, other central compact objects, normal pulsars, and accreting neutron stars, the magnetar inside RCW 103 had the longest pulsation period at that time. A combination of a magnetar and a fallback disk may explain its long pulsation period [84]. A high disk mass (\(\sim 10^{-5}\) M\({}_{\odot}\)) and a high dipole field (\(\sim 5\times 10^{15}\) G) are required to explain a period of about \(2\times 10^{4}\) s. The later discovered long period radio pulsars (dubbed LPRPs, red circles in Figure 1, where sources with periods longer than 23.5 s are not shown; see Figure 1 in [88] for updates) may also be magnetar+fallback disk systems [85; 86; 87; 88]. Another possibility for long period radio pulsars is that they are white dwarfs [89]. It is interesting to note that the white dwarf model is also an alternative to the magnetar model for AXPs and SGRs (the fallback disk model was also originally proposed to beat the magnetar model). The fallback disk may be relevant to many other aspects of pulsars and magnetars (nulling, braking index, precession etc., [90] and references therein).
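A rough order-of-magnitude illustration (not the evolutionary calculation of [84]) of why a very strong dipole field together with a low late-time accretion rate can be associated with such a long period: equating the conventional magnetospheric (Alfvén) radius with the corotation radius for \(P\approx 6.6\) hours gives the accretion rate at which the accretion torques roughly balance. All input numbers below are assumptions for illustration only.

```python
# Rough spin-equilibrium illustration for a magnetar + fallback disk (assumed numbers).
# Corotation radius: r_c = (G M P^2 / 4 pi^2)^(1/3)
# Alfven radius:     r_m = (mu^4 / (2 G M Mdot^2))^(1/7), with mu = B R^3
# Setting r_m = r_c gives the accretion rate at which the torques roughly balance.
import numpy as np

G, Msun = 6.674e-8, 1.989e33
M, R = 1.4 * Msun, 1.0e6          # neutron star mass [g] and radius [cm]
P = 6.6 * 3600.0                  # spin period [s]
B = 5.0e15                        # assumed dipole field [G]

mu = B * R**3
r_c = (G * M * P**2 / (4 * np.pi**2))**(1.0 / 3.0)
Mdot_eq = np.sqrt(mu**4 / (2 * G * M * r_c**7))
print(f"corotation radius: {r_c:.1e} cm")
print(f"equilibrium accretion rate: {Mdot_eq:.1e} g/s "
      f"(~{Mdot_eq / (Msun / 3.156e7):.1e} Msun/yr)")
```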
### Radio emission of magnetars
Radio observations have found most of the pulsars known up to now [91]. The radio emission of magnetars also reveals a wealth of information about their physics and provides links between magnetars and normal pulsars. Among the about 30 magnetars known up to now [92], 6 magnetars are observed to have radio emission (in chronological order, emphasizing the radio polarization aspect): (1) XTE J1810\(-\)197 (first transient magnetar and first radio emitting magnetar [93; 94], revived later [95]), (2) 1E 1547.0\(-\)5408 [96], (3) PSR J1622\(-\)4950 ([59], revived later [97]), (4) SGR 1745\(-\)2900 (Galactic centre magnetar [98; 99]), (5) Swift J1818.0\(-\)1607 (may be a transition object [100; 101; 102]), (6) SGR 1935+2154 (emitting Galactic FRBs [103]). The radio emitting high magnetic field pulsar (blue dots in Figure 1) PSR J1119\(-\)6127 is also reported to have magnetar activities [104]. These observations of magnetar radio emission provide clues and links to the physics of FRBs [105; 106].
The first radio emitting magnetar XTE J1810\(-\)197 and later more sources tell us that [11; 95]: (1) magnetar radio emissions have flat spectra, (2) they are highly variable (flux, pulse profile, timing etc.), (3) they are transients (they disappear during the outburst decaying phase, and may revive during the next outburst). The radio-loud magnetar PSR J1622\(-\)4950 shows a decreasing polarization position angle with time [59], which implies a time-evolving magnetosphere of magnetars. Swift J1818.0\(-\)1607 had a steep spectrum at first and a flat spectrum later, and may be a transition object between normal pulsars and magnetars [102]. Its bright and narrow single pulses and flat polarization position angle are similar to those of other
magnetars and of FRBs [100; 101; 105]. The polarization position angle also changes slope, which has never been observed in normal pulsars and which again requires a dynamic magnetosphere of magnetars [101]. A dynamic polarization position angle is also found in FRBs [106], which may imply that similar physics is happening there.
In summary, the radio emissions of magnetars are highly variable (flux, pulse profile, timing, polarization, position angle, etc.). The position angle depends on the magnetic field geometry of the neutron star. A time-varying position angle may indicate a time-varying magnetic field in magnetars. This is consistent with the untwisting picture of magnetar outbursts. The question is: how to model the position angle for a complex field geometry? Once obtained, this model can be applied to both magnetars and FRBs.
_Rotating vector model for magnetars._ Assuming a globally twisted dipole magnetic field, the magnetic field geometry can be approximated analytically [30]. Employing spherical geometry or differential geometry, the modification of the rotating vector model (a model for the position angle) in the case of magnetars can be obtained analytically [107]. Once another magnetic field geometry is obtained, given the toroidal field, the modification to the position angle can also be approximated. Therefore, every magnetospheric model for magnetars should calculate its field geometry and compare with the radio observations of magnetars. In the presence of multipole field, the appearance and disappearance of multipole field may cause a changing slope of the position angle ([101]).
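For reference, the standard (untwisted) rotating vector model, to which a magnetar version should reduce when the twist vanishes, can be written in a few lines; \(\alpha\) is the magnetic inclination, \(\zeta\) the angle between the spin axis and the line of sight, and \(\phi\) the rotational phase. The twisted-field corrections of [107] are not reproduced here, and sign conventions differ between authors.

```python
# Standard (untwisted) rotating vector model for the polarization position angle.
# alpha: magnetic inclination, zeta: viewing angle, phi: rotational phase (radians).
import numpy as np

def rvm_position_angle(phi, alpha, zeta, phi0=0.0, psi0=0.0):
    num = np.sin(alpha) * np.sin(phi - phi0)
    den = np.sin(zeta) * np.cos(alpha) - np.cos(zeta) * np.sin(alpha) * np.cos(phi - phi0)
    return psi0 + np.arctan2(num, den)

phase = np.linspace(-0.2, 0.2, 5) * 2 * np.pi
print(np.degrees(rvm_position_angle(phase, alpha=np.radians(70.0), zeta=np.radians(75.0))))
```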
### Accreting magnetars
Magnetars are just a special kind of pulsar. Since there are accreting neutron stars in binary systems [3], it is natural to expect that there are also magnetars in binary systems. The question is: how can we identify possible signatures of magnetars in a binary system? From the X-ray spectral perspective, magnetars have flat hard X-ray spectra compared with rotation-powered pulsars and accretion-powered pulsars [32]. From the physical point of view (as discussed above), the key difference between magnetars and rotation-powered pulsars is the magnetar's multipole field. It is not their position on the \(P-\dot{P}\) diagram (i.e., not the surface dipole field). Therefore, we must find evidence for strong multipole fields in accreting systems in order to say that they are accreting magnetar candidates. Possible evidence for a strong multipole field includes [108]: (1) magnetar bursts, (2) a hard X-ray tail, etc. One thing that is rather unexpected is the discovery of ultra-luminous X-ray pulsars, which may be super-Eddington accreting magnetars in binary systems [15].
Ultraluminous X-ray sources are super-Eddington (for a stellar mass object) point sources offset from the galactic centre. Previously, the ultra-luminous X-ray sources were thought to be intermediate mass black holes or super-Eddington accreting stellar mass black holes [109]. The detection of pulsations (a pulsation period modulated by the orbital motion) from an ultra-luminous X-ray source confirms its neutron star nature. Like normal accreting neutron stars, the ultra-luminous X-ray pulsar is also observed to be spinning up [15]. Then the problem (or difficulty) of ultra-luminous X-ray sources is twofold: how to explain their super-Eddington luminosity (e.g., \(10^{40}\) erg s\({}^{-1}\)) and their spin-up rate (i.e., even if we do not know the super-Eddington mechanism, such a huge accretion flow will result in a very large spin-up torque, which is much larger than the observed value)? The answer is accreting magnetars. This is proposed in both the observational paper [15] and subsequent modelings [110; 111; 112].
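To put the quoted luminosity in context, the Eddington luminosity for a \(1.4\,M_{\odot}\) neutron star (electron-scattering opacity; a standard estimate, independent of the magnetar interpretation) is:

```python
# Eddington luminosity for electron scattering: L_Edd = 4 pi G M m_p c / sigma_T (cgs).
import numpy as np

G, c = 6.674e-8, 2.998e10             # gravitational constant, speed of light
m_p, sigma_T = 1.673e-24, 6.652e-25   # proton mass [g], Thomson cross-section [cm^2]
M = 1.4 * 1.989e33                    # neutron star mass [g]

L_edd = 4.0 * np.pi * G * M * m_p * c / sigma_T
print(f"L_Edd ~ {L_edd:.1e} erg/s; 1e40 erg/s is ~{1e40 / L_edd:.0f} x Eddington")
```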
_Accreting low magnetic field magnetar._ In the accreting low magnetic field magnetar scenario, the super-Eddington luminosity is due to the presence of a strong multipole field (e.g., \(10^{14}\) G). The rotational behavior of the ultra-luminous X-ray pulsar is due to the interaction of its much lower dipole field (e.g., \(10^{12}\) G) with the accretion flow [110]. The idea of accreting low magnetic field magnetars is consistent with the studies of isolated magnetars (aged magnetars are more likely to be low magnetic field magnetars). The proposal of accreting low magnetic field magnetars is consistent with later observations [113] and theoretical models [114; 115]. Accreting magnetars may be related to the formation of some peculiar millisecond pulsars [112].
_Three kinds of accreting magnetars._ At present, the slowest pulsation X-ray pulsar is AX J1910.7+0917, with a period of about 10 hours [116]. The magnetar inside RCW 103 (with a period of 6.6 hours) is now the second slowest rotating neutron star. Similar to the magnetar inside RCW 103, AX J1910.7+0917 may be an accreting magnetar with a low mass accretion rate in a binary system. If ultra-luminous X-ray pulsars are super-Eddington accreting magnetars, then it is possible that there are also other accreting magnetars with lower mass accretion rates. In our opinion [114], there may be three kinds of accreting magnetars: (1) Ultra-luminous X-ray pulsars may be accreting magnetars with a high mass accretion rate; the high mass accretion rate may result in the decay of the magnetic field, thus resulting in an accreting low magnetic field magnetar. (2) Slow pulsation X-ray pulsars (e.g., AX J1910.7+0917, 2S 0114+65, 4U 2206+54, super-giant fast X-ray transients) may be accreting magnetars with a low mass accretion rate. (3) Slow pulsation X-ray pulsars in the SMC (with periods of about 1000 s) may be accreting magnetars with an intermediate mass accretion rate. Considering that 4U 2206+54 is spinning down while 2S 0114+65 is spinning up, it is possible that AX J1910.7+0917 is the linking source between 4U 2206+54 and 2S 0114+65. The fallback disk accreting magnetar inside RCW 103, the ultra-luminous X-ray pulsars, and the slow pulsation X-ray pulsars may all be accreting magnetars. Accreting magnetars are also magnetars.
## 3 Summary: magnetars in astrophysics
At present, only a limited number of magnetars are known (about 30 sources). More radio and X-ray observations in the future may tell us more about magnetars [103; 117]. Magnetars are linked to a broad range of observations in astrophysics: (1) The typical examples are anomalous X-ray pulsars and soft-gamma ray repeaters. They may be magnetar candidates. (2) The X-ray dim isolated neutron stars may be dead magnetars (blue diamonds in Figure 1). (3) The central compact objects inside supernova remnants may be magnetars-in-waiting (i.e. anti-magnetars) or fallback disk accreting magnetars. (4) High magnetic field pulsars provide links between normal pulsars and magnetars (e.g., PSR J1846\(-\)0258 and PSR J1119\(-\)6127). (5) The existence of low magnetic field magnetars implies that normal pulsars may also show magnetar activities in the future. (6) It is natural that there are also accreting magnetars in binary systems (e.g., LS 1+61, superslow X-ray pulsars, ULX pulsars, etc.). (7) For the proposal of magnetars inside FRBs and GRBs, a definite period is required. Only then can we say that they are magnetars.
This work is supported by the National SKA Program of China (No. 2020SKA0120300) and NSFC (12133004).
## Conflicts of Interest:
The author declares no conflict of interest.
|
2309.06488 | Operationally independent events can influence each other in quantum
theory | In any known description of nature, two physical systems are considered
independent of each other if any action on one of the systems does not change
the other system. From our classical intuitions about the world, we further
conclude that these two systems are not affecting each other in any possible
way, and thus these two systems are causally disconnected or they do not
influence each other. Building on this idea, we show that in quantum theory
such a notion of classical independence is not satisfied, that is, two quantum
systems can still influence each other even if any operation on one of the
systems does not create an observable effect on the other. For our purpose, we
consider the framework of quantum networks and construct a linear witness
utilizing the Clauser-Horne-Shimony-Holt inequality. We also discuss one of the
interesting applications resulting from the maximal violation of classical
independence towards device-independent certification of quantum states and
measurements. | Shubhayan Sarkar | 2023-09-12T18:03:06Z | http://arxiv.org/abs/2309.06488v3 | # Operationally independent events can influence each other in quantum theory
###### Abstract
In any known description of nature, two physical systems are considered independent of each other if any action on one of the systems does not change the other system. From our classical intuitions about the world, we further conclude that these two systems are not affecting each other in any possible way, and thus these two systems are causally disconnected or they do not influence each other. Building on this idea, we show that in quantum theory such a notion of "classical independence" is not satisfied, that is, two quantum systems can still influence each other even if any operation on one of the systems does not create an observable effect on the other. For our purpose, we consider the framework of quantum networks and construct a linear witness utilizing the Clauser-Horne-Shimony-Holt inequality. We also discuss one of the interesting applications resulting from the maximal violation of "classical independence" towards device-independent certification of quantum states and measurements.
_Introduction--_ Nonlocality is one of the most fascinating aspects of quantum theory, encapsulating the absence of a local description for spatially separated quantum systems that can not communicate with each other. Discovered by Bell in 1964 [1; 2] and then experimentally observed in the last decades [3; 4; 5; 6], the phenomenon of nonlocality clearly establishes the departure of the quantum world from classical physics. An equivalent way to understand it is that two quantum systems can be correlated in a stronger way than two classical systems.
In this work, we pose an even more stringent inquiry: consider two systems that exhibit no correlation with each other, meaning they are mutually independent. The fundamental question we address is whether these two independent systems can influence each other in any manner. Equivalently, can one system exert an impact on its counterpart when there is no correlation and no communication between them? Drawing upon our classical understanding of the natural world, it can be logically deduced that in the absence of communication and with both systems being mutually independent, they cannot exert any influence on each other in any conceivable manner. We consider this viewpoint as a notion of classicality and term it "classical independence".
Here, we show that the notion of classical independence is violated in quantum theory, that is, two mutually independent quantum systems might affect each other if they are individually entangled to some different quantum systems. For our purpose, we consider the framework of quantum networks, in particular, the quantum bilocality scenario [7; 8] with weaker constraints on the network. We then derive a linear inequality inspired by the Clauser-Horne-Shimony-Holt (CHSH) inequality [9]. Restricting to operationally independent correlations, we find an upper bound for correlations that can be described in a classically independent way. We then find a set of quantum states and measurements that violate this bound. Furthermore, we find the maximum value of the inequality that can be attained in quantum theory. Interestingly, using the methods presented in [10] also allows us to certify the quantum states and measurements in a device-independent way from the maximal violation of the constructed inequality.
_Classical independence.--_ Consider two systems with Alice and Bob such that measurements \(\mathcal{A},\mathcal{B}\) with outcomes \(a,b\) can be performed on them respectively. Now, we define when these two systems can be considered to be operationally independent of each other.
**Definition 1** (Operational independence).: _Two systems are operationally independent if the probability of obtaining an outcome when performing a measurement on one system is completely independent of the other, that is,_
\[p(a|\mathcal{A},b,\mathcal{B})=p(a|\mathcal{A})\qquad\forall a,b,\mathcal{A}, \mathcal{B}. \tag{1}\]
The resulting joint probability \(p(a,b|\mathcal{A},\mathcal{B})\) factors out using Bayes rule as
\[p(a,b|\mathcal{A},\mathcal{B})=p(a|\mathcal{A})p(b|\mathcal{B})\qquad\forall a,b,\mathcal{A},\mathcal{B}. \tag{2}\]
Inspired by the above definition, we define the principle of no influence as stated below.
**Definition 2** (No-influence principle).: _Two systems do not influence each other if given additional information \(\lambda\), the probability of obtaining an outcome when performing a measurement on one system is completely independent of the other, that is,_
\[p(a|\mathcal{A},b,\mathcal{B},\lambda)=p(a|\mathcal{A},\lambda)\qquad\forall a,b,\mathcal{A},\mathcal{B},\lambda. \tag{3}\]
It is straightforward to observe that no-influence principle implies operational independence. This brings us to the definition of classicality, which we call classical independence.
**Definition 3** (Classical independence).: _Two operationally independent events [def. 1] are classically independent of each
other if they do not influence each other [def. 2], or to put it simply the notion of classical independence means_
\[\text{Operational Independence}\implies\text{No Influence}. \tag{4}\]
Equivalently, the above definition can be understood as follows: if the correlations between two parties are mutually independent for any possible choice of measurements by both parties, then the correlations must be local.
Let us now construct a scenario where we can observe the violation of classical independence with quantum states and measurements. A natural scenario that one could investigate in this regard is the standard Bell scenario. However, it is quite clear that if the correlations between two parties are operationally independent (2), then one can never observe any violation of a Bell inequality. Consequently in this work, we consider the bilocality scenario [7] with three parties as described below.
_The scenario--_ We consider three parties namely, Alice, Bob and Eve in three different spatially separated labs. Alice and Bob receive a single particle from sources \(S_{1},S_{2}\) respectively and Eve receives two particles from both the sources. Unlike the bilocality scenario, the sources here need not be independent of each other, thus we call it "weak-bilocality scenario". Now, Alice and Bob perform two dichotomic measurements on their particles which they can freely choose. Eve on the other hand can only perform a single four-outcome measurement. The measurement inputs of Alice and Bob are denoted as \(x,y=0,1\) respectively and their outcomes are denoted as \(a,b=0,1\), whereas the outcomes of Eve are denoted as \(e=0,1,2,3\). The scenario is depicted in Fig. 1.
The experiment is repeated enough times to construct the joint probability distribution or correlations, \(\overline{p}=\{p(a,b,e|x,y)\}\) where \(p(a,b,e|x,y)\) denotes the probability of obtaining outcome \(a,b,e\) by Alice, Bob and Eve when they choose the inputs \(x,y\) respectively. These probabilities can be computed in quantum theory using the Born rule as
\[p(a,b,e|x,y)=\text{Tr}\left[(N^{A}_{a|x}\otimes N^{B}_{b|y}\otimes N^{E}_{e}) \rho_{ABE}\right] \tag{5}\]
where \(N^{A}_{a|x},N^{B}_{b|y},N^{E}_{e}\) denote the measurement elements of Alice, Bob and Eve corresponding to \(x,y\) input and \(\rho_{ABE}\) denotes the joint state generated by the source \(S_{1},S_{2}\). The measurement elements are positive semi-definite and \(\sum_{a}N^{A}_{a|x}=\sum_{b}N^{B}_{b|y}=\sum_{e}N^{E}_{e}=1\) for all \(x,y\). It is important to recall here that Alice and Bob can not communicate with each other during the experiment.
It is usually simpler to express the probabilities in terms of expectation values as
\[\langle\mathcal{A}_{x}\mathcal{B}_{y}N^{E}_{e}\rangle=p(0,0,e|x,y)+p(1,1,e|x,y)-p(0,1,e|x,y)-p(1,0,e|x,y) \tag{6}\]
where \(\mathcal{A}_{x},\mathcal{B}_{y}\) denote Alice's and Bob's observables corresponding to the inputs \(x,y\), respectively, and can be expressed as \(s_{i}=N^{s}_{0|i}-N^{s}_{1|i}\) for \(s=A,B\).
_Violation of classical independence.--_ Let us now restrict to correlations \(p(a,b|x,y)\) that are operationally independent [def. 1], that is, \(p(a,b|x,y)=p(a|x)p(b|y)\).
Now, let us express the joint probability distribution \(p(a,b,e|x,y)\) as
\[p(a,b,e|x,y)=\sum_{\lambda}p(\lambda)p(a,b,e|x,y,\lambda) \tag{7}\]
Using Bayes rule, the probability \(p(a,b,e|x,y,\lambda)\) can be rewritten as
\[p(a,b,e|x,y,\lambda)=p(a|x,b,y,e,\lambda)p(b|y,e,\lambda)p(e|\lambda) \tag{8}\]
Assuming no-influence [def. 2] allows us to conclude that \(p(a|x,b,y,e,\lambda)=p(a|x,e,\lambda)\) and thus we get that
\[p(a,b,e|x,y)=\sum_{\lambda}p(a|x,e,\lambda)p(b|y,e,\lambda)p(e|\lambda)p( \lambda). \tag{9}\]
Notice that in the bilocality scenario [7], one additionally assumes that \(p(a|x,e,\lambda)=p(a|x,\lambda)\) and \(p(b|y,e,\lambda)=p(b|y,\lambda)\).
Inspired by [9], we will now construct a linear functional from the joint probability distribution \(\vec{p}\). In terms of observables, the functional can be represented as
\[\mathcal{I}=\big{\langle}\mathcal{A}_{0}(\mathcal{B}_{0}-\mathcal{B}_{1}) \mathcal{E}_{0}+\mathcal{A}_{1}(\mathcal{B}_{0}+\mathcal{B}_{1})\mathcal{E}_ {1}\big{\rangle} \tag{10}\]
Figure 1: Weak-bilocality scenario. Alice and Bob each receive a single particle from their respective sources, which might be correlated to each other, while Eve receives two particles, one from each of these sources. Alice and Bob have the freedom to independently select and conduct two dichotomic measurements on their respective particles. In contrast, Eve’s measurement is constrained to a single four-outcome measurement. None of the parties can communicate with each other.
where \(\mathcal{E}_{0}=N_{0}^{E}-N_{1}^{E}-N_{2}^{E}+N_{3}^{E}\) and \(\mathcal{E}_{1}=N_{0}^{E}+N_{1}^{E}-N_{2}^{E}-N_{3}^{E}\). As shown in Appendix A, the above inequality can be broken up into conditional CHSH inequalities, up to the presence of Eve's measurement, which were useful to prove that every pure entangled state is Bell nonlocal [11] and self-testing the Bell basis [12].
Let us compute the maximum value of \(\mathcal{I}\) (10) achievable using correlations that satisfy "classical independence". We will refer to this value as the classical bound and denote it by \(\beta_{\mathbb{C}}\). For this purpose, let us express the expectation value (6) using (9) as
\[\langle\mathcal{A}_{x}\mathcal{B}_{y}N_{e}^{E}\rangle=\sum_{\lambda}p(\lambda)p(e|\lambda)\big{(}p(0|x,e,\lambda)-p(1|x,e,\lambda)\big{)}\big{(}p(0|y,e,\lambda)-p(1|y,e,\lambda)\big{)} \tag{11}\]
which can be simply stated as
\[\langle\mathcal{A}_{x}\mathcal{B}_{y}N_{e}^{E}\rangle=\sum_{\lambda}p(\lambda )p(e|\lambda)\langle\mathcal{A}_{x,e,\lambda}\rangle\langle\mathcal{B}_{y,e, \lambda}\rangle. \tag{12}\]
Using the above expression in Appendix A, we calculate the classical bound of \(\mathcal{I}\) (10) which is stated below.
**Fact 1**.: _Consider the functional \(\mathcal{I}\) (10). The maximum value \(\beta_{\mathbb{C}}\) that can be achieved by classically independent correlations is \(\beta_{\mathbb{C}}=2\)._
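A quick way to see where this bound comes from (a sketch of the argument only; the full derivation is in Appendix A): writing \(c_{e}^{(0)},c_{e}^{(1)}\in\{+1,-1\}\) for the coefficients of \(N_{e}^{E}\) in \(\mathcal{E}_{0}\) and \(\mathcal{E}_{1}\), Eq. (12) gives

\[\mathcal{I}=\sum_{\lambda}p(\lambda)\sum_{e}p(e|\lambda)\Big{[}c_{e}^{(0)}\langle\mathcal{A}_{0,e,\lambda}\rangle\big{(}\langle\mathcal{B}_{0,e,\lambda}\rangle-\langle\mathcal{B}_{1,e,\lambda}\rangle\big{)}+c_{e}^{(1)}\langle\mathcal{A}_{1,e,\lambda}\rangle\big{(}\langle\mathcal{B}_{0,e,\lambda}\rangle+\langle\mathcal{B}_{1,e,\lambda}\rangle\big{)}\Big{]}.\]

Since every single-party expectation lies in \([-1,1]\), each bracket is bounded by \(|\langle\mathcal{B}_{0,e,\lambda}\rangle-\langle\mathcal{B}_{1,e,\lambda}\rangle|+|\langle\mathcal{B}_{0,e,\lambda}\rangle+\langle\mathcal{B}_{1,e,\lambda}\rangle|=2\max\big{(}|\langle\mathcal{B}_{0,e,\lambda}\rangle|,|\langle\mathcal{B}_{1,e,\lambda}\rangle|\big{)}\leq 2\), and averaging over \(e\) and \(\lambda\) cannot increase this, so \(\mathcal{I}\leq 2\). The value \(2\) is attained, for instance, by deterministic outcomes, hence \(\beta_{\mathbb{C}}=2\).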
Consider now the quantum state \(|\psi_{ABE}\rangle=|\phi_{A\overline{A}}^{+}\rangle\otimes|\phi_{B\overline{B} }^{+}\rangle\) such that \(E\equiv\overline{AB}\) and \(|\phi^{+}\rangle\) is the two-qubit maximally entangled state. It is easy to check that such a state will always generate correlations between Alice and Bob that are operationally independent [def. 1]. Also, consider that Alice's and Bob's observables are given by
\[\mathcal{A}_{0} = \sigma_{z},\qquad\mathcal{A}_{1}=\sigma_{x}\] \[\mathcal{B}_{0} = \frac{\sigma_{z}+\sigma_{x}}{\sqrt{2}},\qquad\mathcal{B}_{1}= \frac{\sigma_{x}-\sigma_{z}}{\sqrt{2}} \tag{13}\]
along with Eve's measurement given in the Bell basis as \(\widehat{N}_{i,j}^{E}=|\phi_{i,j}\rangle\langle\phi_{i,j}|\), where \(|\phi_{i,j}\rangle=\frac{1}{\sqrt{2}}(|i\,j\rangle+(-1)^{i}|\overline{i}\,\overline{j}\rangle)\) with \(i,j=0,1\) and \(N_{i,j}^{E}\equiv N_{2i+j}^{E}\). Plugging these states and observables into the functional \(\mathcal{I}\) (10) gives us the value \(2\sqrt{2}\). Thus, quantum theory violates the notion of classical independence. Consequently, one can conclude that systems that are operationally independent can influence each other. We show in Appendix A that \(2\sqrt{2}\) is in fact the maximal value achievable using quantum theory.
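This quantum value can be verified directly. The short script below (illustrative code, not from the original work; it fixes the tensor ordering \(A\otimes B\otimes\overline{A}\otimes\overline{B}\)) builds the state and measurements above and evaluates \(\mathcal{I}\), returning \(2\sqrt{2}\approx 2.828\); since the reduced state of \(AB\) is \(\frac{1}{2}\mathbb{1}\otimes\frac{1}{2}\mathbb{1}\), the Alice-Bob correlations indeed factorize as required by operational independence.

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

# |psi> = |phi+>_{A Abar} (x) |phi+>_{B Bbar}, written in the ordering (A, B, Abar, Bbar)
psi = np.zeros(16)
for a in (0, 1):
    for b in (0, 1):
        psi[a * 8 + b * 4 + a * 2 + b] = 0.5          # component |a, b, a, b>

A = [Z, X]                                            # Alice's observables of Eq. (13)
B = [(Z + X) / np.sqrt(2), (X - Z) / np.sqrt(2)]      # Bob's observables of Eq. (13)

def bell_projector(i, j):
    """Projector |phi_{i,j}><phi_{i,j}| on (Abar, Bbar)."""
    v = np.zeros(4)
    v[2 * i + j] = 1 / np.sqrt(2)
    v[2 * (1 - i) + (1 - j)] = (-1) ** i / np.sqrt(2)
    return np.outer(v, v)

N = [bell_projector(i, j) for i in (0, 1) for j in (0, 1)]   # N_0, ..., N_3
E0 = N[0] - N[1] - N[2] + N[3]
E1 = N[0] + N[1] - N[2] - N[3]

op = np.kron(np.kron(A[0], B[0] - B[1]), E0) + np.kron(np.kron(A[1], B[0] + B[1]), E1)
print(psi @ op @ psi)     # ~2.8284 = 2*sqrt(2), exceeding the classical bound of 2
```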
We can identify some necessary conditions to violate the notion of classical independence using quantum states and measurements. The first necessary condition is that Eve needs to perform an entangled measurement as one can observe from Eq. (9). This is contrary to the violation of bilocality which can also happen with separable measurements with Eve. Further on, the sources generating entangled states between Alice-Eve and Bob-Eve are necessary. Although the inequality (10) considered in this work requires incompatible measurements to obtain any violation, one further needs to explore whether incompatible measurements are a necessity to violate classical independence or not. Let us now discuss an interesting application arising due to the violation of classical independence.
_Self-testing.--_ Self-testing is the strongest device-independent scheme that allows one to certify the quantum states and measurements without making any assumption on the devices involved apart from the fact that they are governed by quantum theory [13]. Self-testing in quantum networks has been explored recently [14; 15; 16; 17; 12; 10]; however, all of these schemes also assume that the sources are independent [see nevertheless [19]]. Here, we do not need to assume it, as we show in Appendix B that the condition of operational independence [see Eq. (1)] allows one to conclude that the sources are independent. Let us now state the self-testing result, the proof of which is discussed in Appendix B.
**Theorem 1**.: _Assume that the operationally independent correlations \(\vec{p}\) attain the quantum bound of \(\mathcal{I}\) (10). Then, (i) The Hilbert spaces of all the parties decompose as \(\mathcal{H}_{s}=\mathcal{H}_{s^{\prime}}\otimes\mathcal{H}_{s^{\prime\prime}}\) and \(\mathcal{H}_{\overline{s}}=\mathcal{H}_{\overline{s}^{\prime}}\otimes\mathcal{H}_{\overline{s}^{\prime\prime}}\); (ii) There exist local unitary transformations \(U_{s}:\mathcal{H}_{s}\rightarrow\mathcal{H}_{s}\) and \(U_{\overline{s}}:\mathcal{H}_{\overline{s}}\rightarrow\mathcal{H}_{\overline{s}}\) such that_
\[(U_{s}\otimes U_{\overline{s}})|\psi_{s\overline{s}}\rangle=|\phi_{s^{\prime}\overline{s}^{\prime}}^{+}\rangle\otimes|\bar{\xi}_{s^{\prime\prime}\overline{s}^{\prime\prime}}\rangle \tag{14}\]
_for some \(|\bar{\xi}_{s^{\prime\prime}\overline{s}^{\prime\prime}}\rangle\in\mathcal{H}_{s^{\prime\prime}}\otimes\mathcal{H}_{\overline{s}^{\prime\prime}}\), and the measurements of all parties are certified as_
\[\overline{U}\,N_{i,j}^{E}\,\overline{U}^{\dagger}=|\phi_{i,j}\rangle\langle \phi_{i,j}|_{E^{\prime}}\otimes\mathbb{1}_{E^{\prime\prime}},\quad U_{s}\,s_{i} \,U_{s}^{\dagger}=\tilde{s}_{i}\otimes\mathbb{1}_{s^{\prime\prime}} \tag{15}\]
_for \(i,j=0,1\) where \(\overline{U}=\otimes_{s}U_{\overline{s}}\) such that \(s=A,B\) and \(E\equiv\overline{AB}\). The observables \(\tilde{s}_{i}\) are given in Eqs. (13)._
_Discussions--_ Like preparation non-contextuality [20], the above result can also be considered a violation of the Leibniz principle of indiscernibles [21], which can
Figure 2: Causality graph of the weak-bilocality scenario. The square boxes represent the measurement devices and the circles represent the sources. The grey circle represents a hidden variable that might correlate the sources.
be simply stated as follows: principles that hold operationally should also hold ontologically. In the present context, the fact that two systems are operationally independent does not imply that they are ontologically independent. Consider again the no-influence principle, which can also be interpreted as the assumption of local causality in the Bell scenario. However, the striking difference is the fact that the parties involved in the Bell scenario are not operationally independent. Consequently, the scenario considered in this work is weaker when compared to the Bell scenario, that is, we identify non-classical behavior even in situations where one cannot violate a Bell inequality. Further on, the assumptions considered in this work are weaker when compared to the bilocality scenario [7], as we allow Eve to influence Alice's and Bob's results. Furthermore, in any quantum network, including the bilocality scenario, one needs to further assume that the sources are statistically independent of each other [see nevertheless [19]]. Here, however, we do not put any restrictions on the sources and even allow them to be entangled. Finally, the bilocality scenario has already been experimentally implemented [22], and thus we believe that the violation of inequality (10) can be easily tested.
Analyzing the above result from a realist perspective gives an interesting insight into whether a measurement on one part of an entangled pair produces a physical change in the other. In the Bell scenario, a possible explanation of the observed nonlocal correlations is that the measurement by Alice updates the state with Bob, or vice versa. However, such an explanation is not consistent in the scenario presented above. First, consider that Eve performed her entangled measurement before Alice and Bob; then, as the state shared by Alice and Bob is entangled, one can explain the violation of classical independence using a realist explanation similar to that of the Bell scenario. However, consider now that Eve has not performed her entangled measurement; then, as the state shared by Alice and Bob is separable, any measurement by Alice should not alter Bob's state but can alter Eve's state. Consequently, since the relevant measurement events are spacelike separated, there exist frames of reference where Alice's state update is caused by Bob's measurement and other frames where it remains unchanged. Thus, whether the "physical state" of Alice gets updated when Bob performs a measurement depends on the information about Eve's result, which again is problematic if one considers that the cause-effect relationship is not epistemic.
Several interesting problems follow from our work. The most interesting among them would be to explore in detail whether a cause-effect relationship between two events is consistent in quantum theory or not. A simpler problem would be to extend the weak-bilocality scenario to the multipartite regime with an arbitrary number of sources or a higher number of outcomes. Furthermore, it would be extremely interesting if the above-presented self-testing result could be used to construct a device-independent key distribution scheme or for randomness certification.
###### Acknowledgements.
We thank Stefano Pironio for insightful comments. This project was funded within the QuantERA II Programme (VERIQTAS project) that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 101017733.
|
2309.09963 | Power of quantum measurement in simulating unphysical operations | The manipulation of quantum states through linear maps beyond quantum
operations has many important applications in various areas of quantum
information processing. Current methods simulate unphysical maps by sampling
physical operations, but in a classical way. In this work, we show that using
quantum measurement in place of classical sampling leads to lower simulation
costs for general Hermitian-preserving maps. Remarkably, we establish the
equality between the simulation cost and the well-known diamond norm, thus
closing a previously known gap and assigning diamond norm a universal
operational meaning as a map's simulability. We demonstrate our method in two
applications closely related to error mitigation and quantum machine learning,
where it exhibits a favorable scaling. These findings highlight the power of
quantum measurement in simulating unphysical operations, in which quantum
interference is believed to play a vital role. Our work paves the way for more
efficient sampling techniques and has the potential to be extended to more
quantum information processing scenarios. | Xuanqiang Zhao, Lei Zhang, Benchi Zhao, Xin Wang | 2023-09-18T17:39:38Z | http://arxiv.org/abs/2309.09963v1 | # Power of quantum measurement in simulating unphysical operations
###### Abstract
The manipulation of quantum states through linear maps beyond quantum operations has many important applications in various areas of quantum information processing. Current methods simulate unphysical maps by sampling physical operations, but in a classical way. In this work, we show that using quantum measurement in place of classical sampling leads to lower simulation costs for general Hermitian-preserving maps. Remarkably, we establish the equality between the simulation cost and the well-known diamond norm, thus closing a previously known gap and assigning diamond norm a universal operational meaning as a map's simulability. We demonstrate our method in two applications closely related to error mitigation and quantum machine learning, where it exhibits a favorable scaling. These findings highlight the power of quantum measurement in simulating unphysical operations, in which quantum interference is believed to play a vital role. Our work paves the way for more efficient sampling techniques and has the potential to be extended to more quantum information processing scenarios.
_Introduction.--_ Quantum measurement is the key operation that brings probability into quantum mechanics. It can be viewed as a quantum way of sampling outcomes from a probability distribution governed by the Born rule. The task of quantum random sampling [1; 2] was conceived to demonstrate a computational advantage of quantum computers based on the observation that quantum measurement following certain quantum computations is difficult to simulate classically [3]. Due to its relative simplicity, quantum random sampling has led to experimental demonstration of quantum advantage on noisy intermediate-scale quantum devices available nowadays [4; 5; 6; 7]. However, the task itself is proposed for the sole purpose of showing quantum advantage and therefore lacks practical meaning. On real-world sampling problems, the benefits of quantum measurement are yet to be discovered.
A practical task of broad interest where sampling naturally arises is the simulation of unphysical linear maps, which is an essential subroutine in many quantum information processing tasks. Though quantum operations are limited to completely positive and trace-preserving (CPTP), or more generally, completely positive and trace-non-increasing (CPTN) maps due to the physicality requirement of quantum states, many operations beyond CPTN maps are of great importance from both theoretical and practical perspectives. For example, positive but not completely positive maps, such as partial transposition [8; 9], are widely used to characterize and detect entanglement in quantum states [10]. Maps that are even non-positive are encountered in the mitigation of errors on quantum devices [11; 12]. These crucial applications motivate the research of realizing unphysical operations, primarily Hermitian-preserving maps.
Among a few different paths to realizing Hermitian-preserving maps [12; 13; 14; 15], quasi-probability decomposition (QPD) has been a popular method due to its favorable memory requirement and easy implementability. The idea of QPD is to decompose an unphysical operation \(\mathcal{E}\) into a linear combination of physical operations \(\mathcal{N}_{j}\): \(\mathcal{E}=\sum_{j}\alpha_{j}\mathcal{N}_{j}\), where \(\alpha_{j}\) are suitable coefficients. This decomposition allows the computation of the expectation value of any observable \(O\) with respect to any state \(\rho\) transformed by \(\mathcal{E}\): \(\mathrm{tr}\left[\mathcal{E}(\rho)O\right]=\sum_{j}\alpha_{j}\,\mathrm{tr} \left[\mathcal{N}_{j}(\rho)O\right]\). Specifically, the computation is completed by sampling physical operations \(\mathcal{N}_{j}\) with a probability proportional to \(|\alpha_{j}|\) and classically post-processing the measurement outcomes.
While QPD has enjoyed great success in a variety of tasks [16; 17; 18; 19; 20; 21], it is still conservative for that it samples physical operations in a classical way. Quantum mechanics allows a more general way of sampling, that is, by quantum measurement (e.g., see Ref. [22] for an initial investigation of sampling with quantum measurement in simulating an ideal two-point correlator). Thus, it is then natural to ask: does quantum measurement bring any advantage over classical sampling in simulating maps beyond quantum operations?
In this work, we give an affirmative answer to this question by utilizing a quantum instrument to sample physical operations and simultaneously apply the sampled operation to the input state. We show that one quantum instrument is all you need to achieve an optimal simulation cost, which is significantly lower than the one attained by classical sampling in some cases. At the same time, we prove that the simulation cost of any Hermitian-preserving map is equal to the map's diamond norm, generalizing the result in Ref. [14] and endowing diamond norm with a universal operational meaning in the simulation of Hermitian-preserving maps. To demonstrate the advantage of quantum sampling, we consider applications in retrieving faithful information from noisy quantum states and extracting entries from a state's density matrix, which are tasks closely related to error mitigation and quantum machine learning.
_Measurement-controlled post-processing.--_ A Hermitian-preserving map, which maps Hermitian operators to Hermitian operators, can be written as a linear combination of CPTN maps. QPD simulates a Hermitian-preserving map using such a decomposition. For example, consider a decomposition of a
Hermitian-preserving map \(\mathcal{E}=\sum_{j}\alpha_{j}\mathcal{N}_{j}\), where each \(\alpha_{j}\) is a real number and \(\mathcal{N}_{j}\) is a CPTN map. To begin with, QPD samples a CPTN map from \(\{\mathcal{N}_{j}\}\) with a probability distribution \(p_{j}\coloneqq p(\mathcal{N}_{j})\coloneqq|\alpha_{j}|/\sum_{j}|\alpha_{j}|\). Then, the sampled operation is applied to the input state \(\rho\) [23], followed by a one-shot measurement of the given observable \(O\). When the sampled operation is \(\mathcal{N}_{j}\), the outcome of the observable measurement is multiplied by a coefficient \(\mathrm{sgn}(\alpha_{j})\gamma\), where \(\gamma\coloneqq\sum_{j}|\alpha_{j}|\) and \(\mathrm{sgn}\) denotes the sign function. The post-processing coefficient ensures that the final output has an expected value \(\sum_{j}p_{j}\mathrm{sgn}(\alpha_{j})\gamma\operatorname{tr}[\mathcal{N}_{j}(\rho)O]=\operatorname{tr}\left[\mathcal{E}(\rho)O\right]\). After repeating the sampling, measurement, and post-processing for multiple rounds, we can get a fairly accurate estimation of \(\operatorname{tr}\left[\mathcal{E}(\rho)O\right]\) by taking the average of the outputs from all rounds. The whole process is shown in Fig. 1(a).
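The QPD loop just described can be summarized in a few lines of code. The sketch below is our own illustration (the names are ours, and the maps \(\mathcal{N}_{j}\) are assumed trace-preserving so that no extra renormalization branch is needed); it returns an unbiased Monte-Carlo estimate of \(\operatorname{tr}\left[\mathcal{E}(\rho)O\right]\).

```python
import numpy as np

def qpd_estimate(rho, O, alphas, channels, shots, seed=0):
    """Classical QPD estimate of tr[E(rho) O] for E = sum_j alphas[j] * channels[j].
    Each channels[j] maps a density matrix to a density matrix (assumed CPTP here)."""
    rng = np.random.default_rng(seed)
    gamma = np.sum(np.abs(alphas))                  # overhead factor gamma = sum_j |alpha_j|
    probs = np.abs(alphas) / gamma                  # fixed, state-independent sampling law
    evals, evecs = np.linalg.eigh(O)                # spectral decomposition of the observable
    total = 0.0
    for _ in range(shots):
        j = rng.choice(len(alphas), p=probs)        # classically sample which map to apply
        sigma = channels[j](rho)                    # apply the sampled physical operation
        p = np.clip(np.real(np.diag(evecs.conj().T @ sigma @ evecs)), 0, None)
        outcome = rng.choice(evals, p=p / p.sum())  # one-shot measurement of O (Born rule)
        total += np.sign(alphas[j]) * gamma * outcome   # classical post-processing
    return total / shots
```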
Compared with physical operations, unphysical maps could lead to a higher sampling overhead in terms of the number of one-shot measurements, or equivalently, the number of input state copies required to achieve the desired accuracy [12; 14]. By Hoeffding's inequality [24], the number of one-shot measurements \(M\) satisfying
\[M\geq\frac{2\gamma^{2}\|O\|_{\infty}^{2}\log\frac{2}{\delta}}{\varepsilon^{2}} =\gamma^{2}K(\delta,\varepsilon,O) \tag{1}\]
are needed to ensure that the final estimation is within an error \(\varepsilon\) with a probability no less than \(1-\delta\), where \(K(\delta,\varepsilon,O)\coloneqq 2\|O\|_{\infty}^{2}\log\frac{2}{\delta}/ \varepsilon^{2}\) is independent of the post-processing. The factor \(\gamma=\sum_{j}|\alpha_{j}|\), which is the magnitude of the multiplying coefficients used in the post-processing, characterizes the sampling overhead of simulating \(\mathcal{E}\) with the decomposition \(\mathcal{E}=\sum_{j}\alpha_{j}\mathcal{N}_{j}\).
Note that in the above model of QPD, the action on the input state is controlled by a classical system. Which physical operation will be enacted in each round is resolved by a pre-determined classical probability distribution. As we are dealing with quantum information, there is no reason to limit the control system to a classical one. What we need is an operation that takes in an input state and outputs a state for the succeeding observable measurement together with a classical value controlling the post-processing. The most general form of such a physical operation is known as a quantum instrument. We also note that a quantum instrument was used in Ref. [22] for simulating ideal two-point correlators.
A quantum instrument is a quantum operation that gives both classical and quantum outputs. Mathematically, it is described by a collection \(\{\mathcal{M}_{j}\}\), where each \(\mathcal{M}_{j}\) is a CPTN map and \(\sum_{j}\mathcal{M}_{j}\) is CPTP [25]. Given an initial state \(\rho\), the quantum instrument outputs a measurement outcome \(j\) and a corresponding post-measurement state \(\rho_{j}\coloneqq\mathcal{M}_{j}(\rho)/\operatorname{tr}\left[\mathcal{M}_{j }(\rho)\right]\) with a probability \(\operatorname{tr}\left[\mathcal{M}_{j}(\rho)\right]\) dependent on the input state. When the measurement outcome from the quantum instrument is \(j\), we multiply a coefficient \(\alpha_{j}\) to the value obtained from the one-shot observable measurement on the state \(\rho_{j}\). Then, the expected value of the output after classical post-processing is \(\sum_{j}\operatorname{tr}\left[\mathcal{M}_{j}(\rho)\right]\alpha_{j} \operatorname{tr}\left[\rho_{j}O\right]=\operatorname{tr}\left[\sum_{j}\alpha_ {j}\mathcal{M}_{j}(\rho)O\right]\). As in QPD, we repeat these steps for multiple rounds to obtain an estimation of \(\operatorname{tr}\left[\sum_{j}\alpha_{j}\mathcal{M}_{j}(\rho)O\right]\). We call this whole process measurement-controlled post-processing, which is visualized in Fig. 1(b).
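For comparison with the QPD sketch above, a minimal simulation of measurement-controlled post-processing looks as follows (again illustrative code with our own naming; each instrument branch \(\mathcal{M}_{j}\) is specified by a list of Kraus operators). The only structural change is that the branch \(j\) is drawn from the state-dependent Born-rule probabilities \(\operatorname{tr}\left[\mathcal{M}_{j}(\rho)\right]\) and the post-processing coefficient is \(\alpha_{j}\) rather than \(\mathrm{sgn}(\alpha_{j})\gamma\).

```python
import numpy as np

def instrument_estimate(rho, O, alphas, kraus_branches, shots, seed=0):
    """Estimate tr[sum_j alphas[j] M_j(rho) O], where the j-th instrument branch is
    M_j(rho) = sum_K K rho K^dag with Kraus operators listed in kraus_branches[j]."""
    rng = np.random.default_rng(seed)
    evals, evecs = np.linalg.eigh(O)
    unnorm = [sum(K @ rho @ K.conj().T for K in Ks) for Ks in kraus_branches]
    p_branch = np.array([max(np.real(np.trace(m)), 1e-15) for m in unnorm])
    sigmas = [m / p for m, p in zip(unnorm, p_branch)]            # post-measurement states
    total = 0.0
    for _ in range(shots):
        j = rng.choice(len(sigmas), p=p_branch / p_branch.sum())  # Born-rule branch choice
        p = np.clip(np.real(np.diag(evecs.conj().T @ sigmas[j] @ evecs)), 0, None)
        total += alphas[j] * rng.choice(evals, p=p / p.sum())     # multiply by alpha_j
    return total / shots
```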
In the example presented above, the map \(\sum_{j}\alpha_{j}\mathcal{M}_{j}\) is the effective operation performed on the input state. While this decomposition looks a lot like the decomposition in QPD, the sampling overhead associated with it is \(\alpha_{\max}\coloneqq\max_{j}|\alpha_{j}|\) instead of \(\sum_{j}|\alpha_{j}|\). This is because \(\alpha_{\max}\) is the largest magnitude among all the post-processing coefficients. Specifically, it can be directly verified by Hoeffding's inequality that
\[M\geq\alpha_{\max}^{2}K(\delta,\varepsilon,O) \tag{2}\]
one-shot measurements, or equivalently, copies of the input state, are required to make sure the prediction has an error smaller than \(\varepsilon\) with a probability no less than \(1-\delta\).
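As a purely illustrative numerical comparison (the coefficients below are made up and, in practice, the optimal decompositions used by the two methods generally differ), suppose a map is written with coefficients \(\alpha=(2,-1)\), so that \(\gamma=3\) while \(\alpha_{\max}=2\). For \(\|O\|_{\infty}=1\), \(\varepsilon=0.01\) and \(\delta=0.05\), Eqs. (1) and (2) give roughly \(6.6\times 10^{5}\) versus \(3.0\times 10^{5}\) one-shot measurements:

```python
import numpy as np

alphas = np.array([2.0, -1.0])                 # illustrative decomposition coefficients
gamma, alpha_max = np.abs(alphas).sum(), np.abs(alphas).max()
eps, delta, norm_O = 0.01, 0.05, 1.0
K = 2 * norm_O**2 * np.log(2 / delta) / eps**2   # K(delta, eps, O)
print(int(np.ceil(gamma**2 * K)))              # Eq. (1), QPD:              ~664,000 shots
print(int(np.ceil(alpha_max**2 * K)))          # Eq. (2), instrument-based: ~295,000 shots
```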
The major distinction between QPD and measurement-controlled post-processing that leads to the difference in the expressions for sampling overheads is that QPD encodes a fixed probability distribution in the decomposition coefficients, whereas the latter does not impose such an artificial probability distribution on the measurement outcomes. Instead, the probability distribution \(\{\operatorname{tr}\left[\mathcal{M}_{j}(\rho)\right]\}\) arises natu
Figure 1: Difference between QPD and measurement-controlled post-processing. The task is to estimate the expectation value of an observable with respect to a state transformed by a Hermitian-preserving map without knowing the observable nor the state. (a) Estimating the expectation value with QPD. The action on the input state in each round is determined by classical random sampling, which is independent of the input state. (b) Estimating the expectation value with measurement-controlled post-processing. The action on the input state in each round is determined by quantum measurement governed by the Born rule, which takes the input state into consideration.
rally from the Born rule for the quantum measurement embedded in the CPTN maps constituting the quantum instrument, and the distribution varies from state to state.
_Twisted channel for mathematical characterization.--_ To assist the analysis of measurement-controlled post-processing, we introduce what we call a twisted channel as its mathematical characterization. The effective operation \(\sum_{j}\alpha_{j}\mathcal{M}_{j}\) can be written as \(\alpha_{\max}\sum_{j}\alpha^{\prime}_{j}\mathcal{M}_{j}\), where the normalized coefficient \(\alpha^{\prime}_{j}\coloneqq\alpha_{j}/\alpha_{\max}\) satisfies \(|\alpha^{\prime}_{j}|\leq 1\). Then, the combination \(\sum_{j}\alpha^{\prime}_{j}\mathcal{M}_{j}\) has a unit sampling overhead and \(\sum_{j}|\alpha^{\prime}_{j}|\mathcal{M}_{j}\) is a CPTN map. Without loss of generality, we can require \(\sum_{j}|\alpha^{\prime}_{j}|\mathcal{M}_{j}\) to be CPTP. This is because there exists a map \(\mathcal{M}^{\prime}\) such that \(\sum_{j}|\alpha^{\prime}_{j}|\mathcal{M}_{j}+2\mathcal{M}^{\prime}\) is CPTP. Then, we can add two terms \(\mathcal{M}^{\prime}\) and \(-\mathcal{M}^{\prime}\) into the combination \(\sum_{j}\alpha^{\prime}_{j}\mathcal{M}_{j}\) without changing the map this combination represents nor its sampling overhead. Absorbing \(|\alpha^{\prime}_{j}|\) into each \(\mathcal{M}_{j}\), we arrive at the following definition of a twisted channel [26].
**Definition 1** (Twisted channel): _A twisted channel \(\mathcal{T}\) is a linear map that can be written as \(\mathcal{T}=\sum_{j}s_{j}\mathcal{M}_{j}\), where \(s_{j}\in\{+1,-1\}\) and \(\{\mathcal{M}_{j}\}\) is a quantum instrument._
A twisted channel is a map that, though not necessarily physical, can be simulated with unit sampling overhead using measurement-controlled post-processing with a quantum instrument. It is clear that the effective operation of measurement-controlled post-processing is simply a single twisted channel scaled by a coefficient that coincides with the sampling overhead. As measurement-controlled post-processing is more general than the classically controlled one, one twisted channel with a suitable scalar is enough for simulating any Hermitian-preserving map, and we formally prove this statement in the Appendix. On the other hand, one may decompose a Hermitian-preserving map into a linear combination of multiple twisted channels as in QPD, which corresponds to sampling multiple quantum instruments. It turns out that involving more quantum instruments does not result in a lower overhead. In other words, a single quantum instrument is all it needs to simulate an arbitrary Hermitian-preserving map with an optimal sampling overhead.
**Theorem 2** (One quantum instrument is all you need): _Under measurement-controlled post-processing, any protocol that involves the sampling of multiple quantum instruments is equivalent to a protocol using a single quantum instrument in terms of the simulated map and the sampling overhead._
The proof of Theorem 2 can be found in the Appendix, where we show that any linear combination of twisted channels can be reduced to a scaled twisted channel without changing the sampling overhead.
_Diamond norm as the simulation cost.--_ We have shown that in both QPD and measurement-controlled post-processing, a sampling overhead characterizing the required number of one-shot measurement rounds is associated with a decomposition of a Hermitian-preserving map. Considering that a Hermitian-preserving map's decomposition is not unique, we define a Hermitian-preserving map's simulation cost as the lowest possible sampling overhead associated with any its valid decomposition. Within measurement-controlled post-processing, the simulation cost of a Hermitian-preserving map \(\mathcal{E}\) is defined as
\[\gamma_{\mathrm{TC}}\left(\mathcal{E}\right)\coloneqq\min\left\{\alpha\mid \mathcal{E}=\alpha\mathcal{T},\ \alpha\geq 0,\ \mathcal{T}\in\mathrm{TC}\right\}, \tag{3}\]
where \(\mathrm{TC}\) denotes the set of twisted channels.
The diamond norm, also known as the completely bounded trace norm, is a fundamental quantity in quantum information and efficiently computable by a semi-definite program (SDP) [27, 28]. It serves as a measure of distance between quantum channels and finds natural operational meanings in channel discrimination tasks [29, 30]. Given a Hermitian-preserving map \(\mathcal{E}_{A\to B}\), its diamond norm is defined as
\[\left\|\mathcal{E}\right\|_{\diamond}\coloneqq\max_{\rho_{A^{\prime}A}}\left\| \mathrm{id}_{A^{\prime}}\otimes\mathcal{E}_{A\to B}(\rho_{A^{\prime}A}) \right\|_{1}, \tag{4}\]
where the optimization is over all states \(\rho_{A^{\prime}A}\) and the dimension of the system \(A^{\prime}\) is equal to the dimension of system \(A\). The map \(\mathrm{id}_{A^{\prime}}\) denotes the identity channel on the system \(A^{\prime}\), and \(\left\|\cdot\right\|_{1}\) is the trace norm.
For any Hermitian-preserving map that is proportional to a trace-preserving map, it has been shown in Ref. [14] that its simulation cost using QPD is equal to its diamond norm [27]. However, for general Hermitian-preserving maps, this equality between the cost and the diamond norm does not hold. In an extreme case, the simulation cost can be twice as large as the map's diamond norm.
In Theorem 3, we show that this unpleasant gap between the cost in QPD and the diamond norm can be remedied. In particular, we prove that the simulation cost induced by the measurement-controlled post-processing is equal to the map's diamond norm for any Hermitian-preserving map.
**Theorem 3** (Diamond norm is the cost): _Let \(\mathcal{E}_{A\to B}\) be an arbitrary Hermitian-preserving map. Then, its simulation cost using a twisted channel can be obtained by the following SDP:_
\[\gamma_{\mathrm{TC}}(\mathcal{E})=\min\left\{\alpha\;\middle|\;J^{\mathcal{E}}=M^{+}-M^{-},\ M^{\pm}\geq 0,\ \mathrm{tr}_{B}\left[M^{+}+M^{-}\right]=\alpha\mathbb{1}\right\}, \tag{5}\]
_where \(J^{\mathcal{E}}\) is the Choi operator of \(\mathcal{E}\). Furthermore, this cost is equal to the map's diamond norm, i.e.,_
\[\gamma_{\mathrm{TC}}(\mathcal{E})=\|\mathcal{E}\|_{\diamond}. \tag{6}\]
The SDP in Eq. (5) follows from the fact that any twisted channel can be written as a difference between two CPTN maps, because the CPTN maps carrying the same sign in the decomposition can be grouped into a single CPTN map. A formal proof of this theorem is given in the Appendix.
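The SDP of Eq. (5) is easy to evaluate numerically. The sketch below is our own illustration and assumes a recent version of cvxpy that provides the partial-trace atom; it takes the unnormalized Choi operator \(J^{\mathcal{E}}\) (system ordering input \(\otimes\) output) and returns \(\gamma_{\mathrm{TC}}(\mathcal{E})=\|\mathcal{E}\|_{\diamond}\). As a sanity check, for the single-qubit transpose map, whose Choi operator is the SWAP matrix, it returns \(2\).

```python
import numpy as np
import cvxpy as cp

def simulation_cost(J, dA, dB):
    """Solve Eq. (5): minimize alpha s.t. J = M+ - M-, M+/- >= 0, tr_B[M+ + M-] = alpha * 1.
    J is the unnormalized Choi operator of E with ordering (input A) tensor (output B)."""
    Mp = cp.Variable((dA * dB, dA * dB), hermitian=True)
    Mm = cp.Variable((dA * dB, dA * dB), hermitian=True)
    alpha = cp.Variable(nonneg=True)
    constraints = [Mp >> 0, Mm >> 0, Mp - Mm == J,
                   cp.partial_trace(Mp + Mm, [dA, dB], axis=1) == alpha * np.eye(dA)]
    cp.Problem(cp.Minimize(alpha), constraints).solve()
    return alpha.value

# Sanity check: transpose map on one qubit; its Choi operator is the SWAP matrix.
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]], dtype=float)
print(simulation_cost(SWAP, 2, 2))   # ~2.0, matching the diamond norm of the transpose map
```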
Theorem 3 not only implies that measurement-controlled post-processing can simulate a Hermitian-preserving map at
a cost much lower than QPD, but also establishes diamond norm as a universal quantity measuring the simulability of a Hermitian-preserving map.
From another perspective, the cost of simulating an unphysical map characterizes the map's non-physicality [12; 14]. All conventional physical operations, i.e., CPTN maps, have unit simulation costs. Within the framework where classical post-processing is allowed, physical operations are extended to twisted channels. Intuitively, the more unphysical a map is, the more expensive it is to simulate this map with physical operations. In this sense, we treat non-physicality as a resource and twisted channels as free operations. Then, a map's non-physicality can be quantified by robustness measures, which are widely used in quantum resource theories [31; 32; 33; 34; 35; 36; 37; 38].
Here, we consider the absolute robustness [32] of a Hermitian-preserving map \(\mathcal{E}\), which we define as
\[R(\mathcal{E})\coloneqq\min\left\{\lambda\left|\;\frac{\mathcal{E}+\lambda \mathcal{T}}{1+\lambda}\in\mathrm{TC},\;\mathcal{T}\in\mathrm{TC}\right.\right\}. \tag{7}\]
It turns out that this robustness measure is equivalent to the diamond norm in the way that
\[\|\mathcal{E}\|_{\diamond}=2R(\mathcal{E})+1 \tag{8}\]
holds for all Hermitian-preserving maps \(\mathcal{E}\). A proof of this equality can be found in the Appendix.
_Advantages of twisted channels in practice.--_ Here, we study information recovering and processing as two examples of the twisted channels' applications to demonstrate the advantage of quantum measurement over classical sampling in practical tasks.
Information recovering refers to the task that predicts the expectation value of an observable \(O\) with respect to a quantum state \(\rho\) given its noisy copies \(\mathcal{N}(\rho)\), where the channel \(\mathcal{N}\) represents the noise. This problem was proposed in Ref. [39], where the authors addressed it within a framework of classically sampling CPTP maps. This is equivalent to optimizing a Hermitian-preserving and trace-scaling map \(\mathcal{D}\) so that \(\mathrm{tr}\left[\mathcal{D}\circ\mathcal{N}(\rho)O\right]=\mathrm{tr}\left[ \rho O\right]\), where a map is said to be trace-scaling if it is proportional to some trace-preserving map.
Here, we extend this framework to include general Hermitian-preserving maps, which can be simulated either by classically sampling CPTN maps using QPD or by a twisted channel realized through measurement-controlled post-processing. We compare the lowest sampling overheads incurred by these two methods to recover the desired expectation value in an example, and the results are presented in Fig. 2. In this example, the given observable is the sum of the four Pauli operators \(X+Y+Z+I\). We consider three different types of noise: the amplitude damping noise \(\mathcal{N}^{\epsilon}_{\mathrm{AD}}(\cdot)\) with Kraus operators \(|0\rangle\!\langle 0|+\sqrt{1-\epsilon}|1\rangle\!\langle 1|\) and \(\sqrt{\epsilon}|0\rangle\!\langle 1|\), the dephasing noise \(\mathcal{N}^{\epsilon}_{\mathrm{deph}}\) with Kraus operators \(\sqrt{1-\epsilon/2}I\) and \(\sqrt{\epsilon/2}Z\), and the depolarizing noise \(\mathcal{N}^{\epsilon}_{\mathrm{depo}}(\cdot)=(1-\epsilon)(\cdot)+\epsilon \,\mathrm{tr}[\cdot]I/2\). The parameter \(\epsilon\in[0,1]\) indicates the noise level for all the three channels. It is observed in Fig. 2 that for all these noises, the twisted channel method incurs overheads significantly lower than those of QPD, and the gaps between them steadily enlarge as the noise level goes up.
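To make this example easy to reproduce, the three noise models can be written down explicitly. The helper below (our own code, single-qubit, using the parameterizations given above) returns the Kraus operators of each channel and converts a Kraus list into the corresponding unnormalized Choi operator, which is the natural starting point for setting up the retrieval-map optimization whose optimal overheads are plotted in Fig. 2.

```python
import numpy as np

def noise_kraus(eps, kind):
    """Kraus operators of the single-qubit noise models used in this example."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.diag([1.0 + 0j, -1.0])
    if kind == "AD":      # amplitude damping
        return [np.diag([1, np.sqrt(1 - eps)]).astype(complex),
                np.array([[0, np.sqrt(eps)], [0, 0]], dtype=complex)]
    if kind == "deph":    # dephasing
        return [np.sqrt(1 - eps / 2) * np.eye(2, dtype=complex), np.sqrt(eps / 2) * Z]
    if kind == "depo":    # depolarizing: (1 - eps) rho + eps * tr[rho] * I / 2
        return [np.sqrt(1 - 3 * eps / 4) * np.eye(2, dtype=complex)] + \
               [np.sqrt(eps / 4) * P for P in (X, Y, Z)]
    raise ValueError(kind)

def kraus_to_choi(kraus):
    """Unnormalized Choi operator J = sum_ij |i><j| (x) N(|i><j|)."""
    d = kraus[0].shape[1]
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            Eij = np.zeros((d, d), dtype=complex)
            Eij[i, j] = 1
            J += np.kron(Eij, sum(K @ Eij @ K.conj().T for K in kraus))
    return J
```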
The other application is the processing of quantum data, which involves a collection of tasks aimed at implementing extraction maps on input quantum states. Quantum algorithms have the potential to achieve quantum speedup owing to the vast information storage capabilities of quantum systems. However, this advantage also poses challenges in processing valuable information when quantum data contain a surplus of irrelevant details. To optimize the efficiency and accuracy of quantum algorithms, it is crucial to implement an extraction map that minimizes the spatial dimensions of quantum data while retaining as much useful information as possible. This is especially important for some quantum machine learning tasks. For example, in the quantum convolutional neural networks proposed in Ref. [40], such operations are implemented by measurement-controlled unitaries, and hence are restricted to physical maps only.
Twisted channels can expand the range of applicable extraction to encompass unphysical maps. We consider an elementary operation called the _entry extraction map_ as an example. In particular, entry extraction maps extract entry-wise information of an input quantum state and blend these entries with zeros to form a new matrix while preserving the relative positions of the entries. Additionally, when an off-diagonal entry is extracted, its symmetrical counterpart must be extracted as well to maintain Hermiticity. The graphical illustration of an entry extraction map can be found in Fig. 3(a), and its mathematical definition is presented in the Appendix.
To showcase the efficiency of quantum measurement, we compare the simulation costs between QPD and the twisted
Figure 2: Comparison between sampling overheads of QPD and measurement-controlled post-processing for information recovering under common noises at different noise levels. The markers on the solid lines represent the overheads achieved by using twisted channels, i.e., measurement-controlled post-processing, and those on the dashed lines are the overheads achieved by QPD.
channel method for several entry extraction maps. The input quantum data for these maps are fixed to \(6\times 6\) matrices, while the output dimensions and indices sets extracted by these map are randomly selected. Fig. 3(b) illustrates that for all the selected extracting operations, measurement-controlled post-processing achieves the same or lower costs than QPD.
_Quantum interference supplies the advantage.--_ The advantage of measurement-controlled post-processing over QPD is substantiated by the numerical results presented above. Consequently, it is pertinent to inquire about the underlying physical property that contributes to these enhancements. Here, we suggest that quantum interference is the key to such improvements.
Upon closer examination, the only difference between measurement-controlled post-processing and QPD lies in their respective methodologies of operation sampling. QPD first samples an operation according to a fixed priori probability distribution and then performs the sampled operation. On the other hand, measurement-controlled post-processing does not have such a priori probability distribution. As depicted in Fig. 1(b), it first applies an aggregate quantum operation to the input state, which creates interference between the paths leading to different measurement outcomes associated with different physical operations waiting to be sampled. The probability of getting each outcome is affected by the interference depending on the input state. Upon measurement, the quantum system collapses to an output state corresponding to one particular operation based on the adaptive probability distribution. Such dynamic assignment of probabilities makes interference the core ingredient for the enhancements brought by measurement-controlled post-processing.
_Concluding remarks.--_ We demonstrate the power of quantum measurement in an important practical task by showing that quantum measurement results in a lower simulation cost of unphysical operations compared with classical sampling. The simulation costs when quantum measurement is employed reduce to the well-known diamond norm for all Hermitian-preserving maps.
The measurement-controlled post-processing scheme can be extended to other scenarios in addition to simulating unphysical maps. More and more quantum algorithms and protocols consider classical randomness and post-processing as useful tools. Examples of use cases include circuit knitting [41, 42], Hamiltonian simulation [43, 44, 45], and quantum error correction [46, 47, 48]. Whether quantum measurement can be used in place of classical sampling in these cases to further improve the performance is an interesting open problem for future work.
_Acknowledgments.--_ The authors thank Mingrui Jing for fruitful discussions.
|
2309.16857 | General and Unified Model of the Power Flow Problem in Multiterminal
AC/DC Networks | This paper proposes a generic and unified model of the power flow (PF)
problem for multiterminal hybrid AC/DC networks. The proposed model is an
extension of the standard AC-PF. The DC network is treated as an AC one and, in
addition to the Slack, PV and PQ nodes, four new node types are introduced to
model the DC buses and the buses connecting the AC/DC interfacing converters
(IC). The unified model is solved using the Newton-Raphson method. The extended
PF equations can be used in the presence of multiple ICs operating under
different control modes. Compared to other recent works, the proposed method
allows multiple ICs to regulate the DC voltage simultaneously. This corresponds
to more realistic operational conditions that ensure redundancy and allow for
more flexible control of the hybrid grid. The proposed model can be used for
networks under unbalanced conditions and allows for an intentionally negative
sequence power injection. In addition to the operational advantages of this
method, it is shown that the computational performance of the proposed method
is one order of magnitude better than that of other methods presented in the
existing recent literature while having the same accuracy. | Willem Lambrichts, Mario Paolone | 2023-09-28T21:12:09Z | http://arxiv.org/abs/2309.16857v1 | # General and Unified Model of the Power Flow Problem in Multiterminal AC/DC Networks
###### Abstract
This paper proposes a generic and unified model of the power flow (PF) problem for multiterminal hybrid AC/DC networks. The proposed model is an extension of the standard AC-PF. The DC network is treated as an AC one and, in addition to the _Slack_, _PV_ and _PQ_ nodes, four new node types are introduced to model the DC buses and the buses connecting the AC/DC interfacing converters (IC). The unified model is solved using the Newton-Raphson method. The extended PF equations can be used in the presence of multiple ICs operating under different control modes. Compared to other recent works, the proposed method allows multiple ICs to regulate the DC voltage simultaneously. This corresponds to more realistic operational conditions that ensure redundancy and allow for more flexible control of the hybrid grid. The proposed model can be used for networks under unbalanced conditions and allows for an intentionally negative sequence power injection. In addition to the operational advantages of this method, it is shown that the computational performance of the proposed method is one order of magnitude better than that of other methods presented in the existing recent literature while having the same accuracy.
## Nomenclature
\(\mathcal{N}\): Set of AC nodes
\(\mathcal{M}\): Set of DC nodes
\(\Gamma\): Set of IC nodes (i.e. nodes connected to AC or DC side of IC)
\(\overline{E}\): Nodal (AC + DC) phase-to-ground voltage phasor
\(\overline{I}\): Current phasor (AC + DC)
\(P\): Active power
\(Q\): Reactive power
\(\overline{Y}^{ac}\): AC compound admittance matrix \(\overline{I}=\overline{Y}^{ac}\overline{E}\)
\(Y^{dc}\): DC compound admittance matrix \(\overline{I}=Y^{dc}\overline{E}\)
\(I^{sw}\): DC current modelling IC's switching losses
\(\overline{E}^{c}\): AC voltage drop modelling IC's conduction losses
\(S^{loss}\): Total power losses over IC
\(R_{eq}\): Equivalent resistance of the IGBT
\(T_{ON}\): Equivalent time constants of transistor's turn-on
\(T_{OFF}\): Equivalent time constants of transistor's turn-off
\(T_{REC}\): Equivalent time constants of diode's reverse recovery
\(\overline{Z}^{filter}\): Impedance of the filter of IC
\(\mathbf{J}\): Jacobian matrix
\(\mathbf{x}\): Vector of the unknown variables
\(\mathbf{y}\): Vector of the mismatches
\(\epsilon\): Convergence criteria
\(\bullet_{i}\): Subscript indicating the AC nodes
\(\bullet_{j}\): Subscript indicating the DC nodes
\(\bullet_{l}\): Subscript indicating node at AC side of IC
\(\bullet_{k}\): Subscript indicating node at DC side of IC
\(\bullet^{\prime}\): Real part of complex number
\(\bullet^{\prime\prime}\): Imaginary part of complex number
\(\bullet^{\phi}\): Superscript for phase angle \(\phi\in\{a,b,c\}\)
\(\bullet^{*}\): Superscript for reference point or setpoint
\(\bullet^{0}\): Superscript indicating zero sequence
\(\bullet^{+}\): Superscript indicating positive sequence
\(\bullet^{-}\): Superscript indicating negative sequence
## I Introduction
Multiterminal hybrid AC/DC networks are gaining more interest nowadays in the area of HVDC as well as in hybrid AC/DC microgrids. Indeed, multiterminal HVDC systems allow the interconnection of transition networks to increase the flexibility and integration of renewables [1]. On the other hand, hybrid AC/DC microgrids are a promising solution to increase the share of distributed generation in future power grids that are expected to massively rely on converter-interfaced renewable resource generation [2].
Power flow (PF) studies are a crucial element in the analysis, planning and operation of these modern power systems that are transitioning to hybrid AC/DC systems. From a modelling point of view, the PF analysis of multiterminal HVDC and microgrids is identical. The main challenge comes from the incorporation of the AC/DC Interfacing Converters (IC). Indeed, the IC can operate using different control modes that affect the PF model. Typically, in a Voltage Source Converter (VSC), the d- and q-components of the current and voltages are decoupled. This allows for the control of two variables simultaneously, i.e. active power, reactive power, AC voltage, or DC voltage. The common controllable pairs of variables are: \(P_{ac}-Q_{ac}\), \(P_{ac}-|E_{ac}|\), \(E_{dc}-Q_{ac}\) or \(E_{dc}-|E_{ac}|\). For the first two operating modes, \(P_{ac}-Q_{ac}\) and \(P_{ac}-|E_{ac}|\), the connected AC bus is considered as a _PQ node_ and a _PV node_, respectively. Therefore, the traditional AC power flow theory can be applied. Furthermore, using the active power balance and the IC loss model, the DC side is modelled as a _P-node_ and can be easily included in the PF model. The main challenge in the construction of a unified PF comes from the operating modes where the IC controls the DC voltage (that is, \(E_{dc}-Q_{ac}\) and \(E_{dc}-|E_{ac}|\)). Because the DC voltage is a control variable, only one AC variable is controlled (either \(Q_{ac}\) or \(|E_{ac}|\)), and the traditional PF theory cannot be used anymore. Note that at least one IC is required to control the DC voltage to ensure the stability of the DC grid.
Furthermore, hybrid AC/DC microgrids are often subjected to strong unbalanced loading conditions. Therefore, it is in the interest of the system's quality of supply to have a PF model that allows considering generic AC unbalances including, but not limited to, the intentional injection of negative sequence power.
The structure of the paper is as follows: Section II gives a review of the state-of-the-art on PF methods for hybrid AC/DC networks and discusses several major limitations of these methods that our proposed model tackles. Section III presents the proposed generic hybrid AC/DC PF model. The load flow equations are presented for all different node types and operation modes of the ICs. Furthermore, a detailed loss model is presented that improves the accuracy of the PF solution. In Section IV, a case study is presented to numerically validate the proposed method on a hybrid AC/DC microgrid under balanced and unbalanced loading conditions. Section V presents an in-depth comparison and benchmark with a publicly available MATPOWER-based PF algorithm for hybrid AC/DC networks [3]. The computation time is analysed for multiple hybrid AC/DC networks, including a large network to show the scalability of our proposed method.
## II State-of-the-art review
The PF problem for AC/DC networks has been studied extensively since the 1980s. Reference [4] proposed a unified PF model that includes the DC network model and allows for a multiterminal configuration. The model is solved using the Newton-Raphson (NR) method. [5] presented a method for a decoupled PF where the ICs are modelled as voltage-dependent loads to eliminate the need for DC variables. However, these methods are only valid for Line-Commutated Converters (LCC). For VSCs, the mathematical model is fundamentally different, and the above-mentioned methods cannot be used anymore.
In the applications of VSCs, the PF models proposed in the literature can be classified into two main groups: _sequential_ and _unified models_.
In the _sequential models_, the DC grid and AC grid are solved separately and iteratively linked using the active power balance at the IC controlling the DC voltage [6]. Notice that all ICs, except for one, are assumed to operate in PV or PQ mode. Therefore, the traditional PF theory is used to model the AC grid. The active power injection of the IC controlling the DC voltage is unknown and is computed using the active power balance between the AC and DC network. Reference [7] proposes an improved sequential PF method that includes the IC's tap positions as an additional state variable to enhance the robustness of the PF calculation.
The main problem with these methods is the need for an iterative procedure with multiple computation loops, which increases the computational complexity significantly.
In the _unified models_, the AC and DC power flows are solved as one problem by modelling the entire network (i.e., AC network, DC network and IC) as one system. In the specific case where the DC voltage is regulated using droop control, references [8, 9, 10, 11] propose a method where the \(V_{dc}-P\) droop curves are incorporated directly into the PF model.
When DC voltage is regulated in an optimal manner, not through droop control, only a few unified approaches have been suggested. Reference [12] proposes a unified model based on the sequential approach of [6]. The updated unified method includes the converter losses of the DC voltage-controlling IC as an optimisation variable. Using this additional variable, the active power injection on the AC side of this IC is equal to the sum of all DC power injections and losses.
The authors of [13] propose a new equivalent representation of a VSC where the converter is modelled as an ideal tap-changing transformer with a complex tap. The tap magnitude corresponds to the VSC's modulation index and the tap angle is equal to the phase angle of the AC voltage at the IC's node. An additional shunt susceptance and resistance are included to model the reactive power flow and the converter's losses. The approach allows describing the IC's fundamental frequency operation as a two-port model. Reference [14] includes the two-port model in a unified PF model that can handle the different operating modes and is solved using the NR method. The method requires new additional control variables and can only model the positive sequence operation of ICs.
The authors in [15, 16] propose a PF method using the Flexible Universal Branch Model (FUBM). The FUBM is based on the above-described two-port model and can realistically model AC transmission lines and VSCs operating under different operation modes. The FUBM PF-based method has been made publicly available as an extension of the MATPOWER tool and is used to benchmark our proposed method in Section V. References [17, 18] follow a similar approach in which the AC/DC IC is represented as a two-port model and is included as a building block of the compound admittance matrix of the entire AC/DC network. Therefore, the method can be integrated into conventional PF programmes with minimal modifications. Reference [19] proposes an AC-equivalent approach in which every DC line is replaced by a set of parallel AC lines and in which the ICs are replaced by an equivalent line model dependent on the modulation index.
Reference [20] proposes another method in which every IC is modelled using two conventional AC generators, one for the AC side and one for the DC side, and coupled by a linear constraint to ensure energy conservation. Furthermore, the DC network is modelled as an AC one, so existing AC-PF tools can be reused.
The main limitation of all methods proposed previously in the literature is that only one IC can regulate the DC voltage. When multiple ICs regulate the voltage of the DC grid, the problem becomes infeasible and does not converge to a solution. This fundamental limitation has been identified in [12, 15]. Having multiple ICs controlling \(V_{dc}\) is crucial for numerous reasons. 1) It improves the security of supply of the DC system, since a redundant number of converters can better keep the DC voltage within nominal bounds. Therefore, when one converter goes offline, the other converters will
continue to maintain the DC voltage level. 2) When multiple converters control the DC voltage level, the power required to maintain the nominal DC voltage setpoint is shared over multiple converters. Therefore, more power can be exchanged between AC and DC networks, allowing for a broader range of operations than when only a single IC can control the DC voltage. 3) When DC transformers are present, the DC grid can have multiple voltage levels, which leads to a more optimal control of the entire grid [21]. The hybrid AC/DC grid used for the validation of our proposed PF method is a real grid available at the EPFL campus that has multiple DC transformers and is inspired by real microgrid benchmarks. The methods previously presented in the literature are not usable in such a network. An additional major limitation of the methods proposed in the literature is that microgrids are often subjected to unbalanced loading conditions. These are created by, e.g. single-phase photovoltaic inverters or electric vehicle chargers. None of the models can handle these unbalanced conditions. Furthermore, the proposed PF models cannot account for the intentional injection of negative sequence power that is often required to compensate for the unbalanced loading conditions. A final goal of the proposed unified PF model is to improve the computational speed in comparison to existing tools.
In this respect, this paper proposes a generic method that tackles these four fundamental limitations at the same time. The method is based on the AC-PF and is suitably extended to include the DC network and the ICs. The DC network is treated as a standard AC one and the ICs are treated in a generic way. Depending on their operation mode, i.e. if a voltage or power reference is tracked, the PF equations are suitably adapted. The DC voltage control is no longer limited to only one IC. The proposed method can be used for all types of hybrid networks (i.e., multiterminal HVDC or hybrid AC/DC microgrids) under balanced or unbalanced conditions. Furthermore, the method allows us to accurately model the AC/DC grid when a negative sequence power is injected to compensate for unbalances3.
Footnote 3: The source code is made publicly available on [https://github.com/DESL-EPFL](https://github.com/DESL-EPFL)
## III Methodology
### _Node types in hybrid AC/DC networks_
The PF problem requires an exact model of the AC network, the DC network, and the ICs. The AC network consists of three types of buses: _Slack_, _PV_ and _PQ_ nodes and is modelled using the standard PF theory. The DC network is modelled identically to the AC network with \(Q=0\) and \(\overline{Z}=R\) in order to reuse the AC-PF theory. Because of the nature of the ICs, which are typically VSCs, it is not possible anymore to use the traditional PF theory and an extension is needed where the model equations are dependent on the converter's operational mode. Typically, the control architecture of VSCs allows the control of two variables simultaneously: \(E_{dc}-Q_{ac}\), \(P_{ac}-Q_{ac}\) and \(E_{dc}-|E_{ac}|\). Tab. I gives an overview of all possible node types in hybrid AC/DC grids. Note that at least one VSC is required to impose the DC voltage (\(E_{dc}\)) [12].
### _Power flow equations_
A generic hybrid AC/DC network is considered with \(\mathcal{N}\) AC nodes and \(\mathcal{M}\) DC nodes, where buses \((l,k)\in\Gamma\) are the couples of AC/DC converter buses (see Fig.1). Furthermore, we assume \(l\in\mathcal{N}\) and \(k\in\mathcal{M}\). Therefore, \((\mathcal{N}=\mathcal{N}_{slack}\cup\mathcal{N}_{PQ}\cup\mathcal{N}_{PV}\cup \Gamma_{l})\) and \((\mathcal{M}=\mathcal{M}_{P}\cup\mathcal{M}_{V}\cup\Gamma_{k})\)
The two grids are interlinked by one or more interfacing converters (i.e., \(|\Gamma|\geq 1\)) that can operate under different control modes. Notice that the PF equations below are written in rectangular coordinates4.
Footnote 4: This was a design choice by the authors, however, the model can also be easily described in polar coordinates.
#### Iii-B1 AC network
The PF equations for _PQ_ nodes read:
\[\Re\Big\{\overline{E}_{i}^{\phi}\sum_{n\in\mathcal{N}}\underline{Y}_{i,n}^{ac}\underline{E}_{n}^{\phi}\Big\}=P_{i}^{\phi*},\quad\text{for }i\in\mathcal{N}_{PQ} \tag{1}\] \[\Im\Big\{\overline{E}_{i}^{\phi}\sum_{n\in\mathcal{N}}\underline{Y}_{i,n}^{ac}\underline{E}_{n}^{\phi}\Big\}=Q_{i}^{\phi*},\quad\text{for }i\in\mathcal{N}_{PQ} \tag{2}\]
The PF equations for _PV_ nodes read:
\[\Re\Big\{\overline{E}_{i}^{\phi}\sum_{n\in\mathcal{N}}\underline{Y}_{i,n}^{ac}\underline{E}_{n}^{\phi}\Big\}=P_{i}^{\phi*},\quad\text{for }i\in\mathcal{N}_{PV} \tag{3}\] \[{E_{i}^{\phi\prime}}^{2}+{E_{i}^{\phi\prime\prime}}^{2}={E_{i}^{\phi*}}^{2},\quad\text{for }i\in\mathcal{N}_{PV} \tag{4}\]
where \(P_{i}^{\phi*}\) and \(Q_{i}^{\phi*}\) are the active and reactive power nodal injections at node \(i\) and phase \(\phi\in\{a,b,c\}\). The \(\underline{\bullet}\) indicates the complex conjugate, the apostrophes \(\bullet^{\prime}\) and \(\bullet^{\prime\prime}\) refer to the real and imaginary parts of the phase-to-ground voltage \(\overline{E}_{i}\). \(\overline{Y}^{ac}\) is the admittance matrix of the AC network.
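As an illustration, the per-phase mismatch equations (1)-(4) can be evaluated with a few lines of NumPy. The sketch below is not the authors' implementation; the function and argument names (`ac_mismatches`, `pq_idx`, `pv_idx`, etc.) are hypothetical, and the DC network equations follow the same pattern with a real admittance matrix and zero reactive power.

```python
import numpy as np

def ac_mismatches(E, Y, P_set, Q_set, E_set, pq_idx, pv_idx):
    """Evaluate the PQ/PV mismatch equations (1)-(4) for one phase.

    E      : complex nodal voltages (state), shape (N,)
    Y      : complex AC admittance matrix, shape (N, N)
    P_set, Q_set : active/reactive power setpoints, shape (N,)
    E_set  : voltage-magnitude setpoints (used for PV nodes), shape (N,)
    pq_idx, pv_idx : index arrays of PQ and PV nodes
    """
    S = E * np.conj(Y @ E)                    # complex nodal power injections
    dP = P_set[pq_idx] - S.real[pq_idx]       # eq. (1): active power mismatch, PQ nodes
    dQ = Q_set[pq_idx] - S.imag[pq_idx]       # eq. (2): reactive power mismatch, PQ nodes
    dP_pv = P_set[pv_idx] - S.real[pv_idx]    # eq. (3): active power mismatch, PV nodes
    dV = E_set[pv_idx]**2 - np.abs(E[pv_idx])**2  # eq. (4): |E|^2 constraint in rectangular form
    return np.concatenate([dP, dQ, dP_pv, dV])
```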
\begin{table}
\begin{tabular}{l l l l l}
**Bus Type** & **VSC contrl.** & **Known var.** & **Unknown var.** & **Index** \\ \hline
AC slack & & \(|E_{ac}|\), \(\angle E_{ac}\) & \(P_{ac}\), \(Q_{ac}\) & \(s\in\mathcal{N}_{slack}\) \\ \hline
AC _PQ_ & & \(P_{ac}\), \(Q_{ac}\) & \(|E_{ac}|\), \(\angle E_{ac}\) & \(i\in\mathcal{N}_{PQ}\) \\ \hline
AC _PV_ & & \(P_{ac}\), \(|E_{ac}|\) & \(Q_{ac}\), \(\angle E_{ac}\) & \(i\in\mathcal{N}_{PV}\) \\ \hline
VSC\({}_{ac}\) & \(P_{ac}-Q_{ac}\) & \(P_{ac}\), \(Q_{ac}\) & \(|E_{ac}|\), \(\angle E_{ac}\) & \(l\in\Gamma_{l}\) \\ \hline
VSC\({}_{ac}\) & \(E_{dc}-Q_{ac}\) & \(Q_{ac}\) & \(P_{ac}\), \(|E_{ac}|\), \(\angle E_{ac}\) & \(l\in\Gamma_{l}\) \\ \hline
VSC\({}_{ac}\) & \(E_{dc}-|E_{ac}|\) & \(|E_{ac}|\) & \(P_{ac}\), \(Q_{ac}\), \(\angle E_{ac}\) & \(l\in\Gamma_{l}\) \\ \hline
VSC\({}_{dc}\) & \(P_{ac}-Q_{ac}\) & & \(P_{dc}\), \(E_{dc}\) & \(k\in\Gamma_{k}\) \\ \hline
VSC\({}_{dc}\) & \(E_{dc}-Q_{ac}\), \(E_{dc}-|E_{ac}|\) & \(E_{dc}\) & \(P_{dc}\) & \(k\in\Gamma_{k}\) \\ \hline
DC _P_ & & \(P_{dc}\) & \(E_{dc}\) & \(j\in\mathcal{M}_{P}\) \\ \hline
DC _V_ & & \(E_{dc}\) & \(P_{dc}\) & \(j\in\mathcal{M}_{V}\) \\
\end{tabular}
\end{table} TABLE I: Overview of the node types in hybrid AC/DC grids

#### Iii-B2 DC network
The PF equations for \(P\) nodes in the DC grid read:
\[E_{j}\sum_{m\in\mathcal{M}}Y_{j,m}^{dc}E_{m}=P_{j}^{*},\quad\text{for }j\in \mathcal{M}_{P} \tag{5}\]
The PF equations for \(V\) nodes in the DC grid read:
\[E_{j}=E_{j}^{*},\quad\text{for }j\in\mathcal{M}_{V} \tag{6}\]
#### Iii-B3 VSC interfacing converters
For the \(\mathbf{E_{dc}-Q_{ac}}\) operating mode, the extended PF equations are based on the power balance (7). Notice that the reactive power losses over the filter are already accounted for in the control of the VSC (see Fig.1).
\[P_{l}^{a}+P_{l}^{b}+P_{l}^{c}+P_{(l,k)}^{loss}+P_{(l,k)}^{filter}= P_{k},\] \[Q_{l}^{a}+Q_{l}^{b}+Q_{l}^{c}-Q_{(l,k)}^{loss}=Q_{l}^{*},\qquad \quad\text{for }(l,k)\in\Gamma_{E_{dc}Q} \tag{7}\]
with \(Q_{l}^{*}\) the reactive power setpoint of the VSC and subscripts \(\bullet_{k}\) and \(\bullet_{l}\) referring to the resp. DC and AC side of the VSC. In a balanced system, the power is shared equally among the three phases, and thus \(P_{l}^{\phi}=\frac{1}{3}P_{k}\) and \(Q_{l}^{\phi}=\frac{1}{3}Q_{l}^{*}\). In unbalanced systems, the phase-locked loop (PLL) usually synchronizes to the AC network positive sequence, therefore, only the positive sequence powers are injected:
\[\begin{cases}P_{l}^{0}&=0\\ P_{l}^{+}&+P_{(l,k)}^{+loss}+P_{(l,k)}^{+filter}=P_{k}\\ P_{l}^{-}&=0\end{cases}\]
\[\begin{cases}Q_{l}^{0}&=0\\ Q_{l}^{+}&-Q_{(l,k)}^{+loss}=Q_{l}^{*}\\ Q_{l}^{-}&=0\end{cases} \tag{8}\]
where
\[P_{l}^{+} =\Re\Big\{\overline{E}_{l}^{+}\sum\nolimits_{n\in\mathcal{N}}\underline{Y}_{(l,n)}^{ac}\underline{E}_{n}^{+}\Big\}, \tag{9}\] \[Q_{l}^{+} =\Im\Big\{\overline{E}_{l}^{+}\sum\nolimits_{n\in\mathcal{N}}\underline{Y}_{(l,n)}^{ac}\underline{E}_{n}^{+}\Big\}, \tag{10}\] \[P_{k} =\Re\Big\{E_{k}\sum\nolimits_{m\in\mathcal{M}}Y_{(k,m)}^{dc}E_{m}\Big\} \tag{11}\]
and \(E_{k}=E_{k}^{*}\) is the DC voltage setpoint.
Because \(\overline{S}_{l}^{0}=P_{l}^{0}+jQ_{l}^{0}=\overline{E}_{l}^{0}\underline{I}_{l}^{0}\), the homopolar component of the voltage \(E_{l}^{0}\), or the current \(I_{l}^{0}\), has to be zero. For a VSC, where the voltage is controlled, \(E_{l}^{0}\) is set to zero. For a current source converter, \(I_{l}^{0}\) would be zero (idem for the negative sequence component). Note here that this operational distinction has to be made, since the expression for the homopolar and negative sequence powers in (8) cannot be used. Using these power injections results in a trivial expression, and thus in an underdetermined problem.
Substituting expressions (9), (10) and (11) into (8) reads:
\[\begin{cases}E^{0\prime}=0\\ \Re\Big\{\overline{E}_{l}^{+}\sum\nolimits_{n\in\mathcal{N}}\underline{Y}_{(l,n)}^{ac}\underline{E}_{n}^{+}\Big\}+P_{(l,k)}^{+loss}+P_{(l,k)}^{+filter}=\\ \qquad\qquad\qquad\qquad E_{k}^{*}\left(Y_{(k,k)}^{dc}E_{k}^{*}+\sum\nolimits_{\begin{subarray}{c}m\in\mathcal{M}\\ m\neq k\end{subarray}}Y_{(k,m)}^{dc}E_{m}\right)\\ E^{-\prime}=0\end{cases} \tag{12}\]
\[\begin{cases}E^{0\prime\prime}=0\\ \Im\Big\{\overline{E}_{l}^{+}\sum\nolimits_{n\in\mathcal{N}}\underline{Y}_{(l,n)}^{ac}\underline{E}_{n}^{+}\Big\}-Q_{(l,k)}^{+loss}=Q_{l}^{*}\\ E^{-\prime\prime}=0\end{cases} \tag{13}\]
Rewriting the positive sequence component of (12) in terms of \(E_{k}^{*}\) (the controllable DC voltage) leads to the quadratic equation (14):
\[\Big(Y_{(k,k)}^{dc}\Big)E_{k}^{*2}+\Big(\sum\nolimits_{\begin{subarray}{c}m\in\mathcal{M}\\ m\neq k\end{subarray}}Y_{(k,m)}^{dc}E_{m}\Big)E_{k}^{*}-\Re\Big\{\overline{E}_{l}^{+}\sum\nolimits_{n\in\mathcal{N}}\underline{Y}_{(l,n)}^{ac}\underline{E}_{n}^{+}\Big\}=0 \tag{14}\]
Solving the quadratic equation (14) for \(E_{k}^{*}\) results in two solutions: one close to 1 p.u. and another (infeasible) close to 0 p.u. Because the operational voltage of a grid is close to 1 p.u., the only feasible solution is the one obtained with the positive sign in (15).
\[E_{k}^{*}=-\frac{\sum\nolimits_{\begin{subarray}{c}m\in\mathcal{M}\\ m\neq k\end{subarray}}Y_{(k,m)}^{dc}E_{m}}{2Y_{(k,k)}^{dc}}\pm\frac{\sqrt{\left(\sum\nolimits_{\begin{subarray}{c}m\in\mathcal{M}\\ m\neq k\end{subarray}}Y_{(k,m)}^{dc}E_{m}\right)^{2}-4Y_{(k,k)}^{dc}\Re\Big\{\overline{E}_{l}^{+}\sum\nolimits_{n\in\mathcal{N}}\underline{Y}_{(l,n)}^{ac}\underline{E}_{n}^{+}\Big\}}}{2Y_{(k,k)}^{dc}} \tag{15}\]
Notice that (15) is dependent on the positive sequence nodal voltage component \(\overline{E}^{+}\). Using the standard Fortescue symmetrical component decomposition, the sequence components are obtained from the phase-domain nodal voltages:
\[\left[\begin{array}{c}\overline{E}^{0}\\ \overline{E}^{+}\\ \overline{E}^{-}\end{array}\right]=\frac{1}{3}\left[\begin{array}{ccc}1&1&1\\ 1&\overline{\alpha}&\overline{\alpha}^{2}\\ 1&\overline{\alpha}^{2}&\overline{\alpha}\end{array}\right]\cdot\left[\begin{array}{c} \overline{E}^{a}\\ \overline{E}^{b}\\ \overline{E}^{c}\end{array}\right] \tag{16}\]
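To make the chain from the phase-domain state to the controlled DC voltage concrete, the sketch below first extracts the positive-sequence component via the Fortescue matrix of (16) and then solves the quadratic (14) numerically, keeping the root close to 1 p.u. as discussed above. It is a minimal illustration and not the paper's code: `positive_sequence`, `controlled_dc_voltage` and their arguments are hypothetical names, and `P_ac_plus` is assumed to have been evaluated beforehand from (9).

```python
import numpy as np

a = np.exp(2j * np.pi / 3)                    # Fortescue operator
T = np.array([[1, 1, 1],
              [1, a, a**2],
              [1, a**2, a]]) / 3              # phase (a, b, c) -> sequence (0, +, -), eq. (16)

def positive_sequence(E_abc):
    """Positive-sequence component E^+ of the three phase voltages at a node."""
    return (T @ E_abc)[1]

def controlled_dc_voltage(P_ac_plus, Y_dc_row_k, E_dc, k):
    """Feasible solution of the quadratic (14) for the controlled DC voltage E_k*.

    P_ac_plus  : positive-sequence AC-side power term Re{...} of eq. (9), precomputed
    Y_dc_row_k : k-th row of the DC admittance matrix
    E_dc       : current estimates of the DC nodal voltages (p.u.)
    k          : index of the DC node whose voltage the converter controls
    """
    b = Y_dc_row_k @ E_dc - Y_dc_row_k[k] * E_dc[k]   # sum over m != k of Y_km E_m
    roots = np.roots([Y_dc_row_k[k], b, -P_ac_plus])  # a x^2 + b x + c = 0, eq. (14)
    return roots[np.argmin(np.abs(roots - 1.0))].real # keep the root near 1 p.u., eq. (15)
```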
For the \(\mathbf{P_{ac}-Q_{ac}}\) operating mode, the AC side of a generic VSC, which can inject positive and negative sequence power references, is modelled as (17) and (18). In the case that the IC only injects the positive sequence, the power balance of the negative sequence is replaced by \(\overline{E}^{-}=0\).
\[\begin{cases}E^{0\prime}=0\\ \Re\Big\{\overline{E}_{l}^{+}\sum\nolimits_{n\in\mathcal{N}}\underline{Y}_{(l,n)}^{ac}\underline{E}_{n}^{+}\Big\}-P_{(l,k)}^{+loss}=P_{l}^{+*}\\ \Re\Big\{\overline{E}_{l}^{-}\sum\nolimits_{n\in\mathcal{N}}\underline{Y}_{(l,n)}^{ac}\underline{E}_{n}^{-}\Big\}-P_{(l,k)}^{-loss}=P_{l}^{-*}\end{cases} \tag{17}\]
\[\begin{cases}E^{0\prime\prime}=0\\ \Im\Big\{\overline{E}_{l}^{+}\sum\nolimits_{n\in\mathcal{N}}\underline{Y}_{(l,n)}^{ac}\underline{E}_{n}^{+}\Big\}-Q_{(l,k)}^{+loss}=Q_{l}^{+*}\\ \Im\Big\{\overline{E}_{l}^{-}\sum\nolimits_{n\in\mathcal{N}}\underline{Y}_{(l,n)}^{ac}\underline{E}_{n}^{-}\Big\}-Q_{(l,k)}^{-loss}=Q_{l}^{-*},\end{cases} \tag{18}\]
\[\text{for }(l,k)\in\Gamma_{PQ}\]
The active power balance also imposes the DC power injection on the DC side. Taking the converter and filter losses into account gives:
\[P_{l}^{*}=E_{k}\sum\nolimits_{m\in\mathcal{M}}Y_{k,m}^{dc}E_{m}-P_{(l,k)}^{loss}-P_{(l,k)}^{filter} \tag{19}\]
For the \(\mathbf{E_{dc}}-|\mathbf{E_{ac}}|\) operating mode, the interfacing VSC model consists of equations (12) and (4).
### _AC/DC interfacing converter loss model_
The losses of the AC/DC interfacing converter can be included for a more accurate grid model. Assuming that the semiconductor switches of the VSCs are Insulated-Gate Bipolar Transistors (IGBT), references [22, 23] propose an accurate VSC loss model. The converter model that includes the losses and filter is shown in Fig.2. The conduction losses are modelled as a voltage source \(\overline{E}_{l}^{c}\) on the AC side (20) and the switching losses as a DC current source \(I_{k}^{sw}\) (21).
\[\overline{E}_{(l,k)}^{c}=R_{eq}\left(|\overline{I}_{l}|\right)\overline{I}_{l} \tag{20}\]
\[I_{(l,k)}^{sw}=2\,\frac{T_{ON}+T_{OFF}+T_{REC}}{T_{s}}\;\frac{1}{N}\cot\!\left(\frac{\pi}{N}\right)|\overline{I}_{l}| \tag{21}\]
where \(R_{eq}\) is the equivalent resistance of the IGBT, \(T_{s}\) is the switching period, and \(N=f_{sw}/f_{line}\) is the ratio between the switching frequency and the line frequency. \(T_{ON}\) and \(T_{OFF}\) are the equivalent time commutation constants characterising the transistor's turn-on and turn-off effects under the test conditions, and \(T_{REC}\) is the reverse recovery at turn-off of the diode. It can be observed that the losses, \(\overline{E}_{l}^{c}\) and \(I_{k}^{sw}\), are only dependent on the converter's switching frequency and on a few parameters that can be easily derived from the semiconductor's datasheet.
The total losses are computed as the sum of the conduction and switching losses:
\[\overline{S}_{(l,k)}^{loss}=\overline{E}_{(l,k)}^{c}\sum\nolimits_{n\in\mathcal{N}}\underline{Y}_{(l,n)}^{ac}\underline{E}_{n}+I_{(l,k)}^{sw}E_{k} \tag{22}\]
The active losses over the _RL_-filter with impedance \(\overline{Z}_{l}^{filter}\) can be straightforwardly modelled as:
\[P_{(l,k)}^{filter}=\Re\Big{\{}\overline{Z}_{l}^{filter}\Big{|}\sum\nolimits_{n \in\mathcal{N}}\overline{Y}_{(l,n)}^{ac}\overline{E}_{n}\Big{|}^{2}\Big{\}} \tag{23}\]
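The loss model (20)-(23) reduces to a handful of scalar operations once the converter AC current is known. The following sketch, with hypothetical names and datasheet values left to the reader, illustrates one way to evaluate it; it replaces the admittance sum in (22) by the equivalent nodal current \(\overline{I}_{l}\).

```python
import numpy as np

def converter_losses(I_l, E_k, R_eq, T_on, T_off, T_rec, T_s, N, Z_filter):
    """Conduction, switching and filter losses of one IC, eqs. (20)-(23).

    I_l      : complex AC-side current of the converter
    E_k      : DC-side voltage
    R_eq     : equivalent IGBT resistance (may itself be a function of |I_l|)
    T_on, T_off, T_rec : commutation time constants from the datasheet
    T_s      : switching period
    N        : ratio of switching frequency to line frequency
    Z_filter : complex impedance of the RL filter
    """
    E_c = R_eq * I_l                                        # eq. (20): conduction-loss voltage drop
    I_sw = 2 * (T_on + T_off + T_rec) / T_s / N \
           * (1.0 / np.tan(np.pi / N)) * np.abs(I_l)        # eq. (21): switching-loss DC current
    S_loss = E_c * np.conj(I_l) + I_sw * E_k                # eq. (22), with sum_n Y_ln E_n = I_l
    P_filter = (Z_filter * np.abs(I_l) ** 2).real           # eq. (23): active filter losses
    return S_loss, P_filter
```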
### _Solution via the Newton-Raphson method_
The PF problem given by (1)-(4), (5), (6), (12), (13), the positive solution of (15), (17) and (18) is solved in a unified way via the Newton-Raphson (NR) method. In compact matrix formulation, it can be written as:
\[\mathbf{J}^{(\nu)}\cdot\Delta\mathbf{x}^{(\nu+1)}=\Delta\mathbf{y}^{(\nu)} \tag{24}\] where \[\Delta\mathbf{x}^{(\nu+1)}=\mathbf{x}^{(\nu+1)}-\mathbf{x}^{(\nu)} \tag{25}\] \[\Delta\mathbf{y}^{(\nu)}=\mathbf{y}^{*}-F(\mathbf{x}^{(\nu)}) \tag{26}\]
where \(\nu\) is the iteration index, \(\mathbf{J}\) is the PF Jacobian composed of the first-order partial derivatives of the PF model, and \(\mathbf{x}\) is the vector of the unknown variables. \(\Delta\mathbf{y}\) is the mismatch related to the known variables, i.e., the difference between the power or voltage setpoint, indicated by the symbol \(*\), and the evaluated PF equations. For the hybrid system discussed in the case study, the linearised system of equations gives:
\[\begin{bmatrix}J_{P_{ac}/E^{\prime}}&J_{P_{ac}/E^{\prime\prime}}&J_{P_{ac}/E_{dc}}\\ J_{Q_{ac}/E^{\prime}}&J_{Q_{ac}/E^{\prime\prime}}&J_{Q_{ac}/E_{dc}}\\ J_{E_{ac}/E^{\prime}}&J_{E_{ac}/E^{\prime\prime}}&J_{E_{ac}/E_{dc}}\\ J_{P_{dc}/E^{\prime}}&J_{P_{dc}/E^{\prime\prime}}&J_{P_{dc}/E_{dc}}\end{bmatrix}\cdot\begin{bmatrix}\Delta E^{\prime}\\ \Delta E^{\prime\prime}\\ \Delta E_{dc}\end{bmatrix}=\begin{bmatrix}\Delta P_{ac}\\ \Delta Q_{ac}\\ \Delta E_{ac}\\ \Delta P_{dc}\end{bmatrix} \tag{27}\]
where e.g. \(J_{P_{ac}/E^{\prime}}\) is the partial derivative \(\frac{\partial P_{ac}}{\partial E^{\prime}}\).
The unknowns are updated at each step by taking the inverse of the Jacobian until the convergence criterion is reached.
\[\mathbf{x}^{(\nu+1)}=\mathbf{x}^{(\nu)}+\mathbf{J}^{(\nu)}{}^{-1}\Delta\mathbf{ y}^{(\nu)} \tag{28}\]
The convergence criterion is set on the update of the mismatches (29), where \(\epsilon\) is the desired tolerance:
\[\Delta\mathbf{y}^{(\nu)}<\epsilon \tag{29}\]
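A generic NR loop implementing (24)-(29) is sketched below. For brevity the Jacobian is approximated by finite differences rather than assembled analytically as in (27); the mismatch function, tolerance and names are placeholders, not the actual implementation.

```python
import numpy as np

def newton_raphson(mismatch, x0, tol=1e-8, max_iter=50, h=1e-7):
    """Generic NR solver for the PF system, eqs. (24)-(29).

    mismatch : callable returning Delta_y = y* - F(x) for a state vector x
    x0       : flat start (e.g. all voltage magnitudes initialised at 1 p.u.)
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        dy = mismatch(x)
        if np.max(np.abs(dy)) < tol:          # convergence criterion, eq. (29)
            break
        J = np.zeros((dy.size, x.size))
        for j in range(x.size):               # finite-difference stand-in for the Jacobian (27)
            xp = x.copy()
            xp[j] += h
            J[:, j] = (dy - mismatch(xp)) / h # approximates dF/dx_j
        x = x + np.linalg.solve(J, dy)        # update step, eq. (28)
    return x
```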
## IV Case study
The proposed PF algorithm for hybrid AC/DC networks is first validated on the hybrid AC/DC grid developed at the EPFL. The topology and parameters of the hybrid network are presented in [24] and shown in Fig. 3. The hybrid AC/DC microgrid consists of 18 AC nodes, 8 DC nodes, and 4 interfacing converters that can work under two control modes: \(\mathbf{E_{dc}}-\mathbf{Q_{ac}}\) and \(\mathbf{P_{ac}}-\mathbf{Q_{ac}}\). Tab. II summarises the node types in the hybrid network. Both grids have a base power of \(100\,\mathrm{kVA}\) and a base voltage of \(400\,\mathrm{V_{ac}}\) and \(800\,\mathrm{V_{dc}}\). Notice that none of the methods previously proposed in the literature can be used for the PF analysis of this hybrid microgrid because two ICs are controlling the DC voltage. The model is made publicly available to the interested reader to reproduce the results at [https://github.com/DESL-EPFL](https://github.com/DESL-EPFL).
Two steady-state time simulations are performed in the EMTP-RV simulation environment: a balanced one and a strongly unbalanced one, where the difference in power injections
Fig. 2: Transformer-like model of an inverter leg of the AC/DC interfacing converter with RL filter.
between the phases in _B09_ reaches \(0.5\) p.u. The results from the simulation are considered the 'ground truth' and are used to validate the PF algorithm. The NR convergence criterium (29) is set at \(\epsilon<10^{-6}\) for the update of the mismatches \(\Delta\mathbf{y}\). The voltage errors, the difference between the ground truth and the results of the load flow algorithm, are presented as a histogram in Fig.4. The top figures show the voltage errors of the real and imaginary components and the DC voltages for the balanced case. The results of the unbalanced case are shown in the two lower figures. For the balanced loading conditions, the maximum voltage error is in the order of \(5\times 10^{-6}\). For the unbalanced loading conditions, the maximum error is in the order of \(2\times 10^{-5}\). The NR algorithm takes around \(20\,\mathrm{ms}\) and 4 iterations to converge for the considered three-phase 26-node hybrid microgrid when started from a _flat start_ where the voltage magnitudes are initialised at 1 p.u.
## V Comparison with existing methods
The performance of the proposed method is also benchmarked against the FUBM-based PF algorithm proposed in [15, 16]. The PF model has been made publicly available as an extension of the MATPOWER package [3]. As discussed in Section II, the model is based on the universal branch model to model the ICs and requires four additional state variables to model each AC-DC interface: two variables (\(m_{a}\) and \(\theta_{a}\)) to model the amplitude modulation and the phase-shifting action of the PWM control of the VSC, a shunt susceptance \(B_{eq}\) to compensate the reactive power injected into the DC grid and the variable \(G^{sw}\), accounting for the VSC losses. The PF model is solved using the Newton-Raphson method.
The comparison is performed on four different hybrid AC/DC networks: 1) the hybrid microgrid developed at the EPFL presented in Section IV, 2) the IEEE 30-node system that has been extended with a 3 and 5-node DC network, 3) the IEEE 57-node network that is connected to the IEEE 14-node network using a DC network, and 4) the modified 1354-node PEGASE network to showcase the scalability of our proposed method. The modifications made on the IEEE networks and the PEGASE network to include a DC system, have been proposed by [15].
The FUBM-based PF algorithm only works for balanced, single-phase systems (i.e. only the direct sequence equivalent is considered) and, furthermore, the method is limited to only one IC to control the DC voltage. Therefore, our proposed model has been appropriately adapted to consider only the direct sequence, and the studied networks only contain one voltage-controllable IC.
The two PF methods are compared in accuracy and computation time. To allow a fair comparison, the boundary conditions are set the same: the power setpoints of all the generators and loads, the type of nodes (_PV_ or _PQ_) and the operating modes of the ICs. The convergence criterion (29) for both methods is set at \(\epsilon<10^{-8}\) for the update of the mismatches. Furthermore, the initial conditions of the unknowns, \(\mathbf{x}^{(0)}\), are all set at \(1\) p.u.
The results of the time analysis for the four reference hybrid AC/DC networks are presented in Table III. The CPU time (in seconds), the number of state variables, and the number of iterations required to reach the convergence criterion are shown. To allow a fair comparison, only the time to run the iterative NR process is considered, i.e., the computation of the mismatch equations, the construction of the Jacobian and the update of the unknowns. The columns _EPFL_ indicate the results of our proposed PF method, while _FUBM_ refers to the results of the FUBM method presented in [15].
It can be seen that the computation time of the proposed method is one order of magnitude smaller than that of the FUBM method. This can be explained by the fact that the FUBM requires additional variables to model the ICs. Therefore, the number of unknowns increases, along with the number of iterations and, consequently, the computation time. The method
\begin{table}
\begin{tabular}{l c l c}
\multicolumn{2}{c}{**AC**} & \multicolumn{2}{c}{**DC**} \\ \hline
**Bus Type** & **Bus \#** & **Bus Type** & **Bus \#** \\ \hline
PQ & 2-14 & P & 24-26 \\ \hline
VSC \((E_{dc}-Q_{ac})\) & 15,18 & VSC \((P_{ac}-Q_{ac})\) & 19,20 \\ \hline
VSC \((P_{ac}-Q_{ac})\) & 16,17 & P & 21,22 \\ \hline
AC slack & 1 & & \\
\end{tabular}
\end{table} TABLE II: Node types in the hybrid AC/DC microgrid
Fig. 3: Topology of the hybrid AC/DC micro-grid developed at the EPFL
proposed in this paper does not require additional variables to model the IC's behaviour since it is an extension of the AC-PF and only requires 2 variables for each AC node and 1 variable for each DC node. The large CPU time of the FUBM method is mainly due to the computation of the partial derivatives of the power injections with respect to the variables \(m_{a}\) and \(B_{eq}\) which are needed for the IC's model.
The scalability of the method is demonstrated on the 1354-node PEGASE grid. Despite the large number of unknowns, the proposed method can solve the PF in 6 iterations in the order of seconds. In this case, the number of state variables is lower in the FUBM model. This is because the FUBM model is formulated in polar coordinates, and a large number of generators (261) are present in the PEGASE grid. The generators are modelled as PV nodes and, because of the polar representation of the FUBM method, only one state variable per _PV node_ is needed. Our proposed method is formulated in rectangular coordinates and requires 2 state variables per PV node.
## VI Conclusion
In this paper, we present a novel model for the power flow in multiterminal hybrid AC/DC networks. The model is formulated in a general and unified way and solved using the Newton-Raphson method. In contrast to previously published works, the proposed methodology allows multiple AC/DC converters to control the DC voltage. This is a crucial element in the planning and control of multiterminal AC/DC networks, as it corresponds more closely to the realistic operational conditions of these hybrid grids. Additionally, the model is able to compute the power flow under balanced and unbalanced loading conditions and can account for intentional negative sequence power injection.
The method is numerically validated on a hybrid AC/DC microgrid, whose topology has been inspired by a real benchmark grid. Multiple ICs regulate the DC voltage level and the network is subjected to unbalanced loading conditions. It is shown that the error between the ground truth, obtained in an EMTP-RV time-domain simulation, and the results of the proposed PF method is very small, in the order of \(5\times 10^{-6}\) to \(2\times 10^{-5}\).
Furthermore, the computational time of the proposed method is compared with the FUBM-based PF method, which is implemented as an extension of the MATPOWER tool. The comparison has been performed using multiple hybrid networks with different voltage levels, topologies, and sizes. The convergence criterion is set the same for both methods. It is shown that the method proposed in this paper converges about a factor of 10 faster than the FUBM-based method.
|
2309.09977 | A Multi-Token Coordinate Descent Method for Semi-Decentralized Vertical
Federated Learning | Communication efficiency is a major challenge in federated learning (FL). In
client-server schemes, the server constitutes a bottleneck, and while
decentralized setups spread communications, they do not necessarily reduce them
due to slower convergence. We propose Multi-Token Coordinate Descent (MTCD), a
communication-efficient algorithm for semi-decentralized vertical federated
learning, exploiting both client-server and client-client communications when
each client holds a small subset of features. Our multi-token method can be
seen as a parallel Markov chain (block) coordinate descent algorithm and it
subsumes the client-server and decentralized setups as special cases. We obtain
a convergence rate of $\mathcal{O}(1/T)$ for nonconvex objectives when tokens
roam over disjoint subsets of clients and for convex objectives when they roam
over possibly overlapping subsets. Numerical results show that MTCD improves
the state-of-the-art communication efficiency and allows for a tunable amount
of parallel communications. | Pedro Valdeira, Yuejie Chi, Cláudia Soares, João Xavier | 2023-09-18T17:59:01Z | http://arxiv.org/abs/2309.09977v1 | # A Multi-Token Coordinate Descent Method for Semi-Decentralized Vertical Federated Learning
###### Abstract
Communication efficiency is a major challenge in federated learning (FL). In client-server schemes, the server constitutes a bottleneck, and while decentralized setups spread communications, they do not necessarily reduce them due to slower convergence. We propose Multi-Token Coordinate Descent (MTCD), a communication-efficient algorithm for semi-decentralized vertical federated learning, exploiting both client-server and client-client communications when each client holds a small subset of features. Our multi-token method can be seen as a parallel Markov chain (block) coordinate descent algorithm and it subsumes the client-server and decentralized setups as special cases. We obtain a convergence rate of \(\mathcal{O}(1/T)\) for nonconvex objectives when tokens roam over disjoint subsets of clients and for convex objectives when they roam over possibly overlapping subsets. Numerical results show that MTCD improves the state-of-the-art communication efficiency and allows for a tunable amount of parallel communications.
## 1 Introduction
Federated Learning (FL) is a machine learning paradigm where data is distributed across a set of clients who collaborate to learn a model without sharing local data (McMahan et al., 2017). Most FL literature considers data distributed by samples (horizontal FL), where each client holds all the features of a subset of the samples, yet recently there has been a growing interest in feature-distributed setups (vertical FL), where each client holds a subset of the features for all samples (He et al., 2018; Chen et al., 2020; Alghunaim et al., 2021; Liu et al., 2022b; Castiglia et al., 2022). Vertical FL may be of particular interest for applications using, for example, time series data measured by personal devices to learn a model of some cross-client phenomenon of interest (e.g. meteorological), where each sample corresponds to the data collected across the devices at a given timestamp. Another example involves performing a computer vision task using multiple views of the same object, where a sample corresponds to the concatenation of views from different devices.
FL often deals with the client-server setup, where a server is connected to all clients and the clients do not communicate with each other, forming a star-shaped topology. However, such schemes have a single point of failure and suffer from a communication bottleneck on the server (Lian et al., 2017). On the other hand, there is extensive literature on decentralized optimization, where there is no server--from earlier work motivated by applications such as wireless sensor networks and multiagent control (Nedic and Ozdaglar, 2009; Duchi et al., 2012; Qu and Li, 2018), to recent work motivated by FL (Koloskova et al., 2020; Li et al., 2020; Zhao et al., 2022). Yet, these algorithms often converge slowly in sparse and large networks (Nedic et al., 2018) and, although they spread the communication load across the network, they tend to have poor communication efficiency. Semi-Decentralized FL (SDFL) uses both client-server _and_ client-client communications, reducing the overhead at the server (Lin et al., 2021) while avoiding the shortcomings of decentralized setups and being able to handle multiple clusters of clients.
When concerned with the communications between clients, the use of a token method (Bertsekas, 1997; Nedic and Bertsekas, 2001; Ram et al., 2009; Johansson et al., 2010; Mao et al., 2020; Hendrikx, 2022), where a model-embedding token follows a random walk over a communication graph (undergoing local updates), allows for better communication efficiency (Hendrikx, 2022) than the more common consensus-based methods (Nedic and Ozdaglar, 2009; Duchi et al., 2012; Qu and Li, 2018; Koloskova et al., 2020). Yet, the convergence rate of token methods degrades even faster for larger and sparser networks, due to a lack of parallel communications (Hendrikx, 2022). Based on the idea that performing multiple random walks in parallel leads to a linear speed-up in the cover time (Alon et al., 2008), multi-token methods (Ye et al., 2020; Chen et al., 2022; Hendrikx, 2022) mitigate this problem by running multiple tokens simultaneously and combining them.
### Our contribution
Motivated by the above observations, we propose an SDFL multi-token algorithm for vertical FL. Our main contributions are as follows.
* We introduce Multi-Token Coordinate Descent (MTCD), which, to the best of our knowledge, is the first multi-token method leveraging the SDFL scheme to achieve a flexible degree of dependence on the server, recovering both client-server and fully decentralized FL as special cases;
* We show that MTCD converges at a rate of \(\mathcal{O}(1/T)\) for nonconvex objectives when tokens roam over disjoint subsets of clients and for convex objectives when they roam over possibly overlapping subsets of clients;
* Numerical experiments on both synthetic and real data and for a variety of communication setups show that MTCD improves the state-of-the-art communication efficiency.
### Related works
**Coordinate descent.** Coordinate Descent (CD) methods (Wright, 2015), where (blocks of) coordinates are updated sequentially, rather than simultaneously, are natural candidates for optimization in feature-distributed learning. The block selection is most often cyclic (Beck and Tetruashvili, 2013) or independent and identically distributed at random (Nesterov, 2012; Richtarik and Takac, 2012). In contrast, Sun et al. (2019) considers block selection following a Markov chain. Several extensions to CD have been proposed, such as acceleration and parallelization (Fercoq and Richtarik, 2015) and distributed CD methods (Liu et al., 2022b; Chen et al., 2022).
**Vertical FL.** Existing vertical FL works include He et al. (2018), which generalizes Smith et al. (2018) to the decentralized setting. Both works use primal-dual optimization techniques, as does DCPA (Alghunaim et al., 2021), a state-of-the-art decentralized method. In contrast, the following methods work in the primal domain, allowing them to learn more general models. In (Chen et al., 2020), a CD-based method is used, but no local updates are performed, while Liu et al. (2022b) does consider local updates. This latter work and Castiglia et al. (2022), which introduces the use of compressed embeddings, lowering the communication cost, are particularly close to our method. Note that the communication costs in vertical FL methods typically depend on the number of samples (or batch size, in the case of stochastic methods) (Alghunaim et al., 2021; Liu et al., 2022b; Castiglia et al., 2022), further highlighting the importance of communication efficiency. An interesting line of work related to vertical FL is hybrid FL (Zhang et al., 2021b), which deals with datasets distributed both by samples and features. For a more detailed survey of vertical FL methods, see (Wei et al., 2022; Liu et al., 2022a).
**Semi-Decentralized FL.** Recently, SDFL approaches have been proposed to lower communication costs and deal with data heterogeneity (Lin et al., 2021; Guo et al., 2021), and to handle intermittent connections, latency, and stragglers (Bian and Xu, 2022; Yemini et al., 2022). Additionally, other SDFL works deal with (multi-layered) hierarchical networks (Zhang et al., 2021a; Hosseinalipour et al., 2022). SDFL is also referred to as hybrid FL sometimes, but we opt for the term semi-decentralized FL to avoid confusion with the data partitioning setting mentioned above.
## 2 Problem setup
We consider a dataset \(\mathbf{X}\in\mathbb{R}^{N\times d}\) with \(N\) \(d\)-dimensional samples distributed by features across a set of clients \([K]\coloneqq\{1,\ldots,K\}\). Client \(k\) holds its local data \(\mathbf{X}_{k}\in\mathbb{R}^{N\times d_{k}}\) and we have \(\mathbf{X}=[\mathbf{X}_{1},\cdots,\mathbf{X}_{K}]\). Note that \(d_{1}+\cdots+d_{K}=d\). We consider a broad class of machine learning models, known as split neural networks (Ceballos et al., 2020), illustrated in Figure 1.
In split neural networks, each client has an associated local model \(h_{k}(\cdot\,;\mathbf{X}_{k})\colon\mathbf{\Theta}_{k}\mapsto\mathcal{H}_{k}\) parameterized by \(\mathbf{\theta}_{k}\in\mathbf{\Theta}_{k}\), which extracts a (typically lower-dimensional) representation of \(\mathbf{X}_{k}\). For simplicity, we write \(h_{k}(\mathbf{\theta}_{k})\coloneqq h_{k}(\mathbf{\theta}_{k};\mathbf{X}_{k})\). These embeddings, \(h_{k}(\mathbf{\theta}_{k})\), are then combined using an aggregation mechanism \(H\colon\mathcal{H}_{1}\times\cdots\times\mathcal{H}_{K}\mapsto\mathcal{H}\) to form \(H(h_{1}(\mathbf{\theta}_{1}),\ldots,h_{K}(\mathbf{\theta}_{K}))\), which is used as input to a fusion model \(\phi\colon\mathcal{H}\times\mathbf{\Theta}_{0}\mapsto\mathbb{R}\), parameterized by \(\mathbf{\theta}_{0}\). Although more aggregation mechanisms are possible (Ceballos et al., 2020), we focus on aggregation by concatenation (the most general case) and by sum. While \(\mathbf{\theta}_{k}\in\mathbf{\Theta}_{k}\) is associated with \(\mathbf{X}_{k}\), acting as a local model for client \(k\), our fusion model \(\mathbf{\theta}_{0}\) can be learned at different locations, depending on whether we consider the existence of a server. Split neural networks include, for example, generalized linear models, such as linear regression, logistic regression, and support vector machines, where \(h_{k}(\mathbf{\theta}_{k})=\mathbf{X}_{k}\mathbf{\theta}_{k}\) for \(k\in[K]\) and \(\mathbf{\Theta}_{0}\) is an empty set.
Let \(\mathbf{\Theta}\) denote a parameter space such that \(\mathbf{\Theta}=\mathbf{\Theta}_{0}\times\mathbf{\Theta}_{1}\times\cdots\times\mathbf{\Theta}_ {K}\) and \(f\colon\mathbf{\Theta}\mapsto\mathbb{R}\) denote our objective function, our goal is to solve the following optimization problem, which encompasses the training of split neural networks:
\[\min_{\mathbf{\theta}\in\mathbf{\Theta}}\;\left\{f(\mathbf{\theta})\coloneqq\phi\left(H \big{(}h_{1}(\mathbf{\theta}_{1};\mathbf{X}_{1}),\ldots,h_{K}(\mathbf{\theta}_{K};\mathbf{X}_ {K})\big{)},\,\mathbf{\theta}_{0}\right)\right\}, \tag{1}\]
where the labels \(\mathbf{y}\) are included in the loss function, which we assume to be known by all clients.
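As a concrete instance of (1), the snippet below evaluates the objective of a feature-partitioned generalized linear model with aggregation by sum and a logistic loss. This particular loss, the toy data, and all names are our own illustrative assumptions rather than a model from the paper's experiments, and there is no fusion parameter \(\mathbf{\theta}_{0}\) in this case.

```python
import numpy as np

def split_objective(theta_blocks, X_blocks, y):
    """Objective (1) for a generalized linear model with aggregation by sum.

    Client k holds X_blocks[k] (its feature columns) and theta_blocks[k];
    the local embedding is h_k(theta_k) = X_k @ theta_k and the fusion model
    phi is a logistic loss over labels y in {-1, +1}.
    """
    agg = sum(Xk @ tk for Xk, tk in zip(X_blocks, theta_blocks))  # H(h_1, ..., h_K)
    return np.mean(np.log1p(np.exp(-y * agg)))                    # phi(H(...), .)

# toy usage: 2 clients, 100 samples, 3 + 2 features
rng = np.random.default_rng(0)
X_blocks = [rng.normal(size=(100, 3)), rng.normal(size=(100, 2))]
y = rng.choice([-1.0, 1.0], size=100)
theta_blocks = [np.zeros(3), np.zeros(2)]
print(split_objective(theta_blocks, X_blocks, y))  # log(2) at theta = 0
```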
Throughout the paper, we consider Problem (1) and assume \(f\) is an \(L\)-smooth function. The standard definition of \(L\)-smoothness is given below. We define and assume \(f^{\star}\coloneqq\min_{\mathbf{\theta}}f(\mathbf{\theta})>-\infty\).
**Assumption 1** (Smoothness).: _A differentiable function \(f\colon\mathbb{R}^{d}\mapsto\mathbb{R}\) is \(L\)-smooth if there exists some \(L\in(0,\infty)\):_
\[\|\nabla f(\mathbf{x})-\nabla f(\mathbf{y})\|\leq L\|\mathbf{x}-\mathbf{y}\|,\quad\forall\;\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}.\] (A1)
## 3 Proposed method
### The fully decentralized setting
We start by introducing a simple, special case of our algorithm, which we refer to as Single-Token Coordinate Descent (STCD). This method, which is also a subroutine of our general algorithm, is closely related to Sun et al. (2019) and the application mentioned therein, taken from Mao et al. (2020). Yet, we work in the primal domain and on a feature-distributed setting.
**Setup.** In this section, we do not require the existence of a server. We solve Problem (1) in a fully decentralized manner, communicating through channels described by a communication graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\coloneqq[K]\) is the vertex set and \(\mathcal{E}\) the edge set. We denote the set of neighbors of client \(k\) by
Figure 1: A split neural network, where \(K\) embeddings are obtained from neural networks, before an aggregation mechanism \(H\) is applied and its result is inputted into a fusion neural network.
\(\mathcal{N}_{k}\coloneqq\{i\colon\{i,k\}\in\mathcal{E}\}\) and define \(\bar{\mathcal{N}}_{k}\coloneqq\mathcal{N}_{k}\cup\{k\}\). In this section only, \(\mathbf{\theta}_{0}\) is associated with some client \(k\), which is responsible for updating both the local model it holds, \(\mathbf{\theta}_{k}\), and the fusion model \(\mathbf{\theta}_{0}\).1
Footnote 1: We do this for alignment with the analysis in Sun et al. (2019), where all blocks are selected following a Markov Chain. However, in practice, we may want to update \(\mathbf{\theta}_{0}\) at each client instead, in which case the analysis would need to be adjusted.
Token.Since all clients know \(\phi\), if a client knows \(\mathcal{Z}\coloneqq\{H(h_{1}(\mathbf{\theta}_{1}),\ldots,h_{K}(\mathbf{\theta}_{K})), \mathbf{\theta}_{0}\}\), it can compute \(f\). We call \(\mathcal{Z}\) our _token_. The size of the token depends on the model being considered. For example, if \(\mathcal{H}_{k}\subseteq\mathbb{R}^{NE}\) for all \(k\) (i.e., we have an \(E\)-dimensional embedding per sample) and we do aggregation by concatenation, then \(\mathcal{Z}\) is of size \(KNE+\dim(\mathbf{\Theta}_{0})\), where \(\dim(\cdot)\) denotes the dimensionality of a space. Yet, for aggregation by sum, we drop the dependency on \(K\). In particular, for generalized linear models, \(\mathcal{Z}=\{\mathbf{X}\mathbf{\theta}\}\) is of size \(N\).
The token suffices to perform local CD steps.We have seen that a client holding \(\mathcal{Z}\) can compute \(f\). Yet, more importantly, if client \(k\) has access to its local data \(\mathbf{X}_{k}\) and local model \(\mathbf{\theta}_{k}\), then holding \(\mathcal{Z}\) enables it to compute the partial gradient with respect to \(\mathbf{\theta}_{k}\),
\[\nabla_{k}f(\mathbf{\theta})\coloneqq\nabla_{\mathbf{\theta}_{k}}f(\mathbf{ \theta}) =\nabla_{\mathbf{\theta}_{k}}\phi\left(H(h_{1}(\mathbf{\theta}_{1}),\ldots,h_{K}(\mathbf{\theta}_{K})),\mathbf{\theta}_{0}\right)\] \[=\frac{d\phi\left(H(h_{1}(\mathbf{\theta}_{1}),\ldots,h_{K}(\mathbf{ \theta}_{K})),\mathbf{\theta}_{0}\right)}{dH(h_{1}(\mathbf{\theta}_{1}),\ldots,h_{K}( \mathbf{\theta}_{K}))}\cdot\frac{dH(h_{1}(\mathbf{\theta}_{1}),\ldots,h_{K}(\mathbf{ \theta}_{K}))}{dh_{k}(\mathbf{\theta}_{k})}\cdot\frac{dh_{k}(\mathbf{\theta}_{k})}{d \mathbf{\theta}_{k}},\]
where \(\mathcal{Z}\) is used in the computation of the first two terms. This will allow client \(k\) to update its local model \(\mathbf{\theta}_{k}\).
We now describe STCD, summarized in Algorithm 1, where \(\mathcal{U}\) denotes the uniform distribution. We index \(\mathbf{\theta}\) and \(\mathcal{Z}\) with two counters: one for the sequence of clients (and thus coordinate block) visited while roaming and one for the local updates at each client. To simplify the description of the algorithm, we omit \(\mathbf{\theta}_{0}\) for the rest of Section 3.1, as if it were part of the local model of the client it is associated with for updating purposes. Yet, \(\mathbf{\theta}_{k}\) does not leave \(k\) but \(\mathbf{\theta}_{0}\) does, as \(\mathbf{\theta}_{0}\) is part of the token.
* **Initialization:** The token \(\mathcal{Z}^{s,q}\) must always be in accordance with \(\mathbf{\theta}^{s,q}\). So, as \(\mathcal{Z}^{0,0}\) starts at client \(k^{0}\), this client must know \(\{H(h_{1}(\mathbf{\theta}_{1}^{0,0}),\ldots,h_{K}(\mathbf{\theta}_{K}^{0,0})),\mathbf{ \theta}_{0}^{0,0}\}\). For some models, we can achieve this by initializing the local models \(\mathbf{\theta}_{k}^{0,0}\) such that the embeddings \(h_{k}(\mathbf{\theta}_{k}^{0,0})\) are independent of local data \(\mathbf{X}_{k}\). When this is not possible, the clients can send their initial embeddings to \(k^{0}\) as a prelude.
* **Updating the local model and the token:** as explained above, the client holding the token after \(s\) hops, \(k^{s}\), can compute the partial gradient with respect to its local model \(\mathbf{\theta}_{k^{s}}\) locally. This allows it to perform a CD step. Further, to lower communication costs, we do \(Q\) local CD updates at each client. That is, for \(q=0,\ldots,Q-1\): \[\mathbf{\theta}_{k^{s}}^{s,q+1}=\mathbf{\theta}_{k^{s}}^{s,q}-\eta\nabla_{k^{s}}f(\mathbf{ \theta}^{s,q})\] (2)
and \(\mathbf{\theta}_{k}^{s,q+1}=\mathbf{\theta}_{k}^{s,q}\) for \(k\neq k^{s}\). We must now update \(\mathcal{Z}\) accordingly. To compute \(\mathcal{Z}^{s,q+1}\), we use \(\mathcal{Z}^{s,q}\), \(h_{k^{s}}(\mathbf{\theta}_{k^{s}}^{s,q+1})\), and \(h_{k^{s}}(\mathbf{\theta}_{k^{s}}^{s,q})\), which are held by \(k^{s}\). For example, for aggregation by sum, we have \[H(h_{1}(\mathbf{\theta}_{1}^{s,q+1}),\ldots,h_{K}(\mathbf{\theta}_{K}^{s,q+1}))=H(h_{ 1}(\mathbf{\theta}_{1}^{s,q}),\ldots,h_{K}(\mathbf{\theta}_{K}^{s,q}))+h_{k^{s}}(\mathbf{ \theta}_{k^{s}}^{s,q+1})-h_{k^{s}}(\mathbf{\theta}_{k^{s}}^{s,q}).\] This allows us to perform multiple local CD steps. Further, after these \(Q\) steps, the updated token can be sent to client \(k^{s+1}\). Thus, by induction, we keep the token up-to-date throughout our algorithm.
* **Communicating the token:** the token is communicated to a neighbor of \(k^{s}\). This results in a sequence of clients (and blocks) that follows a Markov Chain.
In essence, STCD is a technique allowing for Markov Chain Coordinate Descent (Sun et al., 2019) to be performed in feature-distributed setups. In terms of the progress made in the parameter space, Algorithm 1 differs from Sun et al. (2019) only in that we consider local updates with \(Q>1\).
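The whole STCD loop then amounts to interleaving such local updates with token hops. The sketch below (logistic GLM as above, uniform neighbor sampling, hypothetical names) is a compact way to emulate Algorithm 1 rather than a reproduction of it.

```python
import numpy as np

def stcd(X_blocks, y, neighbors, eta, Q, num_hops, k0=0, seed=0):
    """Single-token roaming: Q local CD steps per visit, then a hop to a neighbor.

    neighbors[k] lists the clients adjacent to client k in the communication graph.
    """
    rng = np.random.default_rng(seed)
    theta = [np.zeros(Xk.shape[1]) for Xk in X_blocks]
    z = np.zeros(len(y))              # with theta = 0 the initial token is data-independent
    k = k0
    for _ in range(num_hops):
        for _ in range(Q):            # eq. (2): local CD steps on block k
            grad_k = X_blocks[k].T @ (-y / (1.0 + np.exp(y * z))) / len(y)
            theta[k] -= eta * grad_k
            z -= eta * (X_blocks[k] @ grad_k)   # keep the token consistent with theta
        k = rng.choice(neighbors[k])  # Markov-chain block selection
    return theta
```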
**Convergence guarantees.** If \(f\) is an \(L\)-smooth function (A1) with a nonempty set of minimizers and \((\mathbf{\theta}^{i})_{i=1}^{r}\) is a sequence generated by Algorithm 1, Sun et al. (2019) give convergence guarantees for \(Q=1\). In particular, let \(r=sQ+q\), we have under mild assumptions on the Markov chain (for example, if the Markov chain is time-homogeneous, irreducible, and aperiodic) that \(\lim_{r\to\infty}\mathbb{E}\|\nabla f(\mathbf{\theta}^{r})\|=0\) and, let \(\Delta\coloneqq f\left(\mathbf{\theta}^{0}\right)-f^{\star}\),
\[\mathbb{E}\left[\min_{i\in[r]}\lVert\nabla f(\mathbf{\theta}^{i})\rVert^{2} \right]\leq\frac{(\Omega_{1}(\tau-1)^{2}+\Omega_{2})\Delta}{r}, \tag{3}\]
where \(\Omega_{1}\) and \(\Omega_{2}\) are constants that depend on the minimum value of the stationary distribution of the Markov chain \(\pi_{\min}\), the step-size \(\eta\), and the smoothness constant \(L\). Further, \(\tau\) denotes the \(\frac{\pi_{\min}}{2}\)-mixing time of the the Markov chain. (We present all the aforementioned Markov chain-related terms in Appendix A.) While Sun et al. (2019) only consider \(Q=1\) explicitly, their analysis can also cover the \(Q>1\) case. To see this, consider a virtual dynamic graph \(\mathcal{G}^{r}=(\mathcal{V},\mathcal{E}^{r})\) where \(\mathcal{V}\) is the original vertex set and, recalling that \(\mathcal{E}\) is the original edge set,
\[\mathcal{E}^{r}=\begin{cases}\mathcal{E}&\text{if }r\text{ mod }Q=0,\\ \{\{i,i\}:i\in\mathcal{V}\}&\text{otherwise}.\end{cases}\]
Running STCD on \(\mathcal{G}^{r}\) with a single local update is equivalent to running STCD on the original graph \(\mathcal{G}\) with \(Q\) local updates. This dynamic graph preserves the properties required for the analysis to hold. To see this, let \(\mathbf{P}\) denote the transition matrix of a random walk on the original graph \(\mathcal{G}\) and let \(\mathbf{P}(r)\) denote the transition matrix of a random walk on \(\mathcal{G}^{r}\). Note that \(\mathbf{P}(r)=\mathbf{I}\) for all \(r\mod Q\neq 0\), where \(\mathbf{I}\) denotes the identity matrix, and \(\mathbf{P}(r)=\mathbf{P}\) for all \(r\mod Q=0\). Assuming, for simplicity, that \(R\geq Q\), we have that \(\mathbf{P}(r)\mathbf{P}(r+1)\ldots\mathbf{P}(r+R)=\mathbf{P}^{\lfloor R/Q\rfloor}\). We thus recover the results in Sun et al. (2019) up to a factor of \(Q\) in the mixing time. (That is, in (3), we replace \(\tau\) by \(Q\tau\).)
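A quick numerical check of this equivalence (with an assumed 4-client ring, not taken from the paper) verifies that the product of the virtual-graph transition matrices over \(R\) steps equals \(\mathbf{P}^{\lfloor R/Q\rfloor}\) when \(Q\) divides \(R\):

```python
import numpy as np

K, Q, R = 4, 3, 12
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # ring over 4 clients (an assumed topology)
P = A / A.sum(axis=1, keepdims=True)        # random-walk transition matrix on G

prod = np.eye(K)
for r in range(R):
    P_r = P if r % Q == 0 else np.eye(K)    # transition matrix of the virtual graph G^r
    prod = prod @ P_r
assert np.allclose(prod, np.linalg.matrix_power(P, R // Q))
```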
**Limitations.** The decentralized token algorithm in this section has an appealing simplicity. However, while it outperforms state-of-the-art feature-distributed learning algorithms in a variety of setups, as we will see in Section 4, its performance deteriorates faster with network connectivity than these decentralized consensus-based algorithms and its convergence per iteration can be rather slow. These problems will be mitigated by the more general multi-token method presented next.
### Semi-decentralized setting
In Section 3.1, we introduced a special case of MTCD where a single token roams over a fully decentralized set of clients. We now present our method in the semi-decentralized setting, which subsumes the setting in Section 3.1 as a special case. Multi-token CD alternates between a _roaming_ step and a _syncing_ step.
* _Roaming._ We start with multiple, matching tokens at a subset of the clients. As each token performs a different random walk realization, they undergo different sequences of CD updates, becoming distinct.
* _Syncing._ To leverage these parallel computations while keeping our model estimates coupled, we periodically sync the roaming tokens at the server, combining the progress of multiple CD sequences.
By alternating between these two steps, we get a communication-efficient algorithm with a flexible degree of parallelization, depending on the number of tokens and the frequency at which we sync them. This allows us to smoothly control the trade off between the communication efficiency of settings with less parallel computations and the faster iteration convergence of settings with more parallel computations.
We further generalize STCD by allowing the use of stochastic gradient estimates. Let \(\mathbf{B}\in\mathbb{R}^{B\times d}\) denote a mini-batch of size \(B\) defined by a random set of indices \(\mathcal{B}\subseteq[N]\), we make the following standard assumptions that our gradient estimate is unbiased and has a bounded variance. For clarity, we make the dependency on \(\mathbf{X}\) explicit by writing \(\nabla f(\mathbf{\theta};\mathbf{X})=\nabla f(\mathbf{\theta})\).
**Assumption 2** (Unbiased gradients).: _For any mini-batch \(\mathbf{B}\), the stochastic gradient is unbiased:_
\[\mathbb{E}_{\mathcal{B}}[\nabla f(\mathbf{\theta};\mathbf{B})]=\nabla f(\mathbf{\theta}; \mathbf{X}),\quad\forall\;\mathbf{\theta}\in\mathbb{R}^{d}.\] (A2)
**Assumption 3** (Bounded variance).: _For any mini-batch \(\mathbf{B}\), there exists a constant \(\sigma\geq 0\) such that:_
\[\mathbb{E}_{\mathcal{B}}\|\nabla f(\mathbf{\theta};\mathbf{B})-\nabla f(\mathbf{\theta}; \mathbf{X})\|^{2}\leq\frac{\sigma^{2}}{B},\quad\forall\;\mathbf{\theta}\in\mathbb{R} ^{d}.\] (A3)
**Setup.** In addition to a communication graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), we now also consider the existence of a central server with links to all clients, as illustrated in Figure 2. The existence of a server brings a change to the model partitioning: \(\mathbf{\theta}_{0}\) is now updated at the server. Further, we now have \(\Gamma\) tokens roaming simultaneously. Each token \(\mathcal{Z}_{\gamma}\) has an associated model estimate \(\mathbf{\theta}(\gamma)\), for \(\gamma\in[\Gamma]\). Thus, during the roaming step, each client \(k\) must now keep up to \(\Gamma\) local model estimates \(\mathbf{\theta}_{k}(\gamma)\) in memory.2 To simplify the description of the algorithm, we define \([\Gamma]_{0}\coloneqq[\Gamma]\cup\{0\}\), with token \(\gamma=0\) staying at the server throughout the roaming step.
Footnote 2: In practice, it suffices to add a copy of \(\mathbf{\theta}^{t}\) as a new token visits during a given roaming step (with a maximum of \(\Gamma\) model estimates at a client), resetting the number of copies when syncing.
We now describe MTCD, summarized in Algorithm 2, where \(k_{\gamma}^{t,s}\) denotes the client holding token \(\gamma\) after \(t\) synchronizations and \(s\) hops and \(P_{\gamma}\) denotes a distribution over the clients, for \(\gamma\in[\Gamma]\). We index \(\mathbf{\theta}\) and \(\mathcal{Z}\) with three counters: one for synchronizations at the server and two matching the counters used in STCD. For simplicity, we write \(\mathcal{Z}^{t}\coloneqq\mathcal{Z}_{1}^{t,0,0}=\cdots=\mathcal{Z}_{\Gamma}^{ t,0,0}\) and \(\mathbf{\theta}^{t}\coloneqq\mathbf{\theta}^{t,0,0}(1)=\cdots=\mathbf{\theta}^{t,0,0}(\Gamma)\).
* **Initialization:** all model-token pairs \((\mathbf{\theta}(\gamma),\mathcal{Z}_{\gamma})\) are initialized to the same values. As in STCD, \(\mathcal{Z}_{\gamma}\) must be in accordance with \(\mathbf{\theta}(\gamma)\).
* **Roaming:** The server samples a set of indices \(\mathcal{B}^{t}\) and communicates it to each client \(k\), which returns its local embedding \(h_{k}(\mathbf{\theta}^{t}_{k};\mathbf{B}^{t})\). This allows the server to compute token \(\mathcal{Z}^{t}\) and send copies of it to start the roaming step at \(k_{\gamma}^{t,0}\sim P_{\gamma}\), where \(P_{\gamma}(k)\coloneqq\mathbb{P}(k_{\gamma}^{t,0}=k)\). Note that \(P_{0}\) is a point mass distribution with support over the server, \(k=0\) (a node without neighbors). As \(\mathcal{Z}_{\gamma}\) reaches client \(k_{\gamma}^{t,s}\), it is used to perform a local CD step on model estimate \(\mathbf{\theta}(\gamma)\) with respect to block \(k_{\gamma}^{t,s}\) and is then updated accordingly. Each \(\mathcal{Z}_{\gamma}\) roams for \(S\) hops, as in STCD. In parallel, \(\mathbf{\theta}_{0}(0)\) is updated at the server.
* **Syncing:** after \(S\) hops, each client combines its model estimates, obtaining \(\mathbf{\theta}^{t+1}_{k}=\sum_{\gamma=1}^{\Gamma}w_{k\gamma}\mathbf{\theta}^{t,S,Q} _{k}(\gamma)\). We cover the choice of \(\mathbf{w}_{k}\coloneqq(w_{k1},\ldots,w_{k\Gamma})\), which lies in the \(\Gamma\)-dimensional probability simplex, later. A simplified sketch of one full round (roaming followed by syncing) is given after this list.
Figure 2: Semi-decentralized setup. Client-server communications are represented by dashed blue lines and client-client communications by solid green lines.
Recovering client-server and decentralized setups.On the one hand, if no client-server communications are available (\(S\to\infty\)), our algorithm is reduced to a fully decentralized one, recovering STCD. On the other hand, if the edge set \(\mathcal{E}\) is empty, we recover the client-server setting. In this case, if we assign a token to each client (\(\Gamma=K\) and each \(P_{\gamma}\) has support over a single different client), we get full participation, as in Liu et al. (2022b). In contrast, if \(\Gamma<K\), we recover a partial participation client-server scheme. To the best of our knowledge, this is the first partial participation vertical FL algorithm with multiple local updates.
In general, as we increase the amount of parallel computations (by increasing \(\Gamma\)) and client-server communications (by lowering \(S\)), the communication efficiency and the number of iterations needed to converge both decrease. Given this trade-off, we see that our choice of \(S\) and \(\Gamma\) depends on the application.
Having introduced the general MTCD algorithm, we now go over two particular instances of it, both for semi-decentralized setups, and present convergence guarantees for each.
Setting with a token per cluster.We now present some convergence guarantees for MTCD in the case where we have \(C\) disjoint clusters of clients \(\mathcal{C}_{1},\ldots,\mathcal{C}_{C}\) and one token roams each cluster, hence \(\Gamma=C\). Without loss of generality, we assume \(\gamma\) roams \(\mathcal{C}_{\gamma}\), allowing us to use \(\gamma\) and \(c\) interchangeably. Thus, \(P_{\gamma}\) has support over \(\mathcal{C}_{\gamma}\), and only over \(\mathcal{C}_{\gamma}\). We also define a cluster containing only the server, \(\mathcal{C}_{0}\coloneqq\{0\}\).
These clusters may correspond to the natural topology of the communication graph, due to physical limitations or to privacy constraints preventing communications between clients in different clusters (e.g., households or companies). Yet, we may also generate artificial partitioning of the original graph, prior to learning our model, in order to allow for the use of multiple tokens while avoiding overlapping trajectories, rather than having a single token roaming over a too large (and thus poorly connected) graph.
Since in this setting the blocks of coordinates being updated in each model estimate are disjoint, we let \(\mathbf{w}_{k}\in\mathbb{R}^{\Gamma}\) be the one-hot encoding for the token that visits client \(k\), thus combining the model estimates by simply taking the updated version of each block.
**Theorem 1**.: _Let \(f\) have a nonempty set of minimizers and let all \(f(\cdot;\mathbf{B})\) be \(L\)-smooth (A1) and have an unbiased gradient (A2) with a bounded variance (A3). If \((\mathbf{\theta}^{t})\) is a sequence generated by Algorithm 2 with \(\eta\in\left(0,\frac{\rho}{(L+1)SQ(\rho+Se(1+e))}\right)\) under the token per cluster setting (that is, \(P\), \(\mathcal{G}\), and \(\mathbf{w}_{k}\) as explained above) and \(\mathbb{P}(k_{\gamma}^{t,0}=k)>0\) for all \(\gamma\) and \(k\in\mathcal{C}_{\gamma}\), then:_
\[\mathbb{E}\left[\frac{1}{T}\sum_{t=0}^{T-1}\left\|\nabla f\left(\mathbf{\theta}^{ t}\right)\right\|^{2}\right]=\mathcal{O}\left(\frac{\Delta}{T}+\frac{\sigma^{2}}{B} \right), \tag{4}\]
_where the expectation is over \(\{k_{c}^{t,s}\}\) and \(\{\mathcal{B}_{t}\}\). Here, \(\Delta\coloneqq f\left(\mathbf{\theta}^{0}\right)-f^{\star}\)._
Encouragingly, we see that, for full batch (exact gradient), we recover the \(\mathcal{O}(\Delta/T)\) rate of convergence for the (expected) squared norm of the gradient of centralized CD methods. Further, for mini-batches,
by choosing a sufficiently large batch size \(B=\Omega(\sigma^{2}/\epsilon)\) we can preserve the iteration complexity to reach \(\mathbb{E}\left[\frac{1}{T}\sum_{t=0}^{T-1}\left\|\nabla f\left(\mathbf{\theta}^{t} \right)\right\|^{2}\right]\leq\epsilon\).
Setting with overlapping token trajectories.We now present some convergence guarantees for MTCD in the setting where we allow for overlapping token trajectories. We propose choosing the convex combination weights to be \(\mathbf{w}_{k}=(\frac{1}{\Gamma},\ldots,\frac{1}{\Gamma})\in\mathbb{R}^{\Gamma}\) for all \(k\), thus combining the model estimates by averaging them. We also consider, for simplicity, that \(P_{\gamma}=P\), and assume that distribution \(P\) has support over all clients.
In this setting, to handle the periodic combination by averaging of the model estimates, we develop convergence guarantees for convex objectives. The standard definition of convexity is given below.
**Assumption 4** (Convexity).: _A function \(f\colon\mathbb{R}^{d}\mapsto\mathbb{R}\) is convex if for all \(a\in[0,1]\):_
\[f(a\mathbf{x}+(1-a)\mathbf{y})\leq af(\mathbf{x})+(1-a)f(\mathbf{y}),\quad\forall\;\mathbf{x},\bm {y}\in\mathbb{R}^{d}.\] (A4)
**Theorem 2**.: _Let \(f\) be convex (A4) and have a nonempty set of minimizers and let all \(f(\cdot;\mathbf{B})\) be \(L\)-smooth (A1) and have an unbiased gradient (A2) with a bounded variance (A3). If \((\mathbf{\theta}^{t})\) is a sequence generated by Algorithm 2 with \(\eta\in\left(0,\frac{\rho^{\prime}}{(L+1)SQ(\rho^{\prime}+Se(1+e))}\right)\) under the overlapping tokens setting (that is, \(P\), \(\mathcal{G}\), and \(\mathbf{w}_{k}\) as explained above) and \(\mathbb{P}(k_{\gamma}^{t,0}=k)>0\) for all \(\gamma\in[\Gamma]\) and \(k\in[K]\), then:_
\[\mathbb{E}\left[\frac{1}{T}\sum_{t=0}^{T-1}\left\|\nabla f\left(\mathbf{\theta}^{ t}\right)\right\|^{2}\right]=\mathcal{O}\left(\frac{\Delta}{T}+\frac{\sigma^{2}}{B }\right),\]
_where the expectation is over \(\{k_{\gamma}^{t,s}\}\) and \(\{\mathcal{B}_{t}\}\). Here, \(\Delta\coloneqq f\left(\mathbf{\theta}^{0}\right)-f^{\star}\)._
As in the token per cluster setting, we recover the \(\mathcal{O}(\Delta/T)\) rate for the (expected) squared norm of the gradient when the exact gradient is used, matching the rate of centralized CD methods, and, for a sufficiently large mini-batch size \(B=\Omega(\sigma^{2}/\epsilon)\), we preserve the iteration complexity to reach \(\mathbb{E}\left[\frac{1}{T}\sum_{t=0}^{T-1}\left\|\nabla f\left(\mathbf{\theta}^{ t}\right)\right\|^{2}\right]\leq\epsilon\).
## 4 Experiments
We test our method empirically, comparing it to DCPA (Alghunaim et al., 2021), a state-of-the-art fully decentralized method, and a standard vertical FL (S-VFL) method which, since we do not consider compression in our experiments, coincides with both C-VFL (Castiglia et al., 2022) and FedBCD (Liu et al., 2022). Note that, while some trajectories have a small variance, making the confidence interval hard to see, all experiments are run for 5 seeds.
### Convex problems
In this section, we use CVXPY (Diamond and Boyd, 2016) to obtain \(f^{\star}\), which we then use to compute the (relative) suboptimality gap \(\frac{f(\mathbf{\theta}^{t})-f^{\star}}{f^{\star}}\) as a metric. We define iteration as the cumulative number of hops and denote by \(C_{C2C}\) and \(C_{C2S}\) the cost of Client-To-Client and Client-To-Server communications, respectively, whose ratio is important for SDFL. For simplicity, we assume \(C_{C2S}\) is the same for communications from the client to the server and vice-versa, although this is often not the case. Throughout Section 4, we consider \(C_{C2S}/C_{C2C}=100\) when plotting the suboptimality gap with respect to the communication cost, multiplying the number of \(C_{C2C}\) communications by \(0.01\) (each communication unit is the size of the token, which varies with the setup) before adding them to the number of C2S communications, to obtain the communication costs. For MTCD, we assume a uniform distribution over the clients when resuming roaming. In Section 4.1, we allow for the tokens to overlap.
We perform ridge regression on a dataset generated following the same process as (Alghunaim et al., 2021), with \(N=1000\) samples and \(d=2000\) features split evenly across clients. We have \(f(\mathbf{\theta})=\|\mathbf{X}\mathbf{\theta}-\mathbf{y}\|_{2}^{2}/2+\alpha\|\mathbf{\theta}\|_{2 }^{2}/2\), with \(\alpha=10\). For this problem, we use \(\eta=10^{-5}\) and \(Q=20\) for MTCD and \(\eta=5\times 10^{-7}\) and \(Q=20\) for S-VFL. For DCPA, we use \(\mu_{w}=0.01\), \(\mu_{y}=0.0003\), and \(\mu_{x}=0.03\).
We perform sparse logistic regression on the Gisette dataset (Guyon et al., 2004), where \(N=6000\) and \(d=5000\), again split evenly across clients. Let \(s(z)\coloneqq(1+e^{-z})^{-1}\) and \(\beta=1\):
\[f(\mathbf{\theta})=-\sum_{n}\big{[}y_{n}\log s(\mathbf{x}_{n}^{\top}\mathbf{\theta})+(1-y_{ n})\log(1-s(\mathbf{x}_{n}^{\top}\mathbf{\theta}))\big{]}+\beta\|\mathbf{\theta}\|_{1}, \quad y_{n}\in\{0,1\},\,n\in[N].\]
where \(\mathbf{x}_{n}\) and \(y_{n}\) denote samples and labels, respectively. We use \(\eta=10^{-4}\) and \(Q=30\) for MTCD and \(\mu_{w}=0.001,\mu_{y}=0.00003,\mu_{x}=0.003\) for DCPA.3
Footnote 3: In DCPA, the lack of a closed form solution for the proximal operator of the convex conjugate of the logistic regression loss leads to the need to solve a local optimization problem at each client at every iteration.
Fully decentralized setting.In Figure 3, we see that, while MTCD with \(S\to\infty\) and \(\Gamma=1\) (that is, STCD) does not improve upon DCPA in terms of progress per iteration, it is significantly more communication efficient. Yet, STCD is particularly vulnerable to poorly connected networks, as seen when going from an Erdős–Rényi graph to a path graph. Note that, for sparse logistic regression, while the proximal term used to handle the regularizer is not covered by our analysis, it does well empirically. Figure 4 shows additional ridge regression experiments in the \(N\gg d\) regime across six different graph topologies.
SDFL setting.We now tackle the same ridge regression problem, this time focusing on path graphs, where poor connectivity is a greater challenge. In Figure 5, we see in the top row that our method improves upon the communication efficiency of the other methods. When plotting the number of communications needed to attain a suboptimality gap of \(10^{-4}\) for each \(C_{C2C}/C_{C2S}\), we see that, for different values of \(C_{C2C}/C_{C2S}\), the communication efficiency of SDFL methods varies. In the bottom row, we see that, as we increase the syncing frequency, the convergence per iteration speeds up and the communication efficiency decreases, as expected. This illustrates the flexibility of our method, which allows us to choose a regime at which to operate.
### Neural network training
We train an MVCNN (Su et al., 2015) model on ModelNet10 (Wu et al., 2015), a dataset of 3D CAD models. We consider 12 clients split into two clusters of six clients, each capturing a different (2D) view of each object, and run MTCD for both complete and path graphs with \(S=6\). We use a fixed \(\eta=0.001\) for S-VFL. When running MTCD on 2 complete graph clusters with 6 clients each, we also start with \(\eta=0.001\) but halve it every 20 epochs. On the path graph clusters, we start with \(\eta=0.0005\) and halve it every 20 epochs. For both types of graph, we use \(S=6\). We use \(Q=10\) and a batch size of 64 for both MTCD and S-VFL.
Figure 3: The left four plots correspond to an Erdős–Rényi graph with \(p=0.4\) and right four plots to a path graph, all with \(K=40\). The top row concerns ridge regression and the bottom row concerns sparse logistic regression. We consider a communication unit consisting of \(N\) scalars. The \(S\to\infty\) MTCD run has \(\Gamma=1\).
Figure 4: Ridge regression with \(N=4000\) and \(d=200\) across six different network topologies, all with \(K=40\) clients. Each communication unit consists of \(N\) scalars. The MTCD run has \(\Gamma=1\). Algebraic connectivity, \(\alpha_{\mathcal{G}}\), is the second smallest eigenvalue of the Laplacian matrix of graph \(\mathcal{G}\).
Figure 5: Experiment on a \(K=80\) path graph. For the top row, the MTCD run with \(S\to\infty\) has \(\Gamma=1\) and the one with \(S=64\) has \(\Gamma=2\). For the bottom row, all MTCD runs have \(\Gamma=2\).
We also train a ResNet18 (He et al., 2016) on CIFAR-10 (Krizhevsky et al., 2009), for \(K=4\) and two clusters (2 clients each). For MTCD, we use \(S=2\). We use a fixed \(\eta=0.0001\), \(Q=10\) and a batch size of 100 for both S-VFL and MTCD.
In Figure 6, we present the results of the ModelNet10 and the CIFAR-10 experiments, both in the token-per-cluster setting. In both, we observe similar performance in terms of convergence per iteration, but MTCD outperforms the baseline in communication efficiency.
## 5 Conclusions
We formalize the multi-token SDFL scheme and propose MTCD, a communication-efficient SDFL algorithm for vertical FL. We provide convergence guarantees for our method and show empirically the improved communication efficiency of our algorithm as well as the power of endowing decentralized methods with periodical client-server communications. A natural extension to this work is to consider compression and privacy mechanisms, such as differential privacy.
## Acknowledgements
This work is supported in part by the Fundação para a Ciência e a Tecnologia through the Carnegie Mellon Portugal Program; by the grants U.S. National Science Foundation CCF-2007911 and ECCS-2318441; and by the CMU-Portugal project CMU/TIC/0016/2021.
|
2309.14926 | A criterion for Lubin's conjecture | We prove that a formulation of a conjecture of Lubin regarding two power
series commuting for the composition is equivalent to a criterion of checking
that some extensions generated by the nonarchimedean dynamical system arising
from the power series are Galois. As a consequence of this criterion, we obtain
a proof of Lubin's conjecture in a new case. | Léo Poyeton | 2023-09-26T13:35:10Z | http://arxiv.org/abs/2309.14926v1 | # A criterion for Lubin's conjecture
###### Abstract
We prove that a formulation of a conjecture of Lubin regarding two power series commuting for the composition is equivalent to a criterion of checking that some extensions generated by the nonarchimedean dynamical system arising from the power series are Galois. As a consequence of this criterion, we obtain a proof of Lubin's conjecture in a new case.
Key words and phrases:Field of norms; \((\varphi,\Gamma)\)-modules; \(p\)-adic representations; Cohen ring; non-Archimedean dynamical system; \(p\)-adic Hodge theory; Local class field theory; Lubin-Tate group.
series to commute with a noninvertible series, there must be a formal group somehow in the background".
Various results have been obtained to support Lubin's observation; see for instance the following non-exhaustive list: [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29].
This observation has lead to several versions of what might be called Lubin's conjecture, and these versions have all been proved under very strong assumptions on the nonarchimedean dynamical system considered.
In this note, we consider two power series \(P,U\in T\cdot\mathcal{O}_{K}\llbracket T\rrbracket\) such that \(P\circ U=U\circ P\), with \(P^{\prime}(0)\in\mathfrak{m}_{K}\) and \(U^{\prime}(0)\in\mathcal{O}_{K}^{\times}\), and we assume that \(P(T)\neq 0\mod\mathfrak{m}_{K}\) and that \(U^{\prime}(0)\) is not a root of unity. Our so called version of Lubin's conjecture is the following:
**Conjecture 0.1**.: _Let \(P,U\in T\cdot\mathcal{O}_{K}\llbracket T\rrbracket\) such that \(P\circ U=U\circ P\), with \(P^{\prime}(0)\in\mathfrak{m}_{K}\) and \(U^{\prime}(0)\in\mathcal{O}_{K}^{\times}\) not a root of unity, and such that \(P(T)\neq 0\mod\mathfrak{m}_{K}\). Then there exists a finite extension \(E\) of \(K\), a formal group \(S\) defined over \(\mathcal{O}_{E}\), endomorphisms of this formal group \(P_{S}\) and \(U_{S}\) and a power series \(h(T)\in T\mathcal{O}_{E}\llbracket T\rrbracket\) such that \(P\circ h=h\circ P_{S}\) and \(U\circ h=h\circ U_{S}\)._
In the conjecture above, we say following Li's terminology [20] that \(P\) and \(P_{S}\) are semiconjugate and that \(h\) is an isogeny from \(P_{S}\) to \(P\).
In several proven cases of this conjecture [23, 24, 25], the Lubin-Tate formal group is actually defined over \(\mathcal{O}_{K}\). However, this is not true in general.
The goal of this note is to prove the following theorem, which gives a new criterion to prove Lubin's conjecture in some cases:
**Theorem 0.2**.: _Let \((P,U)\) be a couple of power series in \(T\cdot\mathcal{O}_{K}\llbracket T\rrbracket\) such that \(P\circ U=U\circ P\), with \(P^{\prime}(0)\in\mathfrak{m}_{K}\) and \(U^{\prime}(0)\in\mathcal{O}_{K}^{\times}\), and we assume that \(P(T)\neq 0\mod\mathfrak{m}_{K}\) and that \(U^{\prime}(0)\) is not a root of unity. Then there exists a finite extension \(E\) of \(K\), a Lubin-Tate formal group \(S\) defined over \(\mathcal{O}_{L}\), where \(E/L\) is a finite extension, endomorphisms of this formal group \(P_{S}\) and \(U_{S}\) over \(\mathcal{O}_{E}\), and a power series \(h(T)\in T\mathcal{O}_{E}\llbracket T\rrbracket\) such that \(P\circ h=h\circ P_{S}\) and \(U\circ h=h\circ U_{S}\) if and only if the following two conditions are satisfied:_
1. _there exists_ \(V\in T\cdot\mathcal{O}_{K}\llbracket T\rrbracket\)_, commuting with_ \(P\)_, and an integer_ \(d\geq 1\) _such that_ \(Q(T)=T^{p^{d}}\mod\mathfrak{m}_{K}\) _where_ \(Q=V\circ P\) _;_
2. _there exists a finite extension_ \(E\) _of_ \(K\) _and a sequence_ \((\alpha_{n})_{n\in\mathbf{N}}\) _where_ \(\alpha_{0}\neq 0\) _is a root of_ \(Q\) _and_ \(Q(\alpha_{n+1})=\alpha_{n}\) _such that for all_ \(n\geq 1\)_, the extension_ \(E(\alpha_{n})/E\) _is Galois._
The proof relies mainly on the same tools and strategy used in [10], which are the tools developed by Lubin in [11] to study \(p\)-adic dynamical systems, the "canonical Cohen ring for norms fields" of Cais and Davis [12] and tools of \(p\)-adic Hodge theory following Berger's strategy in [13].
As a corollary of our main theorem, we obtain the following result, which is a new instance of Lubin's conjecture:
**Theorem 0.3**.: _Assume that \(P(T)\in T\cdot\mathcal{O}_{K}[\![T]\!]\) is such that \(P(T)=T^{p}\mod\mathfrak{m}_{K}\) and that there exists \(U\in T\cdot\mathcal{O}_{K}[\![T]\!]\), commuting with \(P\), such that \(U^{\prime}(0)\) is not a root of unity. Then there exists a finite extension \(E\) of \(K\), a Lubin-Tate formal group \(S\) defined over \(\mathcal{O}_{L}\), where \(E/L\) is a finite extension, endomorphisms of this formal group \(P_{S}\) and \(U_{S}\) over \(\mathcal{O}_{E}\), and a power series \(h(T)\in T\mathcal{O}_{E}[\![T]\!]\) such that \(P\circ h=h\circ P_{S}\) and \(U\circ h=h\circ U_{S}\)._
In order to prove our main theorem, we also need to prove that some extensions are strictly APF, which is a technical condition on the ramification of the extension. Cais and Davis have considered in [12] what they called "\(\varphi\)-iterate" extensions, and later on proved with Lubin that those extensions are strictly APF [12]. Here, we show that that this result still holds for more general extensions which generalize the \(\varphi\)-iterate extensions of Cais and Davis:
**Theorem 0.4**.: _Let \(K_{\infty}/K\) be an extension generated by a sequence \((u_{n})\) of elements of \(\overline{\mathbf{Q}}_{p}\) such that there exists a power series \(P(T)\in T\cdot\mathcal{O}_{K}[\![T]\!]\) with \(P(T)=T^{d}\mod\mathfrak{m}_{K}\), where \(d\) is a power of the cardinality of \(k_{K}\), and an element \(\pi_{0}\) of \(\mathfrak{m}_{K}\) such that \(u_{0}=\pi_{0}\) and \(P(u_{n+1})=u_{n}\)._
_Then \(K_{\infty}/K\) is strictly APF._
### Organization of the note
The first section recalls the construction and properties of some rings of periods which are used in the rest of the paper. The second section is devoted to the proof of theorem 0.4, using the rings of periods of the first section in order to do so. In the third section we recall the main result of [11] which explains why "Lubin's conjecture" seems reasonable. In section 4, we prove that our version of Lubin's conjecture implies that the two conditions of theorem 0.2 are satisfied. Section 5 and 6 show how to use \(p\)-adic Hodge theory, using the same strategy as in [10], along with results from [11], in order to prove that the infinite extension generated by such a \(Q\)-consistent sequence is actually generated by the torsion points of a formal Lubin-Tate group. In section 7, we show how to use the "canonical Cohen ring for norms fields" of
Cais and Davis [15] to prove that there is indeed an isogeny from an endomorphism of a formal Lubin-Tate group to \(Q\). Section 8 is devoted to the proof of theorem 0.3.
## 1 Rings of periods
Let \(K\) be a finite extension of \(\mathbf{Q}_{p}\), with uniformizer \(\pi_{K}\), and let \(K_{0}=\mathbf{Q}_{p}^{\mathrm{unr}}\cap K\) denote the maximal unramified extension of \(\mathbf{Q}_{p}\) inside \(K\). Let \(q=p^{h}\) be the cardinality of \(k_{K}\), the residue field of \(K\), and let \(e\) be the ramification index of \(K\), so that \(eh=[K:\mathbf{Q}_{p}]\). Let \(v_{K}\) denote the \(p\)-adic valuation on \(K\) normalized so that \(v_{K}(K^{\times})=\mathbf{Z}\) and let \(v_{K}\) still denote its extension to \(\overline{\mathbf{Q}}_{p}\). Let \(c>0\) be such that \(c\leq v_{K}(p)/(p-1)\). If \(F\) is a subfield of \(\mathbf{C}_{p}\), let \(\mathfrak{a}_{F}^{c}\) be the set of elements of \(F\) such that \(v_{K}(x)\geq c\).
We now recall some definition of properties of some rings of periods which will be used afterwards. We refer mainly to [15][16] for the properties stated here. The slight generalization to the classical rings by tensoring by \(\mathcal{O}_{K}\) over \(\mathcal{O}_{K_{0}}\) can for example be found in [1].
Let \(\widetilde{\mathbf{E}}^{+}:=\mathcal{O}_{\mathbf{C}_{p}}^{\flat}:=\varprojlim_{x\mapsto x^{q}}\mathcal{O}_{\mathbf{C}_{p}}\). This is the tilt of \(\mathcal{O}_{\mathbf{C}_{p}}\) and is a perfect ring of characteristic \(p\), whose fraction field \(\widetilde{\mathbf{E}}\) is algebraically closed. It is endowed with a valuation \(v_{\mathbf{E}}\) induced by the one on \(K\). We let \(W_{K}(\cdot)=\mathcal{O}_{K}\otimes_{\mathcal{O}_{K_{0}}}W(\cdot)\) denote the \(\mathcal{O}_{K}\)-Witt vectors, and let \(\widetilde{\mathbf{A}}^{+}=W_{K}(\widetilde{\mathbf{E}}^{+})\) and \(\widetilde{\mathbf{A}}=W_{K}(\widetilde{\mathbf{E}})\).
Any element of \(\widetilde{\mathbf{A}}\) (resp. \(\widetilde{\mathbf{A}}^{+}\)) can be uniquely written as \(\sum_{i\geq 0}\pi_{K}^{i}[x_{i}]\) with the \(x_{i}\in\widetilde{\mathbf{E}}\) (resp. \(\widetilde{\mathbf{E}}^{+}\)). We let \(w_{k}:\widetilde{\mathbf{A}}{\longrightarrow}\mathbf{R}\cup\{+\infty\}\) be defined by \(w_{k}(x)=\inf_{i\leq k}v_{\mathbf{E}}(x_{i})\).
For \(r\in\mathbf{R}_{+}\), we let \(\widetilde{\mathbf{A}}^{\dagger,r}\) denote the subset of \(\widetilde{\mathbf{A}}\) of elements \(x\) such that \(w_{k}(x)+\frac{pr}{e(p-1)}k\geq 0\) for all \(k\), and such that this quantity tends to \(+\infty\) as \(k{\longrightarrow}+\infty\). We let \(n(r)\) be the smallest integer such that \(r\leq p^{nh-1}(p-1)\).
We also let \(\widetilde{\mathbf{A}}^{\dagger}=\bigcup_{r>0}\widetilde{\mathbf{A}}^{\dagger,r}\).
**Lemma 1.1**.: _Let \(x\in\widetilde{\mathbf{A}}^{\dagger,r}+\pi_{K}^{k}\widetilde{\mathbf{A}}\), then \(\frac{x}{[\overline{x}]}\) is a unit of \(\widetilde{\mathbf{A}}^{\dagger,r^{\prime}}+\pi_{K}^{k}\widetilde{\mathbf{A}}\), with \(r^{\prime}=r+\frac{(p-1)e}{p}v_{\mathbf{E}}(\overline{x})\)._
Proof.: Since \(x\in\widetilde{\mathbf{A}}^{\dagger,r}+\pi_{K}^{k}\widetilde{\mathbf{A}}\), we can write \(x=\sum_{i=0}^{k-1}\pi_{K}^{i}[x_{i}]\), where \(x_{0}=\overline{x}\), and \(w_{i}(x)+\frac{pr}{e(p-1)}i\geq 0\) for all \(i\) between \(0\) and \(k-1\).
Now we can write \(\frac{x}{[\overline{x}]}\in\widetilde{\mathbf{A}}\) as \(\sum_{i\geq 0}\pi_{K}^{i}[y_{i}]\), where \(y_{i}=\frac{x_{i}}{\overline{x}}\) for \(i\) between \(0\) and \(k-1\). In particular, \(y_{0}=1\). Now a direct computation leads to the fact that \(w_{i}(\frac{x}{[\overline{x}]})+\frac{pr^{\prime}}{e(p-1)}i\geq 0\) for all \(i\leq k-1\), where \(r^{\prime}=r+\frac{(p-1)e}{p}v_{\mathbf{E}}(\overline{x})\).
Using the fact that \(\frac{x}{[\overline{x}]}\in(\widetilde{\mathbf{A}}^{\dagger,r^{\prime}}+\pi_{K}^{k}\widetilde{\mathbf{A}})\cap(1+\pi_{K}\widetilde{\mathbf{A}})\), we obtain that its inverse also lies in \(\widetilde{\mathbf{A}}^{\dagger,r^{\prime}}+\pi_{K}^{k}\widetilde{\mathbf{A}}\).
Let \(\varphi_{q}:\widetilde{\mathbf{E}}^{+}\to\widetilde{\mathbf{E}}^{+}\) denote the map \(x\mapsto x^{q}\). This extends to a map \(\widetilde{\mathbf{E}}\to\widetilde{\mathbf{E}}\) also given by \(x\mapsto x^{q}\), and by functoriality of Witt vectors those maps extend into maps \(\varphi_{q}\) on \(\widetilde{\mathbf{A}}^{+}\) and \(\widetilde{\mathbf{A}}\).
Recall that there is a surjective map \(\theta:\widetilde{\mathbf{A}}^{+}\to\mathcal{O}_{\mathbf{C}_{p}}\) which is a morphism of rings. Moreover, if \(x\in\widetilde{\mathbf{A}}^{+}\) and \(\overline{x}=(x_{n})\in\widetilde{\mathbf{E}}^{+}\), then \(\theta\circ\varphi_{q}^{-n}(x)=x_{n}\mod\mathfrak{a}_{\mathbf{C}_{p}}^{c}\).
Also recall that, for \(n\geq n(r)\), the maps \(\theta\circ\varphi_{q}^{-n}:\widetilde{\mathbf{A}}^{+}\to\mathcal{O}_{\mathbf{ C}_{p}}\) extend into surjective maps \(\theta\circ\varphi_{q}^{-n}:\widetilde{\mathbf{A}}^{\dagger,r}\to\mathcal{O}_{ \mathbf{C}_{p}}\).
## 2 Strictly APF extensions
A theorem of Cais, Davis and Lubin [1] gives a necessary and sufficient condition for an infinite algebraic extension \(L/K\) to be strictly APF. In particular, this condition implies that what Cais and Davis have called a "\(\varphi\)-iterate" extension in [1] is strictly APF.
Recall that a (slight generalization of what Cais and Davis in [1] have called a) \(\varphi\)-iterate extension \(K_{\infty}/K\) is an extension generated by a sequence \((u_{n})\) of elements of \(\overline{\mathbf{Q}}_{p}\) such that there exists a power series \(P(T)\in T\cdot\mathcal{O}_{K}[\![T]\!]\) with \(P(T)=T^{d}\mod\mathfrak{m}_{K}\), where \(d\) is a power of the cardinality of \(k_{K}\), and a uniformizer \(\pi_{0}\) of \(\mathcal{O}_{K}\) such that \(u_{0}=\pi_{0}\) and \(P(u_{n+1})=u_{n}\).
The main theorem of [1] gives a necessary and sufficient condition for an infinite algebraic extension \(L/K\) to be strictly APF, and in particular implies directly that those \(\varphi\)-iterate extensions are strictly APF.
In this section we will prove that this result remains true if we remove the assumption in the definition above that \(\pi_{0}\) is a uniformizer of \(\mathcal{O}_{K}\), and instead just assume that \(\pi_{0}\in\mathfrak{m}_{K}\). We even allow \(\pi_{0}\) to be equal to \(0\), which is basically what we'll consider when looking at consistent sequences attached to a noninvertible stable power series.
If \(L\) is a finite extension of \(\mathbf{Q}_{p}\), we let \(v_{L}\) denote the \(p\)-adic valuation on \(L\) normalized such that \(v_{L}(L^{\times})=\mathbf{Z}\), and we still denote by \(v_{L}\) its extension to \(\overline{\mathbf{Q}}_{p}\). If \(L/M\) is a finite extension, we also let \(\operatorname{Emb}_{M}(L,\overline{\mathbf{Q}}_{p})\) denote the set of \(M\)-linear embeddings of \(L\) into \(\overline{\mathbf{Q}}_{p}\).
For the rest of this section, we let \(P(T)\in T\cdot\mathcal{O}_{K}[\![T]\!]\) with \(P(T)=T^{s}\mod\mathfrak{m}_{K}\), where \(s\) is a power of the cardinality of \(k_{K}\), we let \(\pi_{0}\) be any element of \(\mathfrak{m}_{K}\), and we define a sequence \((v_{n})_{n\in\mathbf{N}}\) of elements of \(\overline{\mathbf{Q}}_{p}\) as follows: we let \(v_{0}=\pi_{0}\), and for \(n\geq 0\), we let \(v_{n+1}\) be a root of \(P(T)-v_{n}\). We let \(K_{n}=K(v_{n})\) be the field generated by \(v_{n}\) over \(K\), and we let \(K_{\infty}=\bigcup_{n}K_{n}\). If \(v_{0}=0\), then we choose \(v_{1}\) to be \(\neq 0\), so that the null sequence is excluded from our considerations.
**Proposition 2.1**.: _There exists \(n_{0}\geq 0\) and \(d\geq 1\) such that, for all \(n\geq n_{0}\), we have \(v_{K_{n}}(v_{n})=d\) and the extension \(K_{n+1}/K_{n}\) is totally ramified of degree \(s\)._
Proof.: The fact that the Weierstrass degree of \(P\) is greater than \(1\), along with the Weierstrass preparation theorem, shows that the sequence \(v_{p}(v_{n})\) is strictly decreasing. In particular, there exists \(n_{0}\geq 0\) such that for \(n\geq n_{0}\), the Newton polygon of \(P-v_{n}\) has only one slope, equal to \(\frac{1}{s}v_{p}(v_{n})\). This implies that for \(n\geq n_{0}\), we have \(v_{p}(v_{n+1})=\frac{1}{s}v_{p}(v_{n})\), and thus \(v_{K_{n}}(v_{n+1})=\frac{1}{s}v_{K_{n}}(v_{n})\).
Recall that, if \(M/L/\mathbf{Q}_{p}\) are finite extensions, then we have \([M:L]v_{L}\geq v_{M}\), with equality if and only if \(M/L\) is totally ramified. Let \(d_{n}:=v_{K_{n}}(v_{n})\). Since \(s\) is the degree of a nonzero polynomial with coefficients in \(K_{n}\) whose root is \(v_{n+1}\), we know that \([K_{n+1}:K_{n}]\leq s\). This implies that \(sv_{K_{n}}\geq[K_{n+1}:K_{n}]v_{K_{n}}\geq v_{K_{n+1}}\). For \(n\geq n_{0}\), we have \(d_{n}=s\cdot v_{K_{n}}(v_{n+1})\geq[K_{n+1}:K_{n}]v_{K_{n}}(v_{n+1})\geq v_{K_ {n+1}}(v_{n+1})=d_{n+1}\), so that the sequence \((d_{n})_{n\in\mathbf{N}}\) is decreasing. Since this sequence takes its values in \(\mathbf{N}\), it is stationary and therefore there exists \(n_{1}\geq n_{0}\) such that, for all \(n\geq n_{1}\), \(d_{n+1}=d_{n}\). In particular, this implies that the inequalities above are all equalities and thus that for \(n\geq n_{1}\), \(s=[K_{n+1}:K_{n}]\) and that \(K_{n+1}/K_{n}\) is totally ramified, and we can take \(d=d_{n_{1}}\).
Let us write \(d=p^{k}m\) where \(m\) is prime to \(p\).
Since \(P(T)=T^{s}\mod\mathfrak{m}_{K}\), the sequence \((v_{n})\) gives rise to an element \(\overline{v}\) of \(\widetilde{\mathbf{E}}^{+}=\varprojlim_{x\mapsto x^{s}}\mathcal{O}_{\mathbf{C}_{p}}/\pi_{0}\). We let \(\varphi_{s}\) denote the \(s\)-power Frobenius map on \(\widetilde{\mathbf{E}}^{+}\) and \(\widetilde{\mathbf{A}}^{+}\).
**Proposition 2.2**.: _There exists a unique \(v\in\widetilde{\mathbf{A}}^{+}\) lifting \(\overline{v}\) such that \(\varphi_{s}(v)=v\). Moreover, we have \(\theta\circ\varphi_{s}^{-n}(v)=v_{n}\)._
Proof.: One can use the same argument as in [1, Rem. 7.16] to produce an element in \(\widetilde{\mathbf{A}}^{+}\) such that \(P(v)=\varphi_{s}(v)\) and such that \(\theta\circ\varphi_{s}^{-n}(v)=v_{n}\) (note that one also needs to extend the results from ibid to the case where the Frobenius is replaced by a power of the Frobenius, which is straightforward).
Such an element automatically lifts \(\overline{v}\) by definition of the theta map. For the uniqueness, one checks that the map \(x\mapsto\varphi_{s}^{-1}(P(x))\) is a contracting map on the set of elements of \(\widetilde{\mathbf{A}}^{+}\) which lift \(\overline{v}\), so that \(v=\lim_{m\longrightarrow+\infty}\varphi_{s}^{-m}(P^{\circ m}([\overline{v}]))\) and is thus unique.
Since \(\widetilde{\mathbf{E}}\) is algebraically closed, there exists \(\overline{u}\in\widetilde{\mathbf{E}}\) such that \(\overline{u}^{m}=\overline{v}\). Since such a \(\overline{u}\) necessarily has positive valuation, it actually belongs to \(\widetilde{\mathbf{E}}^{+}\).
Since \(P(T)=T^{s}\mod\pi_{0}\), we can write \(P(T)=T^{s}(1+\pi_{0}h(T))\), with \(h(T)\in\frac{1}{T^{s-1}}\mathcal{O}_{K}\llbracket T\rrbracket\). Let \(Q(T)=T^{s}(1+\pi_{0}h(T^{m}))^{1/m}\in\mathcal{O}_{K}\widehat{\llbracket T\rrbracket[1/T]}\), which is well defined
because \(m\) is prime to \(p\). Note that \(Q(T)\) is overconvergent, meaning that it converges on some annulus bounded by the \(p\)-adic unit circle.
**Proposition 2.3**.: _There exists \(u\in\widetilde{\mathbf{A}}^{\dagger}\) such that \(u^{m}=v\)._
Proof.: We first construct \(u\) such that \(\varphi_{s}(u)=Q(u)\). Just as in the proof of proposition 2.2, the map \(x\mapsto\varphi_{s}^{-1}(Q(x))\) is a contracting map on the set of elements of \(\widetilde{\mathbf{A}}\) lifting \(\overline{u}\), so that \(u=\lim_{m\longrightarrow+\infty}\varphi_{s}^{-m}(Q^{\circ m}([\overline{u}]))\) and is unique.
Therefore, there exists \(u\in\widetilde{\mathbf{A}}\) such that \(\varphi_{s}(u)=Q(u)\). Since \(\overline{u}\in\widetilde{\mathbf{E}}^{+}\), we can write \(u=[\overline{u}]+\pi_{0}z_{1}\in\widetilde{\mathbf{A}}^{+}+\pi_{0}\widetilde {\mathbf{A}}\). Let \(r\) be such that \(\frac{\pi_{0}}{[\overline{u}]^{d}}\in\widetilde{\mathbf{A}}^{\dagger,r}\) and let \(f=\frac{(p-1)e}{p}v_{\mathbf{E}}(\overline{u})\). Let us write \(Q(T)=T^{s}(1+\frac{\pi_{0}}{T^{s}}g(T))^{1/m}\), with \(g(T)\in\mathcal{O}_{K}[\![T]\!]\).
Now assume that there exists some \(k\geq 1\) and \(r^{\prime}>0\) such that \(u\in\widetilde{\mathbf{A}}^{\dagger,r^{\prime}}+\pi_{0}^{k}\widetilde{\mathbf{A}}\). We can thus write \(u=u_{k}+\pi_{0}^{k}z_{k}\), where \(u_{k}\in\widetilde{\mathbf{A}}^{\dagger,r^{\prime}}\) and \(z_{k}\in\widetilde{\mathbf{A}}\). We have
\[Q(u)=Q(u_{k}+\pi_{0}^{k}z_{k})=(u_{k}+\pi_{0}^{k}z_{k})^{s}\left(1+\frac{\pi_{0}}{(u_{k}+\pi_{0}^{k}z_{k})^{s}}g(u_{k}+\pi_{0}^{k}z_{k})\right)^{1/m}.\]
Using the fact that \(\frac{u}{[\overline{u}]}\) is a unit in \(\widetilde{\mathbf{A}}^{\dagger,r^{\prime}+f}+\pi_{0}^{k}\widetilde{\mathbf{A}}\), we obtain that \(Q(u)\in\widetilde{\mathbf{A}}^{\dagger,r^{\prime\prime}}+\pi_{0}^{k+1}\widetilde{\mathbf{A}}\), where \(r^{\prime\prime}=\max(sr^{\prime},r^{\prime}+f)\).
Since \(\varphi_{s}^{-1}(Q(u))=u\), this implies that \(u\in\widetilde{\mathbf{A}}^{\dagger,r^{\prime\prime}/s}+\pi_{0}^{k+1} \widetilde{\mathbf{A}}\).
By successive approximations, we have \(u\in\widetilde{\mathbf{A}}^{\dagger}\).
Finally, we compute \(\varphi_{s}(u^{m})=\varphi_{s}(u)^{m}=Q(u)^{m}=P(u^{m})\) by construction of \(Q\), so that \(\varphi_{s}(u^{m})=P(u^{m})\). Since \(u^{m}\) lifts \(\overline{u}^{m}=\overline{v}\), we have \(u^{m}=v\) by uniqueness in proposition 2.2.
Recall that since \(u\in\widetilde{\mathbf{A}}^{\dagger}\), there exists some \(r>0\) such that \(u\in\widetilde{\mathbf{A}}^{\dagger,r}\) and there exists \(n(r)\geq 0\) such that, for all \(n\geq n(r)\), the element \(u_{n}:=\theta\circ\varphi_{s}^{-n}(u)\) is well defined and belongs to \(\mathcal{O}_{\mathbf{C}_{p}}\). Actually, since \(u^{m}=v\), we have that \(u_{n}^{m}=v_{n}\), and in particular we know that \(v_{K}(u_{n}){\longrightarrow}0\).
**Lemma 2.4**.: _There exists a constant \(c>0\), independent of \(n\), such that for any \(n\geq n(r)\) and for any \(g\in\mathcal{G}_{K_{n}}\) and any \(i\geq 1\), we have_
\[v_{K}(g(u_{n+i})-u_{n+i})\geq c.\]
Proof.: Let \(n\geq n(r)\). We have \(u_{n+i}^{m}=v_{n+i}\), so that \(v_{K}(g(u_{n+i})^{m}-u_{n+i}^{m})=v_{K}(g(v_{n+i})-v_{n+i})\). This means that
\[v_{K}(g(v_{n+i})-v_{n+i})=v_{K}(g(u_{n+i})-u_{n+i})+(m-1)v_{K}(u_{n+i})\]
since \(m\) is prime to \(p\).
Since \(m\) is fixed and \(v_{K}(u_{n}){\longrightarrow}0\), it suffices to prove that there exists \(c>0\) independent of \(n\) such that \(v_{K}(g(v_{n+i})-v_{n+i})\geq c\) for all \(g\in{\mathcal{G}}_{K_{n}}\).
Since \(P(T)=T^{s}\mod\mathfrak{m}_{K}\), and since \(P^{\circ i}(v_{n+i})=v_{n}\), we already know that for all \(n\geq 0\) and for all \(g\in{\mathcal{G}}_{K_{n}}\), we have \(v_{K}(g(v_{n+i})-v_{n+i})\geq 1\), so that \(v_{K}(\frac{g(v_{n+i})}{v_{n+i}}-1)\geq 1-v_{K}(v_{n+i})\geq 1-v_{K}(v_{n})\). The statement follows from the fact that \(v_{K}(v_{n}){\longrightarrow}0\) when \(n{\longrightarrow}+\infty\).
Recall that \(d=p^{k}m\), where \(d\) is such that \(v_{K_{n}}(v_{n})=d\) for \(n\gg 0\). Recall also that \(s\) is a power of \(p\), and let \(j\geq 0\) be such that \(s^{j}\geq p^{k}>s^{j-1}\). Let \(f\geq 0\) be such that \(p^{-f}s^{j}=p^{k}\). In particular, we have \(v_{K_{n}}(u_{n+j}^{p^{f}})=p^{f}s^{-j}v_{K_{n}}(u_{n})=\frac{1}{mp^{k}}v_{K_{n }}(v_{n})=\frac{d}{d}=1\).
We let \(E_{\infty}=\bigcup_{n\geq 0}K(u_{n})\), and \(F={\mathbf{Q}}_{p}^{\rm unr}\cap E_{\infty}\) be the maximal unramified extension of \({\mathbf{Q}}_{p}\) inside \(E_{\infty}\). Finally, we let \(F^{(m)}\) denote the unramified extension of \(F\) generated by the elements \([x^{1/m}]\), \(x\in k_{F}\).
For \(n\geq n_{0}\), let \(\pi_{n}\) denote a uniformizer of \({\mathcal{O}}_{K_{n}}\). Since for all \(n\geq n_{0}\) the extensions \(K_{n+1}/K_{n}\) are totally ramified, the minimal polynomial of \(\pi_{n+1}\) over \(K_{n}\) is an Eisenstein polynomial, and we choose the \(\pi_{n}\) so that \(N_{K_{n+1}/K_{n}}(\pi_{n+1})=\pi_{n}\) for all \(n\geq n_{0}\).
**Lemma 2.5**.: _For any \(n\geq n(r)\), we can write \(\pi_{n}=[h]\cdot u_{n+j}^{p^{f}}(1+x)\), with \(x\in{\mathcal{O}}_{K_{n+j}}\) and \(h\in k_{F^{(m)}}\)._
Proof.: Note that \(v_{K_{n}}(\pi_{n}^{m})=v_{K_{n}}(v_{n+j}^{p^{f}})\) and that both elements belong to \({\mathcal{O}}_{K_{n+j}}\), so that we can write
\[\frac{\pi_{n}^{m}}{v_{n+j}^{p^{f}}}=[h_{0}]+\pi_{n+j}(\cdots),\]
with \(h_{0}\in k_{F}\). Taking the \(m\)-th root, this implies that there exists \(h_{1}\in k_{F^{(m)}}\) such that
\[\frac{\pi_{n}}{u_{n+j}^{p^{f}}}=[h_{1}](1+\pi_{n+j}(\cdots)),\]
where the coefficients belong to \({\mathcal{O}}_{K_{n+j}}\) and \(h_{1}\in k_{F^{(m)}}\).
**Theorem 2.6**.: _The extension \(K_{\infty}/K\) is strictly APF._
Proof.: In order to prove the theorem, it suffices by [3, Prop. 1.2.3] to prove that the extension \(F^{(m)}\cdot K_{\infty}/F^{(m)}\cdot K_{n_{0}}\) is strictly APF.
To prove that \(F^{(m)}\cdot K_{\infty}/F^{(m)}\cdot K_{n_{0}}\) is strictly APF, it suffices to prove that the \(v_{K}\) valuations of the non constant and non leading coefficients of the Eisenstein polynomial of \(\pi_{n+1}\) over \(F^{(m)}\cdot K_{n}\), for \(n\geq n_{0}\), are bounded below by a positive constant independent of \(n\), so that \(F^{(m)}\cdot K_{\infty}/F^{(m)}\cdot K_{n_{0}}\) satisfies the criterion of the main theorem (Thm 1.1) of [1]. Let \(n\geq n_{0}\).
By the lemma 2.5 and by induction, we can write
\[\pi_{n+1}=u_{n+j+1}^{p^{f}}([h_{0}]+u_{n+1+2j}^{p^{f}}([h_{1}]+\cdots))\]
where the \(h_{i}\) belong to \(k_{F^{(m)}}\).
Let \(g\in\mathcal{G}_{F^{(m)}.K_{n}}\). We have
\[g(\pi_{n+1})-\pi_{n+1}=g(u_{n+j+1}^{p^{f}})([h_{0}])-u_{n+j+1}^{p^{f}}([h_{0}])+\cdots\]
where all the terms on the RHS have \(v_{K}\)-valuation at least equal to \(c>0\) by lemma 2.4, so that \(v_{K}(g(\pi_{n+1})-\pi_{n+1})\geq c>0\).
The conjugates of \(\pi_{n+1}\) over \(K_{n}\) are the elements \(g(\pi_{n+1})\), for \(g\in\mathcal{G}_{K_{n}}\), and satisfy the conditions \(v_{K}(g(\pi_{n+1})-\pi_{n+1})\geq c>0\), which ensures that the \(v_{K}\) valuations of the non constant and non leading coefficients of the Eisenstein polynomial of \(\pi_{n+1}\) over \(F^{(m)}\cdot K_{n}\) are bounded below by a positive constant independent of \(n\), which is what we wanted.
## 3 Non-archimedean dynamical systems
Let \(K\) be a finite extension of \(\mathbf{Q}_{p}\), with ring of integers \(\mathcal{O}_{K}\), uniformizer \(\pi\), maximal ideal \(\mathfrak{m}_{K}\) and residual field \(k\) of cardinal \(q=p^{h}\). We let \(K_{0}=K\cap\mathbf{Q}_{p}^{\mathsf{nr}}\) be the maximal unramified extension of \(\mathbf{Q}_{p}\) inside \(K\) and we let \(\mathcal{O}_{K_{0}}\) denote its ring of integers. We let \(\mathbf{C}_{p}\) denote the \(p\)-adic completion of \(\overline{\mathbf{Q}}_{p}\). Let \(P,U\in T\cdot\mathcal{O}_{K}\llbracket T\rrbracket\) such that \(P\circ U=U\circ P\), with \(P^{\prime}(0)\in\mathfrak{m}_{K}\) and \(U^{\prime}(0)\in\mathcal{O}_{K}^{\times}\). In this note, we assume that the situation is "interesting", namely that \(P(T)\neq 0\mod\mathfrak{m}_{K}\) and that \(U^{\prime}(0)\) is not a root of unity.
**Proposition 3.1**.: _There exists a power series \(H(T)\in T\cdot k\llbracket T\rrbracket\) and an integer \(d\geq 1\) such that \(H^{\prime}(0)\in k^{\times}\) and \(P(T)=H(T^{p^{d}})\mod\mathfrak{m}_{K}\)._
Proof.: This is theorem 6.3 and corollary 6.2.1 of [10].
Near the end of his paper [11], Lubin remarked that "Experimental evidence seems to suggest that for an invertible series to commute with a noninvertible series, there must be a formal group somehow in the background." This has led some authors to prove some cases (see for instance [12], [12], [13], [14], [15], [16], [17], [18]) of this "conjecture" of Lubin. The various results obtained in this direction can be thought of as cases of the following conjecture:
**Conjecture 3.2**.: _Let \(P,U\in T\cdot\mathcal{O}_{K}\llbracket T\rrbracket\) such that \(P\circ U=U\circ P\), with \(P^{\prime}(0)\in\mathfrak{m}_{K}\) and \(U^{\prime}(0)\in\mathcal{O}_{K}^{\times}\) not a root of unity, and such that \(P(T)\neq 0\mod\mathfrak{m}_{K}\). Then there exists a finite extension \(E\) of \(K\), a formal group \(S\) defined over \(\mathcal{O}_{E}\), endomorphisms of this
formal group \(P_{S}\) and \(U_{S}\), and a power series \(h(T)\in T\cdot\mathcal{O}_{E}\llbracket T\rrbracket\) such that \(P\circ h=h\circ P_{S}\) and \(U\circ h=h\circ U_{S}\)._
**Remark 3.3**.: _While in many of the cases where this conjecture has been proven, the formal group is actually defined over \(\mathcal{O}_{K}\)[1, 1, 2], one can produce instances where the formal group is defined over the ring of integers of a finite unramified extension of \(\mathcal{O}_{K}\)[1, §3]. The author does not know of a case where the extension \(E\) the formal group is defined over is ramified over \(K\), so it might be possible that the assumption that \(E\) is an unramified extension of \(K\) can be enforced._
## 4 Endomorphisms of a formal Lubin-Tate group
Let \(P,U\in T\cdot\mathcal{O}_{K}\llbracket T\rrbracket\) such that \(P\circ U=U\circ P\), with \(P^{\prime}(0)\in\mathfrak{m}_{K}\) and \(U^{\prime}(0)\in\mathcal{O}_{K}^{\times}\) not a root of unity, and such that \(P(T)\neq 0\mod\mathfrak{m}_{K}\). In this section, we assume that there exists a finite extension \(E\) of \(K\), a Lubin-Tate formal group \(S\) defined over \(\mathcal{O}_{L}\) with \(E/L/K\) finite, a power series \(h\in T\cdot\mathcal{O}_{E}\llbracket T\rrbracket\) and an endomorphism \(P_{S}\) of \(S\) such that \(h\) is an isogeny from \(P_{S}\) to \(P\).
**Lemma 4.1**.: _There exists \(V\in T\cdot\mathcal{O}_{K}\llbracket T\rrbracket\), commuting with \(P\), and an integer \(d\geq 1\) such that \(Q(T)=T^{p^{d}}\mod\mathfrak{m}_{K}\) where \(Q=V\circ P\). Moreover, there exists \(Q_{S}\) endomorphism of \(S\) such that \(h\) is an isogeny from \(Q_{S}\) to \(Q\)._
Proof.: First note that to any invertible series \(V_{S}\) commuting with \(P_{S}\) there corresponds an invertible power series \(V\) commuting with \(P\). Since \(S\) is a formal Lubin-Tate group over \(\mathcal{O}_{L}\), \(P_{S}\) corresponds to the multiplication by an element \(\alpha\in\mathfrak{m}_{L}\). Let \([\varpi_{L}]\) denote the multiplication by \(\varpi_{L}\) on \(S\), where \(\varpi_{L}\) is a uniformizer of \(\mathcal{O}_{L}\) such that \([\varpi_{L}](T)=T^{\operatorname{Card}(k_{L})}\mod\mathfrak{m}_{L}\) (we can find such a uniformizer since \(S\) is a Lubin-Tate formal group defined over \(\mathcal{O}_{L}\)). Since \(\alpha\in\mathfrak{m}_{L}\), there exists \(c\in\mathcal{O}_{L}^{\times}\) and an integer \(d\geq 1\) such that \(\alpha=c\cdot\varpi_{L}^{d}\). In particular, we have \(\operatorname{wideg}([\alpha])=\operatorname{wideg}(P)=\operatorname{wideg}([\varpi_{L}^{d}])=\operatorname{Card}(k_{L})^{d}\).
We let \(V\) denote the power series commuting with \(P\) such that \(h\circ[c^{-1}]=V\circ h\). We then have that \(h\circ[c^{-1}]\circ[\alpha]=V\circ P\circ h\), and that \(h\circ[c^{-1}]\circ[\alpha]=h\circ[\varpi_{L}^{d}]\), so that \(h\) is an isogeny from \([\varpi_{L}^{d}]\) to \(Q:=V\circ P\). Reducing modulo \(\mathfrak{m}_{L}\), we get that
\[h(T)^{\operatorname{Card}(k_{L})^{d}}=h(T^{\operatorname{Card}(k_{L})^{d}})=h \circ Q\mod\mathfrak{m}_{L}\]
so that \(Q=T^{\operatorname{Card}(k_{L})^{d}}=T^{\operatorname{wideg}(P)}\mod\mathfrak{ m}_{L}\).
Let \((u_{n})_{n\in\mathbf{N}}\) be a sequence of elements of \(\overline{\mathbf{Q}}_{p}\) such that \(u_{0}\neq 0\) is a root of \(Q_{S}\), and \(Q_{S}(u_{n+1})=u_{n}\). In Lubin's terminology (see the definition on page 329 of [1]), the
sequence \((u_{n})\) is called a \(Q_{S}\)-consistent sequence. Let \(E_{n}=E(u_{n})\) and let \(E_{\infty}=\bigcup_{n}E_{n}\). Then for all \(n\geq 1\), the extensions \(E_{n}/E\) are Galois.
Let \(Q\) as in lemma 4.1 and let \(v_{n}:=h(u_{n})\).
**Lemma 4.2**.: _The sequence \((v_{n})_{n\in\mathbf{N}}\) is \(Q\)-consistent, and the extensions \(E(v_{n})/E\) are Galois for all \(n\geq 1\)._
Proof.: We know that \(E_{n}/E\) are Galois abelian extensions. Since \(E\subset E(v_{n})\subset E_{n}\), this implies that the extensions \(E(v_{n})/E\) are Galois. The fact that the sequence \((v_{n})_{n\in\mathbf{N}}\) is \(Q\)-consistent follows directly from the fact that \(h\) is an isogeny from \(Q_{S}\) to \(Q\).
## 5 Embeddings into rings of periods
Let \(L:=K_{n_{0}}\) with \(n_{0}\) as in proposition 2.1. Since \(P(T)=T^{p^{d}}\mod\mathfrak{m}_{K}\), there exists \(m\geq 1\) such that \(P^{\circ m}\) acts trivially on \(k_{L}\), so that the degree \(r\) of \(Q\) is a power of the cardinality of \(k_{L}\), and we let \(Q:=P^{\circ m}\) after having chosen such an \(m\). We let \(w_{0}=v_{n_{0}}\) and \((w_{n})\) be a sequence extracted from \((v_{n})\) such that \(Q(w_{n+1})=w_{n}\). Let \(L^{\prime}=\mathbf{Q}_{p}^{\mathrm{unr}}\cap L\) be the maximal unramified extension of \(\mathbf{Q}_{p}\) inside \(L\), and let \(\widetilde{\mathbf{A}}^{+}:=\mathcal{O}_{L}\otimes_{\mathcal{O}_{L^{\prime}}}W(\widetilde{\mathbf{E}}^{+})\).
Since \(K_{\infty}/L\) is strictly APF, there exists by [10, 4.2.2.1] a constant \(c=c(K_{\infty}/L)>0\) such that for all \(F\subset F^{\prime}\) finite subextensions of \(K_{\infty}/L\), and for all \(x\in\mathcal{O}_{F^{\prime}}\), we have
\[v_{L}(\frac{N_{F^{\prime}/F}(x)}{x^{[F^{\prime}:F]}}-1)\geq c.\]
We can always assume that \(c\leq v_{L}(p)/(p-1)\) and we do so in what follows. By §2.1 and §4.2 of [10], there is a canonical \(\mathcal{G}_{L}\)-equivariant embedding \(\iota_{L}:A_{L}(K_{\infty})\hookrightarrow\widetilde{\mathbf{E}}^{+}\), where \(A_{L}(K_{\infty})\) is the ring of integers of \(X_{L}(K_{\infty})\), the field of norms of \(K_{\infty}/L\). We can extend this embedding into a \(\mathcal{G}_{L}\)-equivariant embedding \(X_{L}(K_{\infty})\hookrightarrow\widetilde{\mathbf{E}}\), and we denote by \(\mathbf{E}_{K}\) its image.
It will also be convenient to have the following interpretation for \(\widetilde{\mathbf{E}}^{+}\):
\[\widetilde{\mathbf{E}}^{+}=\varprojlim_{x\to x^{p}}\mathcal{O}_{\mathbf{C}_{p }}=\{(x^{(0)},x^{(1)},\dots)\in\mathcal{O}_{\mathbf{C}_{p}}^{\mathbf{N}}\ :(x^{(n+1)})^{p}=x^{(n)}\}.\]
To see that this definition coincides with the one given in §1, we refer to [1, Prop. 4.3.1].
Note that, even though \(\mathbf{E}_{K}\) depends on \(K_{\infty}\) rather than on \(L\), it is still sensitive to \(L\):
**Proposition 5.1**.: _Let \(K^{\prime}\) be a finite extension of \(L\) contained in \(K_{\infty}\). Let \(K_{1}\) (resp. \(K_{1}^{\prime}\)) be the maximal tamely ramified extension of \(K_{\infty}/L\) (resp. \(K_{\infty}/K^{\prime}\)). Then as subfields
of \(\widetilde{\mathbf{E}}\), \(\mathbf{E}_{K^{\prime}}\) is a purely inseparable extension of \(\mathbf{E}_{K}\) of degree \([K^{\prime}_{1}:K_{1}]\). In particular, \(\mathbf{E}_{K_{1}}=\mathbf{E}_{K}\)._
Proof.: See [1, Prop. 4.14].
The sequence \((w_{n})\) defines an element \(\overline{w}\in\widetilde{\mathbf{E}}^{+}\).
**Proposition 5.2**.: _There exists a unique \(w\in\widetilde{\mathbf{A}}^{+}\) lifting \(\overline{w}\) such that \(Q(w)=\varphi_{r}(w)\). Moreover, we have that \(\theta\circ\varphi_{r}^{-n}(w)=w_{n}\)._
Proof.: This is the same proof as for the proposition 2.2.
For all \(k\geq 0\), we let
\[R_{k}:=\{x\in\widetilde{\mathbf{A}}^{+},\theta\circ\varphi_{r}^{-n}(x)\in \mathcal{O}_{L_{n+k}}\text{ for all }n\geq 1\}.\]
**Proposition 5.3**.: _For all \(k\geq 0\), there exists \(z_{k}\in R_{k}\) such that \(R_{k}=\mathcal{O}_{L}\llbracket z_{k}\rrbracket\)._
Proof.: Note that for all \(k\geq 0\), \(R_{k}\) is an \(\mathcal{O}_{L}\)-algebra, separated and complete for the \(\pi_{L}\)-adic topology, where \(\pi_{L}\) is a uniformizer of \(\mathcal{O}_{L}\). If \(x\in R_{k}\), then its image in \(\widetilde{\mathbf{E}}^{+}\) belongs to \(\varprojlim_{x\mapsto x^{r}}\mathcal{O}_{L_{n+k}}/\mathfrak{a}_{L_{n+k}}^{c}\).
Note that the natural map \(R_{k}/\pi_{L}R_{k}\to\widetilde{\mathbf{E}}^{+}\) is injective. To prove this, we need to prove that \(\pi_{L}\widetilde{\mathbf{A}}^{+}\cap R_{k}=\pi_{L}R_{k}\). Let \(x\in R_{k}\cap\pi_{L}\widetilde{\mathbf{A}}^{+}\) and let \(y\in\widetilde{\mathbf{A}}^{+}\) be such that \(x=\pi_{L}y\). Then since \(x\in R_{k}\) we have that \(\theta\circ\varphi_{r}^{-n}(x)\in\mathcal{O}_{L_{n+k}}\) and thus \(\theta\circ\varphi_{r}^{-n}(y)\in\frac{1}{\pi_{L}}\mathcal{O}_{L_{n+k}}\). But since \(\theta\circ\varphi_{r}^{-n}\) maps \(\widetilde{\mathbf{A}}^{+}\) into \(\mathcal{O}_{\mathbf{C}_{p}}\) we get that \(\theta\circ\varphi_{r}^{-n}(y)\in L_{n+k}\cap\mathcal{O}_{\mathbf{C}_{p}}= \mathcal{O}_{L_{n+k}}\). Therefore the natural map \(R_{k}/\pi_{L}R_{k}\to\widetilde{\mathbf{E}}^{+}\) is injective.
We know by the theory of field of norms that \(\lim\limits_{x\mapsto x^{r}}\mathcal{O}_{L_{n}}/\mathfrak{a}_{L_{n}}^{c}\simeq k _{L}\llbracket\overline{v}\rrbracket\) for some \(\overline{v}\in\widetilde{\mathbf{E}}^{+}\), so that the valuation induced by \(v_{L}\) on \(\widetilde{\mathbf{E}}^{+}\) is discrete on \(R/\pi_{L}R\). Let \(\overline{u}\in R/\pi_{L}R\) be an element of minimal valuation within
\[\{x\in R/\pi_{L}R,v_{L}(x)>0\}.\]
Since the valuation on \(R/\pi_{L}R\) is discrete, and since this set is nonempty because it contains the image of the element \(w\) given by proposition 5.2, such an element \(\overline{u}\) exists, and we have \(R/\pi_{L}R=k_{L}\llbracket\overline{u}\rrbracket\), so that \(R=\mathcal{O}_{L}\llbracket u\rrbracket\) for \(u\in R\) lifting \(\overline{u}\) since \(R\) is separated and complete for the \(\pi_{L}\)-adic topology.
**Proposition 5.4**.: _There exists \(k_{0}\geq 0\) such that, for all \(k\geq k_{0}\), we can take \(z_{k+1}=\varphi_{r}^{-1}(z_{k})\) and we let \(z=z_{k_{0}}\)._
Proof.: The proof of proposition 5.3 shows that \(R_{k}/\pi_{L}R_{k}\) injects into \(\varprojlim_{x\mapsto x^{r}}\mathcal{O}_{L_{n+k}}/\mathfrak{a}_{L_{n+k}}^{c}\). By [1, Prop. 4.2.1], \(\varprojlim_{x\mapsto x^{r}}\mathcal{O}_{L_{n+k}}/\mathfrak{a}_{L_{n+k}}^{c}\) is the image of the ring of integers of the field of
norms of \(L_{\infty}/L_{k}\) inside \(\widetilde{\mathbf{E}}\) by the embedding \(\iota_{L}\), and we will denote \(\underset{x\mapsto x^{r}}{\lim}\mathcal{O}_{L_{n+k}}/\mathfrak{a}_{L_{n+k}}^{c}\) by \(Y_{k}\). We normalize the valuation of \(Y_{k}\) so that \(v_{Y_{k}}(Y_{k})=\mathbf{Z}\). By proposition 5.1, we get that for \(k\geq n_{0}\), we have \(Y_{k+1}=\varphi_{r}^{-1}(Y_{k})\) and thus the valuation \(v_{Y_{k+1}}\) is equal to \(rv_{Y_{k}}\).
Now let \(v(k):=v_{Y_{k}}(\overline{z_{k}})\) for \(k\geq 0\). We know by definition of the sets \(R_{k}\) that \(\varphi_{r}^{-1}(z_{k})\in R_{k+1}\) for all \(k\geq 1\) and thus \(v_{Y_{k+1}}(\overline{z_{k+1}})\leq rv_{Y_{k}}(\varphi_{r}^{-1}(\overline{z_{ k}}))\) by construction of the \(z_{k}\). This implies that the sequence \((v(k))_{k\geq n_{0}}\) is nonincreasing, and since it is bounded below by \(1\), this implies that there exists some \(k_{0}\geq n_{0}\) such that, for all \(k\geq k_{0}\), we have \(v(k)=v(k_{0})>0\). Thus for all \(k\geq k_{0}\) we have \(v_{Y_{k+1}}(\overline{z_{k+1}})=v_{Y_{k}}(\overline{z_{k}})\) and by construction of the \(z_{k}\) this implies that we can take \(z_{k+1}=\varphi_{r}^{-1}(z_{k})\) which concludes the proof.
We now let \(k_{0}\) be as in proposition 5.4. Note that in particular, for all \(k\geq k_{0}\), we have \(R_{k}=\varphi_{r}^{k_{0}-k}(\mathcal{O}_{L}[\![z]\!])=\mathcal{O}_{L}[\![ \varphi_{r}^{k_{0}-k}(z)]\!]\).
**Lemma 5.5**.: _The ring \(\mathcal{O}_{L}[\![z]\!]\) is stable by \(\varphi_{r}\). Moreover, there exists \(a\in\mathfrak{m}_{L}\) such that if \(z^{\prime}=z-a\) then there exists \(S(T)\in T\cdot\mathcal{O}_{L}[\![T]\!]\) such that \(S(z^{\prime})=\varphi_{r}(z^{\prime})\) and \(S(T)\equiv T^{r}\mod\mathfrak{m}_{L}\)._
Proof.: The set
\[\left\{x\in\widetilde{\mathbf{A}}^{+},\theta\circ\varphi_{r}^{-n}(x)\in \mathcal{O}_{L_{n+k_{0}}}\text{ for all }n\geq 1\right\}\]
is clearly stable by \(\varphi_{r}\) and equal to \(\mathcal{O}_{L}[\![z]\!]\) by proposition 5.4, so that \(\varphi_{r}(z)\in\mathcal{O}_{L}[\![z]\!]\) and so there exists \(R\in\mathcal{O}_{L}[\![T]\!]\) such that \(R(z)=\varphi_{r}(z)\). In particular, we have \(\overline{R}(\overline{z})=\overline{z}^{r}\) and so \(R(T)\equiv T^{r}\mod\mathfrak{m}_{L}\).
Now let \(\widetilde{R}(T)=R(T+a)\) with \(a\in\mathfrak{m}_{L}\) and let \(z^{\prime}=z-a\). Then \(\varphi_{r}(z^{\prime})=\varphi_{r}(z-a)=R(z)-a=\widetilde{R}(z^{\prime})-a\) and we let \(S(T)=\widetilde{R}(T)-a\) so that \(\varphi_{r}(z^{\prime})=S(z^{\prime})\). For \(S(0)\) to be \(0\), it suffices to find \(a\in\mathfrak{m}_{L}\) such that \(R(a)=a\). Such an \(a\) exists since we have \(R(T)\equiv T^{r}\mod\mathfrak{m}_{L}\) so that the Newton polygon of \(R(T)-T\) starts with a segment of length \(1\) and of slope \(-v_{p}(R(0))\).
Now, we have \(S(z^{\prime})=\varphi_{r}(z^{\prime})\) and so \(\overline{S}(\overline{z^{\prime}})=\overline{z^{\prime}}^{r}\), so that \(S(T)\equiv T^{r}\mod\mathfrak{m}_{L}\).
Lemma 5.5 shows that one can choose \(z\in\left\{x\in\widetilde{\mathbf{A}}^{+},\theta\circ\varphi_{r}^{-n}(x)\in \mathcal{O}_{L_{n+k_{0}}}\text{ for all }n\geq 1\right\}\) such that \(\varphi_{r}(z)=S(z)\) with \(S(T)\in T\cdot\mathcal{O}_{L}[\![T]\!]\), and we will assume in what follows that such a choice has been made.
**Lemma 5.6**.: _Assume that there exists \(m_{0}\geq 0\) such that for all \(m\geq m_{0}\), the extension \(L_{m}/L_{m_{0}}\) is Galois. Then the ring \(\mathcal{O}_{L}[\![z]\!]\) is stable under the action of \(\operatorname{Gal}(K_{\infty}/L_{m_{0}})\), and if \(g\in\operatorname{Gal}(K_{\infty}/L_{m_{0}})\), there exists a power series \(H_{g}(T)\in\mathcal{O}_{L}[\![T]\!]\) such that \(g(z)=H_{g}(z)\)._
Proof.: Let \(f_{0}=\max(m_{0},k_{0})\). Since for all \(m\geq m_{0}\), \(L_{m}/L_{m_{0}}\) is Galois, the set
\[\left\{x\in\widetilde{\mathbf{A}}^{+},\theta\circ\varphi_{r}^{-n}(x)\in\mathcal{ O}_{L_{n+f_{0}}}\text{ for all }n\geq 1\right\}\]
is stable under the action of \(\operatorname{Gal}(K_{\infty}/L_{m_{0}})\), and by proposition 5.4, this set is equal to \(\mathcal{O}_{L}[\![\varphi_{r}^{k_{0}-f_{0}}(z)]\!]\). In particular, if \(g\in\operatorname{Gal}(K_{\infty}/L_{m_{0}})\), then \(g(\varphi_{r}^{k_{0}-f_{0}}(z))\in\mathcal{O}_{L}[\![\varphi_{r}^{k_{0}-f_{0}} (z)]\!]\) and so there exists \(H_{g}(T)\in\mathcal{O}_{L}[\![T]\!]\) such that \(H_{g}(\varphi_{r}^{k_{0}-f_{0}}(z))=g(\varphi_{r}^{k_{0}-f_{0}}(z))\), and thus \(H_{g}(z)=g(z)\).
## 6 \(p\)-adic Hodge theory
Let us assume that there exists \(m_{0}\geq 0\) such that for all \(m\geq m_{0}\), the extension \(L_{m}/L_{m_{0}}\) is Galois. Lemma 5.6 shows that in this case we are in exactly the same situation as the one following lemma 5.15 of [10]. In particular, the same techniques apply.
We keep the notation from §4 and we let \(\kappa:\operatorname{Gal}(K_{\infty}/L_{m_{0}})\longrightarrow\mathcal{O}_{L}^{\times}\) denote the character \(g\mapsto H_{g}^{\prime}(0)\).
**Proposition 6.1**.: _The character \(\kappa:\operatorname{Gal}(K_{\infty}/L_{m_{0}}){\longrightarrow}\mathcal{O}_{ L}^{\times}\) is injective and crystalline with nonnegative weights._
Proof.: This is the same as corollary 5.17 and proposition 5.19 of [10].
For \(\lambda\) a uniformizer of \(L_{m_{0}}\), let \((L_{m_{0}})_{\lambda}\) be the extension of \(L_{m_{0}}\) attached to \(\lambda\) by local class field theory. This extension is generated by the torsion points of a Lubin-Tate formal group defined over \(L_{m_{0}}\) and attached to \(\lambda\), and we write \(\chi_{\lambda}^{L_{m_{0}}}\ :\operatorname{Gal}((L_{m_{0}})_{\lambda}/L_{m_{0}}) \rightarrow\mathcal{O}_{L_{m_{0}}}^{\times}\) the corresponding Lubin-Tate character. Since \(K_{\infty}/L_{m_{0}}\) is abelian and totally ramified, there exists \(\lambda\) a uniformizer of \(\mathcal{O}_{L_{m_{0}}}\) such that \(K_{\infty}\subset(L_{m_{0}})_{\lambda}\).
**Proposition 6.2**.: _There exists \(F\subset L\) and \(r\geq 1\) such that \(\kappa=N_{L_{m_{0}}/F}(\chi_{\lambda}^{L_{m_{0}}})^{r}\)._
Proof.: Theorem 5.27 of [10] shows that there exists \(F\subset L_{m_{0}}\) and \(r\geq 1\) such that \(\kappa=N_{L_{m_{0}}/F}(\chi_{\lambda}^{L_{m_{0}}})^{r}\). The fact that \(\kappa\) takes its values in \(\mathcal{O}_{L}^{\times}\) shows that \(F\) is actually a subfield of \(L\).
Recall that relative Lubin-Tate groups, introduced by de Shalit in [11], are a generalization of the usual Lubin-Tate formal groups.
**Theorem 6.3**.: _There exists \(F\subset L\) and \(r\geq 1\) such that \(\kappa=N_{L/F}(\chi_{\lambda}^{L})^{r}\). Moreover, there exists a relative Lubin-Tate group \(S\), relative to the extension \(F^{\rm unr}\cap L\) of \(F\), such
that if \(L^{S}_{\infty}\) is the extension of \(L\) generated by the torsion points of \(S\), then \(L_{\infty}\subset L^{S}_{\infty}\) and \(L^{S}_{\infty}/L_{\infty}\) is a finite extension._
Proof.: This is the same as [10, Thm. 5.28] using proposition 6.2 instead of theorem 5.27 of ibid.
## 7 Isogenies
By theorem 6.3, there exists \(F\subset L\) and a relative Lubin-Tate group \(S\), relative to the extension \(F^{\mathrm{unr}}\cap L\) of \(F\), such that if \(L^{S}_{\infty}\) is the extension of \(L\) generated by the torsion points of \(S\), then \(L_{\infty}\subset L^{S}_{\infty}\) and \(L^{S}_{\infty}/L_{\infty}\) is a finite extension.
Let \(\alpha\) be an element of \(F^{\mathrm{unr}}\cap L\) such that \(L^{S}_{\infty}\) is the subfield of \(F^{\mathrm{ab}}\) cut out by \(\langle\alpha\rangle\) via local class field theory, so that the relative Lubin-Tate group \(S\) is attached to \(\alpha\). Up to replacing \(L\) by a finite extension, we can assume that \(L^{S}_{\infty}=L_{\infty}\) and we do so in what follows. We let \(u_{0}=0\) and let \((u_{n})_{n\in\mathbf{N}}\) be a nontrivial compatible sequence of roots of iterates of \([\alpha]\), the endomorphism of \(S\) corresponding to multiplication by \(\alpha\), so that \([\alpha](u_{n+1})=u_{n}\) with \(u_{1}\neq 0\). We let \(q\) denote the cardinality of the residue field of \(F^{\mathrm{unr}}\cap L\), so that \(\mathrm{wideg}([\alpha])=q\). Let \(\overline{u}=(u_{0},\ldots)\in\widetilde{\mathbf{E}}^{+}\). By §9.2 of [10], there exists \(u\in\widetilde{\mathbf{A}}^{+}\) whose image in \(\widetilde{\mathbf{E}}^{+}\) is \(\overline{u}\) and such that \(\varphi_{q}(u)=[\alpha](u)\) and \(g(u)=[\chi_{\alpha}(g)](u)\) for \(g\in\mathcal{G}_{L}\).
Recall that Cais and Davis have defined a "canonical ring" attached to \(L_{\infty}/L\), denoted by \(\mathbf{A}^{+}_{L_{\infty}/L}\) which is a subring of \(\widetilde{\mathbf{A}}^{+}\) and is defined _via_ the tower of elementary extensions attached to \(L_{\infty}/L\) by ramification theory. The following lemma shows that this canonical ring is related to the ring \(\mathcal{O}_{L}\llbracket u\rrbracket\) for the extension \(L_{\infty}/L\):
**Lemma 7.1**.: _There exists \(k\geq 0\) such that \(\mathbf{A}^{+}_{L_{\infty}/L}=\varphi_{q}^{-k}(\mathcal{O}_{L}\llbracket u \rrbracket)\)._
Proof.: See [10, Lemm. 8.1]. Be mindful that here \(L\) and \(u\) play respectively the role of \(E\) and \(w\) in ibid.
Recall that \((w_{n})_{n\in\mathbf{N}}\) is a \(Q\)-consistent sequence, where \(Q\) commutes with \(P\) and is such that \(Q(T)=T^{s}\mod\mathfrak{m}_{L}\), and that \(w\in\widetilde{\mathbf{A}}^{+}\) is such that \(\theta\circ\varphi_{r}^{-n}(w)=w_{n}\).
**Proposition 7.2**.: _There exists \(i\geq 0\) such that \(\varphi_{r}^{i}(w)\in\mathbf{A}^{+}_{L_{\infty}/L}\)._
Proof.: The proof is exactly the same as in [10, Prop. 8.2].
**Proposition 7.3**.: _There exists \(d\geq 1\) such that there is an isogeny from \([\alpha^{d}]\) to \(Q\)._
Proof.: Lemma 7.1 and proposition 7.2 show that there exist \(i\geq 0\) and \(h(T)\in\mathcal{O}_{L}\llbracket T\rrbracket\) such that \(w=h(\varphi_{r}^{-i}(u))\). Let \(d\) be such that \(\varphi_{r}=\varphi_{q}^{\circ d}\) and let \(\widetilde{u}=\varphi_{r}^{-i}(u)\), so that \(w=h(\widetilde{u})\). We have \(\varphi_{r}(w)=Q(w)\), so that \(Q(w)=\varphi_{r}(w)=\varphi_{r}(h(\widetilde{u}))=h(\varphi_{r}(\widetilde{u}))\), and thus \(Q\circ h(\widetilde{u})=h\circ[\alpha^{d}](\widetilde{u})\), which means that \(Q\circ h=h\circ[\alpha^{d}]\).
**Theorem 7.4**.: _Let \((P,U)\) be a couple of power series in \(T\cdot\mathcal{O}_{K}\llbracket T\rrbracket\) such that \(P\circ U=U\circ P\), with \(P^{\prime}(0)\in\mathfrak{m}_{K}\) and \(U^{\prime}(0)\in\mathcal{O}_{K}^{\times}\), and we assume that \(P(T)\neq 0\mod\mathfrak{m}_{K}\) and that \(U^{\prime}(0)\) is not a root of unity. Then there exists a finite extension \(E\) of \(K\), a Lubin-Tate formal group \(S\) defined over \(\mathcal{O}_{L}\), where \(E/L\) is a finite extension, endomorphisms of this formal group \(P_{S}\) and \(U_{S}\) over \(\mathcal{O}_{E}\), and a power series \(h(T)\in T\mathcal{O}_{E}\llbracket T\rrbracket\) such that \(P\circ h=h\circ P_{S}\) and \(U\circ h=h\circ U_{S}\) if and only if the following two conditions are satisfied:_
1. _there exists_ \(V\in T\cdot\mathcal{O}_{K}\llbracket T\rrbracket\)_, commuting with_ \(P\)_, and an integer_ \(d\geq 1\) _such that_ \(Q(T)=T^{p^{d}}\mod\mathfrak{m}_{K}\) _where_ \(Q=V\circ P\) _;_
2. _there exists a finite extension_ \(E\) _of_ \(K\) _and a sequence_ \((\alpha_{n})_{n\in\mathbf{N}}\) _where_ \(\alpha_{0}\neq 0\) _is a root of_ \(Q\) _and_ \(Q(\alpha_{n+1})=\alpha_{n}\) _such that for all_ \(n\geq 1\)_, the extension_ \(E(\alpha_{n})/E\) _is Galois._
Proof.: Lemmas 4.1 and 4.2 of §2 imply that if such a Lubin-Tate formal group exists, then the two conditions are satisfied.
If those two conditions are satisfied, then proposition 7.3 shows that there exists a finite extension \(E\) of \(K\), a subfield \(F\) of \(E\), a relative Lubin-Tate group \(S\), relative to the extension \(F^{\mathrm{unr}}\cap E\) of \(F\), and an endomorphism \(Q_{S}\) of \(S\) such that there exists an isogeny from \(Q_{S}\) to \(Q\). Thus there exists an isogeny from an endomorphism \(P_{S}\) of \(S\) to \(P\). In order to conclude, it suffices to notice that a relative Lubin-Tate formal group \(S\), relative to an extension \(F^{\mathrm{unr}}\cap E\) of \(F\) is actually isomorphic over \(F^{\mathrm{unr}}\cap E\) to a Lubin-Tate formal group \(S^{\prime}\) defined over \(F\).
## 8 A particular case of Lubin's conjecture
We now apply the results from the previous sections to the particular case where \(P(T)=T^{p}\mod\mathfrak{m}_{K}\). Let \(P,U\in T\cdot\mathcal{O}_{K}\llbracket T\rrbracket\) be such that \(P\circ U=U\circ P\), with \(P(T)=T^{p}\mod\mathfrak{m}_{K}\) and \(U^{\prime}(0)\in\mathcal{O}_{K}^{\times}\) not a root of unity. We consider as in §3 a \(P\)-consistent sequence \((v_{n})\) and we let \(K_{n}=K(v_{n})\) for \(n\geq 0\). We let \(n_{0}\) be as in proposition 2.1.
**Proposition 8.1**.: _There exists \(m_{0}\geq 0\) such that for all \(m\geq m_{0}\), the extension \(K_{m}/K_{m_{0}}\) is Galois._
Proof.: By [1, Prop. 3.2], the roots of iterates of \(P\) are exactly the fixed points of the iterates of \(U\). Up to replacing \(U\) by some power of \(U\), we can assume that \(U^{\prime}(0)=1\)
mod \(\mathfrak{m}_{K}\) and that there exists \(n\geq n_{0}\) such that \(U(v_{n})=v_{n}\) but \(U(v_{n+1})\neq v_{n+1}\) (since \(U(T)-T\) admits only a finite number of roots in the unit disk).
Since \(U(v_{n})=v_{n}\) and \(U\) commutes with \(P\), this implies that \(U(v_{n+1})\) is also a root of \(P(T)-v_{n}\). The discussion on page 333 of [10] shows that the set \(\{U^{\circ k}(v_{n+1})\}_{k\in\mathbf{N}}\) has cardinality a power of \(p\), and its cardinality is not \(1\) since \(U(v_{n+1})\neq v_{n+1}\) by assumption. Since \(P(T)-v_{n}\) has exactly \(p\) roots, this implies that the set \(\{U^{\circ k}(v_{n+1})\}_{k\in\mathbf{N}}\) has cardinality \(p\), and thus all the roots of \(P(T)-v_{n}\) are contained in \(K_{n+1}\), so that \(K_{n+1}/K_{n}\) is Galois.
Let \(m>n\). The extension \(K_{m}/K_{n}\) is generated by all the roots of \(P^{\circ(m-n)}(T)-v_{n}=P^{\circ(m-n)}(T)-U(v_{n})\). Since \(U\) permutes all the roots of \(P(T)-v_{n}\), it is easy to see that the \(U\)-orbit \(\{U^{\circ k}(v_{m})\}_{k\geq 0}\) contains all the roots of \(P^{\circ(m-n)}(T)-v_{n}\), so that \(K_{m}/K_{n}\) is Galois. This proves the proposition.
We are now in the setting of theorem 7.4, which yields the following:
_Corollary 8.2_.: _Lubin's conjecture is true for \((P,U)\)._
|
2309.03282 | How galaxy properties vary with filament proximity in the SIMBA
simulations | We explore the dependence of global galaxy properties in the SIMBA simulation
as a function of distance from filaments identified using DisPerSe. We exclude
halos with mass $M_h>10^{13}M_\odot$ to mitigate the impact of group and
cluster environments. Galaxies near filaments are more massive and have more
satellites, which we control for by examining deviations from best-fit scaling
relations. At $z=0$, star formation (SF) is significantly suppressed within
$\lesssim 100$ kpc of filaments, more strongly for satellites, indicating
substantial pre-processing in filaments. By $z=2$, the trend is weak and if
anything indicates an increase in SF activity close to filaments. The
suppression at $z\lesssim 1$ is accompanied by lowered \HI fractions, and
increased metallicities, quenched fractions, and dispersion-dominated systems.
$H_2$ fractions are not strongly suppressed when controlling for stellar mass,
suggesting that star formation efficiency drives the drop in SF. By comparing
amongst different SIMBA feedback variant runs, we show that the majority of SF
suppression owes to filamentary shock-heating, but there is a non-trivial
additional effect from AGN feedback. When looking around massive
($M_h>10^{13}M_\odot$) halos, those galaxies near filaments behave somewhat
differently, indicating that filaments provide an additional environmental
effect relative to halos. Finally, we compare SIMBA results to EAGLE and
IllustrisTNG at $z=0$, showing that all models predict SF suppression within
$\lesssim 100$ kpc of filaments, nonetheless, detailed differences may be
observationally testable. | Teodora-Elena Bulichi, Romeel Dave, Katarina Kraljic | 2023-09-06T18:00:59Z | http://arxiv.org/abs/2309.03282v2 | # How galaxy properties vary with filament proximity in the Simba simulations
###### Abstract
We explore the dependence of global galaxy properties in the Simba simulation as a function of distance from filaments identified using DisPerSE. We exclude halos with mass \(M_{h}>10^{13}M_{\odot}\) to mitigate the impact of group and cluster environments. Galaxies near filaments are more massive and have more satellites, which we control for by examining deviations from best-fit scaling relations. At \(z=0\), star formation (SF) is significantly suppressed within \(\la 100\) kpc of filaments, more strongly for satellites, indicating substantial pre-processing in filaments. By \(z=2\), the trend is weak and if anything indicates an increase in SF activity close to filaments. The suppression at \(z\la 1\) is accompanied by lowered H i fractions, and increased metallicities, quenched fractions, and dispersion-dominated systems. \(H_{2}\) fractions are not strongly suppressed when controlling for stellar mass, suggesting that star formation efficiency drives the drop in SF. By comparing amongst different Simba feedback variant runs, we show that the majority of SF suppression owes to filamentary shock-heating, but there is a non-trivial additional effect from AGN feedback. When looking around massive (\(M_{h}>10^{13}M_{\odot}\)) halos, those galaxies near filaments behave somewhat differently, indicating that filaments provide an additional environmental effect relative to halos. Finally, we compare Simba results to EAGLE and IllustrisTNG at \(z=0\), showing that all models predict SF suppression within \(\la 100\) kpc of filaments, nonetheless, detailed differences may be observationally testable.
keywords: cosmology: large-scale structure of Universe - galaxies: evolution - methods: numerical
## 1 Introduction
The Universe on large scales comprises a network of galaxies, gas and dark matter forming the so-called cosmic web (e.g. Bond et al., 1996; Aragon-Calvo et al., 2010). This large-scale structure (LSS) consisting of void regions, sheet-like walls, filaments, and nodes is predicted by Zel'dovich's model for the gravitational collapse of ellipsoidal fluctuations in the matter density field (Zel'dovich, 1970a,b). The features of the cosmic web have been brought to light via systematic galaxy redshift surveys (e.g. de Lapparent et al., 1986; Geller & Huchra, 1989; Colless et al., 2001; Tegmark et al., 2004), and are also supported by simulations which predict the hierarchical formation of voids, walls and filaments assuming the well-established cold dark matter (CDM) paradigm (e.g. Springel, 2005).
Within the cosmic web, galaxies continuously grow and evolve, their properties being strongly correlated with their local environments. Denser environments show over-abundances of massive halos due to the enhanced dark matter densities and the proto-halo's earlier collapse (e.g. Bond et al., 1996), which favours the formation of massive galaxies. Massive galaxies are today known to be predominantly elliptical and red, thus giving rise to the long observed morphology-density and colour-density relations (Dressler, 1980; Postman & Geller, 1984; Dressler, 1986; Kauffmann et al., 2004; Baldry et al., 2006; for reviews see Boselli & Gavazzi, 2006, 2014).
Traditionally, environmental effects have been studied by contrasting galaxies within groups and clusters versus those in the "field". In such studies, the environment is a proxy for halo mass, with dense environments representing massive halos with virial mass \(\ga 10^{13}M_{\odot}\). Yet the field population itself may not be homogeneous in terms of its environmental dependence. For instance, field galaxies within a filamentary environment could have enhanced galaxy growth due to the greater availability of gas relative to void regions, or else could be retarded if that gas were shock-heated on large-scale structure. The filamentary web may be the site of "pre-processing", in which galaxy properties are altered prior to entering into group and cluster environments (e.g. Fujita, 2004; Wetzel et al., 2013). Disentangling these effects is important for fully characterising the role of the environment in galaxy evolution.
However, identifying the imprint of the cosmic web on galaxy properties beyond the dominant effect of local density and mass has proven to be a daunting task. Early observational works struggled to find clear evidence of such a signature. Among them, Alpaslan et al. (2015) found that galaxies' properties are primarily influenced by stellar mass rather than the environment. Similarly, Eardley et al. (2015) suggested that the observed cosmic web environmental effects on galaxy properties can be explained solely by their corresponding local densities. These contradictory results may be partly explained by the inability to properly distinguish between the effects of present local densities and past large-scale environments, given their strong correlation. To sort out these issues, it is of crucial importance to distinguish between mass- and environment-driven effects, as well as to clearly separate group- and cluster-like environments from large-scale cosmic web features.
From galaxy surveys there is substantial evidence that galaxies close to filaments are more massive and show lower levels of star formation. This has been shown using the Sloan Digital Sky Survey (SDSS; Abazajian et al., 2009) by Chen et al. (2017); Kuutma et al. (2017); Poudel et al. (2017), using the VIMOS Public Extragalactic Redshift Survey Multi-\(\lambda\) Survey (VIPERS-MLS; Moutard et al., 2016; Scodeggio et al., 2018) by Malavasi et al. (2017), using COSMOS-2015 (Laigle et al., 2016) by Laigle et al. (2018), using the Galaxy and Mass Assembly survey (GAMA; Driver et al., 2009) by Alpaslan et al. (2015); Kraljic et al. (2018), and using the WISExSuperCOSMOS survey (WISExSCOS; Bilicki et al., 2016) by Bonjean et al. (2020) (but see Darvish et al., 2014; Vulcani et al., 2019, for observational results finding enhanced levels of (specific) star formation in the proximity of filaments). These mass and SFR trends have also been supported by several studies focused on cosmic voids showing that galaxies residing within them tend to be less massive, bluer, and more star-forming (e.g. Grogin & Geller, 2000; Rojas et al., 2004; Kreckel et al., 2011; Hoyle et al., 2012; Beygu et al., 2016) compared to higher density environments (but see Kreckel et al., 2015; Ricciardelli et al., 2014; Wegner et al., 2019, for claims of no significant impact of the void environment on galaxy properties).
Galaxy formation simulations within a cosmological context should naturally yield such environmental trends as a consequence of the interplay between galaxy accretion and large-scale structure. However, again, the results are mixed. Kraljic et al. (2018) and Malavasi et al. (2022) investigated galaxy properties near filaments using the HorizonAGN (Dubois et al., 2014) and IllustrisTNG (Pillepich et al., 2018) simulations, respectively, and generally reported suppressed star formation in agreement with some observational results. On the other hand, when looking at high-z massive dense filaments and dwarf galaxies, Zheng et al. (2022) found a slight increase in the SFR, using the Auriga simulations (Grand et al., 2017). Additionally, Kotecha et al. (2022) found that galaxies close to filaments tend to be more star-forming when looking at simulated clusters from the Three Hundred Project (Cui et al., 2018). This proves once again that, apart from the different prescriptions of simulations, the sample selection and environment classification play a vital role in such analyses too. Reconciling all these results likely requires considering survey selection effects and the specific techniques used to characterise filamentary structure, but in principle, the properties of galaxies within the filamentary large-scale structure should provide a novel test of galaxy formation models.
Other properties have also been investigated in terms of the cosmic web environment. Kuutma et al. (2017); Poudel et al. (2017) showed, using SDSS, that at fixed environmental density the elliptical fraction is higher close to filaments. Poudel et al. (2017) proposed that the differences in galaxy star formation properties result from the higher abundances of elliptical galaxies close to filaments. A similar result was also observed by Castignani et al. (2022), when looking out to 12 virial radii from the Virgo cluster. Salerno et al. (2020), using the Six Degree Field Galaxy Survey (6dFGS, Jones et al., 2004), found that galaxies arriving at clusters by following filaments are more quenched than galaxies that accrete onto clusters isotropically (see also Gouin et al., 2020 and Malavasi et al., 2022). While these results suggest that the morphology-density and colour-density relations are established during the pre-processing phase, they are still focused on the vicinity of massive halos rather than the full cosmic web.
Gas and metal content have also been explored in terms of the filamentary web. For the atomic hydrogen content in galaxies (H i), Kleiner et al. (2017) and Crone Odekon et al. (2018) reported different results. Kleiner et al. (2017) showed, using the 6dFGS, that galaxies with \(M_{*}>10^{11}M_{\odot}\) show a higher H i to stellar mass ratio (H i fraction) near filaments, while no trend is observed for lower-mass galaxies. Crone Odekon et al. (2018) used the ALFALFA H i survey (Giovanelli et al., 2005) to show that the H i fraction increases with increasing distance from filaments, at fixed local density and stellar mass. These last results are supported by Castignani et al. (2022), who added that the molecular hydrogen (H\({}_{2}\)) does not show a clear trend with respect to the distance from filaments. For metallicity, this has so far been explored in only a few studies, both observational and in simulations, with Winkel et al. (2021) reporting that centrals in SDSS are more metal-enriched close to cosmic web structures and Donnan et al. (2022) showing via IllustrisTNG simulations and SDSS that both the gas-phase and stellar metallicities are higher for galaxies closer to filaments and nodes.
One way forward to make sense of these controversies is to use state-of-the-art galaxy formation simulations to disentangle these effects. In this context, state-of-the-art refers to simulations that reproduce the global galaxy trends in star formation rate, gas content, and metallicity versus mass; restricting to such models then provides a plausible baseline for teasing out subtle environmental effects. To this end, this study explores how the scalar galaxy properties vary with respect to the distance to the closest filament identified using the Discrete Persistent Structures Extractor (DisPerSE; Sousbie, 2011; Sousbie et al., 2011) within the Simba (Dave et al., 2019) simulation suite. Simba reproduces many global trends related to SFR, quenched fractions, H i content, and many other global properties (e.g. Dave et al., 2019, 2020). In this first paper, we quantify trends in 3-D space (i.e. not accounting for redshift space distortions) using all resolved Simba galaxies to better understand the intrinsic impact of filaments on galaxy properties. We examine stellar mass (\(M_{*}\)), specific SFR (sSFR), H i and H\({}_{2}\) fractions, metallicity, and quenched fractions versus distance to the nearest filament. We focus on galaxies outside of massive halos (\(M_{h}\leq 10^{13}M_{\odot}\)) in order to restrict ourselves to a classical field galaxy sample. We study both intrinsic quantities versus distance to filament, as well as departures from the "main sequence" of these quantities vs. \(M_{*}\) in order to control for the fact that galaxies near filaments are more massive. In addition, we use Simba's feedback variants to understand which trends come from large-scale structure versus particular feedback processes (as modeled in Simba). Finally, we apply the same procedure to the EAGLE (Schaye et al., 2015) and IllustrisTNG (Pillepich et al., 2018; Genel et al., 2018) simulations to determine whether the trends seen with Simba are robust to variations in galaxy formation model. We leave for future work a redshift-space comparison to observations, applying observational selection and uncertainties to robustly quantify constraints on galaxy formation models.
The rest of the paper is structured as follows: §2 describes the tools and methods implemented in this study; §3 presents the trends of the galaxy properties of interest with respect to their proximity to filaments, while §4 further investigates the deviations of these properties from their scaling relations with \(M_{*}\) as a function of distance from filaments; §5 and §6 compare the impact of Simba's feedback variants and of massive halos, respectively, with the impact of the cosmic web; §7 compares our main findings with the results of the EAGLE and IllustrisTNG simulations, and §8 provides a summary of this study.
We adopt the cosmological parameters of the Planck Collaboration (Planck Collaboration et al., 2016), as implemented in Simba: \(\Omega_{m}=0.3\), \(\Omega_{\Lambda}=0.7\), \(\Omega_{b}=0.048\), \(H_{0}=68\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\), \(\sigma_{8}=0.82\) and \(n_{s}=0.97\).
## 2 Simulations and Analysis
In this section, we discuss the methods employed throughout our study in order to obtain the relevant galaxy properties and cosmic web features.
### Simba
We use the large-scale cosmological hydrodynamical Simba simulations (Dave et al., 2019) for this work. Simba builds upon its predecessor Mufasa (Dave et al., 2016), which uses the Meshless Finite Mass version of the Gizmo code (Hopkins, 2015), together with the GADGET-3 tree-particle-mesh gravity solver (Springel, 2005). We refer the reader to Dave et al. (2019) for a full description and here summarise the features relevant to this work.
Simba models non-equilibrium cooling from primordial elements along with metal line cooling using Grackle-3.1 (Smith et al., 2017), employing a spatially uniform photo-ionising background attenuated with a simple prescription for self-shielding in dense regions. The chemical enrichment module makes use of yield tables for Type II supernovae (SNII, Nomoto et al., 2006), Type Ia supernovae (SNIa, Iwamoto et al., 1999) and asymptotic giant branch (AGB) stars (Oppenheimer & Dave, 2006). Using the metallicity and local column density, the H\({}_{2}\) fraction in each gas element is computed via the subgrid recipe from Krumholz & Gnedin (2011). Star formation then proceeds assuming a Schmidt (1959) law, with 2% of the H\({}_{2}\) mass being converted into stars in a local dynamical time and a minimum density of \(n_{H}>0.13\) cm\({}^{-3}\) required for star formation to occur. Galactic winds, putatively driven by SNII, are modeled in a kinetic manner, with kick probability and velocity assigned to roughly mimic scalings with galaxy stellar mass as predicted by the Feedback in Realistic Environments (FIRE) simulations (Muratov et al., 2015; Angles-Alcazar et al., 2017). After the kick, a wind element does not feel hydrodynamic forces or cooling until it reaches a density of 1% of the star formation threshold density, or until 2% of a Hubble time has elapsed since launch. 30% of the winds are heated to the temperature provided by SNII, and winds are metal-loaded by assigning a metallicity \(dZ\) to each wind particle via \(dZ=f_{\mathrm{SNII}}\,y_{\mathrm{SNII}}(Z)/\max(\eta,1)\), where \(f_{\mathrm{SNII}}=0.18\) is the stellar mass fraction lost to supernovae, \(y_{\mathrm{SNII}}(Z)\) is the metal-dependent Type II SN yield for each species and \(\eta\) is the mass loading factor. Simba locks individual metals into dust, removing them from the gas phase, following Li et al. (2019). Taking all these aspects into consideration, Simba predicts mass-metallicity relation (MZR) evolution (Dave et al., 2019) and star formation rate evolution (Katsianis et al., 2021) in agreement with observations, typically as well as or better than other comparable simulations.
Simba simulates black hole growth via torque-limited accretion (Hopkins & Quataert, 2011; Angles-Alcazar et al., 2013, 2015) for cool gas and Bondi accretion for hot gas. Black hole feedback is modeled as a mixture of kinetic feedback and X-ray energy feedback. The kinetic mode is designed to reproduce the observed two-mode feedback, separated via the Eddington fraction \(f_{\mathrm{Edd}}\). In the high accretion mode, radiative AGN winds are modeled by assigning outflow velocities to the gas particles surrounding the black hole, dependent on the corresponding black hole mass. In the jet mode, initiated once \(f_{\mathrm{Edd}}<0.2\) and maximised when \(f_{\mathrm{Edd}}<0.02\), the assigned outflow velocities adopt considerably larger values than in the high accretion mode, increasing with decreasing \(f_{\mathrm{Edd}}\). The X-ray feedback is applied only in the full-speed jet scenario, and when the ratio \(M_{\mathrm{gas}}/M_{*}<0.2\). This involves injecting energy into the surrounding gas, usually via a spherical outwards push. In Simba the accretion energy determines galaxy quenching, with the jet mode feedback primarily responsible for this aspect and the X-ray feedback contributing significantly to suppressing residual star formation. As a result of this AGN growth and feedback model, Simba reproduces the observed stellar mass function evolution from \(z=6\to 0\), the star-forming main sequence, and quenched fractions in agreement with observations (Dave et al., 2019).
Galaxies and halos are identified in post-processing using the Caesar package, as described in Dave et al. (2019). During the run, particles are grouped into halos using a 3D friends-of-friends (FoF) algorithm with a linking length of 0.2 times the mean inter-particle spacing. Within each halo, Caesar identifies galaxies using a 6D FoF with a smaller linking length, applied to cool gas and stars only. Stellar mass and SFR are computed as the total values among all particles grouped into a single galaxy (or halo). The metallicity is computed in several ways, including stellar mass-weighted (from star particles) and SFR-weighted (from gas elements). Given that considerable amounts of H i can be present in extended regions outside the star-forming regions of galaxies, Caesar assigns each particle to the galaxy to which it is most gravitationally bound, then sums the total H i from those particles. H\({}_{2}\) is computed similarly, though the vast majority of H\({}_{2}\) lies within Caesar galaxies. There is good agreement between observations and the Simba galaxies' H i and H\({}_{2}\) fractions, as well as their scaling relations with stellar mass (Dave et al., 2020).
In this work, we focus on the (100 comoving Mpc \(h^{-1}\))\({}^{3}\) main Simba run, evolved from \(z=249\to 0\) with \(1024^{3}\) gas elements and \(1024^{3}\) dark matter particles. The minimum (adaptive) gravitational softening length for this simulation is \(\epsilon_{\mathrm{min}}=0.5h^{-1}\mathrm{kpc}\). The mass resolutions of the initial gas elements and dark matter particles are \(m_{\mathrm{gas}}=1.82\times 10^{7}M_{\odot}\) and \(m_{\mathrm{DM}}=9.60\times 10^{7}M_{\odot}\), respectively. This gives a minimum resolved galaxy stellar mass of \(M_{*,\mathrm{min}}=5.8\times 10^{8}M_{\odot}\), i.e. 32 gas element masses. We utilize the Caesar catalog to infer the galaxy properties of interest, at redshifts \(z=0,1,2\) (i.e. snapshots 151, 105, 78). We also use the feedback variant runs of Simba, which have the same resolution but in a 50 Mpc \(h^{-1}\) box with \(2\times 512^{3}\) particles, and which turn off individual feedback modules as we will detail later.
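For reference, the quoted minimum resolved stellar mass follows directly from the gas element mass:
\[M_{*,\mathrm{min}}=32\,m_{\mathrm{gas}}=32\times 1.82\times 10^{7}\,M_{\odot}\simeq 5.8\times 10^{8}\,M_{\odot}.\]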
### Tracing the cosmic web with DisPerSE
To identify filaments of the cosmic web, we use the publicly available code DisPerSE (Sousbie, 2011; Sousbie et al., 2011), using the Discrete Morse theory and the theory of persistence. DisPerSE measures the gradient of the density field via Delaunay Tessellation (e.g. Schaap & van de Weygaert, 2000) to identify the critical points, defined as the points where the gradient of the density field is null. Filaments are then constructed as segments connecting a maximum to a saddle point, representing the ridges of the Delaunay density field.
We applied DisPerSE to the distribution of galaxies in Simba, adopting a \(3\sigma\) persistence threshold in order to remove the filaments affected by the Poisson noise of the density distribution. As explained in e.g. Codis et al. (2018) and Kraljic et al. (2020), we also noted that a higher threshold would result in more robust structures but with a significant drop in the number of filaments generated, hence
the 3\(\sigma\) value adopted throughout this work represents an optimal choice. Additionally, we applied a smoothing to the positions of the filaments' edges, by averaging their positions with those of the edges of contiguous segments. This gives rise to a smoother filamentary skeleton by reducing unphysical filamentary shapes. These procedures generally mimic previous works that applied DisPerSE to simulations.
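As an illustration, the smoothing step can be realised with a few lines of numpy; this is a minimal sketch assuming the DisPerSE skeleton has already been parsed into a list of per-filament arrays of ordered 3D vertex positions (the function name and data layout below are ours, not part of DisPerSE).

```python
import numpy as np

def smooth_skeleton(filaments, n_iter=1):
    """Smooth filament spines by averaging each interior vertex with its two
    neighbours along the filament; the endpoints (critical points) stay fixed."""
    smoothed = []
    for fil in filaments:
        pts = np.asarray(fil, dtype=float).copy()
        for _ in range(n_iter):
            if len(pts) > 2:
                # average each interior vertex with the adjacent segment edges
                pts[1:-1] = (pts[:-2] + pts[1:-1] + pts[2:]) / 3.0
        smoothed.append(pts)
    return smoothed
```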
For better visualisation, Fig. 1 shows the filaments extracted with DisPerSE and a 2D view of a Simba simulation at redshift \(z=0\), using a slice from the 50 \(h^{-1}\)Mpc box with a width of 10 \(h^{-1}\)Mpc. The galaxies are overplotted (centrals as circles and satellites as triangles), colour-coded by their total stellar masses. This plot provides a qualitative illustration that the DisPerSE skeleton generally traces out the large-scale structure that one picks out by eye. One can further see that the most massive galaxies tend to lie closer to filaments, together with their low-mass companions/satellites; this will be quantified in §2.3 and §3.1.
### The galaxy sample
For the purpose of this work, we quantify the galaxies' positions in the cosmic web via the distance to the closest filament (\(d\)). DisPerSE reports the 3D positions of the filaments' edges, which we used to compute the minimum distance between each galaxy and the corresponding closest filament midpoint. This is slightly different than the perpendicular closest distance, but negligibly so as pointed out in e.g. Tudorache et al. (2022), since each DisPerSE filament is actually comprised of a large number of small segments.
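A minimal sketch of this distance computation is given below, assuming the galaxy positions and the endpoints of the DisPerSE segments are available as numpy arrays (the variable and function names are illustrative); for a periodic box, scipy's `boxsize` argument can be used so that distances wrap correctly.

```python
import numpy as np
from scipy.spatial import cKDTree

def distance_to_filaments(galaxy_pos, seg_start, seg_end, boxsize=None):
    """Minimum 3D distance from each galaxy to the nearest filament segment
    midpoint, in the same comoving units as the input positions."""
    midpoints = 0.5 * (seg_start + seg_end)
    tree = cKDTree(midpoints, boxsize=boxsize)  # boxsize handles periodicity
    d, _ = tree.query(galaxy_pos, k=1)
    return d
```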
As mentioned earlier, we are specifically interested in trends versus filamentary environment within the field galaxy population. To this end, we remove all galaxies in halos with a virial mass above \(10^{13}\,M_{\odot}\), corresponding to removing all galaxies in structures of poor group size and larger, and only report all statistics based on the remaining sample of galaxies. This does not preclude some effect on galaxies from being in the vicinity of large halos, so we are still including any effects associated with pre-processing outside of large halos. Nonetheless, such large halos are fairly rare, and as one can see in Fig. 1, much of the filamentary structure is located in regions far from the most massive nodes.
We will further consider the impact of the environment on central and satellite galaxies separately. Centrals are taken to be the most massive galaxy within its halo, and with few exceptions typically lie within the inner 10-20% of their halo. Satellites can be impacted by distinct physical processes such as ram pressure and tidal stripping, and although such processes have traditionally been associated with group and cluster environments, it is possible that the denser and hotter environment around filaments could also have an impact.
Figure 2 shows the stellar mass distribution of satellites (blue dots) and central galaxies (red dots), with respect to the distance to the closest filament, for redshifts \(z=0,1,2\) (upper left to right panels). The corresponding lower panels show the probability distribution of satellites (blue curve) and centrals (red). The upper mass threshold, most noticeable in the \(z=0\) upper left plot, comes from the aforementioned halo mass cut at \(M_{h}<10^{13}\,M_{\odot}\).
Overall, central galaxies skew to be more massive towards the filament spine. At short distances, only small satellites remain. The immediate proximity of filaments (\(d\lesssim 0.1\) Mpc) shows high-mass centrals accompanied by their respective low-mass satellites in the low-redshift Universe, while this distribution appears to be more spread out at redshift \(z=2\), indicating a dynamical evolution of the satellite population. At all redshifts, the satellites distribution peaks at \(<1\) cMpc from the closest filament, while for centrals the corresponding peak is at a few cMpc. This reflects the trend of the halo occupation distribution with \(M_{*}\), since \(M_{*}\) is well correlated with \(M_{h}\)(e.g. Cui et al., 2021) and the halo occupancy rises quickly towards higher \(M_{h}\). These galaxies represent the sample that we will use for the majority of the analysis that we discuss next.
## 3 Galaxy properties in the cosmic web
In this section, we correlate key global galaxy properties with their proximity to filaments as measured by \(d\). We consider each property in turn: \(M_{*}\), sSFR, \(Z_{*}\), \(f_{HI}\), and \(f_{H2}\), where the latter two are the
Figure 1: A 2D view of a 50 \(h^{-1}\)Mpc Simba simulation slice at redshift \(z=0\), with a slice width of 10 \(h^{-1}\)Mpc. The galaxies are represented by circles (centrals) and triangles (satellites), colour-coded by their total stellar mass. The filaments extracted from DisPerSE are plotted in pink. The figure provides a qualitative representation of the galaxies' distribution within the cosmic web, showing that the overdensities (filaments and nodes) contain predominantly massive centrals and accompanying low-mass satellites.
gas mass fractions in each phase with respect to stellar mass. We also consider quenched fractions \(f_{Q}\) and elliptical fractions \(f_{e}\). We present results for \(z=0,1,2\), separated into centrals and satellites.
### Stellar mass
The dependence of galaxy stellar mass on \(d\) is shown in Fig. 3, which presents the medians of the binned values for redshifts \(z=0\) (purple), \(z=1\) (maroon), and \(z=2\) (green). Solid and dashed lines show centrals and satellites, respectively. The shaded regions represent the standard error on the mean in each bin, illustrating higher uncertainties in the filaments' proximity owing primarily to the lower number of galaxies in this region. This same presentation scheme will be retained throughout this section.
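The running medians and shaded uncertainty bands can be reproduced with a simple binned estimator; the sketch below is our own illustration of this procedure (the bin number and edges are arbitrary choices, not necessarily those used for the figures).

```python
import numpy as np

def running_median(log_d, values, nbins=20):
    """Median of `values` in bins of log10(distance/cMpc), together with the
    standard error on the mean in each bin (used for the shaded regions)."""
    edges = np.linspace(np.min(log_d), np.max(log_d), nbins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    med = np.full(nbins, np.nan)
    err = np.full(nbins, np.nan)
    for i in range(nbins):
        sel = (log_d >= edges[i]) & (log_d < edges[i + 1])
        if sel.any():
            med[i] = np.median(values[sel])
            err[i] = np.std(values[sel]) / np.sqrt(sel.sum())
    return centres, med, err
```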
At all three redshifts considered, the masses of the central galaxies decrease with increasing distance from the closest filament. Despite our finding from the previous section suggesting that at \(z=2\) galaxies adopt a broad range of masses in the filaments' proximity, Fig. 3 shows that the median values of these masses are in fact higher for galaxies closer to the filaments. This finding can be explained via the environmental dependence of the halo mass function, which predicts more massive halos closer to filaments, in turn leading to higher-mass galaxies in these regions (e.g. Alam et al., 2019). This highlights that we must be careful when interpreting trends with distance from filaments to ensure that they are not simply reflecting trends with stellar mass, since one of the crucial aspects of this study is to disentangle the effects of mass and environment on galaxy properties.
The satellites curiously show a reverse trend for short distances (i.e. within log(\(d\)/cMpc) \(\lesssim\) -1) at redshifts \(z=0\) and \(z=1\), as their masses increase with distance. For larger distances, they also adopt a subtle overall decreasing trend, considerably weaker than for centrals. No clear trend is observed for satellites at redshift \(z=2\).
In agreement with our qualitative findings from §2.3, it appears that the most massive central galaxies lie within short distances from the filaments and host a large number of low-mass satellites. This finding also supports the predicted environmental effects on the halo mass function, as the massive systems in the filaments' proximity are more likely to accrete more low-mass satellites. The disappearance of more massive satellites very close to filaments could further owe to dynamical effects such as accelerated dynamical friction and tidal stripping in dense environs; we defer a detailed analysis of this to future work. For now, we note this as an interesting prediction from Simba.
Figure 3: Distance from filaments dependence of stellar masses for centrals (continuous lines) and satellites (dashed lines) for three redshifts: \(z=0\) (purple), \(z=1\) (dark red), \(z=2\) (green). The lines were obtained by binning the results in terms of distance and interpolating the corresponding medians (i.e. running medians). The shaded regions represent the corresponding standard errors in each bin. In agreement with our qualitative findings from §2.3, the most massive centrals lie within short distances from the filaments and host a large number of low-mass satellites.
Figure 2: Upper panels show the stellar mass of satellites (blue dots) and centrals (red dots) versus the distance to their corresponding closest filament in comoving Mpc, \(d\) at redshifts \(z=0,1,2\). Lower panels likewise present the probability distribution function of satellites (blue line) and centrals (red line) vs. \(d\). Within 1 cMpc there are fewer low-mass centrals and more satellites. The upper envelope visible especially at \(z=0\) results from our halo mass threshold of \(M_{h}<10^{13}M_{\odot}\). This figure strengthens the qualitative findings from Fig. 1, but also shows that the largest fraction of satellites and centrals are located at \(\sim\) 1 and 10 cMpc of a filament respectively.
### (Specific) star formation rate
Galaxies near filaments could be enhanced in star formation relative to those far away since there is more gas in the vicinity, or they could be suppressed because the filaments are heated which can suppress accretion. Hence SFR provides a key barometer for the interplay between galaxy growth and filamentary large-scale structure. To mitigate the fact that galaxies closer to filaments have larger \(M_{*}\) as found in the previous section, we consider the specific SFR, although the broad results are similar if we consider the SFR itself.
Figure 4 presents the dependence of the specific star formation rate (sSFR) with respect to distance from filaments at \(z=0,1,2\). The line styles and colours mimic Fig. 3. For all non-star forming galaxies (SFR=0), we set log(sSFR)\(=-14\). Since we are considering medians, the analysis is not sensitive to the exact value as long as it is very low.
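In practice this amounts to applying a simple floor to the logarithmic sSFR; a minimal sketch (function name ours) is:

```python
import numpy as np

def log_ssfr(sfr, mstar, floor=-14.0):
    """log10(sSFR), with non-star-forming galaxies (SFR = 0) assigned a very
    low floor; binned medians are insensitive to its exact value."""
    ssfr = np.where(sfr > 0, sfr / np.asarray(mstar, dtype=float), 10.0 ** floor)
    return np.log10(ssfr)
```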
As seen in Fig. 4, central galaxies clearly show a reduction in sSFR in close proximity to filaments at \(z=1\) and 0, relative to galaxies far away from filaments. At \(z=1\), the dip is only seen at very small distances (\(\lesssim 30\) kpc), but by \(z=0\) the sSFR is suppressed farther out, to \(\sim 100\) kpc. At \(z=2\), if anything there is a rise in the sSFR towards filaments, perhaps indicating a reversal of the star formation-density relation; we will explore this further in future work. Far away from filaments, there is an overall reduction in sSFR that reflects the global decline in cosmic star formation over time (e.g. Daddi et al., 2007; Dave, 2008), primarily due to falling accretion rates (Dekel et al., 2009). Hence the growth of centrals via star formation is clearly retarded even in filamentary regions, indicating evidence for pre-processing of galaxies before falling into galaxy groups or clusters.
Examining the satellites, at \(z=0\) the satellites' median sSFR vanishes within 100 kpc of filaments, indicating that more than half the satellites are fully quenched. The sSFR then shows a rapid increase with increasing distance, subsequently converging to the same value as centrals by \(\sim 10\) Mpc. At redshift \(z=1\), the sSFR for satellites follows that of centrals but shows somewhat more suppression close to filaments, while becoming similar to centrals at \(\gtrsim 2\) Mpc. Meanwhile, at \(z=2\), satellites show only a very mild suppression with respect to centrals. These trends indicate that satellites are more impacted by filamentary environments than centrals, albeit in a qualitatively similar way, and the additional impact grows rapidly from \(z\sim 1\to 0\).
To sum up, our results show that galaxies close to filaments are suppressed in stellar growth rates out to \(z\gtrsim 1\), with a majority of satellites in the filaments' proximity at redshift \(z=0\) being fully quenched. Within Simba, it is seen that galaxy circumgalactic and even intergalactic media are strongly impacted by AGN jet feedback (Appleby et al., 2021; Sorini et al., 2022), which thereby grows the quenched galaxy population (Dave et al., 2019). Our results here suggest that AGN feedback may also be a contributor to suppressing SFRs, particularly in satellites close to filaments, although the suppression could also owe to increased shock heating from the growth of large-scale structure; we will explore this further using Simba variants in §5. In any case, these results clearly demonstrate that both centrals and satellites undergo pre-processing outside of galaxy groups.
### Gas content
Given that the sSFR is suppressed towards filaments, one would expect that the gas contents that fuel star formation would also be lowered. For molecular hydrogen, Simba directly ties the \(H_{2}\) content to star formation, but for H i, this provides a reservoir on larger scales that could be more influenced by environmental effects. Hence it is interesting to examine the molecular and atomic contents of galaxies
Figure 4: Distance from filaments dependence of specific star formation rates for centrals (continuous lines) and satellites (dashed lines) for three redshifts: \(z=0\) (purple), \(z=1\) (dark red), \(z=2\) (green). The lines were obtained by binning the results in terms of distance and interpolating the corresponding medians (i.e. running medians). The shaded regions represent the corresponding standard errors in each bin. Galaxies close to filaments show suppressed levels of star formation, with a majority of satellites in the filaments’ proximity at redshift \(z=0\) being fully quenched.
Figure 5: Distance from filaments dependence of H i (_left_) and H\({}_{2}\) (_right_) content for centrals (continuous lines) and satellites (dashed lines) for three redshifts: \(z=0\) (purple), \(z=1\) (dark red), \(z=2\) (green). The lines were obtained by binning the results in terms of distance and interpolating the corresponding medians (i.e. running medians). The shaded regions represent the corresponding standard errors in each bin. The H i fraction increases with increasing distance from filaments at all three redshifts considered. \(f_{H{2}}\) shows similar trends at redshifts \(z=0\) and \(z=1\), while (similar to sSFR, see Fig. 4) showing no clear trend for \(f_{H{2}}\) versus distance from filament at redshift \(z=2\).
as a function of distance to filaments. Again, to mitigate the overall trend that the gas contents increase with mass (at least among star-forming systems), we consider the atomic and molecular Hydrogen fractions \(f_{\rm HI}\equiv M_{\rm HI}/M_{*}\) and \(f_{\rm H2}\equiv M_{\rm H2}/M_{*}\).
Fig. 5 shows \(f_{HI}\) (top) and \(f_{H2}\) (bottom) as a function of filamentary distance, using the same colour and line type scheme as the previous plots. For central galaxies (solid lines), the trends broadly mimic those for the sSFR: galaxies have suppressed gas contents close to filaments, with that suppression increasing in strength and extent towards lower redshifts. In detail, H\({}_{2}\) traces sSFR more faithfully, while H i shows suppression near filaments even at \(z=2\), and over a larger range of distances.
The satellites however behave substantially differently from sSFR. At \(z=2\), there is little difference in \(f_{H2}\) between centrals and satellites, but for \(f_{HI}\) the satellites show an increased suppression near filaments. However, these trends tend to reverse near filaments at lower redshifts: the satellite gas fractions are generally comparable to (at \(z=1\)) or even higher (\(z=0\)) near filaments versus the centrals. This is an odd turn, which may have to do with the way that the gas fractions are computed by including all gas within halos that is most bound to a given galaxy. In galaxy groups, it has been observed that H i can be present throughout the group environment (e.g. Lucero et al., 2015), and in our H i assignment scheme that gas may have been associated with group satellites. If the same effect is happening in the densest regions of filaments, this H i may in detail not be associated with individual galaxies, but rather the overall environment. It is less easy to understand the upturn in \(f_{H2}\) at a low distance. This could partially be explained by the HI association for satellites, but in order to disentangle these effects and make proper predictions for comparison to data we would need to create mock data cubes of these systems (as in Glowacki et al., 2021), which is beyond the scope here.
Overall, the galaxies do not seem to show as much suppression in gas content as in sSFR (most visibly for satellites). Since sSFR can be decomposed into \(f_{H2}\) times the star formation efficiency (SFE \(\equiv\mathrm{SFR}/M_{H2}\)), this suggests that there are variations in the efficiency of converting gas into stars, with the SFE generally being lower closer to filaments.
In summary, our results show that the H i fraction increases with increasing distance from filaments at all three redshifts considered, with satellites at redshift \(z=0\) showing an initial decrease in gas fractions near filaments. \(f_{H2}\) shows similar trends at redshifts \(z=0\) and \(z=1\), while (like sSFR) showing no clear trend for \(f_{H2}\) versus distance from filament at redshift \(z=2\). The satellites show a curious increase in gas fractions at \(z\leq 1\) very close to filaments, which may owe to analysis methodology, and highlights the difficulty of unambiguously assigning particularly H i to galaxies in dense environments.
### Metallicity
Another key global galaxy property is its metallicity. Galaxies are known to have a strong correlation of \(M_{*}\) with \(Z\) known as the mass-metallicity relation (e.g. Trager et al., 2000; Tremonti et al., 2004; Maiolino and Mannucci, 2019). This is believed to be set by a competition between pristine inflow diluting metallicity, star formation enhancing metallicity, outflows removing metals, and the re-accretion of outflows providing an additional source of metals (e.g. Finlator and Dave, 2008; Dave et al., 2012). Particularly the latter effect could be impacted by environment, because the gas in denser regions may be more enriched from previous star-formation activity and may retard outflows leading to more re-accretion. For satellites in denser regions, one expects there to be less pristine inflow and more enriched recycling, leading to higher metallicities; indeed this is found in simulations (e.g. Dave et al., 2011). These trends are for the gas-phase metallicity, but models generally predict the stellar metallicity traces this reasonably well. Here, because we want to compare across both gas-rich and gas-poor galaxies, we will employ the stellar metallicity (Z\({}_{*}\)) versus filamentary distance since this can be computed for all galaxies.
Fig. 6 shows the median \(Z_{*}(d)\), using the same colour and line scheme as in previous plots. In all cases, the metallicities are overall higher in the filaments proximity. Given the MZR, or even the Fundamental Metallicity Relation (FMR, Mannucci et al., 2010), our results can be explained via the higher mass and low SFR galaxies present in the filaments' proximity.
The centrals show an abrupt drop in metallicity when moving away from filaments, and the differences between centrals and satellites are in this case less obvious than for the previous quantities (§3.2). In general, the satellites have slightly lower metallicity, but this is likely mostly explained by their lower \(M_{*}\). However, when comparing to Fig. 3 and considering the MZR, this similarity between centrals and satellites is actually somewhat surprising. This is because the satellites' median \(M_{*}\) values are quite flat with distance, yet their metallicity continues to increase strongly towards filaments just like the centrals'. This indicates that there is a strong effect from the suppression of SFR (i.e. the FMR), and perhaps also an effect from the environment.
Additionally, the trends observed appear to be weaker at low redshift. A possible explanation for this might be the steeper mass-metallicity correlation observed in Simba at high redshift, \(z\sim 2\) (see Dave et al., 2019). However, the change is quite dramatic from \(z=1\to 0\), with galaxies far from filaments being much more enriched at late times. This suggests another effect, such as wind recycling of enriched material even far from filaments, becoming more important since \(z\sim 1\). We plan to investigate the detailed evolution of metallicity as a function of environment in future work.
In short, our results show that galaxies close to filaments are more metal-enriched, the centrals and satellites showing similar behaviours. The trend with the satellites, combined with the lack of a trend in \(M_{*}\) in contrast with the dramatic evolution in satellites' SFR,
Figure 6: Distance from filaments dependence of stellar metallicities for centrals (continuous lines) and satellites (dashed lines) for three redshifts: \(z=0\) (purple), \(z=1\) (dark red), \(z=2\) (green). The lines were obtained by binning the results in terms of distance and interpolating the corresponding medians (i.e. running medians). The shaded regions represent the corresponding standard errors in each bin. Galaxies close to filaments are more metal-enriched, the centrals and satellites showing similar behaviours.
suggests that the FMR is an important driver in setting the environmental trends. The trend substantially weakens by \(z=0\), owing to some complex interplay between the environment and the various physical processes governing galaxy metallicities.
### Quenching and morphology
We have seen that galaxies, and satellites in particular, have lower sSFRs near filaments at \(z\lesssim 1\). Another way to quantify this is using the quenched fraction \(f_{\rm Q}\), which we compute via the Williams et al. (2009) UVJ diagram. Quenching is also well-known to be correlated with morphology. Hence we also examine the elliptical fraction \(f_{\rm e}\), which we define using the fraction of kinetic energy in rotation, \(\kappa_{\rm rot}\) (Sales et al., 2012), since Kraljic et al. (2020) found that this was the measure best correlated with visual morphology in Simba. We chose a threshold of 0.3, meaning that galaxies with a smaller fraction of kinetic energy in rotation are considered elliptical; varying this threshold between 0.25 and 0.35 does not affect the results significantly.
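For concreteness, the sketch below shows one way to compute this rotational-support measure from star-particle data and to turn it into an elliptical fraction; it assumes positions and velocities have already been recentred on the galaxy and its bulk motion, and the function names and data access are ours rather than the exact Caesar implementation.

```python
import numpy as np

def kappa_rot(masses, pos, vel):
    """Fraction of stellar kinetic energy in ordered rotation (cf. Sales et al. 2012),
    measured about the total stellar angular momentum axis."""
    J = np.sum(masses[:, None] * np.cross(pos, vel), axis=0)
    j_hat = J / np.linalg.norm(J)
    jz = np.cross(pos, vel) @ j_hat              # specific angular momentum along J
    R = np.linalg.norm(pos - np.outer(pos @ j_hat, j_hat), axis=1)
    good = R > 0                                 # avoid stars exactly on the axis
    K_rot = 0.5 * np.sum(masses[good] * (jz[good] / R[good]) ** 2)
    K_tot = 0.5 * np.sum(masses * np.sum(vel ** 2, axis=1))
    return K_rot / K_tot

def elliptical_fraction(kappa_values, threshold=0.3):
    """Fraction of galaxies with kappa_rot below the adopted threshold."""
    return np.mean(np.asarray(kappa_values) < threshold)
```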
Fig. 7 shows the galaxies' quenched fraction (\(f_{\rm Q}\), top panel) and elliptical fraction (\(f_{\rm e}\), bottom) as a function of distance from filament, using the same scheme as in previous plots. We note that the distance range adopted for this analysis is smaller than in the previous cases. This is because at large enough distances from filaments (\(\log(d/{\rm cMpc})\gtrsim 0.5\)) the corresponding behaviours are reasonably well converged.
As expected from §3.2, both centrals and satellites are more quenched close to filaments. A trend of decreasing \(f_{\rm Q}\) with distance is not visible at redshift \(z=2\), but becomes prominent at \(z\lesssim 1\). The quenched fraction increases with time, which follows the trend seen in the overall galaxy population in Simba, as well as in observations.
The trend is similar for both centrals and satellites, but the satellites have a higher overall \(f_{\rm Q}\). Within \(\lesssim 100\) ckpc of filaments at \(z=0\), \(\sim 90\%\) of satellites are quenched, as opposed to \(\sim 70\%\) of centrals. This shows again the important role that the environment plays in quenching satellite galaxies, which is already prominent around filamentary structures. At \(z=2\) however, little difference is seen between the central and satellite quenched fractions. Hence the environmental effects are restricted to lower redshifts, where large-scale structure causes more shock heating and AGN feedback injects significant energy into the IGM in Simba. These findings echo our results from §3.2, quantified in a different way that provides another testable prediction from Simba.
In general, \(f_{\rm e}\) increases with time, and the satellites' \(f_{\rm e}\) is higher than that of the centrals, with the difference between them also growing with time. However, the elliptical fraction as defined here shows overall less evolution than the quenched fractions, and particularly at \(z\leq 1\) the elliptical fractions for centrals very close to filaments show no evolution. At large distances, centrals and satellites show similar elliptical fractions. In general, it is more difficult to compare our elliptical fractions defined via \(\kappa_{\rm rot}\) to observations since this is not so easy to measure, but this at least qualitatively demonstrates that in general Simba galaxies become less rotationally supported with time and with proximity to filaments.
To sum up, our results show that galaxies close to filaments are more elliptical and quenched, the trends being more prominent for satellites. For both centrals and satellites, the quenched fractions are similar at redshift \(z=2\), although the elliptical fractions are still a bit higher for satellites at \(z=2\). The higher elliptical fraction suggests that galaxies that are more massive, passive, and metal-enriched, with lower gas content, tend to have less rotational support, concordant with the results noted in previous sections (§3.1-§3.4). The fact that \(f_{\rm e}\) and \(f_{\rm Q}\) do not mimic each other exactly even qualitatively in their trends with \(d\) and redshift suggests that quenching and morphological transformation are not happening in exactly the same galaxies at the same time within Simba. However, the crudeness of the morphological measure begs further investigation to disentangle the relation between quenching and morphology, perhaps requiring higher resolution simulations capable of resolving the scale height of typical disks.
## 4 Deviations from the scaling relations
We have seen that proximity to DisPerSE-identified filaments has the effect of lowering star formation and gas content, raising the metallicity, and increasing the elliptical fraction of galaxies at \(z\lesssim 1\), effects that are enhanced amongst satellite galaxies. However, these effects with \(d\) are qualitatively degenerate with stellar mass: higher-mass galaxies, which tend to be found closer to filaments, also share these general trends. Thus it is important to examine whether the effects are truly due to the location in the cosmic web.
To do this, as mentioned earlier, in this section we investigate many of the same quantities and their trends with \(d\), but now at fixed stellar mass \(M_{*}\). For this purpose, we compute the deviations of the quantities of interest from their scaling relations with stellar mass, specifically the star formation main sequence (SFMS), the mass-metallicity relation (MZR), and the H i and H\({}_{2}\) fraction relations with \(M_{*}\). By comparing these to the corresponding trends in the previous section, we can see how much of the effect owes to the stellar mass dependence and how much owes to the impact of filamentary large-scale structure.

Figure 7: Distance from filaments dependence of quenched fractions (_left_) and elliptical fractions (_right_) for centrals (continuous lines) and satellites (dashed lines) for three redshifts: \(z=0\) (purple), \(z=1\) (dark red), \(z=2\) (green). The lines were obtained by binning the results in terms of distance and interpolating the corresponding medians (i.e. running medians). Galaxies close to filaments are more elliptical and quenched at low redshift, the trends being more prominent for satellites.
### Deviation from star-forming main sequence
The (specific) star formation rate is expected to have a clear dependence on mass via the SFMS (see e.g. Noeske et al., 2007), which, as mentioned in §2.1, is reasonably reproduced in Simba. We obtain the SFMS by fitting the medians of sSFR in bins of \(M_{*}\) for all star-forming galaxies defined as log(sSFR/yr\({}^{-1}\)) \(>-10.8+0.3z\), the same as in Dave et al. (2019). We have kept our definition of star-forming galaxies simple despite there being more sophisticated ways to define the SFMS (see e.g. Hahn et al., 2019) in order to be more straightforwardly comparable with observations in the future. For instance, changing the \(z=0\) threshold from \(-10.5\rightarrow-11\) has little effect on the results, since we are only concerned with the relative deviation of galaxies near filaments versus those of the overall galaxy population.
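The deviations used throughout this section can be computed with a simple running-median fit, as in the sketch below. The binning choices and the use of `np.interp` are illustrative assumptions rather than the exact procedure used here; the same helper applies to the H i, H\({}_{2}\) and MZR deviations discussed later.

```python
import numpy as np

def running_median(x, y, nbins=20):
    """Median of y in equal-width bins of x (the 'running medians' shown in the figures)."""
    edges = np.linspace(x.min(), x.max(), nbins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    medians = np.array([np.median(y[(x >= lo) & (x < hi)]) if np.any((x >= lo) & (x < hi))
                        else np.nan
                        for lo, hi in zip(edges[:-1], edges[1:])])
    return centres, medians

def deviation_from_relation(log_mstar, log_q, fit_mask=None):
    """Deviation of each galaxy's quantity from its running-median scaling relation with M*.

    For Delta log sSFR, fit_mask would select star-forming galaxies,
    log(sSFR/yr^-1) > -10.8 + 0.3 z, before fitting the SFMS."""
    m = slice(None) if fit_mask is None else fit_mask
    centres, medians = running_median(log_mstar[m], log_q[m])
    ok = ~np.isnan(medians)                                  # drop empty bins
    expected = np.interp(log_mstar, centres[ok], medians[ok])
    return log_q - expected
```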
Fig. 8 shows the deviation from the SFMS (\(\Delta\log\) sSFR) as a function of \(d\) at \(z=0\), 1, 2, for centrals (solid) and satellites (dashed). This can be compared to Fig. 4, which shows the corresponding plot for sSFR itself.
Overall, the trends look qualitatively, and for the most part quantitatively, similar to those for sSFR: the centrals show an increasing suppression of star formation activity with time very close to filament centres, with no strong trend at \(z=2\) while by \(z=0\) the typical galaxy at the centre of a filament is quenched. The satellites likewise show a strong trend with redshift, with suppression of star formation activity extending to quite large distances by \(z=0\). This indicates that the effects in sSFR seen previously do not owe primarily to any mass dependence in galaxies as a function of \(d\).
In detail, the trends in \(\Delta\log\) sSFR are slightly weaker than those seen in sSFR. For instance, at \(z=0\), the central galaxies close to filaments lie \(\sim 1\) dex below those at \(z=2\) in \(\Delta\log\) sSFR, while in Fig. 4 the difference is closer to \(\sim 2\) dex. But much of that difference is explained by the fact that sSFR's at \(z=2\) are generally higher than at \(z=0\). We conclude that the trends in \(M_{*}\) as a function of \(d\) are not an important factor in establishing the suppression of star formation activity near filaments and that such effects genuinely owe to environmental effects from the cosmic web.
### Deviations from the H i and H\({}_{2}\) mean relations
The H i and H\({}_{2}\) fractions are known to have a clear dependence on galaxy stellar mass (e.g. Obreschkow and Rawlings, 2009; Catinella et al., 2010; Maddox et al., 2015). Simba reproduces these trends fairly well, as shown in Dave et al. (2020). We obtain the underlying scaling relations by fitting a running median of each gas fraction as a function of \(M_{*}\). This then allows us to compute the corresponding deviations \(\Delta\log(M_{\rm HI}/M_{*})\) and \(\Delta\log(M_{\rm H2}/M_{*})\).
Figure 9 shows these quantities, \(\Delta f_{\rm HI}\) and \(\Delta f_{\rm H2}\), as a function of distance from filament \(d\), at \(z=0,1,2\) for centrals and satellites. This plot can be compared to Fig. 5, which uses the same line style scheme.

Figure 8: Distance from filaments dependence of the specific star formation rates' deviations from the star formation main sequence for centrals (continuous lines) and satellites (dashed lines) for three redshifts: \(z=0\) (purple), \(z=1\) (dark red), \(z=2\) (green). The lines were obtained by binning the results in terms of distance and interpolating the corresponding medians (i.e. running medians). The shaded regions represent the corresponding standard errors in each bin. The centrals show an increasing suppression of star formation activity with time very close to filament centres, with no strong trend at \(z=2\), while by \(z=0\) the typical galaxy at the centre of a filament is quenched. The satellites likewise show a strong trend with redshift, with suppression of star formation activity extending to quite large distances by \(z=0\) (see also Fig. 4).

Figure 9: Distance from filaments dependence of the H i (_left_) and H\({}_{2}\) (_right_) fractions' deviations from the corresponding scaling relations with mass, for centrals (continuous lines) and satellites (dashed lines) for three redshifts: \(z=0\) (purple), \(z=1\) (dark red), \(z=2\) (green). The lines were obtained by binning the results in terms of distance and interpolating the corresponding medians (i.e. running medians). The shaded regions represent the corresponding standard errors in each bin. Satellites are depleted in H i at a given \(M_{*}\) near filaments at all three redshifts considered, \(z=0\), 1 and 2, while \(f_{\rm H2}\) at a given \(M_{*}\) is not significantly depleted.
Unlike \(\Delta\)sSFR discussed in the previous section, there are noticeable differences between \(\Delta f_{HI}\) and \(\Delta f_{H2}\) versus \(d\) and the corresponding trends in the gas fractions themselves. For central galaxies, \(f_{HI}\) and \(f_{H2}\) both show clear declines towards filaments, but the declines are much weaker or absent when considering \(\Delta f_{HI}\) and \(\Delta f_{H2}\). Indeed, at \(z=2\), the gas fractions at a given \(M_{*}\) are actually enhanced close to filaments, and this remains true for H i even at \(z=0\). At \(z=1\) there is an odd feature, present in both gas fractions as well as the sSFR, which is difficult to explain and may owe to small number statistics. That aside, it appears that at high redshifts, the filamentary large-scale structure brings in more cool gas to supply galaxies, rather than suppressing it via shock heating. Overall, the central galaxies do not show any significant suppression of gas contents near filaments.
For satellites, \(\Delta f_{HI}\) shows significant suppression towards filaments, following similar trends for \(f_{HI}\), including the upturn in H i content close to filaments at \(z=0\). This indicates that for H i, the mass dependence of satellites is not critical in establishing trends of gas fractions with \(d\).
In contrast, satellites show decidedly less suppression in \(\Delta f_{H2}\) close to filaments than in \(f_{H2}\). This is particularly surprising since sSFR and \(\Delta\)sSFR both show significant suppression, and the H\({}_{2}\) content is directly responsible for feeding star formation. This shows that the molecular gas fractions have a stronger \(M_{*}\) dependence, which yields more significant trends with \(d\) in \(f_{H2}\) than in \(\Delta f_{H2}\). Combined with the variations in star formation efficiency discussed in §3.3, this leads to a lack of a strong trend in \(\Delta f_{H2}(d)\).
To sum up, we can state that satellites are depleted in H i at a given \(M_{*}\) near filaments at all three redshifts considered, \(z=0\), 1, and 2. In contrast, \(f_{H2}\) at a given \(M_{*}\) is not significantly depleted. Meanwhile, centrals show only weak trends, with a hint of an enhanced gas content close to filaments particularly at higher redshifts. With respect to quenching, it can be seen that the gas depletion for satellites starts at higher redshifts than the star formation suppression (§4.1), in agreement with the recent results of Hasan et al. (2023). This may be expected, as gas removal does not immediately result in reduced star formation; hence we expect the gas depletion to start at earlier epochs.
### Deviations from mass metallicity relation
We likewise investigate the deviations from the mass metallicity relation (MZR). As mentioned in §2.1, the MZR in Simba is seen to be in good agreement with observational results at all three redshifts considered in this study. We compute the MZR by fitting a running median for the mass-metallicity distribution of all the galaxies considered (both centrals and satellites), and then we compute the deviation \(\Delta\)MZR from this fit for each galaxy, interpolated to its \(M_{*}\). As previously discussed in §3.4, we examine the mass-weighted stellar metallicity here.
Figure 10 shows \(\Delta Z_{*}\) with respect to the distance from filaments \(d\). This can be compared to the plot of \(Z_{*}(d)\) shown in Fig. 6, which employs the same colour scheme.
The trends in \(\Delta Z_{*}(d)\) are noticeably different than for \(Z_{*}(d)\). Most obviously, centrals show essentially no trend of \(\Delta Z_{*}\) with \(d\) at any redshift, whereas there was a strong trend of metallicity increasing closer to filaments. The implication is that the trend seen in the MZR owes entirely to the fact that more massive centrals live closer to filaments, and intrinsically there is no effect on centrals' metallicity caused by large-scale structure. Given that the sSFR is significantly impacted, the implication is that the fundamental metallicity relation (Mannucci et al., 2010) is predicted in Simba to depend on location within the cosmic web. We aim to explore a comparison of this prediction to observations in future work.
The satellites, in contrast, continue to show a significant dependency of metallicity with \(d\) at all redshifts. Nonetheless, one sees significant differences comparing the trends in the dashed lines in Fig. 6 versus Fig. 10. For instance, at \(z=0\), \(Z_{*}(d)\) shows a very flat dependence, while \(\Delta Z_{*}(d)\) shows essentially no deviation until reaching very close to the filament centre, and then a strong upturn. The trends at \(z=1,2\) are more similar between these two quantities in terms of the trend, although the overall evolution of the MZR is removed by considering \(\Delta Z_{*}\).
To sum up, our results show that only satellites are more metal-enriched than the mass-metallicity relation predictions in filaments' proximity, at all redshifts considered. A plausible explanation relies on the FMR, given the low levels of star formation at fixed stellar mass for satellites close to filaments (§4.1). We note however that satellites do not show evidence of strong star formation suppression at redshift \(z=2\), which again suggests that the FMR is not invariant with the environment. We speculate that this finding might be caused by the early chemical enrichment expected to primarily influence satellites in dense regions, as reported in e.g. Bahe et al. (2017) and Urban et al. (2017). Meanwhile, centrals at a given \(M_{*}\) show no enhancement close to filaments.
Overall, for redshift \(z=0\) and \(z=1\) we have seen that there is a significant effect on many galaxy properties close to filaments, with the effects being much stronger for satellite galaxies than for centrals. Close to filaments, galaxies (particularly satellites) tend to be less star-forming, less gas-rich, more metal-rich and are more likely to be quenched and dispersion-dominated. In detail, the suppression in gas content (and particularly \(f_{H2}\)) is not as strong as seen for sSFR, and the stellar metallicities are not impacted by location in the cosmic web once the mass dependence of the MZR is taken out. These predictions provide a comprehensive view of the growing impact of filaments on galaxies, which could potentially be compared to observations.
Figure 10: Distance from filaments dependence of the stellar metallicities' deviations from the mass-metallicity relation for centrals (continuous lines) and satellites (dashed lines) for three redshifts: \(z=0\) (purple), \(z=1\) (dark red), \(z=2\) (green). The lines were obtained by binning the results in terms of distance and interpolating the corresponding medians (i.e. running medians). The shaded regions represent the corresponding standard errors in each bin. Satellites close to filaments show significantly higher metallicities than the MZR expectations.
## 5 Simba Feedback Variants
In the next two sections, we investigate the underlying cause(s) of why galaxies near filaments have systematically different properties. We focus on the suppression of star formation since this is the clearest trend, and is correlated with the trends in other properties. As discussed earlier, two possibilities for why the sSFR is lower near filaments are that it owes to shock heating from large-scale structure, or feedback heating from either star formation or AGN. In particular, AGN feedback is circumstantially implicated because the environmental effects become strong at \(z\la 2\) and particularly at \(z\la 1\), which matches up with the era in which AGN feedback increasingly quenches galaxies.
With the Simba suite, we have an opportunity to directly test the impact of feedback mechanisms on galaxy properties in the cosmic web using the feedback variant runs. In this section, we investigate the star formation properties of centrals and satellites at redshift \(z=0\), in the case when individual feedback modes of Simba are turned off as described in §2.1. These runs, done in a \((50h^{-1}\)Mpc\()^{3}\) volume with otherwise the same resolution and input physics as the main \((100h^{-1}\)Mpc\()^{3}\) volume run, exclude feedback modes one at a time.
The main motivation behind this analysis is, on the one hand, to gain a better understanding of the main causes of the quenching found in §3.2 and §4.1, and on the other, to separate the potential cosmic web effects from the feedback ones. As explained in §4, we chose to present the results here as deviations from the corresponding scaling relations with stellar mass, in order to minimise the mass effects on our findings.
Figure 11 presents the deviations from the star formation main sequence with respect to distance from filaments, when feedback is excluded (green lines), with the previous results from §4.1 (Fig. 8) overplotted for reference (purple lines). As before, the satellites are represented by dashed lines and centrals by solid lines. It can be seen that when feedback is not included, star formation suppression is still evident in the filaments' proximity, this effect being stronger for satellites.
In order to gain a better understanding of how/where feedback impacts star formation, Fig. 12 shows the deviations from the SFMS for satellites at redshift \(z=0\) in different feedback scenarios. Note that no errors are shown in this plot, due to how close the lines are, but the approximate size of the errors in these determinations can be inferred from Fig. 11. It can be seen that in all cases, star formation is suppressed for a broader range of distances when some feedback is included, with AGN feedback having the stronger contribution in the filaments' proximity.
It is worth mentioning that when feedback is excluded, the distance range for which star formation is suppressed is shorter than in the full run/partial feedback cases, as only galaxies in the immediate proximity of filaments (i.e. log(\(d\)/cMpc) \(\la\) -1.4) show evidence of star formation suppression. This finding shows that feedback effects do indeed play a role in quenching; however, for galaxies close to filaments a different explanation is needed for the suppressed levels of star formation.
## 6 The Cosmic Web around Massive Halos
As explained in §3, we have only studied galaxies within non-group halos (\(M_{h}\leq 10^{13}M_{\odot}\)) in order to avoid the influence of massive halos on the galaxy properties in the resulting cosmic web. However, the mass and spatial range over which massive halos influence surrounding galaxies remains uncertain. For instance, Gabor & Dave (2015) showed that galaxies can undergo "neighbourhood quenching" out to several virial radii, owing to elongated satellite orbits or being within the shock-heated region around a massive halo that can extend beyond the halo virial radius for quite massive systems.
In this section, we aim to explore how massive halos impact their surroundings in relation to the cosmic web. Specifically, we investigate further the quenching trend found near filaments in the previous sections (§3.2, §4.1), contrasted against the influence of simply being near a massive halo regardless of the relation to a filament. Certainly, we expect the massive halo to be the dominant environmental influence in its vicinity; however, we would like to determine if there is an additional influence from being close to a filament near the massive halo.

Figure 11: Distance from filaments dependence of the specific star formation rates' deviations from the star formation main sequence for centrals (continuous lines) and satellites (dashed lines) at redshift \(z=0\). The green lines represent the no-feedback scenario, while the purple lines resulted from the full Simba runs (same as the purple lines in Fig. 8). The lines were obtained by binning the results in terms of distance and interpolating the corresponding medians (i.e. running medians). The shaded regions represent the corresponding standard errors in each bin. Galaxies have sSFR suppressed near filaments even when feedback is excluded, suggesting that large-scale structure heating is the primary driver of the suppression.

Figure 12: Distance from filaments dependence of the specific star formation rates' deviations from the star formation main sequence for satellites at redshift \(z=0\), in different feedback scenarios: no feedback (green lines) – same as in Fig. 11; no AGN (yellow lines); no jet (blue lines); no X-ray feedback (red lines); full run (purple lines) – same as in Fig. 8 and Fig. 11. The lines were obtained by binning the results in terms of distance and interpolating the corresponding medians (i.e. running medians). Note that no errors are plotted, due to the lines being very close to each other, but the approximate size of the errors in these determinations can be inferred from Fig. 11. Among feedback processes, AGN feedback has the most significant impact on providing extra suppression of sSFR near filaments.
### Star formation near massive halos
Figure 13 shows how the deviations of the specific star formation rate from the SFMS for the galaxies in our sample vary with the distance from the closest massive halo (\(M_{h}>10^{13}M_{\odot}\)), normalised by the virial radius \(R_{\rm vir}\) of each halo, at \(z=0\). We define the virial radius as enclosing 200 times the critical density. In the purple lines, we show the results for the centrals (solid) and satellites (dashed) that lie close to filaments, contrasted against those located far from filaments. We find that the majority of the sSFR suppression owes to proximity to the massive halo itself, but the galaxies close to filaments show a different pattern of sSFR suppression for centrals and a slightly more extended suppression for satellites, so that location within the cosmic web generates an effect over and above that of simply being near a massive halo.

## 7 Comparison with EAGLE and IllustrisTNG
For both EAGLE and IllustrisTNG, we identified filaments by applying DisPerSE to their galaxy catalogs exactly as described in §2.2. Additionally, we used the same mass cuts as for the previous analysis: the lower stellar mass limit comes from the Simba resolution and the upper limit from the halo mass cut (see §2.1). As before (§3.2), for all the galaxies showing a null SFR we set log (sSFR/ yr\({}^{-1}\)) = -14, and we compute the SFMS based on the galaxies with log (sSFR/ yr\({}^{-1}\)) \(>-10.8+0.3z\) (see §4.1). Using this DisPerSE skeleton we investigate the overall trends of all galaxies (centrals and satellites) at redshift \(z=0\) in these two models and compare to our results from Simba.
Figure 14 shows \(\Delta\log\) sSFR versus filamentary distance \(d\) at \(z=0\) for Simba (grey), IllustrisTNG (blue), and EAGLE (orange). Errors on the running medians are indicated by the shaded regions.
In general, all models predict a strong departure towards quenched galaxies when within \(\la 100\) ckpc of a filament. However, the details of trends show significant differences. The decline in Simba is much more gradual than for EAGLE and IllustrisTNG, which show a very rapid transition from all galaxies being essentially on the SFMS to all galaxies being quenched. In Simba, this occurs for the satellites, but less so for the centrals (see Fig. 8). This suggests that the typical sSFR or quenched fractions in central galaxies \(\sim 100\) ckpc away from filaments may be a good discriminator between models.
Figure 15 presents the metallicity trends, investigating as before the deviation of the stellar metallicity from the stellar MZR with distance from the nearest filament. In this case, the differences are only strongly visible out to a few tens of kpc from the filament, and Simba and IllustrisTNG show substantial increases in \(Z_{*}\) while EAGLE shows almost none\({}^{1}\). It is further interesting that the deviations from the global mean relations extend much farther out in sSFR than in \(Z_{*}\), showing that the two quantities are not simply inversely correlated via some sort of stellar fundamental metallicity relation, but rather have a more complex relationship. These predictions provide clear testable ways to distinguish between models if stellar metallicities can be measured for such samples.
Footnote 1: We note that the underlying MZR in these three models are quite different in both shape and amplitude, owing to differences in assumed yields, feedback efficiencies, and metal loading. It is beyond the scope of this work to examine in detail the origin of these variations; here we aim to mitigate the effects of such differences by considering only the deviations from the MZR self-consistently computed within each simulation.
To sum up, overall we see general agreement between the trends resulting from EAGLE, IllustrisTNG and Simba in sSFR and \(Z_{*}\). Specifically, we find that galaxies close to filaments are less star-forming and more metal-enriched, though only satellites are expected to lie above the MZR predictions in this region (see §4.3). These results strengthen the hypothesis of more quenched galaxies in the filaments' proximity, as noticed in the previous sections (§3, §4 and §5). Nonetheless, there are some distinct differences that could potentially be testable with present-day or upcoming large spectroscopic surveys. In the future, we will investigate how these trends are diluted when confronted with observational limitations such as redshift space distortions.
## 8 Summary
Using the Simba simulations and the cosmic web extractor DisPerSE, we have investigated how galaxy properties depend on their location with respect to the filament spines of the cosmic web. We have done so at various redshifts from \(z=2\to 0\), and have examined how these trends are governed by specific modes of feedback as implemented in Simba. We have specifically excluded the cosmic web within halos of \(M_{h}>10^{13}M_{\odot}\) so as to focus on environments outside of galaxy groups. We also compared Simba's predictions with EAGLE and IllustrisTNG results derived using the same methodology and the same mass cuts. The main findings of this work are summarised as follows:
* Central galaxies close to filaments have typically higher stellar mass and are surrounded by more satellites than those far away from filaments, similarly at all redshifts considered \(z=0,1,2\) (Figs. 2 and 3), in agreement with various previous literature results, (e.g. Chen et al., 2017; Malavasi et al., 2017; Kraljic et al., 2018). It is important to control for these variations in mass and halo occupancy in order to isolate the effects owing to the cosmic web environment. We do so by considering centrals and satellites separately, and by either normalising to stellar mass or by computing deviations of quantities at a given \(M_{*}\). This represents a novel aspect of this study since disentangling between centrals and satellites is very challenging in observational studies.
* At redshifts \(z=0\) and 1, the specific star formation rate (sSFR) is suppressed for satellites and centrals close to filaments, and increases with distance. This trend has been reported before (e.g. Kuutma et al., 2017; Poudel et al., 2017). We additionally note that this effect is considerably stronger for satellites and at later epochs, showing that satellites are more strongly impacted by the cosmic web environment over time. For instance, at redshift \(z=0\) satellites are typically fully quenched within several hundred kpc of a filament and do not converge to the sSFRs of centrals until one reaches \(\sim 10\) cMpc from filaments (Fig. 4). This shows that pre-processing of satellites is already prevalent in filaments at \(z\la 1\), prior to reaching group environments. The effects on centrals are also noticeable, particularly within \(\la 100\) ckpc of filaments. One can thus regard 100 ckpc as a rough scale over which the filamentary environment impacts star formation in galaxies.
* The cold gas fractions, characterised in this study via \(M_{\rm HI}/M_{*}\) and \(M_{\rm H2}/M_{*}\), show more subtle and challenging trends with the distance from the closest filaments (Fig. 5), especially for centrals.
Figure 15: Distance from filaments dependence of the stellar metallicities’ deviations from the mass-metallicity relation for all galaxies computed in EAGLE (orange line) and IllustrisTNG (blue line). The lines were obtained by binning the results in terms of distance and interpolating the corresponding medians. All the determinations are made for redshift \(z=0\). Galaxies in all simulations are more metal-enriched (mainly satellites) in the filaments proximity, but IllustrisTNG and Simba show strong increases close to filaments, while EAGLE shows only a modest increase.
Broadly, cold gas is suppressed towards filament spines, increasingly so towards lower redshifts as with sSFR, in qualitative agreement with Crone Odekon et al. (2018) and disagreement with Kleiner et al. (2017). However, centrals can be more or less suppressed in cold gas than satellites depending on distance. One aspect that may be confusing is that it can be difficult to associate H i in particular with any given galaxy within a denser environment, as H i arises in relatively diffuse gas. Hence a proper investigation of H i contents may require creating mock observations for a particular setup and conducting side-by-side analyses with data, which is beyond the scope here but will be feasible using upcoming multi-wavelength radio surveys.
* The stellar metallicity is higher close to filaments for both centrals and satellites, at all three redshifts considered (Fig. 6), in agreement with Winkel et al. (2021); Donnan et al. (2022). However, we additionally note that the trend with distance is much steeper at \(z=2,1\), and is diluted by \(z=0\). The trends for centrals and satellites are not markedly different.
* The quenched fraction and the elliptical fractions are both anti-correlated with distance to filament, for both satellites and centrals, generally tracking the trends for sSFR as expected. The quenched fraction trend fades out at redshift \(z=2\), while the elliptical fraction trend is more consistent at all three redshifts considered \(z=0,1\), and \(2\) (Fig. 7). At a given distance, satellites at \(z=0,1\) tend to be more quenched and elliptical. This is in spite of the fact that they are lower mass than the centrals (we have not controlled for stellar mass in this plot). Hence cosmic web environment impacts both colour and morphology.
* The trends noted above for sSFR are broadly similar when considering deviations from mean scaling relations rather than the quantities themselves (Fig. 8). This gives us confidence that the trends seen previously did not owe simply to trends with \(M_{*}\), and are instead genuinely due to being close to a filament. However, there are more significant differences in the case of metallicity \(Z_{*}\); unlike for the overall metallicity \(Z_{*}\) for which we saw a clear increase towards filaments, once we control for the \(M_{*}\) dependence via the MZR, we now see no deviation from the mean MZR with distance for central galaxies, while satellites show strong deviations from the mean MZR only within \(\,\sim\,\)100 ckpc (Fig. 10). Also, the trends in \(H_{2}\) fraction with distance are not very strong when considering deviations from the mean \(M_{H2}-M_{*}\) relation, indicating that the reduction in sSFRs close to filaments must owe primarily to a reduction in the star formation efficiencies of those galaxies (Fig. 9).
* We investigate whether the trends in sSFR owe to feedback or cosmic web growth by comparing amongst identical Simba runs with individual feedback modes turned off. We find that the predominant effect owes to the cosmic web, presumably via shock heating the gas to retard star formation near filaments. However, the effects of feedback are not negligible; they add to the effects of large-scale structure and increase the range out to which satellites are quenched at \(z=0\) by a factor of \(\sim 2\) (Fig. 11). The bulk of this extra suppression comes from AGN feedback; star formation feedback has a minor impact (Fig. 12).
* While we have mostly excluded massive halos from this analysis, we examine whether the impact of filaments is still noticeable around halos with \(M_{h}>10^{13}\,M_{\odot}\) by comparing trends in sSFR vs. halocentric distance for galaxies near filaments and far from filaments. We find that the majority of the suppression of sSFR owes to the fact that these galaxies live around massive halos. However, there are significant differences for the galaxies close to filaments near the massive halos; they show a different pattern of sSFR suppression for centrals, and slightly more extended suppression for satellites (Fig. 13). Hence location within the cosmic web generates an effect over and above that arising simply due to being close to a massive halo.
* We compare our Simba results to those from the EAGLE and IllustrisTNG simulations at \(z=0\), focusing on deviations from the mean sSFR and \(Z_{*}\) relations with mass, and considering centrals and satellites together. In general, all models show similar levels of sSFR suppression close to filaments, although Simba's trend is more gradual while EAGLE and IllustrisTNG show a very sharp drop in median sSFR at \(\sim 100\) ckpc (Fig. 14). Meanwhile, \(Z_{*}\) shows a significant increase very close to filaments for IllustrisTNG and Simba, but such a trend is not seen in EAGLE (Fig. 15). These highlight possible avenues by which galaxy statistics relative to the cosmic web may provide discriminatory power between forefront simulations.
The overall distribution of galaxies with respect to filaments, specifically high-mass centrals with their accompanying low-mass satellites in the filaments' proximity, can be explained by the environmental effects on the halo mass function (e.g. Alam et al., 2019). The satellites' suppressed star formation, enriched metallicity and suppressed gas content trends at fixed stellar mass (§4.1, §4.3 and §4.2) are consistent with a scenario where satellites close to filaments are quenched via H i reservoir depletion and lowered efficiency in converting \(H_{2}\) into stars, putatively owing to an increase in shock-heated gas near filaments. This also results in higher metallicities owing to the lack of infalling (relatively) pristine gas. The corresponding trends are considerably weaker for centrals, indicating that the cosmic web effects are less efficient in this case, possibly because centrals live within denser gas near the bottom of their halos' potential wells.
Star formation suppression starts around \(z\sim 2\): environmental trends are not evident at that redshift and, if anything, show a reversed trend in which all galaxies close to filaments have slightly higher sSFRs. At earlier epochs, we find results very similar to those at \(z=2\), so we did not explicitly show them. Given that gas depletion and star formation suppression are mostly present around filaments even when feedback is excluded (§5), we argue that the interactions between satellites and the hot gas of the cosmic web cause quenching via a combination of gas stripping, on shorter timescales, and starvation, on longer timescales. Nonetheless, feedback has an additional non-negligible impact, providing a way to constrain models of (particularly) AGN feedback.
Overall, we find that the cosmic web plays a non-negligible role in shaping galaxy properties, though these effects are secondary to, i.e. weaker than, the mass effects. Our results generally agree with similar recent studies (e.g. Kraljic et al., 2018; Malavasi et al., 2022; Bhambhani et al., 2022) and provide new perspectives by clearly separating centrals and satellites, controlling for stellar mass, considering only galaxies within low-mass halos and investigating the impact of various feedback scenarios. Observational results are required to test these predictions and potentially identify areas for improvements in simulations. Identifying the cosmic web ideally requires large-area spectroscopic surveys (although it may be possible to extract signals from the 2-D cosmic web from photo-\(z\)'s), which exist now with SDSS and GAMA but will soon be greatly boosted with new facilities like Euclid (Laureijs et al., 2011; Refregier, 2009; Cimatti et al., 2009), WEAVE (Dalton et al., 2012), PFS (Takada et al., 2014), 4MOST (de Jong et al., 2019), and DESI (DESI Collaboration et al., 2016). Combining these with multi-wavelength surveys to characterise the various physical properties of galaxies, provides an exciting new frontier to explore how galaxy evolution models can be constrained using the cosmic web.
## Acknowledgements
The authors would like to thank Katja Fahrion for helpful discussions and the developers of DisPerSE and Carsar for making their codes public. This work was supported by the Science and Technology Facilities Council (STFC). For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
## Data Availability
The Simba simulation data and Caesar galaxy catalogues are publicly available at [https://simba.roe.ac.uk](https://simba.roe.ac.uk). The derived data and DisPerSE outputs underlying this article will be shared on reasonable request to the corresponding author.
|
2306.17472 | Knowledge Base Completion for Long-Tail Entities | Despite their impressive scale, knowledge bases (KBs), such as Wikidata,
still contain significant gaps. Language models (LMs) have been proposed as a
source for filling these gaps. However, prior works have focused on prominent
entities with rich coverage by LMs, neglecting the crucial case of long-tail
entities. In this paper, we present a novel method for LM-based-KB completion
that is specifically geared for facts about long-tail entities. The method
leverages two different LMs in two stages: for candidate retrieval and for
candidate verification and disambiguation. To evaluate our method and various
baselines, we introduce a novel dataset, called MALT, rooted in Wikidata. Our
method outperforms all baselines in F1, with major gains especially in recall. | Lihu Chen, Simon Razniewski, Gerhard Weikum | 2023-06-30T08:37:55Z | http://arxiv.org/abs/2306.17472v1 | # Knowledge Base Completion for Long-Tail Entities
###### Abstract
Despite their impressive scale, knowledge bases (KBs), such as Wikidata, still contain significant gaps. Language models (LMs) have been proposed as a source for filling these gaps. However, prior works have focused on prominent entities with rich coverage by LMs, neglecting the crucial case of long-tail entities. In this paper, we present a novel method for LM-based-KB completion that is specifically geared for facts about long-tail entities. The method leverages two different LMs in two stages: for candidate retrieval and for candidate verification and disambiguation. To evaluate our method and various baselines, we introduce a novel dataset, called MALT, rooted in Wikidata. Our method outperforms all baselines in F1, with major gains especially in recall.
## 1 Introduction
**Motivation and Problem.** Knowledge base completion (KBC) is crucial to continuously enhance the scope and scale of large knowledge graphs (KGs). It is often cast into a link prediction task: infer an O(bject) argument for a given S(ubject)-P(redicate) pair. However, the task is focused on the KG itself as the only input, and thus largely bound to predict SPO facts that are also derivable by simple logical rules for inverse predicates, transitive predicates etc. Akrami et al. (2020); Sun et al. (2020). To obtain truly new facts, more recent methods tap into large language models (LMs) that are learned from huge text collections, including all Wikipedia articles, news articles and more. The most promising approaches to this end generate cloze questions for knowledge acquisition and ask LMs to generate answers Petroni et al. (2019). The LM input is often augmented with carefully crafted short prompts (e.g., a relevant Wikipedia paragraph) Shin et al. (2020); Jiang et al. (2020); Qin and Eisner (2021).
However, notwithstanding great success at question answering for humans, the LM-based approach falls short of meeting the high quality requirements for enriching a KG with crisp SPO facts. Even if most answers are correct, there is a non-negligible fraction of false or even "hallucinated" outputs by the LM, and large KGs, like Wikidata Vrandecic and Krotzsch (2014), cannot tolerate error rates above 10 percent. Moreover, even correct answers are not properly canonicalized: they are surface phrases and not unique entities in the KG. These problems are further aggravated when the to-be-inferred O arguments are _long-tail_ entities, with very few facts in Wikidata. Here, we call an entity _long-tail_ when it has fewer than 14 triples in Wikidata, because nearly 50% of the Wikidata entities have fewer than 14 triples. These are exactly the cases that call for KBC, and this paper addresses them.
As an example, consider the late Canadian singer _Lhasa de Sela_. Wikidata solely covers basic biographic facts and selected awards, nothing about her music. However, text sources such as her Wikipedia article or other web pages provide expressive statements about her albums, songs, collaborations etc. For example, we would like to spot the facts that \(\langle\)_Lhasa de Sela, collaboratedWith, Bratsch_\(\rangle\) and \(\langle\)_Lhasa de Sela, performedSong, Anyone and Everyone_\(\rangle\). Note that capturing these as SPO facts faces the challenge of having to capture and disambiguate multi-word names (_"Lhasa de Sela"_) and common-noun phrases (_"anyone and everyone"_). When trying to extract such statements via cloze questions or more refined prompts to LMs such as GPT-3 Brown et al. (2020) or chatGPT, the outputs would often be _"Lhasa"_, which is highly ambiguous, or _"everyone"_, which is incomplete and impossible to interpret.
**Approach and Contribution.** This paper devises a novel method for knowledge base completion (KBC), specifically geared to cope with long-tail
entities. Although we will present experimental comparisons to prior works on relation extraction from text, we believe that ours is among the first works to successfully cope with the challenge of noise and ambiguity in the long tail.
Our method leverages Transformer-based language models in a new way. Most notably, we employ two different LMs in a two-stage pipeline, as shown in Figure 1. The first stage generates candidate answers to input prompts and gives cues to retrieve informative sentences from Wikipedia and other sources. The second stage validates (or falsifies) the candidates and disambiguates the retained answer strings onto entities in the underlying KG (e.g., mapping _"Lhasa"_ to Lhasa de Sela, and "Bratsch" to Bratsch (band)).
The novel contributions of this work are the following:
* the first KBC method that leverages LMs to cope with long-tail entities;
* a new dataset, called MALT, to benchmark methods with long-tail entities;
* experimental comparisons with baselines, using the MALT data.
Our code and data are available on both GitHub1 and mpi-inf.mpg.de2.
Footnote 1: [https://github.com/tigerchen52/long_tail_kbc](https://github.com/tigerchen52/long_tail_kbc)
Footnote 2: [https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/knowledge-base-recall/lm4kbc](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/knowledge-base-recall/lm4kbc)
## 2 Related Work
**Knowledge Base Completion.** This task, KBC for short, has mostly been tackled as a form of link prediction: given a head entity S and a relation P, predict the respective tail entity O, using the KG as sole input. A rich suite of methods have been developed for this task, typically based on latent embeddings computed via matrix or tensor factorization, neural auto-encoders, graph neural networks, and more (see, e.g., surveys (Chen et al., 2020; Ji et al., 2022) and original references given there). However, the premise of inferring missing facts from the KG itself is a fundamental limitation. Indeed, several studies have found that many facts predicted via the above KBC techniques are fairly obvious and could also be derived by simple rules for transitivity, inverse relations etc. (Akrami et al., 2020; Sun et al., 2020).
**Language Models as Knowledge Bases.** The LAMA project (Petroni et al., 2019) posed the hypothesis that probing LMs with cloze questions is a powerful way of extracting structured facts from the latently represented corpus on which the LM was trained. A suite of follow-up works pursued this theme further and devised improvements and extensions (e.g., (Heinzerling and Inui, 2021; Jiang et al., 2020; Kassner and Schutze, 2020; Roberts et al., 2020; Shin et al., 2020; Zhong et al., 2021)). This gave rise to the notion of "prompt engineering" for all kinds of NLP tasks (Liu et al., 2021). In parallel, other works studied biases and limitations of the LM-as-KB paradigm (e.g., (Cao et al., 2021; Elazar et al., 2021; Razniewski et al., 2021; Jiang et al., 2020)). In this work, we investigate the feasibility of leveraging LMs to complete real-world KBs, and mainly focus on long-tail facts.
Figure 1: The framework of our two-stage KBC method.
## 3 Two-Stage KBC Method
We propose an unsupervised method for KBC that taps into LMs as a latent source of facts that cannot be inferred from the KG itself. Our method operates in two stages:
1. For a given S-P pair, generate candidate facts \(\langle\)S,P,"O"\(\rangle\) where "O" is an entity name and possibly a multi-word phrase.
2. Corroborate the candidates, retaining the ones with high confidence of being correct, and disambiguate the "O" argument into a KG entity.
**Candidate Generation.** We devise a generic prompt template for cloze questions, in order to infer an "O" answer for a given S-P pair. This merely requires a simple verbalizer for the relation P:
"\(\langle\)S-type\(\rangle\) S \(\langle\)P-verb\(\rangle\) which \(\langle\)O-type\(\rangle\)?"
(e.g., "the song \(\langle\)S\(\rangle\) is performed by which person?" for the predicate performer). The S-type and O-type are easily available by the predicate type-signature from the KG schema. As additional context we feed a Wikipedia sentence from the S entity's article into the LM. This is repeated for all sentences in the respective Wikipedia article. Specifically, we employ the SpanBERT language model Joshi et al. (2020), which is fine-tuned on on the SQuAD 2.0 Rajpurkar et al. (2018) 3. Note that all of this is completely unsupervised: there is no need for any fine-tuning of the LM, and there is no prompt engineering.
Footnote 3: [https://huggingface.co/mrm8488/](https://huggingface.co/mrm8488/)
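In code, the candidate-generation stage can be sketched as follows. This is only an illustration: `QA_MODEL` is a placeholder for a SpanBERT checkpoint fine-tuned on SQuAD 2.0 (not necessarily the exact one referenced in footnote 3), and the list of article sentences is assumed to be retrieved beforehand.

```python
from transformers import pipeline

QA_MODEL = "a-spanbert-squad2-checkpoint"  # placeholder name, see footnote 3
qa = pipeline("question-answering", model=QA_MODEL)

def generate_candidates(subject, s_type, p_verb, o_type, article_sentences, k=20):
    """Ask the verbalized cloze question against every sentence of the subject's
    Wikipedia article; return the k best-scoring (answer string, sentence, score) triples."""
    question = f"the {s_type} {subject} {p_verb} which {o_type}?"
    candidates = []
    for sentence in article_sentences:
        out = qa(question=question, context=sentence, handle_impossible_answer=True)
        if out["answer"]:  # SQuAD 2.0 models may return an empty answer ("no answer")
            candidates.append((out["answer"], sentence, out["score"]))
    return sorted(candidates, key=lambda c: c[2], reverse=True)[:k]
```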
**Candidate Corroboration and Canonicalization.** The first stage yields a scored list of candidates in the form of pairs ("O", \(s\)) with an entity name and a Wikipedia sentence \(s\). In the corroboration stage, the candidates are fed into a second LM for re-ranking and pruning false positives. Specifically, we employ the generative entity disambiguation model GENRE De Cao et al. (2020), which in turn is based on BART Lewis et al. (2020) and fine-tuned on BLINK Wu et al. (2020) and AIDA Hoffart et al. (2011). We construct the input by the template:
"\(\langle\)S-type\(\rangle\) S \(\langle\)P-verb\(\rangle\) [ENT] this \(\langle\)O-type\(\rangle\) [ENT]" (e.g., "the song Anyone and Everyone is performed by [ENT] this person [ENT]"), contextualized with the sentence \(s\). GENRE generates a list of answer entities \(\epsilon\), taken from an underlying KG, like Wiki-data, that is, no longer just surface names. If the candidate name "O" approximately matches a generated \(\epsilon\) (considering alias names provided by the KG), then the entire fact, now properly canonicalized, is kept. Since we may still retain multiple facts for the same S-P input and cannot perfectly prevent false positives, the inferred facts are scored by an average of the scores from stage 1 and stage 2.
## 4 MALT: New Dataset for Benchmarking
Benchmarks for KBC and LM-as-KB cover facts for all kinds of entities, but tend to focus on prominent ones with frequent mentions. Likewise, benchmarks for relation extraction (RE) from text, most notably TACRED Zhang et al. (2017), DocRED Yao et al. (2019) and LAMA Petroni et al. (2019), do not reflect the difficulty of coping with long-tail entities and the amplified issue of surface-name ambiguity (see Table 2). Therefore, we developed a new dataset with emphasis on the long-tail challenge, called MALT (for "**M**ulti-token, **A**mbiguous, **L**ong-**T**ailed facts").

| **Subject Type** | **Relation** | **Wikidata ID** | **Triples** | **multi-token (%)** | **ambiguous (%)** | **long-tail (%)** |
| --- | --- | --- | --- | --- | --- | --- |
| Business | founded by | P112 | 5720 | 97.3 | 21.1 | 91.2 |
| MusicComposition | performer | P175 | 1876 | 91.1 | 62.0 | 47.3 |
| MusicComposition | composer | P86 | 3016 | 98.2 | 59.8 | 88.5 |
| Human | place of birth | P19 | 13416 | 23.6 | 81.6 | 99.3 |
| Human | place of death | P20 | 7247 | 25.9 | 84.8 | 99.6 |
| Human | employer | P108 | 3503 | 96.5 | 37.4 | 81.4 |
| Human | educated at | P69 | 13386 | 99.6 | 38.7 | 72.2 |
| Human | residence | P551 | 886 | 32.1 | 87.1 | 96.4 |
| Micro-Avg | - | - | - | 65.3 | 58.6 | 87.0 |

Table 1: Statistics for MALT dataset.

| Dataset | SPO triples | Long-tail fraction |
| --- | --- | --- |
| DocRED (2019) | 63K | 32.0 % |
| LAMA-TREx (2019) | 34K | 39.6 % |
| X-FACTR (2020a) | 46K | 49.6 % |
| MALT (Ours) | 49K | 87.0 % |

Table 2: Estimated fractions of long-tail S entities across different datasets, where long-tail means at most 13 triples in Wikidata. The estimations are based on 200 samples across 8 relations.
To construct the dataset, we focus on three types of entities: Business, MusicComposition and Human, richly covered in Wikidata and often involving long-tail entities. We randomly select subjects from the respective relations in Wikidata, and keep all objects for them. We select a total of 8 predicates for the 3 types; Table 1 lists these and gives statistics.
The dataset contains 65.3% triple facts where the O entity is a multi-word phrase, and 58.6% ambiguous facts where the S or O entities share identical alias names in Wikidata. For example, the two ambiguous entities _"Birmingham, West Midlands (Q2256)"_ and _"Birmingham, Alabama (Q79867)"_ have the same Label value _"Birmingham"_. In total, 87.0% of the sample facts have S entities in the long tail, where we define long-tail entities to have at most 13 Wikidata triples.
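The three labels can be computed from a Wikidata dump with a few simple checks, as in the sketch below. The helper inputs (label map, alias table, per-entity triple counts) are assumptions about preprocessing; the thresholds follow the definitions above.

```python
def fact_flags(s_id, o_id, labels, aliases, triple_counts):
    """labels: entity id -> label string; aliases: surface name -> set of entity ids sharing it;
    triple_counts: entity id -> number of Wikidata triples with that entity as subject."""
    multi_token = len(labels[o_id].split()) > 1
    # a fact is ambiguous if the subject's or object's name is shared by another entity
    ambiguous = any(len(aliases[labels[e]]) > 1 for e in (s_id, o_id))
    long_tail = triple_counts[s_id] <= 13   # at most 13 triples -> long-tail subject
    return multi_token, ambiguous, long_tail
```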
## 5 Experimental Evaluation
**Baselines.** To the best of our knowledge, there is no prior work on KBC or LM-as-KB that is specifically geared for coping with long-tail entities. As a proxy, we thus compare to several state-of-the-art methods for relation extraction (RE) from text. At test time, these methods receive the retrieved Wikipedia sentences for a ground-truth SPO fact and the SP pair as input, and are run to extract the withheld O argument (sentence-level extraction).
We compare to the following baselines:
* _NER + RC (CNN)_ uses TNER (Ushio and Camacho-Collados, 2022) to recognize entity mentions in context sentences, followed by a CNN-based relation classifier Nguyen and Grishman (2015). The RC component is trained on REBEL (Cabot and Navigli, 2021).
\begin{table}
\begin{tabular}{c c|c c|c c|c c|c c|c c|c c} \hline \hline \multicolumn{1}{c}{**Relation**} & \multicolumn{1}{c}{**ID**} & \multicolumn{3}{c|}{**NER + RC (CNN)**} & \multicolumn{3}{c|}{**REDEL**} & \multicolumn{3}{c|}{**KnowGL**} & \multicolumn{3}{c}{**GenIE**} & \multicolumn{3}{c}{**Ours**} \\ & & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 \\ \hline founded by & P112 & 13.5 & 21.2 & 16.5 & 42.8 & 27.3 & 33.3 & 0.0 & 0.0 & 0.0 & 59.1 & 7.9 & 13.9 & 57.0 & 44.5 & 50.0 \\ \hline performer & P175 & 5.2 & 10.1 & 6.9 & 25.3 & 28.1 & 26.6 & 0.0 & 0.0 & 0.0 & 47.3 & 19.1 & 27.2 & 42.7 & 15.6 & 22.9 \\ composer & P86 & 17.3 & 20.5 & 18.8 & 37.9 & 27.7 & 32.0 & 37.6 & 25.7 & 30.6 & 70.0 & 16.6 & 26.8 & 67.3 & 65.6 & 66.4 \\ \hline place of birth & P19 & 4.7 & 4.7 & 4.7 & 4.9 & 20.5 & 28.9 & 49.4 & 23.4 & 31.7 & 64.1 & 9.2 & 16.1 & 47.9 & 61.4 & 53.8 \\ place of death & P20 & 12.5 & 4.7 & 6.8 & 25.6 & 11.8 & 19.2 & 66.6 & 9.4 & 16.5 & 47.5 & 3.0 & 5.6 & 46.6 & 48.2 & 47.4 \\ employer & P108 & 8.7 & 4.9 & 6.3 & 50.0 & 4.9 & 8.8 & 0.0 & 0.0 & 0.0 & 54.0 & 0.1 & 0.2 & 30.0 & 29.3 & 29.6 \\ educated at & P69 & 8.9 & 8.4 & 7.7 & 15.4 & 1.1 & 2.1 & 22.2 & 1.1 & 2.2 & 46.7 & 0.1 & 0.2 & 42.9 & 39.5 & 41.2 \\ residence & P551 & 0.0 & 0.0 & 0.0 & 33.3 & 8.3 & 13.3 & 33.3 & 8.3 & 13.3 & 44.4 & 0.2 & 0.4 & 19.2 & 41.7 & 26.3 \\ \hline Micro-Avg & - & 26.7 & 13.7 & 13.7 & 38.3 & 16.2 & 20.6 & 26.2 & 8.5 & 11.8 & 52.2 & 6.9 & 11.2 & 44.2 & 43.2 & 42.2 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance comparison on MALT data.
| Relation | ID | Candidate Generation | Corroboration and Canonicalization |
| --- | --- | --- | --- |
| founded by | P112 | the business [x] is founded by which person? | the business [x] is founded by [ENT] this person [ENT] |
| performer | P175 | the song [x] is performed by which person? | the song [x] is performed by [ENT] this person [ENT] |
| composer | P86 | the song [x] is composed by which person? | the song [x] is composed by [ENT] this person [ENT] |
| place of birth | P19 | the person [x] was born in which place? | the person [x] was born in [ENT] this place [ENT] |
| place of death | P20 | the person [x] died in which place? | the person [x] died in [ENT] this place [ENT] |
| employer | P108 | the person [x] worked in which place? | the person [x] worked in [ENT] this place [ENT] |
| educated at | P69 | the person [x] graduated from which place? | the person [x] graduated from [ENT] this place [ENT] |
| residence | P551 | the person [x] lived in which place? | the person [x] lived in [ENT] this place [ENT] |

Table 3: Prompts for relations in MALT. [x] is a placeholder for the subject entity and [ENT] is a special token for the mention.
* _REBEL_ Cabot and Navigli (2021) is an end-to-end relation extraction model for more than 200 different relation types in Wikidata.
* _KnowGL_ Rossiello et al. (2023) is an open-source system that can convert text into a set of Wikidata statements.
* _GenIE_ Josifoski et al. (2022) is an end-to-end closed triplet extraction model, which is trained on the REBEL dataset Cabot and Navigli (2021). GenIE uses Wikidata as the target KB and can extract 5,891,959 entities and 857 relations.
**Setup.** There are two hyper-parameters for all competitors, the number of candidates \(k\) (or the "top-k" hyper-parameter for baseline models) and the threshold \(\alpha\) for cutting off the extracted triples. For our framework, \(k\) is 20 for all competitors and the threshold \(\alpha\) is learned by using a hold-out (20%) validation set. We report results for precision, recall and F1, with the original Wikidata triples as ground truth. Although MALT provides canonicalized entities, we consider the extracted O to be a correct prediction as long as it appears in the alias table because some baselines themselves cannot do disambiguation.
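The cut-off threshold \(\alpha\) can be picked with a simple grid search over the hold-out split. The sketch below is illustrative only: the names `candidates`, `gold`, and `tune_alpha` are hypothetical and not taken from the released code; it merely shows one way to select the value of \(\alpha\) that maximizes micro-F1 on the validation set.

```python
# Illustrative sketch: choosing the cut-off threshold alpha on a held-out
# validation set by maximizing micro-F1. Names are hypothetical, not from the
# authors' code. `candidates` holds (subject, object, score) triples and
# `gold` is a set of withheld (subject, object) pairs from Wikidata.
def micro_f1(predicted, gold):
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def tune_alpha(candidates, gold, grid=None):
    grid = grid or [i / 100 for i in range(0, 101)]
    best_alpha, best_f1 = 0.0, -1.0
    for alpha in grid:
        predicted = {(s, o) for (s, o, score) in candidates if score >= alpha}
        f1 = micro_f1(predicted, gold)
        if f1 > best_f1:
            best_alpha, best_f1 = alpha, f1
    return best_alpha
```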
Our method is completely unsupervised, and the only additional cost is prompt design. We manually design one template for each relation (as shown in Table 3).
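As an illustration of how such templates can be instantiated, the sketch below fills the `[x]` placeholder of a few Table 3 prompts and asks a language model for top-k object candidates. The `query_lm` callable is only a stand-in (here a dummy lambda) for whichever LM performs candidate generation; it is not the authors' API, and the returned candidate and score are placeholders.

```python
# Minimal sketch of instantiating the per-relation candidate-generation
# prompts of Table 3. `query_lm` is an assumed stand-in, not the actual
# language-model interface used in the paper.
PROMPTS = {
    "P112": "the business [x] is founded by which person?",
    "P86":  "the song [x] is composed by which person?",
    "P19":  "the person [x] was born in which place?",
}

def build_prompt(relation_id: str, subject: str) -> str:
    return PROMPTS[relation_id].replace("[x]", subject)

def generate_candidates(relation_id, subject, query_lm, k=20):
    prompt = build_prompt(relation_id, subject)
    # query_lm(prompt, k) is assumed to return up to k (candidate, score) pairs.
    return query_lm(prompt, k)

if __name__ == "__main__":
    dummy_lm = lambda prompt, k: [("Alice Example", 0.9)][:k]  # placeholder LM
    print(generate_candidates("P112", "Example Corp", dummy_lm))
```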
**Results.** Table 4 shows the results from this experimental comparison. We observe that the GenIE baseline does well in terms of precision, but has very poor recall. In contrast, our two-stage method achieves both good precision and recall. Regarding precision, it is almost as good as GenIE (44% vs. 52%); regarding recall, it outperforms GenIE and the other baselines by a large margin (43% vs. 7%). Our method still leaves substantial room for further improvement, underlining the challenging nature of inferring facts for long-tail entities. We think of our method as a building block to aid a human curator with judicious suggestions for facts that would augment the KG.
Many of the inferred SPO facts are indeed completely missing in Wikidata; so they are also not in the withheld ground-truth samples for the above evaluation. To estimate how many facts we could potentially add to the KG and how good our automatically inferred predictions are, we picked 25 samples for each relation, a total of 250 fact candidates, and asked human annotators to assess their correctness. Over all relations, this achieved an average precision of 61%. For the relation educated at, our method even has 76% precision, and this is a case where the KG has enormous gaps: out of 10M sampled entities of type Human, only 65% have facts for this relation. For this case, our KBC method collected 1.2M candidate facts, showing the great potential towards closing these gaps.
## 6 Conclusion
We highlighted the challenge of knowledge base completion (KBC) for long-tail entities, introduced the MALT dataset for experimental comparisons and fostering further research, and presented a completely unsupervised method for augmenting knowledge bases with long-tail facts. Our method operates in two stages, candidate generation and candidate corroboration (incl. disambiguation), and leverages two different LMs in a complementary way. Experimental results show substantial gains over state-of-the-art baselines, and highlight the benefits of our two-stage design with two LMs complementing each other.
## Limitations
Although our dataset presents a significant advancement over previous benchmarks, it is still limited in that it only contains entities already known to Wikidata. One could argue that the very long tail is what is even beyond Wikidata.
In the second stage, our method harnesses an LM pre-trained for entity disambiguation. Therefore, our methodology, in its current form, cannot predict objects that are not already known to that LM and its underlying KB.
## Acknowledgements
This work was partially funded by ANR-20-CHIA-0012-01 ("NoRDF"). We thank Fabian M. Suchanek and Gael Varoquaux for their helpful feedback.
|
2309.11686 | SE-PEF: a Resource for Personalized Expert Finding | The problem of personalization in Information Retrieval has been under study
for a long time. A well-known issue related to this task is the lack of
publicly available datasets that can support a comparative evaluation of
personalized search systems. To contribute in this respect, this paper
introduces SE-PEF (StackExchange - Personalized Expert Finding), a resource
useful for designing and evaluating personalized models related to the task of
Expert Finding (EF). The contributed dataset includes more than 250k queries
and 565k answers from 3 306 experts, which are annotated with a rich set of
features modeling the social interactions among the users of a popular cQA
platform. The results of the preliminary experiments conducted show the
appropriateness of SE-PEF to evaluate and to train effective EF models. | Pranav Kasela, Gabriella Pasi, Raffaele Perego | 2023-09-20T23:40:32Z | http://arxiv.org/abs/2309.11686v2 | # SE-PEF: a Resource for Personalized Expert Finding
###### Abstract.
The problem of personalization in Information Retrieval has been under study for a long time. A well-known issue related to this task is the lack of publicly available datasets to support a comparative evaluation of personalized search systems. To contribute in this respect, this paper introduces SE-PEF (StackExchange - Personalized Expert Finding), a resource useful for designing and evaluating personalized models related to the Expert Finding (EF) task. The contributed dataset includes more than 250k queries and 565k answers from 3 306 experts, which are annotated with a rich set of features modeling the social interactions among the users of a popular cQA platform. The results of the preliminary experiments conducted show the appropriateness of SE-PEF to evaluate and to train effective EF models.
Question Answering, Expert Finding, User Model, Personalization. 2023
Footnote 2: [https://doi.org/10.1145/3624918.3625335](https://doi.org/10.1145/3624918.3625335)
## 2. The SE-PEF Dataset
The dataset proposed in (Kasella et al., 2017) is based on StackExchange1 and available under a CC BY-SA 4.0 license. It comprises questions and answers from 50 different stackexchange communities, written between _2008-09-10_ and _2022-09-25_. There are around 1.1 million questions and 2.1 million answers. The training, validation and test splits are based on a temporal condition and are already provided on zenodo(Kasella et al., 2017).
Footnote 1: [https://stackexchange.com](https://stackexchange.com)
In (Kasella et al., 2017) the authors show that personalization is more useful if multiple communities are used together in this dataset rather than using a single community to create the dataset. Meanwhile, previous works that use StackExchange for EF tasks focus only on a single community or a portion of a community, thus neglecting the domain diversity characterizing the questions and the various experts (Kasella et al., 2017; Kasella et al., 2017; Kasella et al., 2017).
### Accessing the SE-PEF dataset
The SE-PEF dataset is made publicly available on zenodo2 according to the conditions detailed in the included CC BY-SA 4.0 license agreement, and the code used for data creation, training, hyper-parameter optimization, and testing is available on GitHub3.
Footnote 2: [https://doi.org/10.5281/zenodo.8332747](https://doi.org/10.5281/zenodo.8332747)
Footnote 3: [https://github.com/pkasela/SE-PEF](https://github.com/pkasela/SE-PEF)
### SE-PEF Definition
In the following, we introduce the specific instance of EF task in which we are interested and illustrate how to address it by using the resources in SE-PEF.
Our EF task shares the same goal as the question-answering task: satisfy users' needs in a cQA forum in the most effective way. In a cQA forum, a user may ask a question that does not have any related answers in the answer collection. Since not receiving any answer can create a sense of frustration in a user posting a question, it is important for the community and the platform to identify and, eventually, notify domain experts who may be able to answer the question correctly. Finding good matches between unanswered questions and expert users can remarkably improve engagement with the community. In fact, on the one hand, users posting a question can receive correct answers from the alerted experts in a short time; on the other hand, expert users can dedicate their time to answering questions specifically related to their expertise rather than searching for questions that they can respond to.
Formally, let \(\mathcal{E}\) be a set of expert users \(\{e_{1},\ldots,e_{k}\}\). Given a question \(\mathbf{q}\) asked by user \(\mathbf{u}\), the EF task consists in retrieving from \(\mathcal{E}\) a list of \(k\) experts \(\{e_{q,1},\ldots,e_{q,k}\}\) ordered by their likelihood of answering correctly to question \(\mathbf{q}\).
StackExchange data has been used in several EF papers, e.g., in (Kasella et al., 2017; Kasella et al., 2017; Kasella et al., 2017). These works however mostly focus on solving the expert finding task for a single community. SE-PEF incorporates instead information from multiple communities to provide a dataset that can be used also to investigate models for generalist cQA forums that may not have separate channels for the discussed topics.
To create the dataset, we define as _best answer_ for a given question the answer selected as the best one by the user who asked the question, if available; otherwise, we assume the best answer to be the one with the highest score, if it has received a score greater than a fixed threshold \(\gamma_{s}\)4. We note that this assumption, that the best answer is the most voted answer if no answer has been flagged as best by the user asking the question, is used only for the expert detection procedure, which will be explained subsequently, and not as a relevance judgement for the test data. In the test set we only consider the best answer, the answer explicitly labeled as such by the user asking the question. Exploiting high-scored answers as the best answers allows us to increase the number of questions successfully answered. Indeed this choice is justified by the observation that 87.6% of the answers, which are selected as best ones by the user asking the question, are also the most up-voted ones. On the other hand, we have observed that many users, once they satisfy their information need with a good answer, do not bother to mark the answer as the best.
Footnote 4: The \(\gamma\) thresholds used for SE–PEF are reported at the end of Section 3.
At this point, to identify the set of experts \(\mathcal{E}\), we follow the procedure indicated by Dargahi et al. (Dargahi et al., 2017) for their StackOverflow dataset:
* For each community \(C\), let \(\mathcal{U}\) be the set of users, and \(\mathcal{B}\) the set of best answers computed as explained above in the community \(C\). For each user \(\mathbf{u}\in\mathcal{U}\), let \(\mathcal{A}_{u,C}=\{a_{u,1},a_{u,2},\ldots,a_{u,n}\}\) be the set of answers given by \(\mathbf{u}\) in \(C\);
* Remove all users who do not have at least \(\gamma_{a}\) answers selected as best answers, i.e. define: \[\mathcal{E}^{\prime}=\mathcal{U}\setminus\{u\in\mathcal{U},\text{ s.t. }|\mathcal{A}_{u,C}\cap\mathcal{B}|<\gamma_{a}\}\]
Figure 1. Illustration of StackExchange data.
* Compute the acceptance rate for the users in \(\mathcal{E}^{\prime}\), given by the ratio between the number of accepted answers and the total number of answers of the user in that community. For each user \(\mathbf{e}^{\prime}\in\mathcal{E}^{\prime}\) we define \(\mathit{ar}_{\mathbf{e}^{\prime},C}\): \[\mathit{ar}_{\mathbf{e}^{\prime},C}=\frac{|\mathcal{A}_{\mathbf{e}^{\prime},C}\cap\mathcal{B}|}{|\mathcal{A}_{\mathbf{e}^{\prime},C}|}\]
* Compute the average acceptance rate \(\mathit{ar}_{C}\) for the users in a community and select as experts only those users whose acceptance rate is above the community average: \[\mathcal{E}_{C}=\{e\in\mathcal{E}^{\prime}\text{ s.t. }\mathit{ar}_{e,C}\geq\mathit{ar}_{C}\}\]
The final set of experts \(\mathcal{E}\) is defined as the union of the sets of experts found for each community. The above process ensures that the selected experts have a high level of engagement and write high-quality answers having a high acceptance rate.
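A minimal sketch of this selection procedure is given below, assuming the answers of each community are available as simple Python collections; the data layout and names are illustrative and are not taken from the released code.

```python
# Sketch of the expert-selection procedure described above. `answers[c]` maps
# each community c to a list of (user, answer_id) pairs, and `best[c]` is the
# set of best-answer ids in c; both layouts are illustrative assumptions.
from collections import defaultdict

def select_experts(answers, best, gamma_a=10):
    experts = set()
    for c, pairs in answers.items():
        per_user = defaultdict(list)
        for user, answer_id in pairs:
            per_user[user].append(answer_id)
        # Keep users with at least gamma_a best answers; compute acceptance rates.
        rates = {}
        for user, ids in per_user.items():
            accepted = sum(1 for a in ids if a in best[c])
            if accepted >= gamma_a:
                rates[user] = accepted / len(ids)
        if not rates:
            continue
        community_avg = sum(rates.values()) / len(rates)
        # Experts of this community: acceptance rate above the community average.
        experts |= {u for u, r in rates.items() if r >= community_avg}
    return experts
```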
In Figure 2 we show the basic structure of the JSON file provided for training, validation, and test. The user_questions, user_answers contain the identifiers (ids) of the questions and the answers, written before the current question timestamp, of the user asking the question. The expert_questions, and expert_answers contain the ids of the questions and the answers of the expert that has given the best answer. The data is provided also with a collection of questions and a collection of answers; they are two very simple JSON files, where the keys correspond to the ids of the questions and answers respectively. The values of the keys are constituted by the texts of the questions and answers respectively. The data is provided also with multiple data-frames, curated from the original data found from archive.org, which can be used to add more features. These features are described on the Stack Exchange website.5
Footnote 5: [https://meta.stackexchange.com/questions/2677/database-schema-documentation-for-the-public-data-dump-and-sede](https://meta.stackexchange.com/questions/2677/database-schema-documentation-for-the-public-data-dump-and-sede)
## 3. Comparison with Available Datasets
Concerning the EF task, there are plenty of datasets available (Krishna et al., 2019), and some of them are based on data from cQA websites. For example, StackExchange is used to create a pre-trained BERT model for the EF task in (Krishna et al., 2019). However, the work focuses only on designing an EF pre-training framework based on a specific augmented masked language model able to learn the question-expert matching task. Other EF datasets derived from cQA forums come from: StackOverflow (Krishna et al., 2019; Krishna et al., 2020), Yahoo Answers(Krishna et al., 2019; Krishna et al., 2020), Wondir (Krishna et al., 2020) and Quora (Quora, 2020). Recently, a domain-specific expert finding task was tackled using Avvo (Krishna et al., 2019), a legal cQA website, but in this case, personalization is not possible due to the fact that users are anonymous. In Table 1 we report the basic dataset statistics of some of the commonly used datasets in EF for comparison.
A common issue with the existing datasets is that the experts are, in many cases, not well-defined, and determining what makes a user an expert is not trivial. Furthermore, most works among those previously cited either rely on a private dataset, or refer to a specific domain and make very strong assumptions simplifying the task addressed. Conversely, SE-PEF will be made publicly available, it has a well-defined definition of an expert, which is inspired by reasonable hypothesis common to other works (Krishna et al., 2019; Krishna et al., 2020; Krishna et al., 2020). Furthermore, it provides a rich set of social features usable for personalization and combines data from multiple communities, which, as we have already stated, increases dataset diversity and opens the possibility of exploiting cross-domain user information for EF.
To build the SE-PEF for EF we followed the procedure detailed in Section 2.2, by setting \(\gamma_{s}=5\) and \(\gamma_{a}=10\). Finally, we also remove from the training dataset the questions answered by experts who previously posted fewer than 5 answers, to avoid the cold start problem for expert modeling. Using this procedure, we obtain SE-PEF, starting from the dataset presented in (Krishna et al., 2019), including 81,252 users, 3,306 experts, 252,501 queries (218,647 for training, 16,710 for validation, and 19,995 for testing), and 564,690 answers.
## 4. Preliminary Experiments with Se-Pef
This section provides a concise overview of the experimental setup and introduces the methods employed to showcase the capabilities of SE-PEF in the EF task, defined and discussed in Section 2.2. Finally, we report and discuss the results of the conducted experiments.
### Experimental settings
For our EF task, we use a retrieval-based approach (Krishna et al., 2019), and simply cast the EF task to a cQA task where we use the similarity scores of the retrieved documents as experts' scores. We explain this in detail in the following paragraphs.
We adopt a two-stage ranking architecture that prioritizes efficiency and recall in the first stage. The primary objective of this first stage is to select for each query a set of candidate documents that are eventually re-ranked in a second stage by a precision-oriented ranker. The first stage is based on Elastic Search6, and uses BM25 as a fast ranker. We use the same BM25 hyperparameters as indicated in (Krishna et al., 2019): 1 and 1.75 for b and k1, respectively. In the second, precision-oriented stage, to re-rank the retrieved documents we utilize a linear combination of the set of available scores that includes the BM25 score, the similarity score computed by a neural re-ranker, and, when used, the score computed by a personalization model exploiting the user history. In all the experiments the second stage re-ranks the top-100 results retrieved with BM25.
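The second stage can be sketched as a weighted sum of the available scores for the top-100 candidates returned by BM25. The snippet below is illustrative only: the field names and the way the \(\lambda\) weights are obtained (a grid search on the validation set) are assumptions, not the released implementation. In the EF setting described above, the re-ranked answer scores would then be attributed to the experts who wrote those answers.

```python
# Illustrative second-stage re-ranking: a linear combination of the available
# scores for the top-100 BM25 candidates. The lambda weights are assumed to be
# tuned on the validation set; field names are not from the released code.
def rerank(candidates, lambdas):
    """candidates: list of dicts with 'doc_id', 'bm25', 'neural', 'tag' scores."""
    lam_bm25, lam_neural, lam_tag = lambdas
    scored = [
        (lam_bm25 * c["bm25"] + lam_neural * c["neural"] + lam_tag * c["tag"],
         c["doc_id"])
        for c in candidates
    ]
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]
```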
\begin{table}
\begin{tabular}{l r r r r} \hline \hline Dataset & Questions & Answers & Users & Experts \\ \hline StackOverflow & 123 933 & N/A & 22 027 & 1845 \\ Quora & 444 138 & 887 771 & 95 915 & N/A \\ Wondir D5 & 752 391 QA pairs & N/A & 17 525 \\ Wondir D20 & 639 233 QA pairs & N/A & 5 025 \\ Yahoo U10 & 32 009 & 97 911 & 2 515 & N/A \\ Yahoo U15 & 28 404 & 89 144 & 1 339 & N/A \\ Yahoo U20 & 25 690 & 80 677 & 870 & N/A \\ StackExchange\({}_{\text{Gis}}\) & 50 718 & 70 034 & N/A & 3 168 \\ StackExchange\({}_{\text{English}}\) & 46 692 & 104 453 & N/A & 4 781 \\ StackExchange\({}_{\text{CodeReview}}\) & 36 947 & 57 622 & N/A & 2 242 \\
**SE-PEF** & _255 352_ & _564 690_ & _81 252_ & _3 306_ \\ \hline \hline \end{tabular}
\end{table}
Table 1. Comparison between SE-PEF and other cQA datasets for EF. When a specific definition of an expert is provided we distinguish normal users from experts.
_Non-personalized models._ As neural re-ranker in the second stage we use the following two models used also in (Kasela et al., 2017):
* DistilBERT. This model is obtained by fine-tuning the pre-trained _distilbert-base-uncased_ model1 for the task of answer retrieval tackled in (Kasela et al., 2017). We use the same training data and experimental settings used in (Kasela et al., 2017). Footnote 1: [https://huggingface.co/distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)
* MiniLM, based on MiniLM-L6-H384-uncased2. This model is used as it is, without any fine-tuning.
[MISSING_PAGE_POST]
validation set, used for combining the scores computed by BM25, DistilBERT / MiniLM, and TAG models. In the cases in which the optimal weight for the BM25 score is equal to 0 - i.e., BM25 does not contribute to re-ranking - we omit BM25 from the name of the model and \(\lambda_{1}=0\) from the weights column.
Differently from the cQA task tackled by the authors of (Kumar et al., 2017), we observe that on EF the performance gap of DistilBERT vs. MiniLMSBERT is sensibly reduced. The best-performing model among the ones tested is in fact DistilBERT + TAG which significantly outperforms both DistilBERT and MiniLMSBERT. Analogously to the cQA task, personalization is very effective for EF. The contribution of the TAG model allows for significantly improving all the non-personalized methods, with a performance boost exceeding three points in MRR@5 for the DistilBERT model. By looking at the optimized \(\lambda\) weights reported in all three tables, we see that the TAG model contribution is much higher for the EF task (\(\lambda_{TAG}\geq.5\)) than for the one obtained by the authors of (Kumar et al., 2017) (\(\lambda_{TAG}\leq.3\)).
## 5. Utility and Predicted Impact
We make the SE-PEF resource available to the research community as a step ahead toward a fair and robust evaluation of personalization approaches in Expert Finding. The features inherited from (Kumar et al., 2017) include explicit signals to create relevance judgments and a large amount of historical user-level information to design and test classical and novel personalization methods.
We expect the SE-PEF dataset to be useful for many researchers and practitioners working on personalized IR and on the application of machine/deep learning techniques for personalization. In recent years, significant efforts have been dedicated to the study of personalization techniques. However, there is still a lack of a comprehensive dataset for evaluating and comparing different approaches, which makes the comparison between different methods less reliable or, worse, not possible at all.
For this reason, we expect that the proposed dataset will impact the research community working on personalized EF as it provides a common ground of evaluation built on questions, answers, and experts from real users socially interacting via a community-oriented web platform.
In this proposal, the experts can have different domain backgrounds and share interests and knowledge in various communities. We also expect that training on such rich and diverse data, like SE-PEF, should produce more robust and generalizable models.
## 6. Conclusion and Future Work
SE-PEF (StackExchange - Personalized Expert Finding) is an extension of a previous work (Kumar et al., 2017), which presents a large real-world dataset for personalized cQA. The data inherits a rich set of user-level features modeling the interactions among the members of the online communities.
Our study provided a detailed description of the data creation and training process. Furthermore, we illustrated the methodologies adopted, explicitly focusing on IR techniques. We discussed how the similarity scores computed can be aggregated and combined to target the EF task adopted. For the retrieval, we adopted a two-stage architecture, where the second stage utilizes for re-ranking an optimized combination of the scores generated by BM25, DistilBERT/MiniLMSBERT, and TAG models.
The preliminary experiments conducted proved the effectiveness of personalization on this dataset, surpassing methods that rely on pre-trained and fine-tuned large language models by a statistically significant margin. We expect other researchers to develop more complex strategies to improve results on the SE-PEF resource. We leave such research as future work for us and the IR community working on personalized IR.
**Acknowledgements**. Funding for this research has been provided by: PNRR - M4C2 - Investiganto 1.3, Partenariato Esteso PE00000013 - "FAIR - Future Artificial Intelligence Research" - Spoke 1 "Human-centered AI" funded by the European Union (EU) under the NextGeneration EU programme; the EU's Horizon Europe research and innovation programme EFRA (Grant Agreement Number 101093026). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the EU or European Commission-EU. Neither the EU nor the granting authority can be held responsible for them.
|
2301.00181 | Smooth Mathematical Function from Compact Neural Networks | This is paper for the smooth function approximation by neural networks (NN).
Mathematical or physical functions can be replaced by NN models through
regression. In this study, we get NNs that generate highly accurate and highly
smooth function, which only comprised of a few weight parameters, through
discussing a few topics about regression. First, we reinterpret inside of NNs
for regression; consequently, we propose a new activation function--integrated
sigmoid linear unit (ISLU). Then special characteristics of metadata for
regression, which is different from other data like image or sound, is
discussed for improving the performance of neural networks. Finally, the one of
a simple hierarchical NN that generate models substituting mathematical
function is presented, and the new batch concept ``meta-batch" which improves
the performance of NN several times more is introduced.
The new activation function, meta-batch method, features of numerical data,
meta-augmentation with metaparameters, and a structure of NN generating a
compact multi-layer perceptron(MLP) are essential in this study. | I. K. Hong | 2022-12-31T11:33:24Z | http://arxiv.org/abs/2301.00181v1 | # Smooth Mathematical Function from Compact Neural Networks
###### Abstract
This is paper for the smooth function approximation by neural networks (NN). Mathematical or physical functions can be replaced by NN models through regression. In this study, we get NNs that generate highly accurate and highly smooth function, which only comprised of a few weight parameters, through discussing a few topics about regression. First, we reinterpret inside of NNs for regression; consequently, we propose a new activation function-integrated sigmoid linear unit (ISLU). Then special characteristics of metadata for regression, which is different from other data like image or sound, is discussed for improving the performance of neural networks. Finally, the one of a simple hierarchical NN that generate models substituting mathematical function is presented, and the new batch concept "meta-batch" which improves the performance of NN several times more is introduced.
The new activation function, meta-batch method, features of numerical data, meta-augmentation with metaparameters, and a structure of NN generating a compact multi-layer perceptron(MLP) are essential in this study.
smooth function approximation, artificial intelligence, neural network, compactness, smoothness, activation function, batch
Introduction
In many fields, such as astronomy, physics, and economics, someone may want to obtain a general function that satisfies a dataset through regression from numerical data, which are fairly accurate ([1; 2; 3; 4]). The problem of smoothly approximating and inferring general functions using neural networks (NNs) has been considered in some of the literature. However, there is insufficient research on using NNs to completely replace the ideal mathematical functions of highly smooth levels, which are sufficiently precise to be problem-free when a simulation is performed. This study aims to completely replace such ideal mathematical functions.
Assume a model \(M(X)\) was developed by regression on a dataset using an NN. \(M(X)\) for input \(X\) can be thought of as a replacement for a mathematical function \(f(X)\). In this study, such an NN is called a "_neural function (NF)_", a mathematical function created by an NN. The components of an analytic mathematical function can be analyzed using a series expansion or other methods, whereas this is difficult for an NF.
In this study, we _created "highly accurate" and "highly smooth" NFs with a "few parameters" using metadata._ Particularly, we combined _a new activation function, a meta-batch method, and weight-generating network (WGN)_ to realize the desired performances.
The major contributions of this study can be summarized as follows.
* We dissected and interpreted the middle layers of NNs. The outputs of each layer are considered basis functions for the next layer; from this interpretation, we proposed a _new activation function_-integrated sigmoid linear unit (ISLU)-suitable for regression.
* The characteristics and advantages of metadata for regression problems were investigated. A training technique with _fictitious metaparameters and data augmentation_, which significantly improves performance, was introduced. It was also shown that for regression problems, _the function values at specific locations_ could be used as metaparameters representing the characteristics of a task.
* NN structures that could _generate compact_1_NFs_ for each task from metaparameters were investigated, and a new batch concept- _'meta-batch'_-that could be used in the NFs was introduced. Footnote 1: Comprising few parameters
NNs for regression
Let's talk about an easy but non-common interpretation about regression with a multilayer perceptron (MLP). What do the outputs of each layer of an MLP mean? They can be seen as _basis functions that determine the function to be input to the next layer_.
The input \(x_{i+1}\) of the (\(i+1\))th layer can be expressed as follows:
\[x_{j}^{i+1}=\sum_{k}w_{j,k}^{i}*M_{k}^{i}(x_{0})+b_{j}, \tag{1}\]
where \(x_{0}\) denotes the input of the first layer, \(w_{j,k}^{i}\) denotes the weight that connects the \(k\)th node of the \(i\)th layer to \(j\)th node of the (\(i+1\))th layer, and \(M_{k}^{i}\) denotes a model comprising the \(0\)th to \(i\)th layers and having the \(k\)th node of the \(i\)th layer as the output. This is similar to the expression \(f(x)=\sum_{j}w_{j}\phi_{j}(x)+b\) of the radial basis function(RBF) kernel method. Clearly, the outputs of each layer act as basis functions for the next layer. Figure 2 shows the outputs of each layer of an MLP that learned the dataset \(D=\{(x_{i},y_{i})|y=0.2(x-1)x(x+1.5)\), \(x\in[-2,2]\}\) with the exponential linear unit (ELU) activation function.
To efficiently extract the final function, the output functions of the intermediate layers must be well-developed. If the output functions of each layer are well-developed, the desired final NF can be compact. In addition, for the final function of NN to be infinitely differentiable, the output functions of the intermediate layers should also be infinitely differentiable.
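A minimal PyTorch sketch of this view is given below: it fits a small ELU MLP to the toy dataset of Figure 2, \(y=0.2(x-1)x(x+1.5)\) on \([-2,2]\), and collects the outputs of every hidden layer, i.e., the basis functions of Eq. (1). The layer sizes and training schedule are illustrative and do not correspond to the experiments reported later.

```python
# Sketch (PyTorch): train a small ELU MLP on y = 0.2(x-1)x(x+1.5), x in [-2, 2],
# then read out every hidden layer's activations, which act as the basis
# functions of Eq. (1). Layer sizes and training details are illustrative only.
import torch
import torch.nn as nn

x = torch.linspace(-2, 2, 256).unsqueeze(1)
y = 0.2 * (x - 1) * x * (x + 1.5)

layers = nn.ModuleList([nn.Linear(1, 16), nn.Linear(16, 16), nn.Linear(16, 1)])
act = nn.ELU()
opt = torch.optim.Adam([p for l in layers for p in l.parameters()], lr=1e-2)

def forward(inp, collect=False):
    hidden, h = [], inp
    for layer in layers[:-1]:
        h = act(layer(h))
        hidden.append(h)          # basis functions fed to the next layer
    out = layers[-1](h)
    return (out, hidden) if collect else out

for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(forward(x), y)
    loss.backward()
    opt.step()

pred, basis = forward(x, collect=True)  # basis[i]: outputs of the i-th hidden layer
```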
Figure 1: Perspective on MLP
Figure 2: The output graphs of each layer, trained with an MLP, where the nodes of each layer are
If the activation function is a rectified linear unit (ReLU), the output function bends sharply after every layer. If a one-dimensional regression problem is modeled with a simple MLP that has (k+1) layers with nodes \([N_{0},N_{1},N_{2}..N_{k}]\), the output function will bend more than \(N_{0}*N_{1}...N_{k}\) times. The ELU activation function weakens such bending but does not smoothen it for the first derivative. Moreover, apt attention is required when using the hyperbolic tangent function for all layers in a regression problem because the output function bends in two places after each layer.
Thus, the question is which activation function can develop the intermediate basis functions well? If the activation function starts as a linear function and bends at an appropriate curvature after each layer, the final result will be good. Therefore, we propose an activation function suitable for regression, called _"integrated sigmoid linear unit(ISLU)"_.
\[\log(\alpha+\exp(\beta x))/\beta-\log(1+\alpha)/\beta, \tag{2}\]
where \(\alpha\) and \(\beta\) are positive numbers.
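A direct PyTorch transcription of Eq. (2) is sketched below; the `trainable_beta` flag mimics the ISLU[1] variants discussed later, in which \(\beta\) is a trainable parameter. It is a plain implementation of the formula, without the numerical-stability tricks a production version might need.

```python
# Sketch (PyTorch) of the ISLU activation of Eq. (2):
# ISLU(x) = log(alpha + exp(beta*x))/beta - log(1+alpha)/beta.
# `trainable_beta=True` mimics the ISLU[1] variants in which beta is learned.
import math
import torch
import torch.nn as nn

class ISLU(nn.Module):
    def __init__(self, alpha=0.5, beta=1.0, trainable_beta=False):
        super().__init__()
        self.alpha = alpha
        beta = torch.tensor(float(beta))
        self.beta = nn.Parameter(beta) if trainable_beta else beta

    def forward(self, x):
        return (torch.log(self.alpha + torch.exp(self.beta * x))
                - math.log(1.0 + self.alpha)) / self.beta
```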
Our experiment shows that ISLU performs sufficiently well and is worth further research. It can improve the accuracy and smoothness of our experimental data. Mathematically, ISLU for \(\alpha=1\) is a translated SoftPlus that passes through the origin, but ISLU absolutely differs from SoftPlus. The purposes of their design differ, and there is a significant difference in their results.
Figure 4: Scores. The numerical score table is shown in Appendix E.
The experimental results are shown in Figure 4. By default, a model structure is represented in the form "[the name of the model structure]_([the number of layers]L, [the number of nodes of all hidden layers]N)_[activation function]_[further information (option)]." The experimental metadataset is described in Appendix A <1>, which has \(B\), \(k\), and \(m\) as metaparameters and the corresponding task dataset for \(L\), \(t\), \(\phi\). The average of the "sum of error squares" for eight tasks among the experimental metadatasets is considered a score.
Footnote 5: In our experiment, the Swish activation function was also tested, and its performance was comparable to that of ISLU. However, for consistency, we do not discuss it in the main text; the details are presented in Appendix B.
Footnote 6: All box plots in this study are arranged in order of small scores from the top, and items wherein the box is invisible have a larger score than the shown graph range.
In Figure 4a, we consider a basic MLP structure trained on one task; WGN and fMLP, which will be introduced hereinafter, in Figure 4b,4c are models trained using metadata. Considering ISLU[0], what is in [] represents a degree of freedom in which the activation function's shape can be changed. ISLU[0] is trained with \(\alpha=0.5\) and \(\beta=1\), ISLU[1]\({}_{a}\) is trained with \(\alpha=0.5\) and \(\beta=var\), and ISLU[1]\({}_{b}\) is trained with \(\alpha=0.5\) and \(\beta=1+var\), where \(var\) are trainable parameters. Because variables tend to be learned in a distribution with a mean near zero when training with an NN, ISLU[1]\({}_{a}\) bends slightly after each layer and ISLU[1]\({}_{b}\) bends to a certain amount and additionally adjusts its degree. 9
Footnote 7: All experimental conditions of NNs in this study are shown in Appendix D
Footnote 8: Most of the experiments in this study are done with the experimental dataset.
Footnote 9: The smaller the \(\beta\) value, the closer ISLU is to a straight line.
Considering the experimental results in Figure 4, the following is observed.
* (1) There is a significant difference in performance between SoftPlus and ISLU.
* (2) Considering an MLP, there is not much difference in performance between ISLU and ELU (Figure 4a). However, in all models trained with metadata, ISLU significantly outperforms ELU (Figure 4b,4c).
Figure 5: Comparison of ELU and ISLU when training with WGN. From left to right, the 0th, 1st, and 2nd derivatives of the curves with respect to time t in a task in the given metadatasets. Blue lines:
WGN_(4L,64N)_ELU_MB, Red lines: WGN_(4L,64N)_ISLU[1]a_MB
* (3) In Figure 3(b), when the number of nodes is high(64N), ISLU[0] outperforms ISLU[1], whereas when the number of nodes is low(15N,16N), ISLU[1] outperforms ISLU[0].
* (4) In Figure 3(c), ISLU[1]\({}_{b}\) always outperforms ISLU[0].
* (5) As shown in ISLU[1]\({}_{a}\) and ISLU[1]\({}_{b}\), there are slight differences in performance depending on what the shape of ISLU is based on.
The reason for (2) can be explained as follows: setting an activation function parameter entails giving a certain bias. When given well, it considerably helps in predicting results; otherwise, it may interfere. When using metadata, the performance is improved because biases are determined by referring to various data.
We now discuss the reasons for (3) and (4). In Figure 3(b), fMLP indicates an MLP structure trained with fictitious metadata10 for only one task. If an MLP has a lots of nodes, even if the curvature functions of all activations are set to be the same, several functions can be added and combined to produce curves with the desired shapes. Meanwhile, when the nodes are few, desired curves may not be obtained well without adjusting the curvatures of the activation functions. In Figure 3(c), WGN is a network structure 11 that learns the entire metadata at once. In this case, using ISLU[1] allows the activation shape to change between tasks, yielding better results than the fixed-shaped ISLU[0].
Footnote 10: described in II.2
Footnote 11: described in III.1
The ISLU presented in this study is an example of an activation function for creating desired curves; a better activation function can be studied.
### Perspectives of Metadata
In this study, _metadata_ are the data of datasets that are sometimes the sets of task datasets, _metafeatures_ are features of a task dataset, and _metalabels_ or _metaparameters_ are parameters representing metafeatures. Consider a case where a physical system has the relation \(y=f(x_{1},x_{2}..)\) and the function \(f\) depends on the variables \(a1,a2....\). For example, a pendulum's
Figure 6: Metadata structure.
kinetic energy \(E\) is \(E=f(\theta)\), where \(\theta\) denotes the angle between the string and gravitational field direction, and the function \(f\) depends on the string's length \(l\) or pendulum's mass \(m\).
In this case, the kinetic energy \(E\) can be viewed not only as \(f(\theta,l,m..)\) but also as \(f_{l,m}(\theta)\). The dataset \(\mathcal{D}=\{(l_{i},m_{i},\theta_{i},E_{i})|E_{i}=f(\theta_{i},l_{i},m_{i}..) \}=\{(l_{i},m_{i},D_{i})|D_{i}=D_{m_{i},l_{i}}(\theta)\}\) is metadataset and the numerical value \(l,m\) can be considered as metaparameters.
One might want to interpret the kinetic energy as \(E=f_{l,\theta}(m)\). This cannot be said to be wrong, and there may be various perspectives and interpretations for _a numerical dataset used for regression_.
### Advantages of Training with Metadata and Meta-Augmentation
Consider an experiment performed with the following metadata \(\mathcal{D}_{k}=\{(x_{i},y_{i})|y_{i}=A_{k}*\sin(p_{k}*x_{i}+\phi_{k}),x\in[0, 10],A_{k}\in[-1.5,1.5],p_{k}\in[0.5,1.5],\phi_{k}\in[0,2\pi]\}\). It can be seen from the perspective that the tasks \(\mathcal{D}=\{(x_{i},y_{i})|y_{i}=A*\sin(p*x_{i}+\phi)\}\) are given according to the metaparameters of \(A\), \(p\), and \(\phi\). In this case, if not only \(x\) but also \(A\), \(p\), and \(\phi\) were trained as training inputs, a curve could be created with zero shot just by setting \(A\), \(p\), and \(\phi\). 12 Consequently, if metadata are used to learn, the accuracy of each task increases.
Footnote 12: MLP with inputs \(A,p,\phi,\theta\) and the WGN in III.1 were used for the experiment.
Taking a hint from the fact that metadata improve inference accuracy for each task, it can be thought that even in a situation where only one task is given, fictitious metadata with fictitious metalabels (or metaparameters) can be generated to learn curves. If only fictitious metalabels are used and the data remain the same, curves would be learned in the direction of ignoring the metalabels; therefore, some data modifications are required. For the experiment, fictitious metadata comprising 10 tasks with the metaparameter \(a\) were created by moving the \(y_{i}\) value in parallel \(\pm 0.05\) for every \(a=\pm 0.02\) with the original data of \(a=0\) for a given task \(\mathcal{D}=\{(x_{i},y_{i})\}\). As a result of using fictitious metadata, the score improved significantly (Figure 9). The performance improvement was similar even when the fictitious metadata were generated by moving \(x_{i}\) instead of \(y_{i}\) according to the fictitious metalabel.
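The augmentation just described can be sketched as follows; the exact grid of fictitious metalabels \(a\) is an assumption (an evenly spaced grid containing \(a=0\)), the essential point being that every shift of \(\pm 0.05\) in \(y\) is paired with a shift of \(\pm 0.02\) in \(a\).

```python
# Sketch of the fictitious meta-augmentation described above: from a single
# task {(x_i, y_i)}, build 10 tasks labelled by a fictitious metaparameter a,
# shifting y by +/-0.05 for every a = +/-0.02 (i.e. y + 2.5 * a).
# The particular grid of a values is an illustrative assumption.
import numpy as np

def fictitious_metadata(x, y, n_tasks=10, a_step=0.02, y_step=0.05):
    a_values = (np.arange(n_tasks) - n_tasks // 2) * a_step   # ..., -0.02, 0.0, 0.02, ...
    tasks = []
    for a in a_values:
        shift = (a / a_step) * y_step
        tasks.append({"a": a, "x": x.copy(), "y": y + shift})
    return tasks
```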
We reiterate that data augmentation _including fictitious meta-parameters_ is required to achieve significant performance improvement, otherwise there is little performance improvement. In this study, only the experimental results using MLP with fictitious metaparameters added to inputs are
shown; however, further experiments show that the performance improvement due to fictitious metadata occurs independent of the model structure.
### Learning Function with Restricted Metadata
The regression task for the numerical dataset \(\mathcal{D}=\{(x_{i},y_{i})|i=0,1,2..\}\) can have a significant advantage different from the image problems, i.e., \(y_{i}\)_values at particular locations can be metaparameters that represent the entire task dataset_. For the set of images, _if we know the RGB values at specific positions of pixels, it does not help to distinguish the features of images_. However, for a set of mathematical functions f(x)s such as fifth degree polynomial or sine curve sets, _just knowing f(x) at specific x positions can let us distinguish the functions well_. This can be shown in the experiments with sine curve datasets. For the tasks \(\mathcal{D}_{k}=\{(x_{i},y_{i})|y_{i}=A_{k}*\sin(p_{k}*x_{i}+\phi_{k}),x\in[0, 10],A_{k}\in[-1.5,1.5],p_{k}\in[0.5,1.5],\phi_{k}\in[0,2\pi]\}\) that are not given metaparameters \(A\), \(p\), and \(\phi\), it is possible to learn the sine curves just using the function values \(y_{i}\) at six points of \(x_{i}\) as metaparameters (Figure 7). In other words, it is possible to perform _few-shot learning_ simply without metalabels.
In addition, the relationship between the six \(y\) points and \(A\), \(p\), and \(\phi\) can be learned with a simple MLP that has six-dimensional inputs and three-dimensional outputs, indicating that the metaparameters \(A\), \(p\), and \(\phi\) can be completely extracted to generate a sine curve using only the six
points.
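The following sketch illustrates the idea: the six probe positions are chosen as an evenly spaced grid on \([0,10]\), which is an assumption since the paper does not specify them; two different sine tasks already produce clearly distinguishable six-point signatures, which a small 6-to-3 MLP could then map back to \(A\), \(p\), and \(\phi\).

```python
# Sketch: using the function values at six fixed x positions as metaparameters
# of a sine task A*sin(p*x + phi), x in [0, 10]. The probe positions below are
# an illustrative assumption; the text only states that six points suffice.
import numpy as np

PROBE_X = np.linspace(0.0, 10.0, 6)

def six_point_metaparameters(A, p, phi):
    return A * np.sin(p * PROBE_X + phi)          # shape (6,): the task signature

# Two different tasks give clearly different six-point signatures.
print(six_point_metaparameters(1.0, 1.0, 0.0))
print(six_point_metaparameters(0.5, 1.2, 1.0))
```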
## III Function-generating networks
### Wgn
When learning metadata in a regression problem, one can think of a hierarchical NN structure in which an NF corresponding to each task is generated from the corresponding metaparameters. The structure in which a model is generated from variables has been studied extensively ([5; 6]). We consider one such structure, a function-generating network called the _weight generating network (WGN)_, in this study. As shown in Figure 6, WGN generates parameters such as the weights and biases of the _main network_ from metaparameters through a simple MLP called the _weight generator_. If there are trainable parameters of the activation function in the main network, they can also be generated from metaparameters.
WGN is expected to generate _NFs comprising a few parameters_ corresponding to each task through the weight generator. This is because the weight generators, trained on enormous data, carefully generate the parameters of the main network. Experiments showed that WGN is effective in creating a main network with excellent performance, although it comprises only a few parameters.
What are the advantages of creating a NF with _only a few parameters_? First, because the number of times that a linear input function can be bent is reduced, it may have a regulation effect or help create a smooth function. Second, it may be helpful in interpreting and analyzing the network by directly adjusting the parameters. Third, because the number of weights is small and the inference speed is fast, it can be advantageous when a fast interference speed is required, such as a simulation.
### Meta-batch
When training a function-generating network, such as WGN, 'one' metalabel (or metaparameter) \(z_{i}\) is usually placed on the weight generator's input, and it is updated with the batch of the corresponding task on the main network. However, in this case, it becomes training with batch-size=1 for the metaparameters, and when it is updated with backpropagation at once, the metacorrelation between tasks is not used well. From these problems, the _meta-batch_ concept is proposed. To distinguish the _meta-batch_ from the conventional batch, the batch of each task corresponding to
one \(z_{i}\) is called "task batch." "Meta-batch" refers to both the batch of metaparameters and the corresponding batch of the tasks. The training method for WGN using the meta-batch is as follows.
Suppose a training metadataset \(\mathcal{D}=\{(\mathcal{D}_{k},z_{k})|k\in\{1..K\}\}\) comprising task training datasets \(\mathcal{D}_{k}=\{(x_{i}^{k},y_{i}^{k})\}_{i=1}^{N_{k}}\) is given, where \(N_{k}\) is the number of datapoints of the task \(\mathcal{D}_{k}\). For index sets \(M\subset\{1,\ldots,K\}\), \(T_{k}\subset\{1,\ldots,N_{k}\}\) that determine the meta-batch and the task batch, select the batches \(\mathcal{X}_{M}=\{(D_{m},z_{m})|m\in M\}\) and \(\mathcal{X}_{T}^{M}=\{(x_{t}^{m},y_{t}^{m})|t\in T_{m},m\in M\}\).
We denote the dimensions of \(x_{i},y_{i}\), and \(z_{i}\) as \(N[x],N[y]\), and \(N[z]\), respectively. \(w_{ij}^{l}\) denotes the weight between the \(l\)th and (\(l+1\))th layers of the WGN's main network, which has a shape \((N[w_{l}],N[w_{l+1}])\), where \(N[w_{i}]\) denotes the number of nodes at the \(i\)-th layer. The inputs \(\mathcal{X}_{T}^{M}\) of the main network are rank-3 tensors in the form of \((\mathrm{MB},\mathrm{TB},N[x])\), where \(\mathrm{MB}\) and \(\mathrm{TB}\) denote the sizes of \(M\) and \(T\), respectively.
If \(z_{m}\) enters to weight generator as inputs in the form of \((\mathrm{MB},N[z])\), \(G[w_{ij}^{l}](z_{m})\) generates a tensor in the form \((\mathrm{MB},N[w_{l}]*N[w_{l+1}])\) and it is reshaped as \((\mathrm{MB},N[w_{l}],N[w_{l+1}])\), where \(G[w_{ij}^{l}]\) denotes a generator that generates \(w_{ij}^{l}\). The outputs of the \(l\)-th layer of the main network, which has the shape \((\mathrm{MB},\mathrm{TB},N[w_{l}])\), are matrix-produced with the weights in the form \((\mathrm{MB},N[w_{l}],N[w_{l+1}])\), and then it becomes a tensor in the form \((\mathrm{MB},\mathrm{TB},N[w_{l+1}])\).13 Finally, the outputs of the main network with shape \((\mathrm{MB},\mathrm{TB},N[y])\) and \(y_{t}^{m}\) are used to calculate the loss of the entire network. Conceptually, it is simple as shown in Figure 10.
Footnote 13: All other parameters in the main network can be generated from weigh generators using a similar method
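A PyTorch sketch of a single meta-batched WGN layer following the shapes above is given below. The generator architecture (a two-layer MLP) and all sizes are illustrative assumptions; the essential point is the batched matrix product between the \((\mathrm{MB},\mathrm{TB},N[w_{l}])\) activations and the generated \((\mathrm{MB},N[w_{l}],N[w_{l+1}])\) weights.

```python
# Sketch (PyTorch) of one meta-batched WGN layer: z_m of shape (MB, N[z]) is
# mapped to per-task weights (MB, n_in, n_out) and biases, then applied to
# main-network activations of shape (MB, TB, n_in) with a batched matmul.
# Generator architecture and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MetaBatchLayer(nn.Module):
    def __init__(self, n_z, n_in, n_out, hidden=64):
        super().__init__()
        self.n_in, self.n_out = n_in, n_out
        self.gen_w = nn.Sequential(nn.Linear(n_z, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_in * n_out))
        self.gen_b = nn.Sequential(nn.Linear(n_z, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_out))

    def forward(self, h, z):
        # h: (MB, TB, n_in), z: (MB, n_z)
        w = self.gen_w(z).view(-1, self.n_in, self.n_out)   # (MB, n_in, n_out)
        b = self.gen_b(z).unsqueeze(1)                        # (MB, 1, n_out)
        return torch.bmm(h, w) + b                            # (MB, TB, n_out)

# Example shapes: 8 tasks per meta-batch, 32 points per task batch.
layer = MetaBatchLayer(n_z=3, n_in=1, n_out=16)
h = torch.randn(8, 32, 1)
z = torch.randn(8, 3)
print(layer(h, z).shape)   # torch.Size([8, 32, 16])
```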
As a result of the experiment, Figure 12 shows a significant difference in performance between
using and not using meta-batch, where "MB" means using meta-batch, and "ST" means training by inputting metaparameters individually without using meta-batch. Figure 12 also shows the difference between using WGN and just using a simple MLP.
Meta-batch can be used in any function-generating network structure that generates models from variables; another example is shown in Figure 11. The outputs of generators concatenate with the layers of the main network. As a result of experimenting with ISLU[1] in the structure shown in Figure 11, there was a performance difference of more than four times between using and not using meta-batch.
Figure 13 shows the results of using WGN and meta-batch compared with those of using only MLP. "sWGN" indicates a WGN trained with metaparameters that are the function values at 10 points of \((L,t,\phi)\) without using original metaparameters "\(B\), \(k\), and \(m\)." "mMLP" indicates an MLP that trained with a six-dimensional input combined with "\(L\), \(t\), and \(\phi\)" and the original metaparameters. "MLP" indicates a trained model for each task with just inputs "\(L\), \(t\), and \(\phi\)." This figure shows that using meta-batch, WGN outperformed MLP with fewer parameters. This also shows that WGN excels at learning all metadata and using them with only a few parameters.
Figures 14 and 15 show the results of other metadatasets, which are described in Appendix A. The combinations of ISLU, meta-batch, and WGN give much better performance than MLP in terms of accuracy and compactness.
Figure 12: Comparison between using meta-batch and not using meta-batch
Figure 13: Scores for each task of metadata from different models.
## IV Conclusion
In this study, we focus on creating mathematical functions with desired shapes using an NN with a few parameters. Irregular and numerous parameters are helpful for generalizations because of randomness; however, this sometimes makes it difficult to interpret the network and reduces the smoothness of the functions.
In this study, we dissected NNs for regression; consequently, we proposed a new activation function. We looked at the special features of regression-related metadata, such as the possibilities to extract meta-parameters immediately, and how, given only one task, we could create fictitious meta-parameters and metadata to increase performance by more than a few times.
In addition, the network structures generating NFs from metaparameters were discussed and the _meta-batch_ method was introduced and tested for the structure called WGN. WGN makes it possible to provide smooth and desired-shaped NFs comprised of a few parameters because it carefully generates different parameters and shapes of activation functions for each task.
The findings of this study, as well as the insights obtained in the process, are significant for earning smooth and accurate functions from NNs. One of them is the perspective of obtaining desired output functions at _intermediate_ layers from enormous data. Regarding regression problems, it will help elucidate how to find the metafeature of each task and map to the corresponding metaparameter as well as how to get a smooth and compact NF of a desired shape. |
2304.00039 | Extreme rotational events in a forced-damped nonlinear pendulum | Since Galileo's time, the pendulum has evolved into one of the most exciting
physical objects in mathematical modeling due to its vast range of applications
for studying various oscillatory dynamics, including bifurcations and chaos,
under various interests. This well-deserved focus aids in comprehending various
oscillatory physical phenomena that can be reduced to the equations of the
pendulum. The present article focuses on the rotational dynamics of the
two-dimensional forced damped pendulum under the influence of the ac and dc
torque. Interestingly, we are able to detect a range of the pendulum's length
for which the angular velocity exhibits a few intermittent extreme rotational
events that deviate significantly from a certain well-defined threshold. The
statistics of the return intervals between these extreme rotational events are
supported by our data to be spread exponentially. The numerical results show a
sudden increase in the size of the chaotic attractor due to interior crisis
which is the source of instability that is responsible for triggering large
amplitude events in our system. We also notice the occurrence of phase slips
with the appearance of extreme rotational events when phase difference between
the instantaneous phase of the system and the externally applied ac torque is
observed. | Tapas Kumar Pal, Arnob Ray, Sayantan Nag Chowdhury, Dibakar Ghosh | 2023-03-31T18:00:54Z | http://arxiv.org/abs/2304.00039v1 | # Extreme rotational events in a forced-damped nonlinear pendulum
###### Abstract
Since Galileo's time, the pendulum has evolved into one of the most exciting physical objects in mathematical modeling due to its vast range of applications for studying various oscillatory dynamics, including bifurcations and chaos, under various interests. This well-deserved focus aids in comprehending various oscillatory physical phenomena that can be reduced to the equations of the pendulum. The present article focuses on the rotational dynamics of the two-dimensional forced damped pendulum under the influence of the ac and dc torque. Interestingly, we are able to detect a range of the pendulum's length for which the angular velocity exhibits a few intermittent extreme rotational events that deviate significantly from a certain well-defined threshold. The statistics of the return intervals between these extreme rotational events are supported by our data to be spread exponentially. The numerical results show a sudden increase in the size of the chaotic attractor due to interior crisis which is the source of instability that is responsible for triggering large amplitude events in our system. We also notice the occurrence of phase slips with the appearance of extreme rotational events when phase difference between the instantaneous phase of the system and the externally applied ac torque is observed.
Natural events like droughs, earthquakes, tsunamis, floods, global pandemics, and human-made disasters like share market crashes and power blackouts are recurrent with a low probability of occurrence but with having immediate cataclysmic impacts on human society. In the literature, such recurrent and profoundly significant incidents are referred to as extreme events. From the study of the temporal dynamics of many physical systems, large-amplitude events significantly deviating from the mean state are observed occasionally, which has a qualitative similarity, recognized from the data records and statistical distribution, with those described above natural and human-made cataclysms. This similarity encourages researchers to study dynamical systems investigating those sudden, intermittent events better to understand the origin of extreme events. Our present study considers a forced damped nonlinear pendulum with ac and dc torque and identifies a sudden expansion of the chaotic attractor through the interior crisis. This sudden expansion of the chaotic attractor is connected to the origination of extreme rotational events, and our numerical simulations suggest their return interval distributions are of exponential type. System dynamics experience a phase slip during the transition from libration to rotation. Consequently, we uncover the same large phase slip during the appearance of these large-amplitude rotational events. Our research offers valuable insights into the emergence of extreme rotational events on dynamical systems and may find applicability for a better understanding of the continuous-time systems with a strange attractor.
## I Introduction
The study of extreme events [1; 2; 3] has the utmost importance in many scientific and interdisciplinary disciplines because of their immediate severe consequences and potential applications. It is hardly possible to define precisely, in the literature, what extreme events (EEs) indeed are. Events or phenomena that deviate largely from the regular behavior and have a huge impact on society are usually regarded as EEs. These recurrent EEs are observed in several natural and engineering systems. EEs present several unique challenges because they are unpredictable and occur spontaneously. EEs have received a lot of attention from experts nowadays because of their disastrous and terrible consequences on the socioeconomic situation [1]. EEs occur in nature and may also be human-made. Natural events such as floods [4], tsunamis [5], earthquakes [6], cyclones [7], droughts [8], seismic activity [9], wildfires [10], volcanoes [11], to name but a few, and man-made disasters such as power blackouts [12], the nuclear leakage in Chernobyl and Fukushima [13], regime shifts in ecosystems [14; 15; 16], and share market crashes [17] are all considered as EEs. The necessity of studying EEs basically lies in restraining their adverse impact, which makes their prediction [18; 19; 20; 21; 22; 23] and mitigation [18; 24; 25; 26; 27] of great importance.
Generally, events with amplitude larger than four to eight times the standard deviation from the central tendency (mean state) of the events [1], or events whose amplitudes are in the 90th-99th percentile of the probability distribution [2], are defined as EEs. Since EEs occur far away from the mean state of a skewed distribution, they appear on the tail of the distribution with a low frequency of occurrence [28]. The scientific community struggles to investigate their unpredictable occurrences because only a limited amount of real data is available, and has recently often resorted to the classical dynamical-system approach [28; 29]. Dynamical systems are increasingly being recognized as a prognostication
tool to get around the problem of having only a small amount of real data [28]. Specifically, in dynamical systems, by evolving the equations of motion forward in time, we may gather a huge amount of simulated data that is helpful for statistical analysis [28; 30]. Researchers often struggle to explain the origin of EEs in natural systems; in that situation, dynamical models might facilitate the same. In the temporal dynamics of many dynamical systems, the occurrence of infrequent but recurrent events of comparatively high or low amplitude may bear qualitative similarities to the occasional large events recorded in many real-world phenomena. The emergence of EEs has been reported in several dynamical systems such as FitzHugh-Nagumo oscillators [29; 31; 32; 33; 34; 35], the Hindmarsh-Rose model [36], the Lienard system [37], the Ikeda map [25], Josephson junctions [38], the Ginzburg-Landau model [39], the nonlinear Schrodinger equation [40], a micromechanical system [41], climatic models [42], an ecological model [43], a mechanical system [44], and electronic circuits [45], to name but a few. Besides, we also find experimental evidence of the appearance of EEs, for example in laser systems [46], epileptic EEG studies in rodents [47], an annular wave flume [48], laser systems [49], and so on.
The emergence of EEs in dynamical systems is basically due to the presence of a region of instability in the state space of the system [3; 50]. When a chaotic trajectory occasionally visits this region of instability, it is immediately driven to locations in the state space far away from the bounded region; after a short duration, the trajectory returns to that region. As a result, occasional, comparatively large-amplitude events are observed in the temporal dynamics of the observable [50]. The emergence of EEs in dynamical systems most of the time follows the sudden enlargement of a chaotic attractor through an interior crisis, which is a considerably important one among all the possible mechanisms [51; 52; 53; 54; 55]. An interior crisis [56; 57; 58; 59] occurs due to the collision of a chaotic attractor with the stable manifold of an unstable fixed point or an unstable periodic orbit. In multistable systems, in the presence of noise, a sudden transition from one state to another may cause the origination of EEs [60; 61]. There are several other mechanisms behind the emergence of EEs in dynamical systems, such as the breakdown of quasiperiodicity [62], intermittency [37; 63], the transition between librational and rotational motion [38; 64], and attractor bubbling [65; 18; 66].
In the realm of physics and natural phenomena, the pendulum has become one of the paradigms of study. In this work, we consider a forced damped pendulum [67], whose dynamics are phenomenologically rich. This system exhibits two kinds of motion, i.e., libration and rotation, as usual [68]. Here, we investigate how extreme events may emerge in the rotational dynamics of the damped pendulum under the influence of dc and ac torque, as the angular velocity infrequently becomes faster than normal during rotation for a very short period of time. We use existing nonlinear theories to demonstrate the abrupt enlargement of the chaotic attractor through an interior crisis within a range of the pendulum's length. When the large-amplitude rotational events repeatedly exceed a certain threshold, we refer to these occurrences as "_extreme rotational events_" (EREs). We also depict a histogram of the probability of occurrence of events, which underpins their non-Gaussian distribution. In addition, histograms of the inter-arrival times between extreme rotational events are plotted for two different parameter values and are closely fitted by exponential distributions.
The layout of this manuscript is as follows: we delineate the model's description in Sec. (II). We detail how our procedure defines EREs in Sec. (III), together with the bifurcation analysis, time series and phase portrait plots, the statistical analysis of EREs, and the subsequent results. Finally, we conclude with a concise summary and future perspectives in Sec. (IV).
## II Model description
We consider a forced damped nonlinear pendulum [67] having the governing equation as
\[ml^{2}\ddot{\theta}+\gamma\dot{\theta}=-mgl\sin\theta+\tau^{\prime}+\tau\sin(\omega t+\phi) \tag{1}\]
Here, \(\theta\) is the phase variable, and \(\dot{\theta}\) and \(\ddot{\theta}\) denote the angular velocity and angular acceleration of the pendulum, respectively. \(g\) is the acceleration due to gravity, \(m\) is the mass of the bob, \(l\) is the length of the pendulum, and \(\gamma\) is the damping parameter. \(\omega\) is the angular frequency, \(\phi\) is the initial phase of the ac torque, \(\tau\) is the ac torque, and \(\tau^{\prime}\) is the dc torque. A schematic diagram of the pendulum (1) is delineated in Fig. (1). The angle between the downward vertical and the pendulum in this case is denoted by \(\theta\). Besides, we delineate other parameters through this figure. The parameter values \(m=1.0\), \(g=1.0\), \(\gamma=0.75\), \(\tau=0.4\), \(\tau^{\prime}=0.7167\), \(\omega=0.25\), and \(\phi=22/7\) are held constant throughout the text. In the
Figure 1: **A schematic of a forced, damped nonlinear pendulum in the presence of dc and ac torque**: The angle between the pendulum of length \(l\) and the downward vertical is denoted by the angular variable \(\theta\). Here, \(g\) is the acceleration due to gravity, and \(m\) is the mass of the bob. Both constant dc torque \(\tau^{\prime}\) and periodic ac torque \(\tau\) are applied to drive the pendulum counterclockwise. Here, \(\omega\) is the angular frequency, and \(\phi\) is the initial phase of the ac torque.
following section, we examine the impact of the pendulum length \(l\) by treating it as the bifurcation parameter.
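To make the model concrete, the following sketch (our own illustration, not code from the original study) writes Eq. (1) as a first-order system with the parameter values quoted above; the pendulum length \(l\) is kept as a free argument, since it serves as the bifurcation parameter in what follows.

```python
import numpy as np

# Parameter values held fixed throughout the text (Sec. II).
m, g, gamma = 1.0, 1.0, 0.75
tau_ac, tau_dc = 0.4, 0.7167      # ac amplitude tau and dc torque tau'
omega, phi = 0.25, 22.0 / 7.0

def pendulum_rhs(t, state, l):
    """Eq. (1) rewritten as a first-order system in (theta, theta_dot)."""
    theta, theta_dot = state
    forcing = tau_dc + tau_ac * np.sin(omega * t + phi)
    theta_ddot = (-m * g * l * np.sin(theta) - gamma * theta_dot + forcing) / (m * l ** 2)
    return [theta_dot, theta_ddot]
```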
In general, two types of motion [69] are possible for a pendulum model. One is librational motion [70] (small-amplitude oscillation), in which the pendulum merely swings back and forth but does not fully rotate around the pivot; the other is rotational motion [71] (large-amplitude oscillation), in which the pendulum fully rotates or swings around the pivot. A schematic diagram of the cylindrical phase space showing the trajectory of a librational orbit is given in Fig. (2) (a). On the other hand, we observe a rotational orbit's trajectory in Fig. (2) (b).
## III Results
_Brief overview of this section_: This section introduces the results of this study on extreme rotational events (EREs). The starting point is how we characterize events, librational events, rotational events, and EREs. This short discussion provides the essential background for interpreting our findings for the dynamical system (1). The second portion of these results describes how the interior crises give rise to an enlarged attractor and crisis-induced intermittent behavior in our considered pendulum (1). All these results lie at the heart of our work. With the help of bifurcation analysis and existing nonlinear theories, we investigate how a trajectory spends most of its time on the post-crisis attractor and occasionally makes brief intermittent excursions to distant regions. Finally, we explore the statistics of EREs in the last part of this section, which enables us to conclude that the inter-event intervals between these EREs are distributed according to the exponential distribution, while their amplitudes follow a non-Gaussian distribution.
_Quantification of EREs_: Gavrielides et al. [72] illustrated the emergence of chaotic regimes for our chosen system, described by Eq. (1), by examining the bifurcation structure under variation of the pendulum's length \(l\). They identified the approximate range \(l\in(0.998,1.002)\) for which chaotic dynamics occur in the system. Also, it is mentioned in Ref. [67] that the dynamics of a pendulum given by Eq. (1) exhibit librational motion for \(l>1.002\), whereas it shows rotational motion for \(l<0.998\). Those studies drive us to focus on the regime of \(l\) near the transition in dynamics from rotation to libration. In the bifurcation diagram presented in Fig. (3), we depict the variation of the local maxima of \(\dot{\theta}\) as \(l\) varies and observe the transition between high- and low-amplitude oscillations. Here, we can expect the occurrence of extreme events because a few notable Refs. [38; 64] already confirm the appearance of extreme events in the two-dimensional phase model during the transition between rotation and libration. So for our study, the angular velocity (\(\dot{\theta}\)) is the _observable_[50] in which we expect to observe the extreme events. We consider the local maxima (\(\dot{\theta}_{max}\)) of \(\dot{\theta}\) as _events_. Since the trajectory is bounded to only a portion of the circumference of the cylindrical phase space during libration, the system dynamics exhibit small oscillations and the values of \(\dot{\theta}_{max}\) remain low. On the other hand, the trajectory revolves around the cylinder during rotation, resulting in large-amplitude oscillation, so the values of \(\dot{\theta}_{max}\) become higher. This clear distinction is observed in the bifurcation diagram of Fig. (3). Based on this observation, the events in the present investigation are classified into two classes: (a) librational events and (b) rotational events. We choose a threshold such that large- and small-amplitude events are easily separated. We set the threshold value to \(0.5\), since the librational and rotational events are distinguishable by a gap between small- and large-amplitude events in the same bifurcation diagram. For \(\dot{\theta}_{max}<0.5\), the events appear due to the librational motion of the system and are therefore termed librational events. For \(\dot{\theta}_{max}>0.5\), the events occur due to rotation; we call them rotational events. In our present study, we mainly concentrate on the rotational dynamics of the pendulum (1) because we observe from the bifurcation diagram that the maximum value of the rotational events is less than \(1.5\) for a wide range of \(l\), yet it crosses \(2\) for another range of \(l\). This difference, together with the temporal dynamics of the observable, leads us to categorize a subset of rotational events as EREs. To distinguish EREs from rotational events, we adopt the threshold-based statistical measure [1; 2; 64] that is commonly used to classify an event as extreme in dynamical-system-related studies. A rotational event is considered an ERE when its amplitude crosses a threshold value \(H_{T}=\mu+d\sigma\) (\(d\in\mathbb{R}\setminus\{0\}\)), where we chose \(d=6\) for our study. Here, \(\mu\) and \(\sigma\) signify the mean and standard deviation of a collected dataset of rotational events. The choice of \(d\) determines how far an event must deviate from the mean state; it is suitably chosen for our system so that the characterization of extreme events singles out the extreme rotational events.
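As a minimal illustration of this event classification (our own sketch, assuming the time series of \(\dot{\theta}\) is available as a NumPy array), local maxima can be extracted and split into librational events, rotational events, and EREs as follows; the split value 0.5 and \(d=6\) are the choices quoted above.

```python
import numpy as np
from scipy.signal import find_peaks

def classify_events(theta_dot, d=6.0, lib_rot_split=0.5):
    """Split local maxima of the angular velocity into event classes."""
    peaks, _ = find_peaks(theta_dot)                # indices of local maxima
    events = theta_dot[peaks]                       # theta_dot_max values
    librational = events[events <= lib_rot_split]
    rotational = events[events > lib_rot_split]
    H_T = rotational.mean() + d * rotational.std()  # ERE-qualifying threshold
    eres = rotational[rotational > H_T]
    return librational, rotational, eres, H_T
```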
One of the essential characteristics of EREs is their irregular occurrence in the temporal dynamics of events. How improbable the EREs are depends on how large a value of \(d\) is chosen [28; 66].
Throughout the study, we perform the numerical simulation by integrating Eq. (1) using the fifth-order Runge-Kutta-Fehlberg method, having an integration step length of \(0.01\).
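For readers who wish to reproduce such time series, a sketch of the integration step is given below. It builds on the pendulum_rhs function sketched in Sec. II and uses SciPy's adaptive RK45 integrator as a convenient stand-in for the fixed-step Runge-Kutta-Fehlberg scheme quoted above; the default horizon of 8×10³ time units and transient of 3×10³ correspond to the iteration counts quoted in the caption of Fig. (3) with step 0.01.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate(l, t_max=8.0e3, dt=0.01, transient=3.0e3, y0=(0.01, 0.02)):
    """Integrate Eq. (1) and return t and theta_dot after the transient."""
    t_eval = np.arange(0.0, t_max, dt)
    sol = solve_ivp(pendulum_rhs, (0.0, t_max), y0, args=(l,),
                    t_eval=t_eval, method="RK45", rtol=1e-9, atol=1e-9)
    keep = sol.t >= transient
    return sol.t[keep], sol.y[1][keep]
```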
_Generation of EREs_: A bifurcation diagram is plotted in Fig. (3) to depict how \(\dot{\theta}_{max}\) changes as \(l\) varies within \([0.998,1.004]\). Initially, we observe periodic dynamics of the oscillation, and after a certain value
Figure 2: **A schematic of librational and rotational orbits in cylindrical phase space**: (a) The librational orbit covers a portion of the periphery of the phase space. (b) The rotational orbit rounds the circumference of phase space.
of \(l\), chaotic dynamics emerge via a period-doubling bifurcation. The amplitude of the chaotic attractor then increases after crossing a particular value of \(l\). In this scenario, the pendulum swings back and forth as well as whirls over the top in a chaotic fashion. On further increasing \(l\), the system dynamics exhibit only chaotic libration beyond a specific value of \(l\). That means the pendulum swings only to and fro, because the combined effect of the dc and ac torque is inadequate to overcome its increased rotational inertia. After that, the system passes from chaotic to periodic oscillation through an inverse period-doubling bifurcation. We also plot the variation of \(H_{T}\) with \(l\) in Fig. 3 to identify EREs.
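A bifurcation diagram of this kind can be assembled by sweeping \(l\) over the quoted range and recording the local maxima of \(\dot{\theta}\) for each value; the sketch below (computationally heavy in practice) simply combines the helper functions introduced above.

```python
# Sweep the pendulum length and record local maxima of theta_dot (as in Fig. 3).
lengths = np.arange(0.998, 1.004 + 1e-5, 1e-5)   # step length quoted in Fig. 3
bifurcation = []
for l in lengths:
    _, theta_dot = simulate(l)
    peaks, _ = find_peaks(theta_dot)
    bifurcation.append((l, theta_dot[peaks]))
```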
Temporal dynamics of \(\dot{\theta}\) and the corresponding phase space (\(\theta\)-\(\dot{\theta}\)) on the cylindrical surface are displayed in Fig. (4) for five different values of \(l\). In the left panel, the temporal evolutions of \(\dot{\theta}\) along with the threshold \(H_{T}\) (denoted by the red dashed line) are displayed, and the respective cylindrical phase spaces (\(\theta\) versus \(\dot{\theta}\)) are shown in the right panel for \(l=0.999941,0.999945,1.001,1.00218\), and \(1.002184\). Figure (4) (a) depicts the temporal evolution of \(\dot{\theta}\) exhibiting large-amplitude oscillation for \(l=0.999941\). No large spikes or bursts are observed here; consequently, no rotational events exceed the threshold. The corresponding phase space is shown in Fig. (4) (b), where the trajectory is bounded within a small portion of the periphery as well as rotating around the entire cylindrical surface. Occasional large spikes are observed in Fig. (4) (c), corresponding to the temporal evolution of \(\dot{\theta}\) for \(l=0.999945\), because the angular velocity \(\dot{\theta}\) of the pendulum occasionally increases during rotation. Here, two rotational events that cross the red threshold line \(H_{T}\) are treated as EREs. The corresponding phase space is shown in Fig. (4) (d), in which the trajectory is observed in both a librational orbit (partially rounding the periphery of the cylinder) and a rotational orbit (fully rounding the perimeter of the cylinder). The trajectory rotates within a bounded region during rotation but occasionally travels far away from the region, indicating the appearance of EREs. For the sake of clarity, the presence of an ERE is depicted by the brown colored spike in the temporal dynamics of \(\dot{\theta}\) in Fig. (4) (c); the respective portion of the trajectory is shown in brown in the phase space of Fig. (4) (d). Figure (4) (e) exhibits the time series of \(\dot{\theta}\), and the corresponding phase space diagram is shown in Fig. (4) (f), for \(l=1.001\), where EREs are not observed anymore. Also, the trajectory's deflection from a bounded region in the phase space is absent. Figure 4 (g) is the depiction of the temporal evolution of \(\dot{\theta}\) for \(l=1.00218\), in which some intermittent large spikes are observed. Here, a few rotational events exceed \(H_{T}\) and qualify as EREs. Figure 4 (h) shows the respective phase space on the cylindrical surface. Here, the trajectory evolves within a portion of the
Figure 3: **Emergence of EREs caused by the interior crises**: We draw the bifurcation diagram of \(\dot{\theta}_{max}\) for the forced-damped nonlinear pendulum (1), considering the pendulum's length \(l\) as the bifurcation parameter varying in the range [0.998, 1.004] with step length 0.00001. Numerical simulation is performed using the RKF45 method with integration step length 0.01 and \(8\times 10^{5}\) iterations, leaving a transient of \(3\times 10^{5}\) iterations. The pendulum displays librational dynamics for \(\dot{\theta}_{max}<0.5\) and rotational motion for \(\dot{\theta}_{max}>0.5\). A sudden transition from chaotic oscillation (pre-crisis) to comparatively large-amplitude chaotic oscillation (post-crisis) is observed when the value of \(l\) is increased from the left-hand side of the diagram. The critical value of \(l\) at which the sudden expansion of the attractor occurs is indicated by **L**. Similarly, a sudden transition from small-amplitude chaotic oscillation in libration to large-amplitude chaotic oscillation in libration and rotation occurs at the critical value of \(l\) indicated by **R**, when the value of \(l\) is decreased from the right-hand side of the diagram. The red line is the extreme-rotational-event qualifying threshold line \(H_{T}\). Enlarged versions of the transitions from the two sides in the rotational dynamics (two shaded portions of the bifurcation diagram) are presented in two insets on the figure's left and right sides. The left and right inset figures portray how the chaotic attractor suddenly enlarges through the interior crises. The intermittent, sporadic blue points from the post-crisis attractor cross the red threshold line \(H_{T}\) from the left and right sides, respectively. We keep the initial condition fixed at \((\theta_{0},\dot{\theta}_{0})=(0.01,0.02)\), although the result remains qualitatively the same for other choices of initial conditions. Other parameter values: \(\omega\)=0.25, \(\phi\)=\(\frac{22}{7}\), \(m\)=1.0, \(g\)=1.0, \(\gamma\)=0.75, \(\tau^{\prime}\)=0.7167, \(\tau\)=0.4.
circumference of the surface and also spends some time fully rounding the periphery of the cylinder. But sometimes, the trajectory traverses the cylindrical surface far away from its regular path during rotation. The temporal dynamics of \(\dot{\theta}\) for \(l=1.002184\) are depicted in Fig. 4 (i), where we observe only chaotic dynamics in libration. Since the rotational dynamics are fully terminated and the dynamics switch over to libration, no large intermittent spikes are observed in the time series. The denser region in Fig. 4 (i) depicts merely the librational chaos. Here, the ERE-qualifying red threshold line \(H_{T}\) is also not shown because there are no rotational events. We delineate the corresponding cylindrical surface, depicting the trajectory in a small portion of the phase space, in Fig. 4(j).
Now we examine the route of the emergence of extreme rotational events by analyzing the bifurcation diagram of Fig. (3). From the left side of the bifurcation diagram, a sudden and abrupt jump of the chaotic attractor is observed at the critical value of the bifurcation parameter, \(l\approx 0.999942\), marked by \(\mathbf{L}\) on the diagram, when we increase the value of \(l\). We also provide an enlarged version of a shaded region of the bifurcation diagram in the inset on the left-hand side for precise observation of the transition. On the left-hand side of \(\mathbf{L}\), the dynamics are chaotic but exhibit librational as well as rotational motion, and the maximum amplitude of \(\dot{\theta}\) is less than \(1.5\). But, though the dynamics remain chaotic, consisting of oscillation in libration as well as rotation, on the right-hand side of \(\mathbf{L}\) the maximum amplitude of \(\dot{\theta}\) reaches around \(2\). The temporal dynamics of \(\dot{\theta}\) near the critical value of \(l\) (crisis point \(\mathbf{L}\)) are depicted in Fig. 4 (c), which exhibits the appearance of EREs. Similarly, a sudden large expansion of the chaotic attractor is also noticed at the critical value of the bifurcation parameter \(l\approx 1.00218\), marked by \(\mathbf{R}\) on the diagram, when we decrease the value of \(l\) from the right side of the bifurcation diagram. For this scenario, chaotic librational dynamics transition to chaotic dynamics consisting of not only libration but also rotation (see the zoomed version of the shaded portion in the inset of Fig. (3) for better visualization). The temporal evolution of \(\dot{\theta}\) near the critical value of \(l\) is exhibited in Fig. 4 (g), where the appearance of EREs is clearly detected. So from the left-hand side as well as the right-hand side, a sudden large expansion of the chaotic attractor occurs at the critical values of the bifurcation parameter \(l\) in Fig. (3). This happens when chaotic dynamics in a system undergo an _interior crisis_. Chaotic attractors can experience sudden qualitative changes depending on the system parameters.
These changes are well-known in the literature as crises [56; 57; 58; 59].
The emergence of EREs in our system is connected with the switching between librational and rotational dynamics, when the pendulum swings back and forth and whirls over the top successively. From Fig. 3, we observe oscillatory dynamics in libration and rotation until the pendulum length \(l\) crosses \(1.00218\). When the system switches between rotational and librational dynamics, the difference [37; 74] between the instantaneous phases of the system and the forcing signal exceeds a multiple of \(\pi\). Generally, this abrupt change of phase difference is called a _phase slip_[75; 76]. So, to verify the appearance of the phase slip, we first calculate the instantaneous phase of the ac torque \(\tau\sin(\omega t+\phi)\) using the Hilbert transform [77] method, then calculate its difference with the system's phase \(\theta\), and finally plot the phase difference \(\Delta\) with respect to time in the lower panel of Fig. (5). Interestingly, for each switching between libration and rotation, a subsequent phase slip is observed in the time evolution of \(\dot{\theta}\) delineated in the upper panel of Fig. (5). For \(l=0.999945\), the time series of \(\dot{\theta}\) is depicted in Fig. (5) (a), and its corresponding phase difference is shown in Fig. (5) (b). For the large-amplitude rotational event observed in the time series plot, the respective phase slip is shown in the inset figure for clear visualization of the abrupt jump of the phase difference \(\Delta\), highlighted by the shaded portion of the specific region. In Fig. (5) (c), the time evolution of \(\dot{\theta}\) for \(l=1.00218\) is portrayed, and its corresponding phase difference plot is depicted in Fig. (5) (d). The two subfigures (b) and (d) of the phase difference plot make it reasonably clear that a phase slip in the phase difference with respect to time takes place during the transition of the system dynamics from libration to rotation. Consequently, a phase slip occurs with the occurrence of EREs.
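A sketch of this phase-difference calculation (our own illustration, using the parameter names defined earlier) is given below: the instantaneous phase of the forcing is taken as the angle of its analytic signal obtained via the Hilbert transform, and the continuous pendulum angle \(\theta\) is used as the system's phase.

```python
import numpy as np
from scipy.signal import hilbert

def phase_difference(t, theta):
    """Phase difference Delta between the pendulum and the ac torque."""
    forcing = tau_ac * np.sin(omega * t + phi)
    forcing_phase = np.unwrap(np.angle(hilbert(forcing)))
    return theta - forcing_phase
```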
_Statistics of EREs_: So far, we have observed from Fig. (3) that the whole bifurcation diagram can be classified into five regimes for \(l\in[0.998,1.004]\). In the pre-crisis regime, i.e., for \(l<0.999942\), there is no sign of EREs. To further validate this claim, we gather sufficiently long data of length \(10^{11}\), out of which we discard the initial transient of size \(10^{6}\). We plot the histogram for \(l=0.999941\) in Fig. (6) **(a)**. Clearly, there are two separate portions in these subfigures. Data related to rotational motion concentrate in the right group, whereas data related to librational motion accumulate in the left group. We further plot the red vertical threshold line \(H_{T}\) to distinguish between extreme rotational events and chaotic oscillations. The distributed numerical data cannot cross this threshold \(H_{T}\), as \(l=0.999941\) is chosen from the pre-crisis regime. The scenario differs if we choose \(l=0.999945\), just after the crisis point \(\mathbf{L}\). Here, a fair portion of the accumulated data crosses the red line, as seen from Fig. (6) (b), and thus confirms the presence of EREs. This finding also validates our bifurcation analysis, given in Fig. (3). We highlight a portion of that figure (Fig. (3)) by the shaded box just after the crisis point \(\mathbf{L}\), from which, whenever we choose a value of \(l\), we can anticipate EREs. Beyond that grey box, the rotational motions are not high enough to cross the pre-defined threshold \(H_{T}\) unless we choose a value of \(l\) from the other shaded box just after the crisis from the opposite direction. We choose the value \(l=1.001\) from the intermediate range, far from the crisis points \(\mathbf{L}\) and \(\mathbf{R}\). Figure (6) (c) shows the chaotic trajectories cannot cross the threshold line \(H_{T}\) and hence validates our understanding. As we move further toward the point \(\mathbf{R}\) (\(l\approx 1.00218\)), we can expect the emergence of such large
Figure 5: **Time evolution of \(\dot{\theta}\) (top panel) and variation of phase difference \(\Delta\) with respect to time (bottom panel):** We choose two values of the pendulum's length to illustrate the temporal dynamics of \(\dot{\theta}\) and the occurrence of phase slips. We choose \(l=0.999945\) for subfigures (a-b), just after the crisis point \(\mathbf{L}\), and \(l=1.00218\) at the crisis point \(\mathbf{R}\) for subfigures (c-d). The inset of subfigure (b) shows the phase slip during the manifestation of the largest spike of the time series in subfigure (a). An abrupt jump of \(\Delta\) is observed during the phase slip in both subfigures (b) and (d). Red dashed lines indicate the threshold \(H_{T}\).
amplitude EREs again. We choose the value \(l=1.00218\) at the crisis point **R**, where we expect the occurrence of EREs, as illustrated in Fig. (3). The data grouped into bins along the x-axis in Fig. (6) (d) attest to the occurrence of EREs. Figure (3) also confirms the same attribute for this choice of \(l\), where we notice the bounded chaotic (rotational) trajectory explodes into a large-size attractor. This bifurcation diagram shows that there are no rotational events beyond the crisis point **R**; hence, we cannot anticipate such large-amplitude EREs there. Thus, we cannot detect any EREs for \(l=1.002184\) in Fig. (6) (e). Figure (6) (e) only shows the data related to the librational motion.
To further study the inter-event intervals between the EREs [78], we choose two particular values of \(l\). We first select \(l=0.999945\), which lies just after the crisis point **L**. The second one, \(l=1.00218\), coincides with the crisis point **R**. These values of \(l\) correspond to the emergence of
Figure 6: **Histograms of rotational and librational dynamics in semi-log scale**: We collect the data related to both librational and rotational motion over long iterations of length \(10^{11}\), leaving a transient of \(10^{6}\). The gap between the two groups is due to the presence of two different motions involving librational and rotational dynamics. The group of bins on the left side corresponds to the librational motion, and the other group on the right side to the rotational dynamics. Using the red vertical line, we also plot the extreme-rotational-event qualifying threshold \(H_{T}=\mu+6\sigma\), where \(\mu\) is the sample mean and \(\sigma\) is the standard deviation of the sample. This threshold helps to distinguish the EREs from the chaotic rotational dynamics. The rotational events to the right of \(H_{T}\) are considered EREs. Note that subfigure (e) contains only librational events at \(l=1.002184\). Parameter values: (a) \(l=0.999941\), (b) \(l=0.999945\), (c) \(l=1.001\), (d) \(l=1.00218\), and (e) \(l=1.002184\). Initial condition: \((\mathbf{\theta_{0}},\mathbf{\dot{\theta_{0}}})=(0.01,0.02)\).
Figure 7: **Probability density functions (PDFs) of the inter-event intervals in semi-log scale**: We calculate the inter-event intervals (IEI) between consecutive EREs from the collected sample of size \(10^{11}\), discarding a sufficiently long transient of length \(10^{6}\). Our numerical data (blue bins) fit well with the continuous exponential distribution (black line). Small inter-event intervals have higher probabilities of occurrence of EREs. However, since the distribution is positively skewed (right-skewed), the probability of the appearance of EREs reduces significantly as the inter-event intervals increase. Parameter values: (a) \(l=0.999945\) and (b) \(l=1.00218\). Estimated rate parameter: (a) \(\lambda=4.2177\times 10^{-06}\) with standard error 3652.39, and (b) \(\lambda=1.6018\times 10^{-05}\) with standard error 493.323. Calculated coefficient of variation (CV): (a) CV = 0.9876 and (b) CV = 1.0168. We use the MATLAB Distribution Fitter app to fit the probability distribution to the gathered data.
EREs, as discussed in connection with the previous figures. We plot the histograms of the gathered data of length \(10^{11}\) after discarding the initial transient of size \(10^{6}\). Throughout the study, we use the same initial conditions \(\theta_{0}=0.01\) and \(\dot{\theta}_{0}=0.02\) (unless stated otherwise). We find that our accumulated data are best fitted by the exponential distribution, as shown in Fig. (7). Using MATLAB, we confirm our numerical data (shown as blue bins in Fig. (7)) are fitted by the following probability density function (PDF)
\[f(x;\lambda)=\begin{cases}\lambda e^{-\lambda x};&x\geq 0,\\ 0;&x<0,\end{cases} \tag{2}\]
where \(\lambda>0\) is the rate parameter. We explicitly calculate this rate parameter \(\lambda\) for \(l=0.999945\) and \(1.00218\). The best estimated rate parameters are \(\lambda=4.2177\times 10^{-06}\) with standard error 3652.39 for Fig. (7) (a) and \(\lambda=1.6018\times 10^{-05}\) with standard error 493.323 for Fig. (7) (b). Notably, the standard error of the exponential fit in subfigure (a) is larger than in subfigure (b). This larger error is due to the number of EREs in our gathered data. We iterate the system (1) for \(10^{11}\) iterations, discarding a sufficiently long transient of length \(10^{6}\), for both subfigures. However, the number of EREs in subfigure (a) is 4214, whereas in subfigure (b) it is 16015. The availability of a larger number of EREs in subfigure (b) offers better statistical convergence to the exponential distribution than in subfigure (a); thus, we get a lower standard error in subfigure (b). We additionally calculate the coefficient of variation (CV), defined as the ratio of the standard deviation to the mean of the sample. This measure is equal to 1 for the exponential distribution. The respective CVs are 0.9876 (for \(l=0.999945\)) and 1.0168 (for \(l=1.00218\)) for the gathered samples shown in subfigures (a) and (b) of Fig. (7). Since these derived CVs are very close to unity for our samples, and based on the MATLAB Distribution Fitter results, we conclude that the accumulated data are distributed according to the exponential distribution.
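The authors report using the MATLAB Distribution Fitter; an equivalent check can be sketched in Python as below (our own illustration), where the rate parameter of the exponential PDF in Eq. (2) is obtained by maximum likelihood and the coefficient of variation is computed directly from the inter-event intervals.

```python
import numpy as np
from scipy import stats

def iei_statistics(ere_times):
    """Exponential fit and coefficient of variation of inter-event intervals."""
    iei = np.diff(np.sort(np.asarray(ere_times)))
    loc, scale = stats.expon.fit(iei, floc=0.0)   # MLE with location pinned at 0
    rate = 1.0 / scale                            # lambda in Eq. (2)
    cv = iei.std() / iei.mean()                   # equals 1 for exponential data
    return rate, cv
```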
## IV Conclusions
In this work, we have shown how a suitable choice of pendulum length can produce large-amplitude rotational motion of a forced-damped nonlinear pendulum under the influence of ac and dc torque. We have characterized the librational and rotational events using bifurcation analysis. The same bifurcation diagram helps us detect the emergence of extreme events in the rotational dynamics. The system displays rotational and librational dynamics, but occasionally the angular velocity during rotation becomes much higher than usual. Our numerical simulations suggest that the chaotic attractor in the rotational motion suddenly enlarges in two different post-crisis regimes due to the interior crisis, generating intermittent behavior in the rotational dynamics. The temporal evolutions of the angular velocity further validate that these sporadic rotational events occasionally cross a statistically pre-defined threshold. Hence, these large-amplitude rotational events share the features of extreme events observed in various nonlinear dynamical systems. We have also displayed the respective phase portrait for each time series to confirm our claims. Furthermore, we have obtained an exponential distribution for the inter-event intervals between EREs. We have elucidated the occurrence of phase slips between the system's phase and the externally applied ac torque in the course of the origination of extreme rotational events.
It might be interesting to detect the chaotic saddles lying in the basin of attraction of the attractor that generate interior crises. However, identifying such nonattracting chaotic sets mediating interior crises is more challenging and requires further investigation. One might investigate coupled nonlinear pendula in the spirit of the present study, which may offer greater insight into a wide range of dynamical systems. It would also be interesting to examine whether the observed signature of extreme rotational events is experimentally observable. Such generalizations are left as an exciting avenue for future research. It is also possible to investigate why the angular velocity rises irregularly during rotation. In conclusion, we anticipate that the findings of this work will contribute to our knowledge of how extreme large-amplitude events arise in nonlinear dynamical systems and will encourage additional research into the causes of these extreme rotational events in other non-equilibrium systems.
## Acknowledgements
DG and TKP are supported by the Science and Engineering Research Board (SERB), Government of India (Project no. CRG/2021/005894). TKP extends his earnest thanks to Gourab Kumar Sar, Md Sayeed Anwar, and S. Sudharsan for their friendly support and benevolent discussions. AR wants to thank Subrata Ghosh for fruitful discussions. SNC and AR are indebted to Arindam Mishra, Chittaranjan Hens, and Syamal K Dana for valuable discussions and feedback on this manuscript. SNC thanks Srilena Kundu for the insightful discussions and productive conversations.
## Conflict of Interest
The authors have no conflicts to disclose.
## Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2309.09787 | Harnessing Collective Intelligence Under a Lack of Cultural Consensus | Harnessing collective intelligence to drive effective decision-making and
collaboration benefits from the ability to detect and characterize
heterogeneity in consensus beliefs. This is particularly true in domains such
as technology acceptance or leadership perception, where a consensus defines an
intersubjective truth, leading to the possibility of multiple "ground truths"
when subsets of respondents sustain mutually incompatible consensuses. Cultural
Consensus Theory (CCT) provides a statistical framework for detecting and
characterizing these divergent consensus beliefs. However, it is unworkable in
modern applications because it lacks the ability to generalize across even
highly similar beliefs, is ineffective with sparse data, and can leverage
neither external knowledge bases nor learned machine representations. Here, we
overcome these limitations through Infinite Deep Latent Construct Cultural
Consensus Theory (iDLC-CCT), a nonparametric Bayesian model that extends CCT
with a latent construct that maps between pretrained deep neural network
embeddings of entities and the consensus beliefs regarding those entities among
one or more subsets of respondents. We validate the method across domains
including perceptions of risk sources, food healthiness, leadership, first
impressions, and humor. We find that iDLC-CCT better predicts the degree of
consensus, generalizes well to out-of-sample entities, and is effective even
with sparse data. To improve scalability, we introduce an efficient
hard-clustering variant of the iDLC-CCT using an algorithm derived from a
small-variance asymptotic analysis of the model. The iDLC-CCT, therefore,
provides a workable computational foundation for harnessing collective
intelligence under a lack of cultural consensus and may potentially form the
basis of consensus-aware information technologies. | Necdet Gürkan, Jordan W. Suchow | 2023-09-18T14:05:04Z | http://arxiv.org/abs/2309.09787v2 | # Harnessing Collective Intelligence Under a Lack of Cultural Consensus
###### Abstract
Harnessing collective intelligence to drive effective decision-making and collaboration benefits from the ability to detect and characterize heterogeneity in consensus beliefs. This is particularly true in domains such as technology acceptance or leadership perception, where a consensus defines an intersubjective truth, leading to the possibility of multiple "ground truths" when subsets of respondents sustain mutually incompatible consensuses. Cultural Consensus Theory (cct) provides a statistical framework for detecting and characterizing these divergent consensus beliefs. However, it is unworkable in modern applications because it lacks the ability to generalize across even highly similar beliefs, is ineffective with sparse data, and can leverage neither external knowledge bases nor learned machine representations. Here, we overcome these limitations through Infinite Deep Latent Construct Cultural Consensus Theory (idlc-cct), a nonparametric Bayesian model that extends cct with a latent construct that maps between pretrained deep neural network embeddings of entities and the consensus beliefs regarding those entities among one or more subsets of respondents. We validate the method across domains including perceptions of risk sources, food healthiness, leadership, first impressions, and humor. We find that idlc-cct better predicts the degree of consensus, generalizes well to out-of-sample entities, and is effective even with sparse data. To improve scalability, we introduce an efficient hard-clustering variant of the idlc-cct using an algorithm derived from a small-variance asymptotic analysis of the model. The idlc-cct, therefore, provides a workable computational foundation for harnessing collective intelligence under a lack of cultural consensus and may potentially form the basis of consensus-aware information technologies.
collective intelligence, consensus beliefs, bayesian modelling, cultural consensus theory
## 1 Introduction
Collective intelligence refers to decision-making processes that draw on the collective opinions of multiple individuals. This often results in higher-quality decisions compared to those made by a single individual, a phenomenon commonly termed the "wisdom of the crowd" [1, 2, 3]. A multiplicity of methods can serve as information-pooling mechanisms to derive collective intelligence, ranging from structured communication techniques to prediction markets and algorithmic aggregation methods. Technologies that implement these methods and facilitate the sharing of opinions and knowledge have further enhanced collective intelligence, leading to innovative approaches and methodologies for problem-solving [4]. This integration is readily apparent in practical applications; for instance, businesses now routinely leverage crowdsourcing platforms to perform tasks such as creative design, ideation, and prediction [5, 6, 7].
From their origins in taking the median response of the group [8], traditional aggregation and belief-merging methods have been premised on the notion that each respondent provides a partial view of an objective truth (e.g., the weight of the ox) or of a single intersubjective consensus truth (e.g., who is the group's leader), while acknowledging that individuals may have unequal domain knowledge, vigilance, or bias that causes their responses to vary [9, 10, 11]. The effectiveness of aggregation methods for deriving collective intelligence then hinges on their ability to parcel out these
sources of signal, noise, and bias in the judgments respondents make, with more effective aggregation mechanisms coming closer to that objective or intersubjective truth.
However, these methods are often applied in domains where there is a lack of cultural consensus, with different subsets of people forming conflicting culturally constructed beliefs. For example, in one subset, a leader may be perceived as someone who is authoritative and decisive, while in another subset, leadership may be associated with collaborative and inclusive decision-making [12]. One subset may view face tattoos as taboo, whereas another may find them to be stylish and meaningful expressions of individual or group identity [13]. A subset may avoid certain technologies because of privacy concerns, while another views them as vital for improving the quality of life [14]. It quickly becomes evident that many concepts are socially learned and deeply influenced by respondents' personal and cultural beliefs, leading to the possibility of multiple incommensurate views [15].
Cultural Consensus Theory (cct) provides a statistical framework for information pooling in domains where there may be a lack of cultural consensus, enabling those who use it to infer the beliefs and attitudes that influence social practices and the degree to which respondents know or express those beliefs [16]. Consensus models based on cct then provide an opportunity to simultaneously study both individual and group-level differences by examining the extent to which a respondent conforms to the consensus within one or more subsets, and facilitating the representation of how people differ in terms of their level of knowledge and response biases. Researchers have applied the cct framework to find a practical and concise definition of beliefs that are accepted by a group sharing common knowledge. Applications are found in a wide variety of domains, including cognitive evaluation [17], organizational culture [18], and online communities [19].
However, cct has several limitations that preclude its use as a computational foundation for harnessing collective intelligence in modern applications. First, it treats each question-answer pair independently, which prevents generalization across questions. This hinders the model's performance in the sparse data regime and leads to a cold-start problem when the model cannot generalize to new questions, even those highly similar to questions previously asked. Therefore, the number of questions required to characterize a culture's consensus beliefs scales linearly with the number of culturally held beliefs. Finally, cct can leverage neither existing structured knowledge bases nor pretrained learned representations that could provide relevant information about known entities and their relations. These constraints hinder the application of cct in more complex or dynamic cultural contexts where the interrelatedness of beliefs and the availability of preexisting knowledge could play a pivotal role.
These issues find a solution in modern machine-learning techniques that enable one to make predictions about out-of-sample items by learning a model over latent representations. Machine learning algorithms can analyze vast quantities of data from various modalities, including text, images, and audio, and identify patterns and relationships that generalize beyond the training data [20, 21, 22]. Leveraging these powerful techniques, it is now possible to create vector-feature representations of words, sentences, visual scenes, and images of objects. These high-dimensional representations at times approximate human mental representations [23]. Although they are not comprehensive theories of human cognition, vector representations of various real-world objects and concepts have been used as inputs to linear models that can predict individual and aggregate evaluations on a wide range of topics, including perceived risk [24], first impressions based on faces [25, 26], perceptions of leadership [27], and evaluation of creative writing [28].
Even so, machine-learning methods are not immune to the very same limitations that beset traditional aggregation methods, and they similarly fail in domains that lack a clear cultural consensus. For instance, a computer vision algorithm might be used to predict a label from an image without recognizing that different subsets of respondents have distinct normative responses regarding which label ought to be applied. Or, a natural-language processing algorithm might be used to predict whether user-generated content is consistent with the norms of an online community, without recognizing that different subsets of respondents have distinct normative responses regarding what content is deemed acceptable. This is particularly relevant in domains where a consensus among respondents defines an intersubjective truth, leading to the possibility of multiple "ground truths" when subsets of respondents sustain mutually incompatible consensuses.
To combine the strengths of cct and machine-learning methods while addressing their respective limitations, we propose the idlc-cct, an extension to the cct that (1) allows culturally held beliefs to take the form of a _deep latent construct_: a fine-tuned deep neural network that maps the features of a concept or entity to the consensus response among a subset of respondents, and (2) draws these deep latent constructs from a Dirichlet Process using the stick-breaking construction. The approach therefore aligns pretrained machine representations to both group- and individual-level judgments, effectively capturing variations in belief processes and behaviors across them under a lack of cultural consensus. We evaluate the idlc-cct on people's judgments of various phenomena, including risk sources, leadership effectiveness, first impressions of faces, and humor. This refined technique has broad scientific and practical applicability in domains where (1) social scientists and organizations currently study group-level variation but do not leverage machine-learning methods to gain insight, and (2) computer scientists and behavioral scientists apply
machine-learning methods to harness collective intelligence without considering conflicting culturally constructed beliefs.
The plan for the paper is as follows. In Section 2, we introduce advancements in deep learning and applications of word, sentence, and image embeddings. Section 3 reviews the cct and provides a mathematical formalism for continuous responses that serves as the foundation for our extended model. Following that, in Section 4 we introduce our extended deep latent-construct cct model, the idlc-cct. Sections 5 and 6 describe the datasets and methods used to validate the model. Finally, in Sections 7 and 8 we present and discuss the results obtained from fitting the model to data across the several domains.
## 2 Machine representations
Recent advancements in deep learning have given rise to expressive high-dimensional vector representations, known as embeddings [23]. These embeddings capture the relationships, patterns, and similarities of entities such as words or objects. Deep learning involves training large multi-layered neural networks, consisting of input, hidden, and output layers [23]. The input layer receives data (e.g., image or text), while hidden layers transform the data into intermediate representations, and the output layer generates a response (e.g., label or action). These intermediate representations, or embeddings, effectively capture expressive features of high-dimensional data points, allowing for successful mapping to responses. Embeddings are versatile inputs for a variety of predictive and explanatory models [29; 30; 20; 22; 31]. Researchers employ different deep-learning techniques to generate these embeddings based on the specific application [32]. Subsequent sections will discuss word, sentence, and image embeddings in detail.
### Sentence and Word Representations
The field of distributional semantics concentrates on automatically deriving embeddings for real-world concepts from large-scale natural language data [33]. These models are based on the understanding that a significant portion of human knowledge is embedded in word co-occurrence patterns [34]. By leveraging these patterns, researchers can create word embeddings as vectors in high-dimensional semantic spaces or as collections of probabilistic semantic topics [35; 36; 37; 38]. The resulting vectors show that closely associated words, which are often discussed and referenced in similar contexts, have similar representations and are therefore closer to each other in the semantic space.
In most use cases, "pre-trained" distributional semantic representations for concepts serve as inputs for more sophisticated models. These models are then "fine-tuned" using participant data, allowing them to approximate the structured knowledge inherent in participants' responses. Applications include predicting the perceived riskiness of various hazards [24], evaluating perceived leadership [27], modeling stereotypes and prejudices that bias social judgments [39], and extracting team cognitive diversity [31]. In addition, recent deep learning advancements can generate vector representations that capture key aspects of sentence meaning [40; 41; 42], which can then be fine-tuned for downstream tasks using secondary machine learning models (see [43] for an intuitive explanation). Researchers have applied these methods to study people's semantic cognition, such as commonsense [44], relation knowledge [45], similarity judgements [46], and individual's propensity ratings [47].
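As a hedged illustration of this "pretrain, then fine-tune" workflow (not code from the cited studies; the texts and ratings below are invented placeholders), one can encode short entity descriptions with a pretrained sentence encoder and fit a simple linear readout to participant ratings:

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import RidgeCV

# Hypothetical items and mean ratings (placeholders for illustration only).
texts = ["nuclear power plants", "air travel", "cigarette smoking"]
mean_ratings = [0.71, 0.35, 0.82]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # a pretrained sentence encoder
embeddings = encoder.encode(texts)                 # one vector per item

# The simplest form of "fine-tuning": a regularized linear readout
# trained on frozen embeddings to predict the behavioral ratings.
readout = RidgeCV().fit(embeddings, mean_ratings)
```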
### Image representations
Deep neural networks have demonstrated the ability to match or surpass human performance on benchmarks for image classification [48], detection [49], and recognition [50]. Training these networks on large-scale databases of images enables them to learn versatile feature sets that generalize effectively to real-world settings. The representations discovered by deep neural networks can be used in models of human behavior for perceptual tasks, such as predicting the memorability of objects in images [51] and predicting human typicality judgments [52]. While their predictive power is evidence of relevance to human judgments, these models fail to fully capture the structure of human psychological representations [53]. Comparison of these two representations is challenging because human psychological representations cannot be directly observed. Researchers have thus combined representations obtained from machines with psychological models to examine the correspondence between the two [54] and bring them into closer alignment [55; 25].
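A typical way to obtain such image embeddings (a sketch under the assumption that a torchvision ResNet-50 backbone is an acceptable stand-in for the pretrained networks used in these studies) is to drop the classification head and keep the pooled features:

```python
import torch
from torchvision import models
from PIL import Image

# Pretrained ResNet-50 with its classification head replaced by an identity,
# so the forward pass returns the 2048-dimensional pooled feature vector.
weights = models.ResNet50_Weights.DEFAULT
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()
backbone.eval()
preprocess = weights.transforms()

def embed_image(path):
    with torch.no_grad():
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        return backbone(x).squeeze(0)  # image embedding of shape (2048,)
```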
## 3 Cultural Consensus Theory
The underlying principle of consensus-based models is that, for many tasks, a group's central tendency often yields a precise outcome. The collective response can serve as a substitute for the actual answer when assessing individual group members--those whose judgments are closer to the group's central tendency (across multiple questions) are presumed
to possess greater knowledge. Consequently, consensus-based models can be employed to estimate an individual's level of knowledgeability when there is no available ground truth. Cultural Consensus Theory (cct), developed by [16], is a theory and method that outlines the conditions under which agreement among individuals can indicate knowledge or accuracy. Many folk epistemological systems are based on this very relationship between consensus and truth; notable examples include the jury system, decentralized content moderation in online communities, and numerical ratings on review aggregation sites. The principle is even reflected in expressions such as "50,000,000 Elvis fans can't be wrong."
Researchers employing cct strive to measure the consensus from respondents' individual responses in cases where the researchers do not know the consensus ahead of time, nor which respondents have more or less knowledge. Indeed, the problem solved by cct is akin to determining the answer key to a test given to respondents while simultaneously grading those respondents with respect to the answer key [56, 57]. Additionally, like related Item Response Theory methods [58], cct measures the difficulty of the questions. In cct, this is accomplished with cognitive-based response process models, with consensus answers and cognitive characteristics of the respondents estimated endogenously. Thus, cct is useful to researchers in situations where (a) the consensus knowledge of the respondents is unknown to the researcher ahead of time, (b) the researcher has access to a limited number of respondents who may or may not have equal access to this shared cultural knowledge, (c) the researcher can construct a relevant questionnaire but does not know which questions are more or less difficult, and (d) the researcher does not know much a priori about the characteristics of the respondents.
The cct framework additionally takes into account each respondent's response biases, which are the shift and scale parameters of the function that maps their latent appraisal to a location on the response scale. An example of a scale bias is a tendency to mark most values either at the outer ends or in the middle section of the scale. An example of a shift bias is a tendency to mark values more frequently on one side of the scale. These parameters derive from a function often used to scale bias in probability estimation in the range (0, 1), called the Linear in Log Odds function [59]. Thus, the scale bias parameter estimates the shrinkage or expansion of the latent appraisals, while the shift bias parameter estimates a shift to the left or right.
The first cct model, the General Condorcet Model (GCM), was developed for binary data (true/false responses) and assumes that the consensus truth of each item is also a binary value [16]. The GCM has been widely used in the social and behavioral sciences [60]. [61] introduced an alternate assumption to the GCM to extend it to continuous truths. An extensive cct model for ordinal data was developed using a Gaussian appraisal model [62]. In addition, cct models for continuous response data were developed to estimate and detect cultural consensuses, respondent knowledge, response biases, and item difficulty from continuous data [63, 56].
Here, we describe the Continuous Response Model (CRM), developed by [63], which allows for multiple consensus truths and serves as the basis for our extension of the model.
### Continuous Cultural Consensus Theory
As a starting point, consider the Continuous Response Model (crm) [63], a cultural consensus model for continuous data derived from observations of the random response profile matrix \(\mathbf{X}=(X_{ik})_{N\times M}\) for \(N\) respondents and \(M\) items, where each respondent's response falls within \((0,1)\) or a finite range that permits a linear transformation to \((0,1)\). The crm links the random response variables in \((0,1)\) to the real line through the logit transform \(X_{ik}^{*}=\text{logit}(X_{ik})\). Therefore, each item has a consensus value in \((-\infty,\infty)\).
The crm is formalized and further explained by the following axioms:
**Axiom 1** (_Cultural truths_).: There is a collection of \(V\geq 1\) latent cultural truths, \(\{T_{1},...,T_{v},...,T_{V}\}\), where \(T_{v}\in\prod_{k=1}^{M}(-\infty,\infty)\). Each participant \(i\) responds according to only one cultural truth (its corresponding consensus locations), \(T_{\Omega_{i}}\), where \(\Omega_{i}\in\{1,...,V\}\), and the parameter \(\Omega=(\Omega_{i})_{1\times N}\) denotes the cultural membership of each respondent.
**Axiom 2** (_Latent Appraisals_).: It is assumed that each participant draws a latent appraisal, \(Y_{ik}\), of each \(T_{\Omega_{i}k}\), in which \(Y_{ik}=T_{\Omega_{i}k}+\epsilon_{ik}\). The \(\epsilon_{ik}\) error variables are normally distributed with mean 0 and standard deviation \(\sigma_{ik}\).
**Axiom 3** (_Conditional Independence_).: The \(\epsilon_{ik}\) are mutually stochastically independent.
**Axiom 4** (_Precision_.).: There are cultural competency parameters \(\mathbf{E}=(E_{i})_{1\times N}\) with all \(E_{i}>0\), and item difficulty parameters specific to each cultural truth \(\Lambda=(\lambda_{k})_{1\times M}\), \(\lambda_{k}>0\) such that
\[\sigma_{ik}=\lambda_{k}/E_{i}. \tag{1}\]
If all item difficulties are equal, then each \(\lambda_{k}\) is set to 1.
**Axiom 5** (_Response Bias_). There are two respondent bias parameters that act on each respondent's latent appraisals to arrive at the observed responses \(X_{ik}\). These include a scaling bias, \(\textbf{A}=(a_{i})_{1\times N},a_{i}>0\); and shifting bias \(\textbf{B}=(b_{i})_{1\times N},-\infty<b_{i}<\infty\), such that
\[X_{ik}^{*}=a_{i}Y_{ik}+b_{i}. \tag{2}\]
These axioms are developed to model the continuous responses of respondents that differ in cultural competency, \(E_{i}\), and response biases, \(a_{i}\) and \(b_{i}\), to items that have different shared latent truth values. The respondents have a latent appraisal with a mean at the item's consensus location, plus some error, which depends on their competence and the item difficulty. Axiom 1 locates the item truth values in the continuum. Axiom 2 specifies that the appraisal error is normally distributed with mean zero. Axiom 3 sets the appraisals to be conditionally independent given the respondents' cultural truth and the error standard deviations. Axiom 4 specifies the appraisal error's standard deviation, which depends on the respondent's competence and the item difficulty. Axiom 5 sets each respondent's response shift and scale biases.
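To make the response process concrete, the short simulation below generates synthetic responses under Axioms 1-5 for a single culture; the specific distributional choices and parameter values for the simulated competencies, difficulties, and biases are illustrative assumptions, not estimates from any dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 50, 20                              # respondents, items
T = rng.normal(0.0, 1.0, size=M)           # consensus locations on the real line (Axiom 1, single culture)
E = rng.lognormal(0.0, 0.5, size=N)        # respondent competencies (Axiom 4)
lam = rng.lognormal(0.0, 0.3, size=M)      # item difficulties (Axiom 4)
a = rng.lognormal(0.0, 0.1, size=N)        # scale biases (Axiom 5)
b = rng.normal(0.0, 0.5, size=N)           # shift biases (Axiom 5)

sigma = lam[None, :] / E[:, None]          # appraisal error SD: sigma_ik = lambda_k / E_i (Eq. 1)
Y = T[None, :] + rng.normal(0.0, sigma)    # latent appraisals (Axioms 2-3)
X_star = a[:, None] * Y + b[:, None]       # biased appraisals on the logit scale (Axiom 5, Eq. 2)
X = 1.0 / (1.0 + np.exp(-X_star))          # observed responses in (0, 1)
```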
## 4 Deep Latent-Construct CCT
cct represents the structure of culturally held beliefs as a lookup table, where keys correspond to questions and values to answers. In this representation, questions and answers lack defined internal structures and are connected only through correlations across respondents' answers. However, this formulation comes with several limitations. First, it treats each question-answer pair independently, preventing information from one question from informing our understanding of others. Second, the number of questions required to characterize a culture scales linearly with the number of culturally held beliefs. Finally, there is no way to benefit from existing structured knowledge bases that provide information about known entities and their relations.
We begin by extending CCT to provide a more sophisticated representation of culturally held beliefs than a mere lookup table. In this extension, we define these beliefs as a _latent construct_, a function that maps a question to a consensus answer through an intermediate representation [64]. We consider latent constructs that are structured as deep neural networks fine-tuned using a linear readout layer that is specific to a particular culture. Thus the latent construct as it applies to a particular culture is a combination of pre-trained embeddings and the regression weights associated with that culture.
Formally, then, we introduce two elements to the model described in Section 3.1: (1) pre-trained embeddings \(\phi_{k}\) that represent a featurization of the entity \(k\) under question, and (2) the culture-specific regression weights \(\omega_{\Omega_{i}}\). The relationship between the regression weights of the latent construct and the embeddings is represented through the regression equation
\[T_{vk}=\phi_{k}\omega_{\Omega_{i}}^{T} \tag{3}\] \[Y_{ik}=T_{vk}+\epsilon_{ik},\]
where \(\epsilon_{ik}\) is the error variable of Axiom 2, with standard deviation given by Axiom 4 (Eq. 1). We then replace the consensus location described in Axiom 1 with a function that takes as input the embeddings and corresponding weights for each feature and outputs the consensus.
When fitting the latent construct, we use Bayesian Ridge regression to regularize its weights. The prior for the coefficients, \(\omega_{\Omega_{i}}\), is given by a spherical Gaussian:
\[p(\omega_{\Omega_{i}}\mid\zeta)=\text{Normal}(\omega_{\Omega_{i}}\mid 0, \zeta^{-1}\textbf{I}_{p}), \tag{4}\]
with the prior over \(\zeta\) assumed to be Gamma distributed, the conjugate prior for the precision of the Gaussian.
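A minimal NumPyro sketch of this readout is given below. The \(\text{Gamma}(1,1)\) hyperprior and the function name are illustrative choices made here for concreteness, not the exact specification used in the reported models.

```python
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist

def latent_construct(phi):
    """Ridge-style readout from item embeddings phi (M x p) to consensus locations (Eqs. 3-4).
    Intended to be called inside a NumPyro model (i.e., under a seed/inference handler)."""
    p = phi.shape[1]
    zeta = numpyro.sample("zeta", dist.Gamma(1.0, 1.0))                              # precision of the weights
    omega = numpyro.sample("omega",
                           dist.Normal(jnp.zeros(p), zeta ** -0.5).to_event(1))      # spherical Gaussian prior
    return jnp.dot(phi, omega)                                                       # T_k = phi_k . omega
```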
### Infinite Deep-Latent Construct CCT
In formulations that allow for the possibility of multiple cultures, CCT analyzes eigenvalues obtained from the cross-participant correlation matrix to determine the number of cultures present [63]. In the context of our Bayesian formulation of deep latent construct CCT, we employ a Bayesian non-parametric technique in which cultures (and their associated latent constructs) are drawn from a Dirichlet Process (DP) by way of the stick-breaking construction [65]. The Dirichlet Process serves as a prior over discrete distributions and is particularly useful for mixture models, where the number of components is not known a priori and can grow with the data. The DP's attribute of being an infinite discrete prior has led to it being widely applied not only to mixture models [66; 67], but also to psychometric models as a prior over probabilities, facilitating flexible modeling of individual differences and underlying traits [68; 69]. Using this method in the context of CCT provides an end-to-end probabilistic approach to learning the number of cultures needed to account for the observed data that simultaneously learns the culture assignments, latent constructs, and individual-level parameters. These parameters can be marginalized over the discrete cultural membership indicators
\(\mathbf{z}\) by using an efficient posterior inference algorithm (e.g., advi, nuts, hmc) for learning the joint posterior of the remaining model parameters.
Although the model assumes an unbounded number of cultures in theory, in practice, for a given set of respondents, only a finite number of cultures will have at least one respondent assigned to them. To make the model more computationally feasible and efficient, we truncate the Dirichlet Process by selecting an upper bound that caps the possible number of cultures. This approach strikes a balance between enabling the model to account for a variety of cultures and keeping resource consumption, such as processing time and memory, in check.
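A sketch of the truncated stick-breaking weights in NumPyro is shown below; the truncation level `V_MAX` and the function name are assumptions made for illustration.

```python
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist

V_MAX = 20  # assumed truncation level: an upper bound on the number of instantiated cultures

def stick_breaking_weights(alpha):
    """Truncated stick-breaking construction of the DP mixture weights pi.
    Intended to be called inside a NumPyro model."""
    with numpyro.plate("sticks", V_MAX - 1):
        beta = numpyro.sample("beta", dist.Beta(1.0, alpha))
    beta_full = jnp.concatenate([beta, jnp.ones(1)])                     # last stick takes the remainder
    leftover = jnp.concatenate([jnp.ones(1), jnp.cumprod(1.0 - beta)])
    return beta_full * leftover                                          # pi_v = beta_v * prod_{u<v} (1 - beta_u)
```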
### Hierarchical specification of the extended CCT
In this section, we provide a hierarchical specification of the generative model that underpins Infinite Deep Latent Construct (idlc-cct), where population distributions are specified for the parameters using hyperparameters [70]. These hyperparameters are estimated from their own distributions and can represent the central tendency across items or respondents, which may be unique to each dataset. The hierarchical structure of the generative model is as follows:
\[\begin{aligned}
\omega_{\Omega_{i}} &\sim \text{Normal}(0,\zeta^{-1}) && \text{Coefficient weights}\\
T_{vk} &= \phi_{k}\omega_{\Omega_{i}}^{T} && \text{Latent construct item location}\\
\log(\lambda_{vk}) &\sim \text{Normal}(\mu_{vk},\tau_{vk}) && \text{Item difficulty}\\
\log(E_{i}) &\sim \text{Normal}(\alpha_{E_{\Omega_{i}}},\kappa_{E_{\Omega_{i}}}) && \text{Respondent competency}\\
\log(a_{i}) &\sim \text{Normal}(\mu_{a_{i}},\tau_{a_{i}}) && \text{Respondent scaling bias}\\
b_{i} &\sim \text{Normal}(\mu_{b_{i}},\tau_{b_{i}}) && \text{Respondent shifting bias}\\
\Omega_{i} &\sim \text{Categorical}(\pi) && \text{Group membership}\\
\pi &\sim \text{stickbreaking}(\beta) && \text{Pr. of group membership}\\
\beta &\sim \text{Beta}(1,\alpha) && \text{Group sparsity}
\end{aligned}\]
The item consensus \(T_{vk}\) is derived by applying Bayesian Ridge Regression to the pretrained embeddings. The location of the item consensus is calculated by taking the dot product of the two relevant row vectors. The other model parameters, \(E_{i}\), \(\lambda_{vk}\), and \(a_{i}\), which are each located on the positive half-line, are log-transformed to the real line and also assumed to be sampled from a normal population-level distribution. The shift bias, \(b_{i}\), is located on the real line, parameterized with a mean and precision (inverse variance). Note that the respondents' competence parameter remains singly indexed by \(i\), through an indexing technique in which their distribution is specified by their group membership \(\Omega_{i}\). Culture assignments are derived via a stick-breaking prior, and this allows for varying probabilities of being in any of \(V\) groups; note that \(V\) is unknown a priori and needs to be estimated from observed data. The sparsity of cultures, \(\beta\), is sampled from a beta distribution. The variables \(\Omega_{i}\) and \(\pi\) are removed for the single-truth variant of our model.
## 5 Method
We applied the idlc-cct and dlc-cct to a diverse range of datasets that included judgments of the healthiness of food, humor, leadership effectiveness, social attributes of faces, and risk. These datasets were chosen because they include respondents' individual responses to questions (items) that are related to their shared knowledge or beliefs, and because the nature of the domain is such that a consensus contributes to an intersubjective truth. For our idlc-cct model, we rescaled these ratings to fall within the range of 0 to 1. Excluded were datasets with only aggregate responses, or where the nature of the domain is fully objective or subjective.
Models were implemented in NumPyro [71] with the jax backend [72]. The model components were integrated into a single likelihood function and a set of prior distributions. Inference was performed using a Gibbs Sampler [73] combined with the No-U-Turn Sampler (nuts) [74], a standard Markov Chain Monte Carlo sampling algorithm, as implemented in NumPyro. We used two chains with 10,000 warm-up samples and 10,000 draw samples, thereby obtaining 20,000 posterior samples for each model. We verified that the posterior had converged by checking that there were no divergent transitions and by monitoring chain diagnostics such as \(R_{\text{hat}}\). The code for the models and datasets are available on GitHub at [Redacted for blind submission].
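The sketch below illustrates an inference setup of this kind in NumPyro. The toy model, variable names, and the use of `DiscreteHMCGibbs` (one way NumPyro combines Gibbs updates over discrete memberships with NUTS) are assumptions made for illustration; the reported runs used the full idlc-cct likelihood with 10,000 warm-up and 10,000 draw samples per chain.

```python
import jax.numpy as jnp
import jax.random as random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS, DiscreteHMCGibbs

def toy_membership_model(X):
    """Stand-in for the full idlc-cct likelihood: two candidate consensus values and a
    discrete culture membership per respondent, so the Gibbs-within-NUTS kernel applies."""
    T = numpyro.sample("T", dist.Normal(jnp.zeros(2), 1.0).to_event(1))
    with numpyro.plate("respondents", X.shape[0]):
        z = numpyro.sample("z", dist.Categorical(probs=jnp.array([0.5, 0.5])))
        numpyro.sample("X", dist.Normal(T[z], 0.5), obs=X)

X = jnp.array([-1.2, -0.8, 0.9, 1.1])                      # toy logit-scale responses
kernel = DiscreteHMCGibbs(NUTS(toy_membership_model))      # Gibbs for discrete z, NUTS for the rest
mcmc = MCMC(kernel, num_warmup=1000, num_samples=1000,     # the actual runs used 10,000 / 10,000 per chain
            num_chains=2, chain_method="sequential")
mcmc.run(random.PRNGKey(0), X)
mcmc.print_summary()                                       # reports R-hat among other diagnostics
```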
In the next section, we elucidate the phenomena associated with each dataset, along with a thorough description of the datasets themselves.
## 6 Concept and Data
### Sense of humor
Everyone appreciates humor. In the workplace, it offers benefits such as team bonding, employee motivation, idea generation, and frustration diffusion through venting [75; 76]. However, it is important to recognize that humor can also have downsides, such as causing distractions, harming credibility, or offending others in diverse work environments [77]. The differences in how humor is enacted and understood arise from individual differences, such as gender, beliefs, and mental state [78]. Therefore, organizations may find themselves in a complex position, responsible for considering individual differences in perception of humor to avoid negative consequences in the workplace [79].
Another stream of research focuses on the use of humor in marketing materials, such as commercial and print ads, as well as entertainment products like TV shows and web content. A common belief is that making someone laugh can lead to a purchase. Advertising and marketing research, indeed, has demonstrated that humor can enhance ad likability, partly due to its ability to evoke positive emotions [80]. Humorous ads positively impact viewers' moods and elicit more positive feelings than non-humorous ads [81]. These positive emotions may lead to a favorable attitude towards the ad [82; 83], increased liking of the source [84], and stronger persuasion [85]. While studies indicate the positive impacts of humor in marketing and advertising, the effectiveness of humor depends on the targeted audience's idiosyncratic features [86; 87]. Hence, incorporating humor tailored to the target audience in marketing and advertising could be vital for companies to positively influence consumers and avoid negative consequences [88].
We employed the idlc-cct on the Jester dataset [89], commonly used in recommendation system tasks. The dataset consists of 140 jokes rated by 59,132 internet users on a scale of -10 (extremely unfunny) to 10 (extremely funny). To reduce the computational burden caused by the large number of respondents, we selected the ratings of a random subset of 5,000 users from the dataset as input for our model. The rating distribution across items is sparse because the number of ratings differs across items. Ratings of 112 jokes were used for training and the remaining 28 were held out as a validation set, an 80-20 split.
We obtained embeddings for each joke from RoBERTa, a pre-trained language model [42] that extracts features from textual data and generates a latent vector in a 768-dimensional space. These latent vectors serve as pre-trained embeddings in the model.
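A sketch of how such embeddings might be extracted with the Hugging Face `transformers` library is shown below; mean-pooling the final hidden states of `roberta-base` is an assumption made for illustration, not necessarily the exact pooling used in the paper, and the jokes listed are hypothetical stand-ins for the Jester items.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pooled final hidden states of RoBERTa: a 768-dimensional embedding of the text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state     # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)               # (768,)

# Hypothetical jokes standing in for the Jester items.
jokes = ["What do you call a fish with no eyes? A fsh.",
         "I told my computer I needed a break, and now it keeps sending me beach photos."]
phi = torch.stack([embed(j) for j in jokes])            # (num_jokes, 768) embedding matrix
```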
### First impression of faces
Though warned not to judge a book by its cover, people nonetheless attribute a wide variety of traits (e.g., trustworthiness, dominance, or smartness) to strangers based on their facial appearance. These judgments are formed quickly and have considerable impact on human behavior in domains as diverse as loan approval [90], politics [91], law [92], business [93], and social decisions [94; 93]. For example, [93] showed that people form impressions of aspiring leaders from their faces, which in turn predict their success in reaching prestigious leadership positions.
Some trait judgments have high levels of inter-rater agreement within particular cultures and are shared by observers in multiple cultures [95; 96]. Such agreement across groups constitutes a so-called consensus impression that is shared by many individuals or communities. For example, many individuals in the U.K. and the U.S. attribute naivety and trustworthiness to faces with large eyes and round features [96]. The same participants tend to judge short, squat faces to be more aggressive than faces that are tall and thin [97]. On the other hand, face-trait judgments are found to be idiosyncratic and differ between individuals living within the same culture [95]. This suggests that one's local environment, centered on family and community, likely drives the observed individual differences. Similarly, [98] showed that perceivers learn conceptual knowledge of how traits correlate, which shapes trait inferences across faces, along with personal knowledge and stereotypes in one's environment. These results show the importance of individual differences in first impression formation relative to cultural variability.
We applied the iDLC-cct to a large dataset of people's first impressions of faces [25]. The dataset contains over 1 million judgments of 34 trait inferences for 1,000 face images. Each face is rated by 30 unique participants for each trait. In this dataset, the face images are generated using a synthetic photorealistic image generator, StyleGAN2 [99]. The generator network component of StyleGAN2 models the distribution of face images conditioned on a 512-dimensional, unit-variance, multivariate normal latent variable. This vector is the pretrained embedding used for our modeling. For each trait, we used the ratings of 800 faces for training and the remainder (200 faces) for validation, an 80-20 split.
### Risk perception
Research on risk perception investigates people's views when asked to describe and assess potentially dangerous activities and technologies. Such research supports risk analysis and decision-making in society, such as enhancing
techniques for obtaining risk-related opinions, predicting public reactions to hazards, and refining the exchange of risk information among the public, technical experts, and policymakers [100; 101]. Risk is fundamentally a subjective notion [100], and discerning how individuals perceive risk is essential for understanding the relationship between risk and decision-making processes at the individual, group, and organizational levels [100; 102]. Various factors have been found to explain how people perceive a hazard, such as the risk's characteristics [103], perceived benefits [104], knowledge [105], personal norms [106], and affective associations [107]. These factors could be unobserved variables that should be considered when examining the public's perception of a hazard.
We applied the iDLC-cct to people's risk perceptions of technological hazards, activities, and participant-generated risk sources [24]. The dataset on risk perception of technological hazards consists of 125 technologies of varying risk levels. The items in this dataset were based on Slovic's experiment [100]. The dataset on risk perception of daily activities consists of 125 activities of varying risk levels. The items in this dataset were also based on Slovic's experiment [100]. The dataset on participant-generated risk sources consists of the 200 risk sources most frequently listed by participants, without any category limitations. We used word2vec [37] word representations to obtain an embedding for each risk source in a 300-dimensional space. This vector is the pretrained embedding used for our modeling. For risk sources of technologies and activities, we used the ratings of 100 sources for training and the remainder (25 sources) for validation, an 80-20 split. For participant-generated risk sources, we used the ratings of 160 risk sources for training and the remainder (40 sources) for validation, an 80-20 split.
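A sketch of how such 300-dimensional embeddings might be obtained with `gensim` is shown below; the choice of the Google News vectors and the averaging of multi-word labels are assumptions made for illustration, and the listed risk sources are hypothetical stand-ins for the dataset items.

```python
import numpy as np
import gensim.downloader as api

w2v = api.load("word2vec-google-news-300")   # downloads the pretrained 300-d vectors on first use

def embed(label: str) -> np.ndarray:
    """Average the word2vec vectors of the in-vocabulary tokens of a risk-source label."""
    tokens = [t for t in label.lower().split() if t in w2v]
    return np.mean([w2v[t] for t in tokens], axis=0) if tokens else np.zeros(300)

# Hypothetical risk-source labels standing in for the dataset items.
risk_sources = ["nuclear power", "skiing", "pesticides"]
phi = np.stack([embed(s) for s in risk_sources])        # (num_sources, 300) embedding matrix
```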
### Leadership perception
Identifying the factors that predict leader selection is crucial, as a leader's influence on their organization's success impacts the well-being of its members and those affected by the organization's output [108]. As a result, organizations and their members should be highly motivated to recognize and select effective leaders within their domain, ideally based on objective indicators of leadership quality. Leadership researchers have emphasized the importance of studying how followers perceive their leaders [109; 12; 110]. Although researchers' opinions may vary regarding the characteristics of effective leaders, individuals often possess implicit leadership theories that define the cognitive structures and criteria distinguishing leaders from non-leaders, as well as effective leaders from ineffective ones [111]. While skilled leaders may not always be acknowledged as such, the perception of leadership ability is essential for an individual's success in a leadership role [112; 91].
We applied the iDLC-cct model to a dataset containing people's judgments of leadership effectiveness [27]. This dataset includes assessments from 210 participants regarding 293 individuals, consisting of both leaders and non-leaders. Participants rated leadership effectiveness on a scale of 0 (extremely ineffective) to 100 (extremely effective). We utilized word2vec word representations [37] to generate an embedding for each individual, projecting the word vectors into a 300-dimensional space. This vector is the pretrained embedding used for our modeling. We used the ratings of 235 individuals for training and the remainder (58 individuals) for validation, an 80-20 split.
### Food healthiness
People hold diverse opinions regarding what constitutes a healthy diet [113; 114]. These opinions are shaped by various factors, including personal beliefs, associations, attitudes, and knowledge regarding the healthiness of food [115]. Due to the complex interplay of psychological, cognitive, and social factors influencing consumer choices, there is no definitive answer to what makes food appear healthy [116]. Even experts cannot reach a consensus on the definition of a healthy diet [117]. Consequently, understanding people's health beliefs and perceptions of healthy food is crucial for creating culturally and socially appropriate behavior change interventions [118].
Choosing healthy food is a complex behavior that involves cultural (e.g., customs, social norms), psychological (e.g., body image), and social factors (e.g., price availability, ethical concerns) [119]. Prior research on healthy food perceptions across diverse groups has highlighted the unique ways individuals define healthy food [120]. Cross-cultural studies indicate that some cultures share similar views on healthy food choices and nutritional intake [121], while others differ [120]. Factors such as gender and education level also impact judgments about the healthiness of food [122]. Beyond individual characteristics, front-of-package labeling is considered to have a significant influence on people's evaluations of food healthiness [123]. However, judgments based on front-of-package labeling are also affected by individual features [124].
We applied the iDLC-cct model to a dataset of laypeople and experts' judgments of food healthiness [125]. The lay people dataset contains judgments of 149 lay and 19 expert participants of a diverse set of 172 foods. Participants were asked to judge the healthiness of food on a scale ranging from -100 (extremely unhealthy) to +100 (extremely healthy). We used word2vec word representations [37] to obtain an embedding for each food, projecting the word vectors into a
300-dimensional space. This vector is the pretrained embedding used for our modeling. We used the ratings of 235 food items for training and the remainder (34 food items) for validation, an 80-20 split.
## 7 Results
Across all the datasets, we consistently found that the dlc-cct outperformed the original cct model, showing a reduction in the average RMSE by 0.12 and an increase in \(R^{2}\) by 0.22 (Table 1). Because the original cct lacks a mechanism by which to meaningfully generalize across items based on their content, it relies only on the posterior mean of the consensus parameter's prior when predicting held-out items. In contrast, as indicated in Table 1, dlc-cct significantly outperforms the original cct by generalizing to held-out items, demonstrating the benefits of integrating a deep latent construct into cultural consensus analysis.
Likewise, we found that the idlc-cct (with its multiple consensuses) outperforms the dlc-cct (with only a single consensus). The dlc-cct yielded average RMSE and \(R^{2}\) values of 0.22 and 0.33, respectively. In comparison, the idlc-cct performed better, reflected in an average RMSE of 0.20 and an \(R^{2}\) of 0.41. This represents a reduction in RMSE by 0.02 and an increase in \(R^{2}\) by 0.08. Comparison between the two reveals the advantages of incorporating heterogeneity in consensus beliefs across subsets of respondents.
The sixth column of Table 1 presents the posterior mode of the number of instantiated cultures -- those with at least one respondent assigned to them. Although this count provides insights into the magnitude of heterogeneity in cultural consensus, it does not convey the distribution's uniformity (or lack thereof) concerning cultural assignments. An entropy-based metric fills this gap by estimating the uncertainty in the assignment of a randomly selected respondent under the modal posterior cultural distribution. Lower entropy occurs when there are a few dominant clusters, while higher entropy occurs when there is a more balanced distribution of cultures. The posterior over model parameters for the idlc-cct featured an average cultural entropy of 1.39 and an average cultural count of 4.82 instantiated cultures (i.e., those with at least one respondent assigned to them in the modal posterior assignment), demonstrating both a multiplicity of consensuses and a non-uniform distribution of assigned respondents per culture. Crucially, as highlighted in the sixth column of Table 1, despite varying in the extent of heterogeneity, none of the 40 datasets was best fit by the single-culture model -- all displayed heterogeneity in consensus beliefs.
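As a concrete illustration, the entropy of a modal assignment vector can be computed as in the sketch below; natural logarithms are used here, so treat the units (nats) as an assumption about the scale of the reported values.

```python
import numpy as np

def cultural_entropy(assignments):
    """Entropy of the modal posterior distribution of respondents over instantiated cultures."""
    _, counts = np.unique(np.asarray(assignments), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical assignments: one dominant culture and two smaller ones.
print(cultural_entropy([0] * 60 + [1] * 25 + [2] * 15))   # ~0.94 nats
```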
To ascertain if there was a dominant culture within a dataset, we measured the proportion of datasets that assigned at least 50% of respondents to a single culture -- these datasets represent a "majority" culture. Out of the 40 datasets, 23 exhibited a majority culture. The datasets with the weakest majority (less than 66%) included participant-generated-risks, long hair, groomed, alert, liberal, electable, outdoors, fat, and happy. On the other hand, those with the strongest majority (more than 66%) were technology hazards, daily activities, lay and expert opinions on healthy food, age, familiarity, skin color, hair color, and ethnic backgrounds such as Asian, Middle-Eastern, Hispanic, Native-American, and White. A comparison of the first and third columns of Table 1 reveals that the idlc-cct improved the fit for 38 out of the 40 concepts evaluated. The predictive accuracy showed an improvement for 32 of the 40 concepts, as evidenced by the comparison between the second and fourth columns in Table 1. These enhancements are more pronounced for the datasets with significant heterogeneity in consensus beliefs among participants because the single-truth model overlooks such variation by assuming that all respondents operate under a single consensus view.
Figure 1 illustrates respondents' response biases. Understanding these biases helps to account for individual differences in responses that cannot be attributed to cultural competency or culture assignment. By definition, the average shift bias was 0.0 and the average log scale bias was 0.0. Across participants, the standard deviation of the shift bias was 0.65, while for scale bias, it was 0.08. Many datasets exhibit roughly similar variations in shift and scale biases, but some show notable differences. For instance, in the first impression dataset, the "age" trait has little variation in shift bias (_SD_ = 0.10), whereas the religious trait shows the opposite pattern, with little variation in scale bias (_SD_ = 0.02) but considerable variation in shift bias (_SD_ = 1.43).
### Model Ablation Experiments
Understanding the contribution and significance of different components within idlc-cct is important for gaining insights into the model's behavior and understanding where the observed heterogeneity in consensus views arise. In this section, we focus on model ablation experiments, a systematic approach aimed at evaluating the impact of individual components or features on the model's overall performance [126]. By selectively removing or neutralizing certain elements of the model, we can isolate the effect that each component has on the outcome. This is instrumental in identifying which aspects of the model are critical to its predictive power, and which may be redundant or even detrimental. Such experiments are especially useful for model interpretation, fine-tuning, and guiding future developments. We used
Table 1: Model comparison.

| Domain | Item | CCT \(R^{2}\) | CCT RMSE | DLC-CCT \(R^{2}\) | DLC-CCT RMSE | iDLC-CCT \(R^{2}\) | iDLC-CCT RMSE | Culture entropy | No. cultures |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Leadership | Leadership | 0.08 | 0.29 | 0.31 | 0.24 | 0.35 | 0.24 | 2.23 | 7 |
| Humor | Humor | 0.00 | 0.30 | 0.27 | 0.22 | 0.27 | 0.22 | 2.69 | 12 |
| Healthy food | Lay people | 0.00 | 0.30 | 0.52 | 0.20 | 0.56 | 0.19 | 0.40 | 2 |
| | Experts | 0.00 | 0.31 | 0.43 | 0.21 | 0.43 | 0.21 | 0.74 | 2 |
| Risk sources | Technology hazards | 0.00 | 0.32 | 0.33 | 0.26 | 0.36 | 0.24 | 0.51 | 4 |
| | Daily activities | 0.04 | 0.29 | 0.26 | 0.26 | 0.32 | 0.24 | 0.67 | 2 |
| | Participant-generated risks | 0.00 | 0.31 | 0.47 | 0.23 | 0.50 | 0.22 | 1.44 | 3 |
| Impressions of faces | Dominant | 0.00 | 0.38 | 0.33 | 0.22 | 0.41 | 0.20 | 1.46 | 7 |
| | Trustworthy | 0.00 | 0.40 | 0.29 | 0.20 | 0.34 | 0.19 | 2.11 | 8 |
| | Smart | 0.00 | 0.33 | 0.29 | 0.18 | 0.36 | 0.17 | 2.17 | 5 |
| | Attractive | 0.00 | 0.39 | 0.41 | 0.19 | 0.45 | 0.18 | 2.29 | 6 |
| | Outgoing | 0.00 | 0.38 | 0.36 | 0.18 | 0.46 | 0.17 | 1.97 | 5 |
| | Age | 0.00 | 0.26 | 0.40 | 0.12 | 0.42 | 0.12 | 0.03 | 2 |
| | Fat | 0.00 | 0.30 | 0.37 | 0.15 | 0.44 | 0.13 | 1.56 | 5 |
| | Familiar | 0.00 | 0.35 | 0.47 | 0.20 | 0.48 | 0.20 | 0.59 | 3 |
| | Gender | 0.00 | 0.43 | 0.58 | 0.22 | 0.67 | 0.19 | 1.37 | 5 |
| | Typical | 0.00 | 0.41 | 0.27 | 0.20 | 0.30 | 0.19 | 2.55 | 8 |
| | Happy | 0.00 | 0.35 | 0.51 | 0.16 | 0.60 | 0.15 | 1.33 | 4 |
| | Dorky | 0.00 | 0.42 | 0.19 | 0.25 | 0.30 | 0.23 | 2.08 | 6 |
| | Long hair | 0.00 | 0.35 | 0.26 | 0.21 | 0.55 | 0.17 | 1.44 | 7 |
| | Skin color | 0.00 | 0.28 | 0.59 | 0.13 | 0.66 | 0.12 | 0.83 | 5 |
| | Smug | 0.00 | 0.35 | 0.27 | 0.24 | 0.41 | 0.22 | 2.21 | 7 |
| | Groomed | 0.00 | 0.42 | 0.28 | 0.20 | 0.36 | 0.19 | 1.54 | 5 |
| | Cute | 0.00 | 0.41 | 0.47 | 0.20 | 0.53 | 0.19 | 1.89 | 5 |
| | Alert | 0.00 | 0.40 | 0.26 | 0.20 | 0.36 | 0.18 | 2.16 | 6 |
| | Hair color | 0.00 | 0.44 | 0.60 | 0.18 | 0.64 | 0.17 | 0.48 | 5 |
| | Privileged | 0.00 | 0.35 | 0.31 | 0.19 | 0.39 | 0.17 | 2.07 | 5 |
| | Liberal | 0.00 | 0.26 | 0.16 | 0.23 | 0.20 | 0.22 | 1.79 | 5 |
| | Asian | 0.00 | 0.38 | 0.19 | 0.28 | 0.39 | 0.27 | 0.15 | 6 |
| | Middle-Eastern | 0.00 | 0.39 | 0.30 | 0.25 | 0.34 | 0.24 | 1.03 | 6 |
| | Hispanic | 0.00 | 0.40 | 0.16 | 0.29 | 0.17 | 0.29 | 0.38 | 4 |
| | Polynesian | 0.00 | 0.37 | 0.29 | 0.25 | 0.37 | 0.24 | 1.49 | 5 |
| | Native-American | 0.00 | 0.33 | 0.26 | 0.25 | 0.33 | 0.23 | 0.89 | 4 |
| | Black | 0.00 | 0.45 | 0.25 | 0.19 | 0.43 | 0.17 | 0.24 | 5 |
| | White | 0.00 | 0.52 | 0.38 | 0.28 | 0.60 | 0.23 | 0.86 | 4 |
| | Looks like me | 0.00 | 0.29 | 0.35 | 0.19 | 0.49 | 0.17 | 1.57 | 3 |
| | Electable | 0.00 | 0.39 | 0.18 | 0.26 | 0.24 | 0.25 | 0.77 | 3 |
| | Gay | 0.00 | 0.40 | 0.07 | 0.25 | 0.09 | 0.25 | 1.77 | 4 |
| | Religious | 0.00 | 0.42 | 0.14 | 0.22 | 0.22 | 0.20 | 2.21 | 6 |
| | Outdoors | 0.00 | 0.38 | 0.44 | 0.27 | 0.47 | 0.27 | 1.00 | 3 |

_Notes._ Performance (\(R^{2}\)) and predictive accuracy (RMSE) of the single-truth (dlc-cct) and multi-truth (idlc-cct) models. No. of cultures and entropy of cluster assignment shown for idlc-cct.
the leadership perception dataset for the parameter ablation experiments. The ablation experiments for other datasets can be found in the Appendix.
In our first ablation experiment, we examined the significance and contribution of the deep latent constructs to the model's performance. To achieve this, we randomly shuffled the embedding feature values across items, disrupting their structure while preserving their marginal distribution. This process eliminates any meaningful relationships that the features might have captured. The idlc-cct performed poorly when the deep features' structure was disrupted (\(R^{2}\) = 0.0). The results indicate that the content of the pre-trained embeddings is critical for the model's performance.
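One way to implement this shuffle, reading "across items" as an independent permutation of each embedding dimension over items (an assumption about the exact procedure), is sketched below.

```python
import numpy as np

def shuffle_embeddings(phi, seed=0):
    """Ablation: permute each embedding dimension independently across items, which breaks the
    item-feature structure while preserving each feature's marginal distribution."""
    rng = np.random.default_rng(seed)
    phi_shuffled = np.array(phi, copy=True)
    for j in range(phi_shuffled.shape[1]):
        phi_shuffled[:, j] = rng.permutation(phi_shuffled[:, j])   # permute column j across items
    return phi_shuffled

# Illustrative shapes only: 293 items with 300-dimensional embeddings, as in the leadership dataset.
phi_perm = shuffle_embeddings(np.random.default_rng(1).normal(size=(293, 300)))
```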
To examine the significance of each individual- and item-level parameter on the predictive performance of idlc-cct, we conducted parameter ablation experiments. Table 2 presents the results of these ablation experiments, detailing which parameters were held constant to assess their contribution to prediction accuracy. The results reveal that each parameter in idlc-cct enhances predictive accuracy. Bayesian Ridge Regression yields the poorest performance, whereas idlc-cct, when incorporating all the model components, emerges as the most effective. Setting the scale bias to a constant had the least impact on predictive power, and interestingly, it caused the model to output a single consensus belief, indicating a singular cultural cluster. The scale parameter specifically accounts for magnitude preferences, such
Figure 1: Individual response biases plotted against each other for each participant.
as a tendency to mark most values at the outer ends or the middle section of the scale. Therefore, taking into account respondents' magnitude preferences may be instrumental in discerning group-level differences.
### Robustness to Sparse Sampling
Robustness to sparse sampling refers to a model's ability to sustain performance when faced with limited data. This analysis is especially valuable in situations where data collection and labeling are resource-intensive or challenging. To assess the robustness of our models against sparse sampling, we trained them using incremental data quantities of 5%, starting from 10% and increasing up to 80% of the total data. In the analyses presented here, we considered the leadership perception dataset. The robustness to sparse sampling experiments for other datasets can be found in the Appendix. Fig. 2 illustrates our robustness analysis, revealing that both the idlc-cct and dlc-cct models display considerable resilience to sparse sampling. Impressively, even with a significantly reduced training set, constituting merely 10% of the items, both models could still achieve some level of generalizability (i.e., idlc-cct: \(R^{2}\) =.17; dlc-cct: \(R^{2}\) =.15). As the quantity of training data increased, the models' performance also improved incrementally. Notably, the models reached parity with the performance achieved with 80% of the training data while requiring only 65% of the data to do so.
Compared to idlc-cct and dlc-cct, the Bayesian Ridge Regression model did not demonstrate a reasonable level of performance under these conditions. This suggests that considering individual and group-level differences is a significant factor in harnessing collective intelligence effectively. Our findings emphasize the value of utilizing models such as idlc-cct and dlc-cct, which are designed to account for these variances, especially in scenarios where data might be sparse or difficult to gather. The original cct demonstrates a slight improvement in its predictions when there is a larger training dataset because it can better estimate the posterior of the consensus parameter's prior with more data when predicting held-out items.
Table 2: iDLC-CCT Model Ablation.

| Model | \(R^{2}\) | RMSE | Cultural Entropy | No. Cultures |
| --- | --- | --- | --- | --- |
| idlc-cct | 0.35 | 0.24 | 2.23 | 6 |
| dlc-cct | 0.31 | 0.24 | 0.00 | 1 |
| idlc | 0.22 | 0.26 | 4.00 | 8 |
| cct | 0.08 | 0.29 | 1.07 | 5 |
| Bayesian Ridge Regression | 0.13 | 0.28 | 0.00 | 1 |
| Ablation shift | 0.25 | 0.27 | 0.84 | 2 |
| Ablation scale | 0.31 | 0.25 | 0.00 | 1 |
| Ablation competence | 0.23 | 0.26 | 2.60 | 10 |
| Ablation item difficulty | 0.24 | 0.25 | 3.50 | 8 |
Figure 2: Robustness analysis of the iDLC-CCT and DLC-CCT models under sparse sampling conditions. The models were trained with incrementally increasing quantities of data (from 10% to 80%) and tested for performance (\(R^{2}\)).
### The Discrepancy Between Demographic Features vs. Identified Cultures
Researchers have used demographic features to explain variations in people's perceptions of concepts and entities [127]. While some studies suggest these features can be indicative, others assert they cannot be solely relied upon [128]. Although demographic features can sometimes explain these variations, they are not always available for organizations and researchers to incorporate into their computational models. Moreover, in the interconnected society we live in, people's perceptions are influenced by a complex network of factors that extend beyond basic demographic attributes [129]. A significant part of this complexity is contributed by the easily accessible and diverse array of viewpoints on concepts found on social media, online platforms, and professional and social networks [130]. These platforms not only allow individuals to form unique understandings but also expose them to a variety of perspectives [131].
We continued to use the leadership perception dataset to investigate whether the cultures identified by the idlc-cct align with demographic features. Fig. 3 shows that demographic features alone do not provide clear insight into the heterogeneity in perceptions of leadership that is captured by the idlc-cct. This insight illuminates the multifaceted and nuanced nature of leadership perceptions in our interconnected society, highlighting that these perceptions are not just products of individual demographic attributes, but also of the shared knowledge and perspectives within our interconnected social fabric.
## 8 Discussion
In this paper, we developed idlc-cct, an extension of Cultural Consensus Theory that integrates a latent construct to map between pre-trained embeddings of an entity and the consensus belief among one or more respondent subsets concerning those entities. The model effectively aggregates beliefs from individuals, including experts and non-experts, to estimate consensus while identifying idiosyncratic and group-level differences in cultural constructs. By incorporating features from deep neural networks into cct, we can estimate cultural consensus for any entity using pre-trained networks or other available embeddings. We argue that idlc-cct is a robust foundation for assessing group consensus levels by leveraging the underlying structure and inter-relatedness of beliefs and a foundation for consensus-aware technologies. Our findings reveal that considering group-level consensus variations enhances predictive accuracy and effectively harnesses collective intelligence for decision-making and collaboration within organizations.
In the subsections that follow, we discuss the scalability of the idlc-cct model (and multiple-consensus models more generally), describe its possible applications to consensus building and as a computational foundation for consensus-aware information technologies, and consider variants of the model that relax some of its core assumptions.
### The Small-Variance Asymptotic Approximation
An ongoing challenge in machine learning involves creating algorithms that can scale to extremely large datasets. Although probabilistic methods -- and Bayesian models in particular -- offer flexibility, the absence of scalable inference techniques may hinder their effectiveness with certain data. For instance, in the context of clustering, \(k\)-means algorithms are frequently preferred in large-scale scenarios over probabilistic methods like Gaussian mixtures or
Figure 3: Demographic Distribution of Cultural Assignments
Dirichlet process (DP) mixtures [132]. This is because \(k\)-means is relatively easy to implement and can manage larger datasets.
Applying an asymptotic analysis to the variance or covariance of distributions within a model has been used to create connections between probabilistic and non-probabilistic models. Examples of this include forming connections between probabilistic and standard PCA by allowing the covariance of the data likelihood in probabilistic PCA to approach zero [133, 134]. Similarly, the \(k\)-means algorithm can be obtained as a limit of the EM algorithm when the covariances of the Gaussians associated with each cluster decrease towards zero [135]. Not only do small-variance asymptotics offer a conceptual link between different approaches, they can also provide practical alternatives to probabilistic models when dealing with large datasets, as non-probabilistic models often have better scalability. While still emerging, the use of such techniques to derive scalable algorithms from complex probabilistic models offers a promising direction for the development of scalable learning algorithms.
A simple procedure (Algorithm 1), derived from a small-variance asymptotic analysis of the Dirichlet Process [136], converts any existing single-culture cct into an infinite cct (icct). One of the primary advantages of the conversion is that it opens the door to novel scalable algorithms for multiple cultures.
We begin with a description of the small-variance asymptotic approximation to the Infinite Gaussian Mixture Model, which alternates through three stages: cluster assignment, cluster updating, and cluster instantiation [135]. In cluster assignment, each point is assigned to the cluster that is nearest by Euclidean distance. Initially, all points belong to the same (and only) cluster. In the cluster updating phase, the cluster center is adjusted to the mean of the points assigned to it. During cluster instantiation, the algorithm identifies the point farthest from the center of its assigned cluster, and if that distance exceeds \(\lambda\), it establishes a new cluster. The center of the new cluster is positioned at that point, which is then assigned to it. The algorithm repeats until no point is further than \(\lambda\) from its cluster center. The parameter \(\lambda\) thus acts as a concentration parameter, which, along with the complexity of the data, will determine the number of clusters that eventually appear, much like the concentration hyperparameter of a Dirichlet process that controls the number of clusters in an infinite Gaussian mixture model.
```
Input: respondents x_1, ..., x_n with responses in (0, 1); threshold λ > 0
Initialization: assign all respondents to a single culture c_1 with consensus dlc-cct(X); set C = {c_1}

while some culture c ∈ C contains a respondent x_i with CC(x_i, c) < λ do

    for each respondent x_i do                          ▷ Culture assignment
        compute CC(x_i, c) for every culture c ∈ C
        c_max ← argmax_{c ∈ C} CC(x_i, c)
        if CC(x_i, c_max) ≥ λ then
            assign x_i to c_max
        else
            c_new ← (logit(x_i1), logit(x_i2), ..., logit(x_iM))
            assign x_i to c_new and add c_new to C
        end if
    end for

    for each culture c ∈ C do                           ▷ Culture updating
        C_c ← dlc-cct(X_c)      ▷ refit the single-truth model to the respondents assigned to c
    end for

    for each culture c ∈ C do                           ▷ Culture instantiation
        x_min ← argmin_{x_i ∈ c} CC(x_i, c)
        if CC(x_min, c) < λ then
            c_new ← logit(x_min)
            assign x_min to c_new and add c_new to C
        end if
    end for

end while
Output: C
```
**Algorithm 1** Small-Variance Asymptotic Approximation to the Infinite Cultural Consensus Model
By analogy, the small-variance asymptotic approximation to the Infinite Cultural Consensus Model likewise proceeds through three steps: culture assignment, culture updating, and culture instantiation. In culture assignment, each respondent is assigned to the culture for which they exhibit the highest cultural competence. Since the continuous version of cultural consensus theory models ratings as a logistic transformation of the latent appraisal, the assigned culture will be the one whose cultural consensus values correlate highest with the participant's logit-transformed responses. Initially, all respondents belong to the same (and only) culture. In culture updating, consensus values for each culture are determined by fitting a single-truth cultural consensus model to the data from respondents assigned to that culture. And during culture instantiation, the algorithm examines the participant with the lowest cultural competence for their assigned culture. If that respondent's cultural competence is less than \(\lambda\), a new culture is created with consensus values set to the logit-transformed responses of the respondent, who is then assigned to it.
The small-variance asymptotic approximation facilitates the conversion of existing single-truth cultural consensus models to the infinite cultural consensus model. There are two preconditions. First, the model must provide a means to compute the cultural consensus, which is precisely the function of existing single-truth cultural consensus methods and thus presents no issue. Second, the model must enable one to identify the culture for which a respondent exhibits the highest cultural competence. The readiness of this capability under existing methods is varied. As previously discussed, for the continuous model, it involves calculating the linear correlation between the respondent's logit-transformed responses and the consensus for each culture, then identifying the culture with the highest correlation. When these conditions are met, the single-truth consensus models can be scaled to an efficient multiple-truth consensus model.
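A minimal Python sketch of the assignment step, using the logit-response correlation as the competence proxy described above, is given below; the culture-updating step (refitting a single-truth dlc-cct per culture) is left abstract, and the numerical values in the usage example are hypothetical.

```python
import numpy as np
from scipy.special import logit

def competence(x, consensus):
    """Competence proxy: correlation between a respondent's logit-transformed responses
    (x in (0, 1)^M) and a culture's consensus locations on the real line."""
    return float(np.corrcoef(logit(x), consensus)[0, 1])

def assign_or_instantiate(x, cultures, lam):
    """One culture-assignment step of Algorithm 1 for a single respondent."""
    scores = [competence(x, c) for c in cultures]
    best = int(np.argmax(scores))
    if scores[best] >= lam:
        return best, cultures                            # join the best-matching existing culture
    return len(cultures), cultures + [logit(x)]          # instantiate a new culture at the respondent's responses

# Hypothetical usage: two existing cultures over M = 4 items and one respondent.
cultures = [np.array([-1.0, 0.5, 1.2, -0.3]), np.array([0.8, -0.9, -0.2, 1.5])]
x = np.array([0.30, 0.62, 0.75, 0.45])
idx, cultures = assign_or_instantiate(x, cultures, lam=0.5)
```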
### Applications and Considerations
Next, we consider potential applications of the idlc-cct. One such application is as a computational foundation for consensus-aware information technologies. Our findings demonstrate the potential of methods to detect and characterize heterogeneity in consensus beliefs. Building on these methods, one could integrate idlc-cct into information systems to enable interventions that help to detect and understand a lack of consensus within a purported group or to build consensus among the members of a team to foster improved decision-making and collaboration. Beyond detecting and characterizing heterogeneity in consensus beliefs, idlc-cct also provides insights at both the question and individual levels, deepening our comprehension of the domain of interest. For instance, by analyzing variations in question difficulty across cultures, a consensus-aware information technology could pinpoint questions that some subsets of respondents may find challenging to agree upon or identify controversial topics unique to specific cultural contexts. Such insights could assist in tailoring interventions and strategies to promote collaboration and consensus-building. By recognizing diverse opinions, a consensus-aware algorithm can generate proposals that effectively represent various perspectives, promoting a more inclusive and equitable approach to consensus formation. This paves the way for information technology applications that harness collective intelligence even in the absence of cross-cultural consensus.
Another promising application of cct is its integration into AI systems to enhance their interoperability. By leveraging cct's unique capability to interpret shared beliefs within cultural contexts, we can equip AI systems with a more culturally nuanced understanding of consensus beliefs. This integration could significantly enhance these models' generalizability. The integration of cct could also increase the transparency of AI systems by rendering their decision-making processes more intelligible; decisions could be attributed to culturally shared beliefs or norms. Furthermore, applying cct could foster a greater level of trust in AI systems among users. By ensuring that the systems' decision-making processes align more closely with shared beliefs, we can develop systems that are ethically sound and culturally sensitive. Even so, the successful integration of cct into AI systems will require carefully designed methods to translate a consensus derived from cct into workable AI systems.
The idlc-cct can also be used to support the consensus-building process by surfacing the causes of disagreement between respondents, whether those are individual-level parameters such as shift and scale biases or competency, or culture-level parameters such as the consensus. However, using idlc-cct for consensus-building may be challenging under adversarial behavior. Consider, for example, voting mechanisms or other consensus-building methods, where knowledge of the aggregation scheme could potentially allow individuals or groups to strategically manipulate the system by skewing their responses, misreporting their preferences, or coordinating their actions with others to drive the consensus toward a desired value [137]. This is sometimes known as "gaming the system" [137]. It is an open question to what extent idlc-cct is robust to gaming and what the most effective means are of reducing the risk of such strategic behavior, so as to ensure that each individual's input is a genuine reflection of their own beliefs, views, or preferences, rather than a strategic attempt to influence the final outcome. Doing so can lead to more robust and representative consensus outcomes and more effective consensus-building tools. In summary, while a respondent's naivety about the aggregation scheme may help ensure the genuineness and objectivity of the consensus process, it also implies the need for the design of robust, resistant-to-manipulation aggregation schemes, and vigilance towards potential strategic behaviors.
The idlc-cct can be extended in various ways. First, building on the insights derived from the idlc-cct approach, researchers and organizations can further explore the potential of multimodal fusion models in capturing and modeling cultural consensus. People perceive the world by simultaneously processing and merging high-dimensional inputs from multiple modalities, including vision and semantic meaning [138]. With the advancements in deep learning and computing power, multimodal fusion models have gained popularity for their ability to combine different modalities in a way similar to human perception, leading to more accurate predictions of human judgments [139]. By integrating these multimodal models within the idlc-cct framework, future research can combine information from various sources to achieve better alignment between machine representations and cultural consensus, further enhancing our understanding of the complex interplay between culture and perception.
Finally, we note that in the idlc-cct model, we rely on the Dirichlet Process, which assumes the existence of an infinite number of potential cultures, with each individual belonging to one specific culture. While this simplification facilitates mathematical modeling and computational tractability, it may not accurately represent the multifaceted, overlapping, and fluid nature of cultural identities in reality. Real-world cultures are seldom mutually exclusive, and individuals often navigate multiple cultural spaces simultaneously. They may adhere to one cultural norm in a certain context and another in a different setting. Additionally, cultural affiliations are not static; they can evolve, intersect, and change over time based on personal experiences, societal influences, migration, and many other factors. Further, the distribution over the size of clusters under the DP has exponential tails, which may or may not be a reasonable assumption regarding how respondents are spread among cultures. Extensions of the present work to the Pitman-Yor process could relax this assumption by allowing for non-exponential tail behavior, such as power-law tails. Therefore, while the Dirichlet Process provides a practical method for dealing with an unknown number of cultures, it is essential to understand its limitations. Future studies can explore more intricate models that better capture the dynamic and overlapping nature of cultural identities while relaxing some of the assumptions made about cluster assignments.
### Conclusion
In conclusion, we presented the idlc-cct, which allows culturally held beliefs to be transformed into fine-tuned machine representations. These representations map features of a concept or entity to the consensus response among a subset of respondents using the Dirichlet Process. This method integrates the strengths of the cultural consensus model with machine learning techniques, overcoming their respective limitations. It has demonstrated both predictive and explanatory benefits. The idlc-cct method offers both scientific and practical applicability, allowing researchers to study group-level variation while benefiting from the latest advances in machine learning. The advantages of incorporating heterogeneity in consensus beliefs are evident, and idlc-cct proves invaluable for analyzing and devising strategies that promote consensus-building. Through the aggregation of information, it advances collective intelligence. The integration of idlc-cct into technological systems can enhance our understanding of consensus beliefs, deepening our knowledge of the relevant domain.
|
2310.20262 | On the Karlsson-Nussbaum conjecture for resolvents of nonexpansive
mappings | Let $D\subset \mathbb{R}^{n}$ be a bounded convex domain and $F:D\rightarrow
D$ a $1$-Lipschitz mapping with respect to the Hilbert metric $d$ on $D$
satisfying condition $d(sx+(1-s)y,sz+(1-s)w)\leq \max \{d(x,z),d(y,w) \}$. We
show that if $F$ does not have fixed points, then the convex hull of the
accumulation points (in the norm topology) of the family $\{R_{\lambda
}\}_{\lambda >0}$ of resolvents of $F$ is a subset of $\partial D.$ As a
consequence, we show a Wolff-Denjoy type theorem for resolvents of nonexpansive
mappings acting on an ellipsoid $D$. | Aleksandra Huczek, Andrzej Wiśnicki | 2023-10-31T08:29:03Z | http://arxiv.org/abs/2310.20262v1 | # On the Karlsson-Nussbaum conjecture for resolvents of nonexpansive mappings
###### Abstract.
Let \(D\subset\mathbb{R}^{n}\) be a bounded convex domain and \(F:D\to D\) a \(1\)-Lipschitz mapping with respect to the Hilbert metric \(d\) on \(D\) satisfying condition \(d(sx+(1-s)y,sz+(1-s)w)\leq\max\{d(x,z),d(y,w)\}\). We show that if \(F\) does not have fixed points, then the convex hull of the accumulation points (in the norm topology) of the family \(\{R_{\lambda}\}_{\lambda>0}\) of resolvents of \(F\) is a subset of \(\partial D\). As a consequence, we show a Wolff-Denjoy type theorem for resolvents of nonexpansive mappings acting on an ellipsoid \(D\).
Key words and phrases:Karlsson-Nussbaum conjecture, Wolff-Denjoy theorem, Geodesic space, Hilbert's projective metric, Resolvent, Nonexpansive mapping. 2020 Mathematics Subject Classification: Primary 53C60; Secondary 37C25, 47H09, 51M10
## 1. Introduction
The study of dynamics of nonlinear mappings started by considering iterates of holomorphic mappings on one-dimensional bounded domains. In this field, one of the first theorem is the classical Wolff-Denjoy theorem which describes dynamics of iteration of holomorphic self-mappings on the unit disc of the complex plane. It asserts that if \(f:\Delta\to\Delta\) is a holomorphic map of the unit disc \(\Delta\subset\mathbb{C}\) without a fixed point, then there is a point \(\xi\in\partial\Delta\) such that the iterates \(f^{n}\) converge locally uniformly to \(\xi\) on \(\Delta\). Generalizations of this theorem in different directions have been obtained by numerous authors (see [1, 6, 9, 14, 15] and references therein). One such generalization was formulated by Beardon who noticed that the Wolff-Denjoy theorem can be considered in a purely geometric way depending only on the hyperbolic properties of a metric and gave its proof using geometric methods (see [4]). In [5], Beardon extended his approach for strictly convex bounded domains with the Hilbert metric. Considering the notion of the omega limit set \(\omega_{f}(x)\) as the set of accumulation points of the sequence \(f^{n}(x)\) and the notion of the attractor \(\Omega_{f}=\bigcup_{x\in D}\omega_{f}(x)\), we can formulate a generalization of the Wolff-Denjoy theorem known as the Karlsson-Nussbaum conjecture, which was formulated independently by Karlsson and Nussbaum (see [10, 14]). This conjecture states that if \(D\) is a bounded convex domain in a finite-dimensional real vector space and \(f:D\to D\) is a fixed point free nonexpansive mapping acting on the Hilbert metric space \((D,d_{H})\), then there exists a convex set \(\Omega\subseteq\partial D\) such that for each \(x\in D\), all accumulation points \(\omega_{f}(x)\) of the orbit \(O(x,f)\) lie in \(\Omega\).
The aim of this note is to show a variant of the Karlsson-Nussbaum conjecture for resolvents of nonexpansive (\(1\)-Lipschitz) mappings. For this purpose we construct in Section 3 the family of resolvents of a nonexpansive mapping and prove its main properties: nonexpansivity and the resolvent identity. In the literature, the resolvents usually occur in the
context of Banach spaces or geodesic spaces that are Busemann convex, see e.g., [3, 17]; in that setting their construction is based on the Banach contraction principle. Since a Hilbert metric space \((D,d_{H})\) is in general not Busemann convex, our construction of resolvents is a little more involved and exploits an argument related to Edelstein's theorem [8].
In Section 4 we formulate and prove the main theorem of this work. We show that if \(D\subset\mathbb{R}^{n}\) is a bounded convex domain and \(F:D\to D\) is a fixed point free nonexpansive mapping with respect to the Hilbert metric \(d_{H}\) on \(D\) satisfying condition
(D) \[d_{H}(sx+(1-s)y,sz+(1-s)w)\leq\max\{d_{H}(x,z),d_{H}(y,w)\},\]
then the convex hull of the accumulation points of the family \(\{R_{\lambda}\}_{\lambda>0}\) of resolvents of \(F\) is a subset of \(\partial D\). Since a Hilbert metric space \((D,d_{H})\) is Busemann convex if and only if \(D\) is ellipsoid, we obtain as a corollary a Wolff-Denjoy type theorem for resolvents of nonexpansive mappings acting on an ellipsoid \(D\).
## 2. Preliminaries
Let \(V\) be a finite dimensional real vector space, \(D\subset V\) a convex bounded domain and \((D,d)\) a metric space. A curve \(\sigma:[a,b]\to D\) is said to be _geodesic_ if \(d(\sigma(t_{1}),\sigma(t_{2}))=|t_{1}-t_{2}|\) for all \(t_{1},t_{2}\in[a,b]\). We will use the same name for the image \(\sigma([a,b])\subset D\) of \(\sigma\), denoted by \([\sigma(a),\sigma(b)]\). We say that \(D\) is a _geodesic space_ if every two points of \(D\) can be joined by a geodesic. A map \(F:D\to D\) is called _contractive_ if \(d(F(x),F(y))<d(x,y)\) for any distinct points \(x,y\in D\). A map \(F:D\to D\) is called _nonexpansive_ if for any \(x,y\in D\), \(d(F(x),F(y))\leq d(x,y)\).
We recall the definition of _the Hilbert metric space._ If \(x,y\in D\), consider the straight line passing through \(x\) and \(y\) that intersects the boundary of \(D\) in precisely two points \(a\) and \(b\). Assuming that \(x\) is between \(a\) and \(y\), and \(y\) is between \(x\) and \(b\), we define the cross-ratio metric
\[d_{H}(x,y)=\log\bigg{(}\frac{||y-a||\,||x-b||}{||x-a||\,||y-b||}\bigg{)}, \qquad x\neq y.\]
Furthermore, we put \(d_{H}(x,y)=0\) if \(x=y\).
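As a simple illustration, take \(D=(-1,1)\subset\mathbb{R}\) and \(-1<x<y<1\); then \(a=-1\), \(b=1\), and
\[d_{H}(x,y)=\log\bigg{(}\frac{(1+y)(1-x)}{(1+x)(1-y)}\bigg{)}=\log\frac{1+y}{1-y}-\log\frac{1+x}{1-x},\]
which tends to \(\infty\) as \(y\to 1\), in line with the behaviour near \(\partial D\) recalled below.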
Following Beardon [5], we record the following lemmas.
**Lemma 2.1**.: _Let \(D_{1},D_{2}\subset V\), \(D_{1}\subset D_{2}\), be bounded convex domains and let \((D_{1},d_{1}),(D_{2},d_{2})\) be the corresponding Hilbert metric spaces. Then \(d_{2}\leq d_{1}\) on \(D_{1}\). Furthermore, for distinct points \(x,y\in D_{1}\), \(d_{1}(x,y)=d_{2}(x,y)\) iff \(L_{xy}\cap D_{1}\) coincides with \(L_{xy}\cap D_{2}\), where \(L_{xy}\) denotes the straight line through \(x\) and \(y\)._
**Lemma 2.2**.: _Suppose that \((D,d_{H})\) is a Hilbert metric space, \(x_{0}\in D\) and \(l\in[0,1)\). Then the mapping \(g(x)=x_{0}+l(x-x_{0})\) is contractive._
Proof.: Fix \(x_{0}\in D\) and \(l\in[0,1)\). Let \(x,y\in D\) and consider the straight line passing through \(x\) and \(y\) that intersects \(\partial D\) in two points \(x^{\prime}\) and \(y^{\prime}\) such that \(x\) is between \(x^{\prime}\) and \(y\), and \(y\) is between \(x\) and \(y^{\prime}\). Take two points \(z^{\prime}=(1-l)x_{0}+lx^{\prime}\in\partial g(D),w^{\prime}=(1-l)x_{0}+ly^{ \prime}\in\partial g(D)\), and note that the points \(z^{\prime},g(x),g(y),w^{\prime}\) are collinear such that \(g(x)\) is between \(z^{\prime}\) and \(g(y)\), and \(g(y)\) is between \(g(x)\) and \(w^{\prime}\). Since \(g(D)\) lies in a compact subset of \(D\), it follows from Lemma 2.1 that \(d_{H}(g(x),g(y))<d_{2}(g(x),g(y))\), where \(d_{2}\) denotes the Hilbert metric on \(g(D)\) and
\[d_{H}(x,y)=\log\left(\frac{||x^{\prime}-y||\,||x-y^{\prime}||}{||x^{\prime}-x||\,|| y-y^{\prime}||}\right)=\log\left(\frac{||z^{\prime}-g(y)||\,||w^{\prime}-g(x)||}{||z^{ \prime}-g(x)||\,||w^{\prime}-g(y)||}\right)=d_{2}(g(x),g(y)).\]
Therefore we get \(d_{H}(g(x),g(y))<d_{H}(x,y).\)
Note that if \(D\subset V\) is a bounded convex domain, then the Hilbert metric \(d_{H}\) is locally equivalent to the euclidean norm in \(V.\) Furthermore, for any \(w\in D,\) if \(\{x_{n}\}\) is a sequence in \(D\) converging to \(\xi\in\partial D=\overline{D}\setminus D,\) then
\[d_{H}(x_{n},w)\rightarrow\infty\]
(see [5, 9]). The above property is equivalent to properness of \(D,\) that is, every closed and bounded subset of \((D,d_{H})\) is compact. It is not difficult to show that for \(x,y,z\in D\) and \(s\in[0,1],\)
(C) \[d_{H}(sx+(1-s)y,z)\leq\max\{d_{H}(x,z),d_{H}(y,z)\}.\]
In what follows, we will assume a more restrictive condition (note that taking \(z=w\) in (D) below recovers (C)): for all \(x,y,z,w\in D\) and \(s\in[0,1],\)
(D) \[d_{H}(sx+(1-s)y,sz+(1-s)w)\leq\max\{d_{H}(x,z),d_{H}(y,w)\}.\]
## 3. Resolvents of nonexpansive mappings
In this section we describe the construction of a resolvent of a nonexpansive mapping acting on a Hilbert metric space. Let \(D\subset V\) be a convex bounded domain and \(F:D\to D\) a nonexpansive mapping with respect to the Hilbert metric \(d\) on \(D.\) Recall that the topology of \((D,d)\) coincides with the Euclidean topology and \((D,d)\) is a proper metric space, that is, every closed ball \(\bar{B}(x_{0},r)\), \(x_{0}\in D\), \(r>0\), is compact. We fix \(x\in D,\)\(\lambda>0,\) and define a mapping
\[G_{x,\lambda}(y)=\frac{1}{1+\lambda}x+\frac{\lambda}{1+\lambda}F(y),\qquad y \in D.\]
It follows from Lemma 2.2 that \(G_{x,\lambda}\) is contractive.
We show that \(G_{x,\lambda}(D)\) is bounded in \((D,d)\). For this purpose, select \(w\in G_{x,\lambda}(D)\). Then there exists \(y\in D\) such that \(w=\frac{1}{1+\lambda}x+\frac{\lambda}{1+\lambda}F(y)\). We claim that the Euclidean ball \(B(w,\frac{1}{1+\lambda}\delta)\) is contained in \(D\), where \(\delta=\inf_{v\in\partial D}||v-x||\). Choose any \(w^{\prime}\in B(w,\frac{1}{1+\lambda}\delta)\). Then there exists \(z\in V\) such that \(w^{\prime}=\frac{1}{1+\lambda}z+\frac{\lambda}{1+\lambda}F(y)\). Note that
\[||w-w^{\prime}||=\left|\left|\frac{1}{1+\lambda}x+\frac{\lambda}{1+\lambda}F(y)-\frac{1}{1+\lambda}z-\frac{\lambda}{1+\lambda}F(y)\right|\right|=\frac{1}{1+\lambda}||x-z||.\]
Since \(||w-w^{\prime}||<\frac{1}{1+\lambda}\delta\), we get \(||x-z||=(1+\lambda)||w-w^{\prime}||<\delta\). It follows that \(z\in D\) and hence, by convexity of \(D\), \(w^{\prime}\in D\). Therefore, for all \(w\in G_{x,\lambda}(D)\),
\[\inf_{v\in\partial D}||v-w||\geq\frac{1}{1+\lambda}\delta. \tag{3.1}\]
Take a sequence \(\{w_{n}\}\subset G_{x,\lambda}(D).\) Since \(\overline{D}\) is compact in the Euclidean topology, there exists a subsequence \(\{w_{n_{k}}\}\) and \(x_{0}\in\overline{D}\) such that \(||w_{n_{k}}-x_{0}||\to 0\) as \(k\rightarrow\infty.\) It follows from (3.1) that \(x_{0}\in D,\) and hence \(d(w_{n_{k}},x_{0})\to 0\) since the topology of \((D,d)\) coincides
with the Euclidean topology. Therefore, \(G_{x,\lambda}(D)\) is bounded in \((D,d)\) and by properness of \(D\) we have that \(\overline{G_{x,\lambda}(D)}\) is compact in \((D,d)\).
Note that \(D\supset G_{x,\lambda}(D)\supset G_{x,\lambda}^{2}(D)\supset...\), which means that the orbits of \(G_{x,\lambda}\) are bounded. Fix \(y\in D\). Since \(\overline{G_{x,\lambda}(D)}\) is compact, there exists a subsequence \(\{G_{x,\lambda}^{n_{k}}(y)\}\) of \(\{G_{x,\lambda}^{n}(y)\}\) converging to some \(z\in D\). Let
\[d_{n}=d(G_{x,\lambda}^{n}(y),G_{x,\lambda}^{n+1}(y)).\]
Since \(G_{x,\lambda}\) is contractive, the sequence \(\{d_{n}\}\) is decreasing and hence it converges to some \(\zeta\), as \(n\to\infty\). Hence
\[\zeta\gets d_{n_{k}}=d(G_{x,\lambda}^{n_{k}}(y),G_{x,\lambda}^{n_{k}+1}(y ))\to d(G_{x,\lambda}(z),z),\]
and
\[\zeta\gets d_{n_{k}+1}=d(G_{x,\lambda}^{n_{k}+1}(y),G_{x,\lambda}^{n_{k}+ 2}(y))\to d(G_{x,\lambda}^{2}(z),G_{x,\lambda}(z)).\]
We get
\[d(G_{x,\lambda}^{2}(z),G_{x,\lambda}(z))=d(G_{x,\lambda}(z),z)=\zeta.\]
Since the map \(G_{x,\lambda}\) is contractive, \(G_{x,\lambda}(z)=z\). Moreover, \(z\) is the unique fixed point of \(G_{x,\lambda}\). Indeed, otherwise if \(z_{1},z_{2}\in D\), \(z_{1}\neq z_{2}\) are fixed points of \(G_{x,\lambda}\), then
\[d(z_{1},z_{2})=d(G_{x,\lambda}(z_{1}),G_{x,\lambda}(z_{2}))<d(z_{1},z_{2}),\]
and we obtain a contradiction. We set \(R_{\lambda}(x):=z\) and refer to the mapping \(R_{\lambda}:D\to D\) as _the resolvent of_ \(F\). We have
\[z=G_{x,\lambda}(z)=\frac{1}{1+\lambda}x+\frac{\lambda}{1+\lambda}F(z),\qquad x \in D,\;\lambda>0,\]
and hence
\[R_{\lambda}(x)=\frac{1}{1+\lambda}x+\frac{\lambda}{1+\lambda}F(R_{\lambda}(x) ),\qquad x\in D,\,\lambda>0. \tag{3.2}\]
Furthermore, by the same argument, any convergent subsequence of \(\{G_{x,\lambda}^{n}(y)\}\) has the limit \(z\) (the unique fixed point of \(G_{x,\lambda}\)). Since the orbit is contained in the compact set \(\overline{G_{x,\lambda}(D)}\), the whole sequence converges, which gives the formula:
\[\lim_{n\to\infty}G_{x,\lambda}^{n}(y)=R_{\lambda}(x),\qquad y\in D. \tag{3.3}\]
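The construction is easy to exercise numerically. In the toy sketch below (our own illustration, not part of the argument), \(D\) is the Euclidean unit disc and \(F\) a rotation about the origin, which is an isometry, and hence nonexpansive, for the Hilbert metric; the resolvent is obtained by iterating \(G_{x,\lambda}\) as in (3.3), and the fixed-point relation (3.2) is checked at the end.

```python
import numpy as np

def F(y, angle=0.7):
    """A rotation of the unit disc about the origin (a Hilbert-metric isometry)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([c * y[0] - s * y[1], s * y[0] + c * y[1]])

def resolvent(x, lam, n_iter=200):
    """Approximate R_lambda(x) as the limit of G_{x,lambda}^n(y), as in (3.3)."""
    y = np.zeros(2)  # any starting point y in D works
    for _ in range(n_iter):
        y = x / (1.0 + lam) + lam / (1.0 + lam) * F(y)  # y <- G_{x,lambda}(y)
    return y

x = np.array([0.3, -0.2])
lam = 2.0
z = resolvent(x, lam)
# Check the defining relation (3.2): z = x/(1+lam) + lam/(1+lam) F(z).
residual = np.linalg.norm(z - (x / (1 + lam) + lam / (1 + lam) * F(z)))
print(z, residual)  # residual is of the order of machine precision
```

For this particular choice the convergence is in fact geometric, since for a Euclidean isometry \(F\) the map \(G_{x,\lambda}\) is also a Euclidean contraction with factor \(\lambda/(1+\lambda)\).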
It turns out that if \((D,d)\) is sufficiently regular, then the resolvent of a nonexpansive mapping is also nonexpansive.
**Lemma 3.1**.: _Let \((D,d)\) be a Hilbert metric space satisfying condition (D), \(F:D\to D\) a nonexpansive mapping, and \(\lambda>0\). Then the resolvent \(R_{\lambda}:D\to D\) is nonexpansive._
Proof.: Fix \(z_{0},z_{1},z_{2}\in D\). First we show that \(d(G_{z_{1},\lambda}^{n}(z_{0}),G_{z_{2},\lambda}^{n}(z_{0}))\leq d(z_{1},z_{2})\) for each \(n\). We proceed by induction. For \(n=1\), it follows from condition (D) that
\[d(G_{z_{1},\lambda}(z_{0}),G_{z_{2},\lambda}(z_{0})) = d\bigg{(}\frac{1}{1+\lambda}z_{1}+\frac{\lambda}{1+\lambda}F(z_{ 0}),\frac{1}{1+\lambda}z_{2}+\frac{\lambda}{1+\lambda}F(z_{0})\bigg{)}\] \[\leq \max\{d(z_{1},z_{2}),d(F(z_{0}),F(z_{0}))\}=d(z_{1},z_{2}).\]
Fix \(n\in\mathbb{N}\) and suppose that \(d(G^{n}_{z_{1},\lambda}(z_{0}),G^{n}_{z_{2},\lambda}(z_{0}))\leq d(z_{1},z_{2}).\) Then it follows from (D) that
\[d(G^{n+1}_{z_{1},\lambda}(z_{0}),G^{n+1}_{z_{2},\lambda}(z_{0})) = d\bigg{(}\frac{1}{1+\lambda}z_{1}+\frac{\lambda}{1+\lambda}F(G^{ n}_{z_{1},\lambda}(z_{0})),\frac{1}{1+\lambda}z_{2}+\frac{\lambda}{1+\lambda}F(G^{ n}_{z_{2},\lambda}(z_{0}))\bigg{)}\] \[\leq \max\{d(z_{1},z_{2}),d(G^{n}_{z_{1},\lambda}(z_{0}),G^{n}_{z_{2}, \lambda}(z_{0}))\}=d(z_{1},z_{2}).\]
Now the formula (3.3) yields
\[d(R_{\lambda}(z_{1}),R_{\lambda}(z_{2})) = \lim_{n\to\infty}d\bigg{(}G^{n}_{z_{1},\lambda}(z_{0}),G^{n}_{z_ {2},\lambda}(z_{0})\bigg{)}\] \[\leq d(z_{1},z_{2}),\]
which shows that \(R_{\lambda}\) is a nonexpansive mapping.
We will also use the following property called the resolvent identity.
**Proposition 3.2**.: _Suppose that \(F:D\to D\) is a nonexpansive mapping. Then its resolvent \(R_{\lambda}\) satisfies_
\[R_{\lambda}(x)=R_{\mu}\bigg{(}\frac{\lambda-\mu}{\lambda}R_{\lambda}(x)+\frac{ \mu}{\lambda}x\bigg{)},\qquad x\in D,\]
_for all \(\lambda>\mu>0\)._
Proof.: Fix \(x\in D\) and \(\lambda,\mu>0\) such that \(\lambda>\mu\). Define
\[y:=\frac{\lambda-\mu}{\lambda}R_{\lambda}(x)+\frac{\mu}{\lambda}x. \tag{3.4}\]
It follows from (3.2) that there exists the unique point
\[z:=R_{\mu}(y)=\frac{1}{1+\mu}y+\frac{\mu}{1+\mu}F(R_{\mu}(y)). \tag{3.5}\]
On the other hand, we have
\[\tilde{z}:=R_{\lambda}(x)=\frac{1}{1+\lambda}x+\frac{\lambda}{1+\lambda}F(R_{ \lambda}(x)). \tag{3.6}\]
From the above and (3.4) we get \(\lambda y-\mu\tilde{z}(1+\lambda)=(\lambda-\mu)\tilde{z}-\lambda\mu F(\tilde{ z})\), which implies
\[\tilde{z}=\frac{1}{1+\mu}y+\frac{\mu}{1+\mu}F(\tilde{z}).\]
Therefore, since the point satisfying the equation in (3.5) is unique, it follows from (3.5) and (3.6) that
\[R_{\mu}(y)=z=\tilde{z}=R_{\lambda}(x).\]
For any \(x\in D\), \(F:D\to D\), the set of accumulation points (in the norm topology) of the sequence \(\{F^{n}(x)\}\) is called the _omega limit set of_\(x\) and is denoted by \(\omega_{F}(x)\). In a similar way, if \(R_{\lambda}:D\to D,\lambda>0\), is a family of resolvents of \(F\), we define
\[\omega_{\{R_{\lambda}\}_{\lambda>0}}(x)=\{y\in\overline{D}:\|R_{\lambda_{n}}( x)-y\|\to 0\text{ for some increasing sequence }\{\lambda_{n}\},\lambda_{n}\to\infty\},\]
and the _attractor_ of \(\{R_{\lambda}\}_{\lambda>0},\)
\[\Omega_{{}_{\{R_{\lambda}\}_{\lambda>0}}}=\bigcup_{x\in D}\omega_{{}_{\{R_{ \lambda}\}_{\lambda>0}}}(x).\]
**Lemma 3.3**.: _Suppose that \(F:D\to D\) is a nonexpansive mapping without fixed points and \(R_{\lambda}:D\to D\), \(\lambda>0\) is a family of resolvents of \(F\). Then \(\Omega_{{}_{\{R_{\lambda}\}_{\lambda>0}}}\subset\partial D.\)_
Proof.: On the contrary, we suppose that there exists \(y\in D\) such that \(\|R_{\lambda_{n}}(x)-y\|\to 0\) for some \(x\in D\) and an increasing sequence \(\{\lambda_{n}\},\lambda_{n}\to\infty.\) Then
\[||R_{\lambda_{n}}(x)-F(R_{\lambda_{n}}(x))||=\frac{1}{1+\lambda_{n}}||x-F(R_{ \lambda_{n}}(x))||\to 0, \tag{3.7}\]
as \(n\to\infty\) (recall that \(D\) is bounded in norm, so \(||x-F(R_{\lambda_{n}}(x))||\) is bounded). Since the topology of \((D,d)\) coincides with the norm topology, \(F:D\to D\) is norm-continuous, and hence
\[||F(y)-y||\leq||F(y)-F(R_{\lambda_{n}}(x))||+||F(R_{\lambda_{n}}(x))-R_{ \lambda_{n}}(x)||+||R_{\lambda_{n}}(x)-y||\to 0,\]
as \(n\to\infty.\) Thus \(F(y)=y,\) and we obtain a contradiction.
## 4. Main theorem
We begin by recalling one of the fundamental properties of a Hilbert metric space that allowed Karlsson and Noskov to extend Beardon's Wolff-Denjoy theorem to bounded strictly convex domains (see [11, Theorem 5.5], [13, Proposition 8.3.3]).
**Lemma 4.1**.: _Let \(D\subseteq V\) be an open bounded convex set and \(d\) a Hilbert metric on \(D\). If \(\{x_{n}\}\) and \(\{y_{n}\}\) are convergent sequences in \(D\) with limits \(x\) and \(y\) in \(\partial D\), respectively, and the segment \([x,y]\nsubseteq\partial D\), then for each \(z\in D\) we have_
\[\lim_{n\to\infty}[d(x_{n},y_{n})-\max\{d(x_{n},z),d(y_{n},z)\}]=\infty.\]
We also need the following standard argument that can be found for example in [7, Lemma 5.4].
**Lemma 4.2**.: _Let \((D,d)\) be a separable metric space and let \(a_{n}:D\to\mathbb{R}\) be a nonexpansive mapping for each \(n\in\mathbb{N}.\) If for every \(x\in D\), the sequence \(\{a_{n}(x)\}\) is bounded, then there exists a subsequence \(\{a_{n_{j}}\}\) of \(\{a_{n}\}\) such that \(\lim_{j\to\infty}a_{n_{j}}(x)\) exists for every \(x\in D\)._
Fix \(x_{0}\in D\) and consider a sequence \(\{x_{n}\in D:n\in\mathbb{N}\}\) contained in \(D.\) Define \(a_{n}(x)=d(x,x_{n})-d(x_{n},x_{0})\) for any \(n\in\mathbb{N}.\) Note that
\[|a_{n}(y)-a_{n}(x)|\leq d(x,y),\]
i.e., \(a_{n}\) is nonexpansive and the sequence \(\{a_{n}(y)\}\) is bounded (by \(d(y,x_{0})\)) for every \(y\in D.\) It follows from Lemma 4.2 that there exists a subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\}\) such that \(\lim_{j\to\infty}a_{n_{j}}(x)\) exists for any \(x\in D\), i.e.,
\[\lim_{j\to\infty}d(x,x_{n_{j}})-d(x_{n_{j}},x_{0}) \tag{4.1}\]
exists for every \(x\in D.\)
Now we are in a position to prove a variant of the Karlsson-Nussbaum conjecture for resolvents of nonexpansive mappings.
**Theorem 4.3**.: _Let \(D\subset V\) be a bounded convex domain. Suppose that \((D,d)\) is a Hilbert metric space satisfying condition (D) and \(R_{\lambda}:D\to D,\lambda>0,\) is a family of resolvents of a nonexpansive mapping \(F:D\to D\) without fixed points. Then \(\operatorname{co}\Omega_{\{R_{\lambda}\}_{\lambda>0}}\subseteq\partial D\)._
Proof.: Suppose on the contrary that there exist \(z_{1},\ldots,z_{m}\in D,\zeta^{1}\in\omega_{\{R_{\lambda}\}_{\lambda>0}}(z_{1} ),\ldots,\zeta^{m}\in\omega_{\{R_{\lambda}\}_{\lambda>0}}(z_{m})\) and \(0<\alpha_{1},\ldots,\alpha_{m}<1\) with \(\sum_{i=1}^{m}\alpha_{i}=1\) such that \(\sum_{i=1}^{m}\alpha_{i}\zeta^{i}\in D\). Since \(F\) does not have fixed points, it follows from Lemma 3.3 that the omega limit sets \(\omega_{\{R_{\lambda}\}_{\lambda>0}}(z_{i})\subseteq\partial D\), \(i=1,\ldots,m\), and, following [13, Theorem 8.3.11], we can assume that \(m\geq 2\) is minimal with the property that \(\sum_{i=1}^{m}\alpha_{i}\zeta^{i}\in D\). It follows that \(R_{\lambda_{j}^{i}}(z_{i})\rightarrow\zeta^{i}\in\partial D\) for some increasing sequences \(\{\lambda_{j}^{i}\}_{j},\lambda_{j}^{i}\rightarrow\infty,\) as \(j\rightarrow\infty,i=1,\ldots,m\). We put \(\zeta=\zeta^{1}\) and \(\eta=\sum_{i=2}^{m}\mu_{i}\zeta^{i}\), where \(\mu_{i}=\frac{\alpha_{i}}{1-\alpha_{1}}\) for \(i\in[2,m]\). Let \(\eta^{j}=\sum_{i=2}^{m}\mu_{i}R_{\lambda_{j}^{i}}(z_{i})\) for all \(j\geq 1\). Since \(m\) is minimal, we have \(\zeta,\eta\in\partial D\) and \(\alpha_{1}\zeta+(1-\alpha_{1})\eta\in D\). Since \(D\) is convex, we get \(\alpha\zeta+(1-\alpha)\eta\in D\) for all \(\alpha\in(0,1)\). By passing to a subsequence we can assume from (4.1) that for every \(x\in D\) there exists the limit
\[g(x)=\lim_{j\rightarrow\infty}d(x,R_{\lambda_{j}^{1}}(z_{1}))-d(R_{\lambda_{ j}^{1}}(z_{1}),z_{1}). \tag{4.2}\]
Since
\[\left|\left|y-\frac{\lambda_{j}^{1}-\mu}{\lambda_{j}^{1}}y-\frac{\mu}{ \lambda_{j}^{1}}z_{1}\right|\right|=\frac{\mu}{\lambda_{j}^{1}}||y-z_{1}|| \to 0,\quad\text{as }\lambda_{j}^{1}\rightarrow\infty,\]
and topologies of \((D,d)\) and \((\overline{D},||\cdot||)\) coincide on \(D\), we have
\[d\bigg{(}y,\frac{\lambda_{j}^{1}-\mu}{\lambda_{j}^{1}}y+\frac{\mu}{\lambda_{j }^{1}}z_{1}\bigg{)}\to 0, \tag{4.3}\]
if \(\lambda_{j}^{1}\rightarrow\infty\). According to Lemma 3.1, Proposition 3.2, (4.3) and condition (D), we get
\[g(R_{\mu}(y)) = \lim_{j\rightarrow\infty}d(R_{\mu}(y),R_{\lambda_{j}^{1}}(z_{1} ))-d(R_{\lambda_{j}^{1}}(z_{1}),z_{1})\] \[= \lim_{j\rightarrow\infty}d\bigg{(}R_{\mu}(y),R_{\mu}\bigg{(} \frac{\lambda_{j}^{1}-\mu}{\lambda_{j}^{1}}R_{\lambda_{j}^{1}}(z_{1})+\frac{ \mu}{\lambda_{j}^{1}}z_{1}\bigg{)}\bigg{)}-d(R_{\lambda_{j}^{1}}(z_{1}),z_{1})\] \[\leq \limsup_{j\rightarrow\infty}d\bigg{(}y,\frac{\lambda_{j}^{1}-\mu }{\lambda_{j}^{1}}R_{\lambda_{j}^{1}}(z_{1})+\frac{\mu}{\lambda_{j}^{1}}z_{1} \bigg{)}-d(R_{\lambda_{j}^{1}}(z_{1}),z_{1})\] \[= \lim_{j\rightarrow\infty}d\bigg{(}\frac{\lambda_{j}^{1}-\mu}{ \lambda_{j}^{1}}y+\frac{\mu}{\lambda_{j}^{1}}z_{1},\frac{\lambda_{j}^{1}-\mu }{\lambda_{j}^{1}}R_{\lambda_{j}^{1}}(z_{1})+\frac{\mu}{\lambda_{j}^{1}}z_{1} \bigg{)}-d(R_{\lambda_{j}^{1}}(z_{1}),z_{1})\] \[\leq \lim_{j\rightarrow\infty}d(y,R_{\lambda_{j}^{1}}(z_{1}))-d(R_{ \lambda_{j}^{1}}(z_{1}),z_{1})\] \[= g(y).\]
From the above we have \(g(R_{\mu}(y))\leq g(y)\leq d(y,z_{1})\) for every \(y\in D\) and \(\mu>0\). It follows from (C) that for any \(k\in\mathbb{N}\),
\[g(\eta^{k})=g\bigg{(}\sum_{i=2}^{m}\,\mu_{i}R_{\lambda_{k}^{i}}(z_{i})\bigg{)} \leq\max_{i=2,\ldots,m}g(z_{i})\leq\max_{i=2,\ldots,m}d(z_{i},z_{1})=M.\]
Consequently, by a diagonal argument, there exists a subsequence \(\lambda_{j_{1}}^{1}\leq\lambda_{j_{2}}^{1}\leq\ldots\leq\lambda_{j_{k}}^{1}\leq\ldots\) of \(\{\lambda_{j}^{1}\}\) such that
\[\limsup_{k\to\infty}d(\eta^{k},R_{\lambda_{j_{k}}^{1}}(z_{1}))-d(R_{\lambda_{j_ {k}}^{1}}(z_{1}),z_{1})\leq M+1. \tag{4.4}\]
Since \(R_{\lambda_{j}^{i}}(z_{i})\to\zeta^{i}\), as \(j\to\infty\) for any \(i=1,\ldots,m\), we have
\[||\eta^{j}-\eta||=\left|\left|\sum_{i=2}^{m}\mu_{i}R_{\lambda_{j}^{i}}(z_{i})- \sum_{i=2}^{m}\mu_{i}\zeta^{i}\right|\right|\leq\sum_{i=2}^{m}\mu_{i}||R_{ \lambda_{j}^{i}}(z_{i})-\zeta^{i}||\to 0,\quad j\to\infty,\]
which implies that \(||\eta^{j}-\eta||\to 0,j\to\infty\). Moreover, since \([\zeta,\eta]\nsubseteq\partial D\) it follows from Lemma 4.1 that
\[\liminf_{k\to\infty}d(\eta^{k},R_{\lambda_{j_{k}}^{1}}(z_{1}))-d(R_{\lambda_{ j_{k}}^{1}}(z_{1}),z_{1})=\infty.\]
However, the above formula contradicts (4.4).
We can use Theorem 4.3 to show a Wolff-Denjoy type theorem for resolvents of nonexpansive mappings. Let \((D,d)\) be a geodesic metric space and \([x,y],[x^{\prime},y^{\prime}]\) two arbitrary geodesic segments in \(D\). For every \(\alpha\in[0,1]\), consider the point \(z=\alpha x+(1-\alpha)y\) on segment \([x,y]\) such that \(d(\alpha x+(1-\alpha)y,y)=\alpha d(x,y)\) and in the same way, the point \(z^{\prime}=\alpha x^{\prime}+(1-\alpha)y^{\prime}\) on segment \([x^{\prime},y^{\prime}]\) such that \(d(\alpha x^{\prime}+(1-\alpha)y^{\prime},y^{\prime})=\alpha d(x^{\prime},y^{ \prime})\). Recall that a geodesic space \((D,d)\) is called _Busemann convex_ if
\[d(z,z^{\prime})\leq\alpha d(x,x^{\prime})+(1-\alpha)d(y,y^{\prime})\]
for every \(x,y,x^{\prime},y^{\prime}\in D\) and \(\alpha\in[0,1]\).
Combining Corollary 3.3 and Proposition 3.4 in [2], we obtain the following proposition (see also [12], [16, p. 191]).
**Proposition 4.4**.: _Let \(D\subset V\) be a bounded convex domain. A Hilbert metric space \((D,d)\) is Busemann convex if and only if \(D\) is an ellipsoid._
Since in Hilbert's metric spaces every straight-line segment is a geodesic, it follows from Proposition 4.4 that \((D,d)\) satisfies condition (D), whenever \(D\) is an ellipsoid. This leads to the following Wolff-Denjoy type theorem for resolvents of nonexpansive mappings.
**Corollary 4.5**.: _Suppose that \(D\subset V\) is an ellipsoid and \(R_{\lambda}:D\to D,\lambda>0,\) is the resolvent of a nonexpansive mapping \(F:D\to D\) (with respect to Hilbert's metric) without fixed points. Then there exists \(\xi\in\partial D\) such that \(\{R_{\lambda}\}_{\lambda>0}\) converge uniformly on bounded sets of \(D\) to \(\xi\)._
Proof.: It follows from Theorem 4.3 that \(\operatorname{co}\Omega_{\{R_{\lambda}\}_{\lambda>0}}\subseteq\partial D\). Since \(D\) is strictly convex, \(\Omega_{\{R_{\lambda}\}_{\lambda>0}}\) consists of a single element \(\xi\in\partial D\). The proof of uniform convergence on bounded sets is standard (see, e.g., [5]): suppose, on the contrary, that there exist an open neighbourhood \(U\subset\overline{D}\) of \(\xi\), a bounded set \(K\subset D\) and a sequence \(\{y_{\lambda_{n}}\}\subset K\) (\(\lambda_{n}\to\infty\)) such that \(R_{\lambda_{n}}(y_{\lambda_{n}})\notin U\) for each \(n\). Then
\[d(R_{\lambda_{n}}(y_{\lambda_{n}}),R_{\lambda_{n}}(y))\leq d(y_{\lambda_{n}},y )\leq\operatorname{diam}K\]
for any \(y\in K\) and, since \(R_{\lambda_{n}}(y)\to\xi\), we deduce from Lemma 4.1 that \(R_{\lambda_{n}}(y_{\lambda_{n}})\to\xi\), which contradicts \(R_{\lambda_{n}}(y_{\lambda_{n}})\notin U\).
**Acknowledgements** The first author was partially supported by National Science Center (Poland) Preludium Grant No. UMO-2021/41/N/ST1/02968.
|
2309.07966 | Emergent nucleosynthesis from a 1.2 second long simulation of a
black-hole accretion disk | We simulate a black-hole accretion disk system with full-transport general
relativistic neutrino radiation magnetohydrodynamics (GR$\nu$RMHD) for 1.2
seconds. This system is likely to form after the merger of two compact objects
and is thought to be a robust site of $r$-process nucleosynthesis. We consider
the case of a black-hole accretion disk arising from the merger of two neutron
stars. Our simulation time coincides with the nucleosynthesis timescale of the
$r$ process ($\sim$ 1 second). Because these simulations are time consuming, it
is common practice to run for `short' duration of approximately 0.1 to 0.3
seconds. We analyze the nucleosynthetic outflow from this system and compare
the results between stopping at 0.12 and 1.2 seconds respectively. We find that
the addition of mass ejected in the longer simulation as well as more favorable
thermodynamic conditions from emergent viscous ejecta greatly impacts the
nucleosynthetic outcome. We quantify the error in nucleosynthetic outcomes
between short and long cuts. | T. M. Sprouse, K. A. Lund, J. M. Miller, G. C. McLaughlin, M. R. Mumpower | 2023-09-14T18:03:01Z | http://arxiv.org/abs/2309.07966v1 | # Emergent nucleosynthesis from a 1.2 second long simulation of a black-hole accretion disk
###### Abstract
We simulate a black-hole accretion disk system with full-transport general relativistic neutrino radiation magnetohydrodynamics (GR\(\nu\)RMHD) for 1.2 seconds. This system is likely to form after the merger of two compact objects and is thought to be a robust site of \(r\)-process nucleosynthesis. We consider the case of a black-hole accretion disk arising from the merger of two neutron stars. Our simulation time coincides with the nucleosynthesis timescale of the \(r\) process (\(\sim\) 1 second). Because these simulations are time consuming, it is common practice to run for a 'short' duration of approximately 0.1 to 0.3 seconds. We analyze the nucleosynthetic outflow from this system and compare the results between stopping at 0.12 and 1.2 seconds, respectively. We find that the additional mass ejected in the longer simulation, as well as more favorable thermodynamic conditions from emergent viscous ejecta, greatly impacts the nucleosynthetic outcome. We quantify the error in nucleosynthetic outcomes between short and long cuts.
Nucleosynthesis (1131), R-process (1324), Nuclear astrophysics (1129), Nuclear fission (2323), Nuclear decay (2227), Compact objects (288)
Trevor M. Sprouse, Kelsey A. Lund, Jonah M. Miller, Gail C. McLaughlin, Matthew R. Mumpower
## 1 Introduction
The outflows of black-hole accretion disks are promising sites for the synthesis of the heavy elements via rapid neutron capture (the \(r\) process) (Freiburghaus et al., 1999; Siegel and Metzger, 2017; Siegel and Metzger, 2018; Hossein Nouri et al., 2018; Miller et al., 2019, 2019; Foucart et al., 2020; Kullmann et al., 2021; Foucart et al., 2021; Fahlman and Fernandez, 2022; Just et al., 2022). Such environments may form after the merger of compact objects and potentially offer unique signatures of heavy element formation (Metzger et al., 2010; Zhu et al., 2018; Korobkin et al., 2020; Zhu et al., 2021; Lund et al., 2023).
Accretion disks have been modeled in increasing detail for many years with notable works from Pringle and Rees (1972); Ruffert et al. (1996); Popham et al. (1999); MacFadyen and Woosley (1999); Shibata and Uryu (2000).
The behavior of accretion disks is sensitive to a number of physical effects including post-merger magnetic field configurations (Rudiger and Shalybkov, 2002; Christie et al., 2019), the nuclear equation of state (Steiner et al., 2013), and neutrino physics (McLaughlin and Surman, 2005; Surman et al., 2008). In neutron star mergers, disk ejecta may be accompanied by dynamical ejecta (Dietrich and Ujevic, 2017; Radice et al., 2018) that is also sensitive to neutrino physics (Foucart et al., 2023). Accretion disks from the merger of a neutron-star black-hole binary are also found to be favorable sites of the \(r\) process (Siegel and Metzger, 2017; De and Siegel, 2021; Murguia-Berthier et al., 2021; Curtis et al., 2023).
The long-term evolution of accretion disks is consequential for electromagnetic counterparts (Fernandez et al., 2019; Christie et al., 2019) as well as for the nucleosynthesis that ensues in the aftermath of these cataclysmic events. Recently, significant effort has been devoted to simulations that capture the long-lived remnant (Hayashi et al., 2022, 2023). To our knowledge, however, no late-time models to date perform detailed radiation transport and nucleosynthesis calculations. Previous work (Miller et al., 2019, 2020) indicates that at early times, higher-fidelity transport is required to accurately capture the electron fraction of the outflow and thus the nucleosynthetic yields. It is unclear whether this result translates to late times during active nucleosynthesis.
In this work, we help resolve this uncertainty. We model a black-hole accretion disk system that may arise after the merger of two neutron stars and evolve it for 1.2 seconds. This duration of time is long enough to explore active nucleosynthesis in the \(r\) process. We analyze mass ejection, entropy, and electron fraction which all have a strong influence on the nucleosynthetic outcomes. To analyze the error in present model calculations arising from computational limitations, we compare these results to the same simulation stopped at 0.12 seconds. We end with a discussion of the uncertainty that arises in simulated nucleosynthesis yields when using short-duration simulations.
## 2 Simulating nucleosynthesis
### Simulation details
We extend the full transport general relativistic neutrino radiation magnetohydrodynamics (GR\(\nu\)RMHD) simulation of a black-hole accretion disk-wind system performed by Miller et al. (2019) using the \(\nu\)bhlight code (Miller et al., 2019, 2020; Miller et al., 2019) to a full 1.2 seconds. This calculation took approximately 7 months of wall-time.
The original model, which we extend, uses a stationary Kerr (1963) black hole spacetime for a black hole of mass \(M_{\rm BH}=2.58M_{\odot}\) and dimensionless spin \(a=0.69\). The initial conditions are a torus in hydrostatic equilibrium (Fishbone & Moncrief, 1976) of constant specific angular momentum, constant entropy of \(s=4k_{b}/\)baryon, constant electron fraction \(Y_{\rm e}=0.1\), and total mass of \(M_{\rm d}=0.12M_{\odot}\). Our torus starts with a single poloidal magnetic field loop with a minimum ratio of gas to magnetic pressure, \(\beta\), of 100.
We solve the equations of general relativistic ideal magnetohydrodynamics, closed with the SFHo EOS, described in Steiner et al. (2013) and tabulated in O'Connor & Ott (2010). Neutrinos are evolved with a Monte Carlo method and can interact with matter via emission, absorption, or scattering. For emission and absorption, we use the charged and neutral current interactions as tabulated in Skinner et al. (2019) and summarized in Burrows et al. (2006). Neutrino scattering is implemented as described in Miller et al. (2019). The Monte Carlo and Finite Volume methods are coupled via first-order operator splitting.
We use a radially logarithmic, quasi-spherical grid in horizon penetrating coordinates with \(N_{r}\times N_{\theta}\times N_{\phi}=192\times 168\times 66\) grid points with approximately \(3.8\times 10^{7}\) Monte Carlo packets. For details on the resolution requirements of the model, and why we chose this resolution, see Miller et al. (2019). After about 400 ms of runtime, the neutrino opacity in the disk is sufficiently low that neutrinos are essentially free-streaming. At this point, we turn off transport and switch to an optically thin cooling prescription. Essentially Monte Carlo particles are emitted at the proper rate but are then immediately deleted and not transported or absorbed.
Although our code is Eulerian, we track approximately \(1.5\times 10^{6}\) Lagrangian fluid packets, or "tracer particles." Each tracer particle is assigned a mass, representing the statistical weight of the particle. Following Bovard & Rezzolla (2017), we initialize tracer particles uniformly distributed in the volume containing a non-trivial density of gas at the initial time. At each time-step tracer particles are advected with the fluid flow via the equation
\[\frac{\partial x^{i}}{\partial t}=\frac{u^{i}}{u^{0}}=\alpha v^{i}-\beta^{i} \tag{1}\]
for fluid four-velocity \(u^{\mu}\), three-velocity \(v^{i}\), lapse \(\alpha\), and shift \(\beta^{i}\). Latin indices range from 1 to 3 and represent spatial directions. Greek indices range from 0 to 3 and represent space and time. Fluid and microphysical data, such as fluid density and temperature, electron fraction, and neutrino reaction rates are interpolated to tracer positions and recorded per tracer.
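As a schematic of how equation (1) can be integrated per tracer (an illustrative sketch only; the helper names and the midpoint integrator are our own choices, not \(\nu\)bhlight's actual implementation):

```python
import numpy as np

def advect_tracer(x, dt, fields):
    """Advance one tracer position x (3-vector) by dt using equation (1):
    dx^i/dt = alpha v^i - beta^i, with the fields interpolated to the tracer."""
    def rhs(pos):
        alpha, v, beta = fields(pos)   # lapse, 3-velocity, shift at this position
        return alpha * v - beta
    k1 = rhs(x)                        # simple midpoint (RK2) step
    k2 = rhs(x + 0.5 * dt * k1)
    return x + dt * k2

# Toy check: flat space (alpha = 1, beta = 0) with a rigid rotation about the z-axis.
def toy_fields(pos):
    omega = 0.1
    return 1.0, np.array([-omega * pos[1], omega * pos[0], 0.0]), np.zeros(3)

x = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    x = advect_tracer(x, 0.05, toy_fields)
print(x)  # stays (approximately) on the unit circle
```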
### Engine Physics
As the system evolves, the magneto-rotational instability (MRI, Velikhov, 1959; Balbus & Hawley, 1991) self-consistently drives the disk to a turbulent state, which provides the turbulent viscosity necessary for the disk to accrete (Shakura & Sunyaev, 1973). This mechanism drives a long-lived accretion flow, which starts as high as \(>1M_{\odot}/s\) but sweeps down in accretion rate as the disk expands and cools. Figure 1 shows this behavior. Analytic models of the turbulent viscosity predict that the accretion rate follows a \(t^{-5/3}\) power law before eventually transitioning to exponential decay (Tanaka, 2011; Dolence, 2011). We include a \(t^{-5/3}\) line to
guide the eye. Material undergoing \(r\)-process nucleosynthesis is ejected primarily during the downward sloping phase of this curve, after approximately \(2\times 10^{-2}\) seconds.
Over time, the density drops, causing the accretion rate to drop as the disk drains. The density of the disk at three different times is shown in Figure 2. The electron fraction in the disk and in the outflow is set by the time scale of fluid motion relative to the time scale on which weak processes occur. Following Miller et al. (2020), we compute the weak time scale as
\[\tau_{\pm}=\frac{\rho Y_{e}}{G_{Y_{e}}^{\pm}} \tag{2}\]
and the time scale for fluid motion as
\[\tau_{a}(r)=\frac{1}{t_{f}-t_{i}}\int_{t_{i}}^{t_{f}}dt\frac{\theta_{d}}{ \langle v_{a}\rangle_{\rho,Y_{e},\theta,\phi}} \tag{3}\]
for characteristic disk opening angle
\[\theta_{d}(t,r)=\sqrt{\frac{\int_{S^{2}}\sqrt{-g}d^{2}x\rho\theta^{2}}{\int_{ S^{2}}\sqrt{-g}d^{2}x\rho}} \tag{4}\]
and mass-averaged lepton advection velocity
\[\langle v_{a}\rangle_{\rho,Y_{e},\theta,\phi}\left(t,r\right)=\frac{\int_{S^{ 2}}\sqrt{-g}d^{2}x\rho Y_{e}u^{2}}{\int_{S^{2}}\sqrt{-g}d^{2}x\rho Y_{e}}, \tag{5}\]
where \(\rho\) is the fluid density, \(Y_{e}\) is the electron fraction, and \(G_{Y_{e}}^{+}\) and \(G_{Y_{e}}^{-}\) are the fluid-neutrino interaction rates for weak processes that increase and decrease the electron fraction, respectively. The times \(t_{f}\) and \(t_{i}\) bound the time-average used to compute \(\tau_{a}\), and \(\theta\) is the angle off the equator, so that \(\theta=0\) is the equator and \(\theta=\pi/2\) is the north pole. \(\sqrt{-g}\) is the square root of the determinant of the spacetime metric, and \(u^{2}\) is the \(\theta\)-component of the four-velocity of the fluid. See Miller et al. (2019) for a more detailed description of \(G_{Y_{e}}\) and Miller et al. (2020) for more details on this time-scale analysis procedure.
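A sketch of how these angular averages might be assembled on a single shell is shown below; the array names, the mock data, and the omission of the time average in equation (3) are our own simplifications rather than the actual analysis code.

```python
import numpy as np

def shell_timescales(rho, ye, u_theta, g_plus, g_minus, sqrtg, theta):
    """Pointwise weak timescales (eq. 2) and a single-snapshot advection
    timescale (eqs. 3-5) on one (theta, phi) shell at fixed time and radius.
    theta is the angle off the equator; sqrtg stands in for sqrt(-g) d^2x."""
    tau_plus = rho * ye / g_plus            # eq. (2): processes raising Y_e
    tau_minus = rho * ye / g_minus          # eq. (2): processes lowering Y_e

    w_rho = rho * sqrtg
    w_rho_ye = rho * ye * sqrtg
    theta_d = np.sqrt(np.sum(w_rho * theta**2) / np.sum(w_rho))    # eq. (4)
    v_a = np.sum(w_rho_ye * u_theta) / np.sum(w_rho_ye)            # eq. (5)
    tau_a = theta_d / abs(v_a)   # eq. (3) additionally averages this over time

    return tau_plus, tau_minus, tau_a

# Mock shell data (ntheta x nphi) just to exercise the bookkeeping.
nth, nph = 64, 32
theta = np.linspace(-np.pi / 2, np.pi / 2, nth)[:, None] * np.ones((1, nph))
rho = 1e8 * np.exp(-(theta / 0.3) ** 2)
ye, u_theta, sqrtg = 0.2 * np.ones_like(rho), 1e2 * np.ones_like(rho), np.ones_like(rho)
g_plus, g_minus = 1e10 * np.ones_like(rho), 2e10 * np.ones_like(rho)
print(shell_timescales(rho, ye, u_theta, g_plus, g_minus, sqrtg, theta)[2])
```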
The top right pane of each panel in Figure 3 shows the ratio of \(\tau_{+}\) to \(\tau_{a}\), the bottom right pane the ratio of \(\tau_{-}\) to \(\tau_{a}\), and the left pane the ratio of \(\tau_{+}\) to \(\tau_{-}\). In the right panes, a small ratio implies that weak processes dominate. As the ratio grows, weak processes become less important in setting \(Y_{e}\) compared to fluid motion, and the electron fraction freezes out. The top row shows the disk at 0.13 s, the middle at 0.51 s, and the bottom at 1.27 s. The left pane shows that \(\tau_{+}\) is smaller than \(\tau_{-}\), indicating that weak processes are driving the electron fraction up. However, as the disk cools these weak processes become inefficient compared to fluid motion and the electron fraction in the disk freezes out.
Outflows begin to be launched early in the lifetime of the disk, although they travel at different speeds, and thus become gravitationally unbound at different times. Figure 4 sketches these different components out: The
Figure 1: Accretion rate of the disk over the lifetime of the calculation. The blue line segment shows the accretion rate over the duration of the “short” cut; the red line segment shows the extended part of the calculation, referred to as the “long” cut.
Figure 2: The density of the disk for three different times, 0.13s (top), 0.51s (middle), and 1.27s (bottom) showing the disk drain with time.
magnetic field powers a jet via the Blandford & Znajek (1977) mechanism; turbulent heating drives a hot, fast disk wind in an hourglass shape out the poles of the disk; and turbulent viscosity drives a slower moving equatorial outflow. The viscous mechanism eventually unbinds the most mass. In contrast, the jet is the fastest mechanism but unbinds the least mass. While we describe these three outflow mechanisms as separate here, in reality these mechanisms are difficult to disentangle and thus uniquely quantify.
### From tracer to trajectory
Once the simulation has completed, we down-select the unbound tracers to study the nucleosynthesis. This filter involves two physical criteria. The first is that the tracer be at least 250 gravitational radii (\(GM_{BH}/c^{2}\)) away from the central black hole. The second is that the Bernoulli parameter be \(B_{e}>0\). The Bernoulli parameter originates in the modeling of hydrostatic flows: \(B_{e}=0\) implies hydrostatic equilibrium, \(B_{e}<0\) implies a flow in-falling into a gravitational potential, and \(B_{e}>0\) implies a gravitationally unbound flow (Narayan & Yi, 1995).
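In code, this down-selection could be expressed as follows (a sketch with hypothetical field names; the Bernoulli parameter is assumed to be stored per tracer):

```python
import numpy as np

R_EXTRACT_RG = 250.0  # extraction radius in gravitational radii GM_BH/c^2

def select_unbound(tracers):
    """Keep tracers that (1) reach 250 r_g from the black hole and
    (2) have positive Bernoulli parameter, i.e. are gravitationally unbound."""
    far_enough = tracers["r_over_rg"] >= R_EXTRACT_RG
    unbound = tracers["bernoulli"] > 0.0
    mask = far_enough & unbound
    return {key: val[mask] for key, val in tracers.items()}

# Example with mock data for four tracers:
tracers = {
    "r_over_rg": np.array([300.0, 120.0, 260.0, 400.0]),
    "bernoulli": np.array([0.01, 0.02, -0.005, 0.03]),
    "mass":      np.array([1e25, 2e25, 3e25, 4e25]),
}
print(select_unbound(tracers)["mass"])  # -> [1.e+25 4.e+25]
```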
This selection criterion results in 79,556 'short' tracers at 0.12 seconds and 461,690 'long' tracers at 1.2 seconds. The difference between these two subsets comes only from running the simulation for an extended duration. Over the course of this additional second of simulation time, the ejected mass increases by a factor of 18.5 with the electron fraction decreasing by 0.1 on average. The temperature and density also show sizable changes in favor of the production of heavy elements. A summary of the difference between short and long evolutions is provided in Table 1.
Figure 4: Schematic of the outflow components of the disk. For illustrative purposes, this figure uses a zoomed-out snapshot of the electron fraction \(Y_{e}\) of the disk at t=30 ms and a contrasting colormap.
Figure 3: The electron fraction increasing (\(\tau_{+}\)) and decreasing (\(\tau_{-}\)) timescales relative to the fluid advection timescale (\(\tau_{a}\)) at the same three snapshots in the simulation as in Fig. 2.
If a tracer is found to be unbound in the short case, it is also unbound in the long case (by the definition of being unbound using the above two constraints). The bulk of the tracers, \(382134=461690-79556\), become unbound on timescales greater than 0.1 seconds, owing to the dynamics of the central engine. Magnetohydrodynamics (MHD) disk models typically drive an early, fast outflow powered by heat and magnetic forces (Siegel and Metzger, 2017; Christie et al., 2019) and a late, slow outflow powered by turbulent viscosity (Shakura and Sunyaev, 1973). The latter outflow is enhanced by nuclear recombination, incorporated into the NSE finite temperature equation of state (Fernandez et al., 2019; Fahlman and Fernandez, 2022; Just et al., 2022; Haddadi et al., 2023). Our disk is no exception, and the more massive late-time outflow is from the slower viscous mechanism.
The total amount of mass unbound in the 'short' tracers is significantly smaller than in the 'long.' At 0.12 s, when the 'short' tracers are extracted, the disk has accreted roughly \(9.57\times 10^{31}\) g of mass. The mass in the 'short' tracers accounts for about 3.7% of that accreted mass. At 1.2 s, when the 'long' tracers are extracted, the disk has accreted \(9.81\times 10^{31}\) g of mass, only a small fraction more (this is due to the power law decay shown in Figure 1). However, the total mass in the 'long' tracers is \(6.46\times 10^{31}\) g, or 65% of the accreted mass and about 27% of the total mass of the disk. Other late-time models, such as Siegel and Metzger (2018); Fernandez et al. (2019); Christie et al. (2019), indicate late-time outflow can be as much as 40% of the disk mass. Our result, as well as the other literature, indicates that extrapolating total mass in the outflow at late times based on early-time mass flux will introduce inaccuracies of about an order of magnitude. This is likely due to the different velocities of the outflow, as the fast-moving outflow is less massive than the slower-moving outflow.
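The percentages above follow directly from the quoted masses; the short check below uses a nominal solar mass to convert the 0.12 \(M_{\odot}\) disk to grams, so small rounding differences are expected.

```python
M_SUN_G = 1.989e33

accreted_short = 9.57e31         # g, accreted by 0.12 s
accreted_long = 9.81e31          # g, accreted by 1.2 s
mass_short_tracers = 3.557e30    # g, unbound tracer mass at 0.12 s
mass_long_tracers = 6.46e31      # g, unbound tracer mass at 1.2 s
disk_mass = 0.12 * M_SUN_G       # g, initial torus mass

print(mass_short_tracers / accreted_short)  # ~0.037 -> "about 3.7%"
print(mass_long_tracers / accreted_long)    # ~0.66  -> "65% of the accreted mass"
print(mass_long_tracers / disk_mass)        # ~0.27  -> "about 27% of the disk mass"
```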
We take the set of tracers and convert each into a 'trajectory' for use in post-processing nucleosynthesis. A trajectory extends the temperature and density profiles contained in each tracer by assuming a homologous expansion. The simulation of nucleosynthesis for a given trajectory, however, does not start at the point of homologous expansion. Instead, our nucleosynthesis calculations begin at the last time the temperature drops below \(T=10\) GK.
A homologous expansion is implemented as follows. The velocity is assumed to be constant, yielding an increment of the Cartesian coordinates over a time interval \(dt\) of \(dx_{i}=v_{i}\times dt\). The density is extrapolated as a power law, \(\rho\sim\rho_{e}/t^{3}\), where \(\rho_{e}\) is the density at the time of extrapolation. The temperature is extrapolated from the density assuming an ideal gas with \(\Gamma=5/3\).1 As a consequence of these assumptions, the final time points associated with the trajectories are independent of one another and do not interact hydrodynamically (unlike the tracers).
Footnote 1: The ideal gas equation of state is _only_ used to extrapolate the temperature to produce trajectories.
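A minimal sketch of this extrapolation, under our reading of the prescription (constant velocity, \(\rho\propto t^{-3}\) normalized at the extraction time \(t_{e}\), and the ideal-gas adiabat \(T\propto\rho^{\Gamma-1}\) with \(\Gamma=5/3\)); the function and variable names are our own:

```python
import numpy as np

GAMMA = 5.0 / 3.0

def extrapolate_trajectory(t, t_e, x_e, v_e, rho_e, T_e):
    """Homologous extension of one tracer past its extraction time t_e.
    Positions move with constant velocity, the density falls as (t_e/t)^3,
    and the temperature follows the ideal-gas adiabat T ~ rho^(Gamma-1)."""
    x = x_e + np.outer(t - t_e, v_e)            # dx_i = v_i * dt with constant v_i
    rho = rho_e * (t_e / t) ** 3                # rho ~ rho_e / t^3, normalized at t_e
    T = T_e * (rho / rho_e) ** (GAMMA - 1.0)    # => T falls as (t_e/t)^2
    return x, rho, T

# Example: extend a tracer extracted at t_e = 0.5 s out to 2 s.
t = np.linspace(0.5, 2.0, 4)
x, rho, T = extrapolate_trajectory(t, 0.5, np.zeros(3),
                                   np.array([0.1, 0.0, 0.0]),
                                   1.0e5, 8.0)  # rho in g/cm^3, T in GK
print(rho, T)
```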
The end points of the tracers (the starting points of the homologous expansion) vary drastically. This situation arises naturally from the simulation and means that the conditions under which heavy element synthesis proceeds will also show large variation. Our results thus highlight the need for future nuclear sensitivity studies to cover a wide range of conditions, as shown in the recent work of Li et al. (2022).
The additional impact of radioactive heating from nuclear processes can be substantial and can result in a change in the temperature evolution of the trajectory relative to a homologous expansion. Nevertheless, it is expected to be a larger effect for dynamical ejecta than for disk ejecta (Lippuner and Roberts, 2015). For this reason, it will be considered in subsequent work.
### Nuclear inputs
We use Portable Routines for Integrated nucleoSynthesis Modeling (PRISM) to model \(r\)-process nucleosynthesis (Sprouse et al., 2021). The nuclear input to PRISM is based on the 2012 version of the Finite Range Liquid Droplet Model (FRDM) Moller et al. (2012); Moller et al. (2016). Neutron induced reactions, including radiative capture and fission, are calculated with the CoH\({}_{3}\) statistical Hauser-Feshbach code Kawano (2019, 2021a, 2019). Rates of \(\beta\)-decay, \(\beta\)-delayed fission, and the associated probabilities to emit neutrons are calcu
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Name of run & Stop time & Number of tracers & Total Mass & Tracer Mass\({}^{\dagger}\) & \(Y_{e}{}^{\dagger}\) & Entropy\({}^{\dagger}\) & \(T_{0}{}^{\dagger}\) & \(\rho^{\dagger}\) \\ & (s) & & (g) & (g) & & (\(k_{B}\)/baryon) & (GK) & (g cm\({}^{-3}\)) \\ \hline Short & 0.12 & 79556 & \(3.557\times 10^{30}\) & \(4.471\times 10^{25}\) & 0.247 & 19.67 & 1.76 & \(2.05\times 10^{5}\) \\ Long & 1.2 & 461690 & \(6.567\times 10^{31}\) & \(1.422\times 10^{26}\) & 0.146 & 15.49 & -- & \(7.96\times 10^{5}\) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the differences between the short and long runs. Daggered (\({}^{\dagger}\)) quantities are average values over the unbound tracers.
lated assuming a statistical de-excitation from excited states (Mumpower et al., 2016, 2018). The REACLIB database is used for secondary reaction rates (Cyburt et al., 2010). Conditions suitable for a robust fission-recycling \(r\) process are not found in this work. Therefore, a symmetric 50/50 split is used for fission products in order to increase the computational efficiency of PRISM without impacting the resultant nucleosynthesis or any of our conclusions.
## 3 Results
First we describe how the key astrophysical quantities that influence the nucleosynthetic outcome differ when the simulation is run for longer times. We then analyze the nucleosynthesis itself.
In Figure 5 we present the difference in the entropy distribution of the unbound tracers between the long and short runs. The short-duration run has overall less mass ejected, which can be seen from the lower maximum value on the y-axis for traced mass. In addition, the long-duration simulation has a lower average entropy (15.49 \(k_{B}\)/baryon as compared to 19.67 \(k_{B}\)/baryon in the short case). The shift to lower entropy values as the simulation runs longer arises from the different ejection mechanisms: the early-time, fast outflow is more thermally driven (as opposed to viscously driven) and may contain a component of material entrained in the jet.
Also crucial to the resultant nucleosynthesis is the value of the electron fraction at the end of the tracer. Figure 6 compares the distributions of electron fraction between the two cases. We find that the longer duration simulation has significantly lower \(Y_{e}\) than the short duration simulation due largely to viscous material which became unbound later in the simulation. This strong shift to lower \(Y_{e}\) is a harbinger of subsequent heavy element formation.
The additional low entropy and low \(Y_{e}\) tracers that are captured in the long-duration run will have slightly different typical nucleosynthetic evolutions as compared with the short tracers. Firstly, a lower electron fraction, with all else being equal, means more neutrons available for capture on seed nuclei and a more robust r-process. This effect is enhanced by lower entropy which means that material will fall out of equilibrium sooner and experience a more robust r-process. Finally, not only does a lower entropy produce a more robust r-process, it also changes the shape and position of the peaks in the distribution (Mumpower et al., 2012, 2012; Orford et al., 2018; Vassh et al., 2020, 2021).
We now turn to the assumption of homologous expansion and contrast the results between early and late times. The evolution of the temperature and density profiles is critical in the first few seconds as the resultant nucleosynthesis occurs almost entirely in this timescale (Kajino et al., 2019; Sprouse et al., 2022).
In Figure 7 we highlight two individual trajectories. The top panel shows a case where the temperature and density evolution are both altered. In this panel, the long cut (solid) maintains a higher temperature and den
Figure 5: Comparison of entropy distributions of tracers between the short and long runs. The short cut (0.12 s) is shown in blue and long cut (1.2 s) in red. Intermediate snapshots of the entropy distribution are shown between these two snapshots.
Figure 6: Comparison of the electron fraction distributions for the short cut (0.12 s, blue) and the long cut (1.2 s, red). Intermediate snapshots of the \(Y_{e}\) distribution are shown between these two snapshots.
sity for longer than the short cut (dotted). The longer time spent in the 3 GK to 1 GK region means the \(r\) process is "hotter," spending more time in \((n,\gamma)\Longleftrightarrow(\gamma,n)\) equilibrium. In addition, the long-duration tracer spends more time at higher density but, by happenstance, lands on the homologous expansion curve derived from the short tracer.
The bottom panel of Fig. 7 shows a case where the density is orders of magnitude more diffuse in the long run as compared to the short, although the temperature drops off similarly to what one would expect from a homologous expansion using the short cut. In this case, due to the drop in density, the nucleosynthesis in the long cut is less robust than in the short-duration cut.
In general we find the longer cuts behave as a combination of the temperature and density profiles shown in the two panels of Figure 7. On average, the material in the long cut experiences higher densities at later times with a marginally higher temperature evolution as compared with the short cut. The final two columns of Table 1 highlight these differences where the temperatures are roughly comparable but the density is a factor of 4 larger.
Nuclear reactions scale as the square of the density (Rauscher and Thielemann, 2000), so that reaction rates in the long cut are \(\sim 16\) times faster than in the short cut. Furthermore, the higher densities are occurring at later times, when reaction rates are more likely to be out of equilibrium, thus substantially favoring more neutron-rich nucleosynthesis (Mumpower et al., 2012). We find the increase in density at late times to be the primary driver of the differences in the nucleosynthetic outcome between the short and long cuts.
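For reference, since the mean density of the long cut is a factor of roughly 4 above that of the short cut, this quadratic scaling gives
\[\left(\frac{\rho_{\rm long}}{\rho_{\rm short}}\right)^{2}\approx 4^{2}=16,\]
which is the enhancement factor quoted above.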
The final abundances for the total mass ejected in each case are shown in Figure 8. The associated elemental abundances are shown in Figure 9. We find that a more robust \(r\) process ensues as emergent viscous material emanates from the disk. The short scenario produces a first peak, where elements like strontium reside, with reduced third-peak production. In contrast, the long simulation shows a complete \(r\) process through the actinides, albeit with a reduced first peak. While the actinides are produced in substantial quantity, we do not find evidence of fission recycling. Instead, material just makes it to superheavy nuclei (\(A\sim 280\)), which ultimately decay to populate the longer-lived actinides (Holmbeck et al., 2023). In this simulation, superheavy elements (\(Z>103\)) are not found in sufficient quantity to impact a kilonova signal (Holmbeck et al., 2023).
The elemental pattern, in particular, shows abundance regions that are clearly simulation-uncertainty-dominated (large variation between short and long cuts). This spans nearly the entire pattern, from the weak \(r\)-process peak (\(A\sim 80\)) to the lanthanides, the third peak (\(A=195\)), and the actinides, while the second peak (\(A=130\)) remains relatively unaltered.
Figure 8: Mean final isotopic abundances at 1 Gyr from the complete ejecta of a NS-BH accretion disk. Solar data in black.
Figure 7: Differences between the homologous expansion assumption for short and long trajectories. The dotted lines indicate the short run while solid lines indicate the long run. The top panel shows a case where both the \(T_{9}\) and \(\rho\) evolution is greatly impacted. The bottom panel shows a case where the \(\rho\) is greatly impacted.
There are also points that cannot be readily explained by simulation uncertainties, since the results from the different cuts of the simulation cannot account for the remaining discrepancy from the solar residuals. In particular, the lighter elements of a 'weak' \(r\)-process component below \(Z=50\) as well as the transition nuclei that reside between the second \(r\)-process peak and the lighter lanthanides (\(50\lesssim Z\lesssim 60\)), have larger errors from nuclear physics uncertainties than seen from the simulation. Additionally, nuclear physics models like FRDM2012 have a closed \(N=126\) shell far from stability, which in this simulation results in an overproduction of this peak relative to the solar residuals. Relevant nuclear physics uncertainties for \(r\)-process nucleosynthesis have been studied extensively in the works of Mumpower et al. (2016); Vassh et al. (2019); Misch et al. (2021); Mumpower et al. (2022).
We now quantify the error between the 0.12 second and 1.2 second cuts by calculating the percent error, \(\delta=\left|\frac{X_{0.12}-X_{1.2}}{X_{1.2}}\right|\times 100\), where \(X_{j}\) are the respective final mass fractions. Figure 10 shows this value as a function of proton number (top panel) and mass number (bottom panel). The average percent error for both functions is between 450 and 500%, as indicated by the dashed grey lines. Discrepancies can be found throughout the pattern, but are especially pronounced for lighter nuclei. A useful rule of thumb derived from this calculation is that for nuclei \(Z\leq 50\), the error in population is roughly a factor of 6, while for \(Z>50\), the error in population is roughly a factor of 2.
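The percent error can be reproduced from the two sets of final mass fractions as in the sketch below (array names and example values are illustrative only):

```python
import numpy as np

def percent_error(x_short, x_long):
    """delta = |X_0.12 - X_1.2| / X_1.2 * 100, elementwise over Z (or A) bins."""
    x_short = np.asarray(x_short, dtype=float)
    x_long = np.asarray(x_long, dtype=float)
    return np.abs(x_short - x_long) / x_long * 100.0

# Example with made-up mass fractions for three Z bins:
x_short = np.array([6.0e-4, 2.0e-5, 1.0e-6])
x_long = np.array([1.0e-4, 1.0e-5, 2.0e-6])
delta = percent_error(x_short, x_long)
print(delta, delta.mean())  # individual errors and their average
```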
We now address the question of whether or not the stopping point of our simulation at 1.2 seconds (the long cut) is complete. By complete we mean that ejecta has stopped impinging on the extraction surface in sufficient amounts that the electron fraction and other relevant distributions would begin to asymptote, leaving the nucleosynthesis unchanged. To gauge this behavior, we plot in Figure 11 the cumulative mass ejected (grey curve read from the left Y-axis) as a function of time at the extraction surface. The derivative of this quantity, or rate of unbound mass ejection (dashed blue curve), is also shown and can be read from the right Y-axis. While the cumulative mass ejection looks to be slowing down, it is important to note that the salient feature of this curve is its log scale. The bulk of the unbound material arrives at the extraction surface at later times, and the derivative has yet to approach zero. We conclude that simulations must be run for longer times to fully capture the extent of unbound material.
## 4 Conclusion
We have simulated a black hole accretion disk system resulting from a binary neutron star merger for 1.2 seconds using full-transport general relativistic neutrino radiation magnetohydrodynamics (GR\(\nu\)RMHD). We have analyzed the resultant nucleosynthesis, which is greatly impacted as compared with the same simulation cut at 0.12 seconds. While we find the total amount of unbound ejecta has yet to completely asymptote (Figure 11), our results provide the first insights into running nucleosynthesis with a
Figure 10: Percent error in the mean final mass fractions as a function of \(Z\) or \(A\) when using short duration simulation. Average values of these functions are represented by the dashed grey lines.
Figure 9: Mean final elemental abundances at 1 Gyr from the complete ejecta of a NS-BH accretion disk. Solar data in black.
long duration simulation. In particular, we find the emergent viscous material in the plane of the disk to be primarily responsible for the vastly different nucleosynthetic outcome between the short and long duration cuts.
Our work shows that by running simulations to later times, lanthanides are produced in similar proportion to the first peak (weak) \(r\) process. To obtain conditions favorable for lighter element production that is in line with the solar pattern, one needs additional processing via neutrinos (which is not found in our simulation), or some other physical mechanism. Monte Carlo transport in \(\nu\)bhlight is only performed in regions of the engine where weak processes are subdominant compared to fluid motion, i.e., when \(Y_{e}\) has frozen out. However, on the time scale of the longer simulation (1 s), it is possible these slower processes matter, and we may be undercounting them. This is one possible source of unaccounted-for neutrino processing.
We note that late-time lanthanide-rich outflow from this post-merger disk does not change the fact that the fast-moving lanthanide-poor ejecta may produce an early blue component to a kilonova (Miller et al., 2019). Moreover, these results cannot be straightforwardly extended to the collapsar case, where the disk is fed and the thermodynamic conditions vary as a power law with time (Miller et al., 2020).
In the near future, longer-duration high-fidelity simulations will become commonplace. We have shown that late-time modeling is required to fully capture the richness of phenomenology in the nucleosynthesis and neutrino sectors, and we look forward to continued developments in the community to uncover the details regarding the origin of the heavy elements.
We thank Luke Roberts for valuable feedback on the initial draft of this manuscript. This work was supported through the Laboratory Directed Research and Development program under project numbers 20220564ECR and 20230052ER at Los Alamos National Laboratory (LANL). LANL is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001). GCM and KAL acknowledge support by the Department of Energy Office of Nuclear Physics award DE-FG02-02ER41216 and by the the Fission in r-Process Elements (FIRE) topical collaboration in nuclear theory, funded by the U.S. DOE, contract No. DE-AC5207NA27344. GCM acknowledges support by the Network for Neutrinos, Nuclear Astrophysics and Symmetries (N3AS) through the National Science Foundation Physics Frontier Center Grant No. PHY-2020275 and by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research and Office of Nuclear Physics, Scientific Discovery through Advanced Computing program under Award Number DE-SC00268442 (ENAF). This work was partially supported by the Office of Defense Nuclear Nonproliferation Research & Development (DNN R&D), National Nuclear Security Administration, US Department of Energy. KAL gratefully acknowledges the support of the U.S. Department of Energy through the LANL/LDRD program and the Center for Nonlinear Studies for this work. We thank the Institute for Nuclear Theory at the University of Washington for its kind hospitality and stimulating research environment. This research was supported in part by the INT's U.S. Department of Energy grant No. DE-FG02- 00ER41132. This work is approved for unlimited release with LA-UR-23-28417.
|
2301.00195 | Sub-Planck structures and sensitivity of the superposed photon-added or
photon-subtracted squeezed-vacuum states | The Wigner function of the compass state (a superposition of four coherent
states) develops phase-space structures of dimension much less than the Planck
scale, which are crucial in determining the sensitivity of these states to
phase-space displacements. In the present work, we introduce compass-like
states that may have connection to the contemporary experiments, which are
obtained by either adding photons to or subtracting photons from the
superposition of two squeezed-vacuum states. We show that, when a significant
quantity of photons is added (or subtracted), the Wigner function of these
states are shown to have phase-space structures of an area that is
substantially smaller than the Planck scale. In addition, these states exhibit
sensitivity to displacements that is much higher than the standard quantum
limit. Finally, we show that both the size of the sub-Planck structures and the
sensitivity of our states are strongly influenced by the average photon number,
with the photon addition case having a higher average photon number leading to
the smaller sub-Planck structures and, consequently, being more sensitive to
displacement than the photon subtraction case. Our states offer unprecedented
resolution to the external perturbations, making them suitable for quantum
sensing applications. | Naeem Akhtar, Jizhou Wu, Jia-Xin Peng, Wu-Ming Liu, Gao Xianlong | 2022-12-31T13:27:30Z | http://arxiv.org/abs/2301.00195v2 | Sub-Planck structures and sensitivity of the superposed photon-added or photon-subtracted squeezed-vacuum states
###### Abstract
The Wigner function of the compass state (a superposition of four coherent states) develops phase-space structures of dimension much less than the Planck scale \(\hbar\), which are crucial in determining the sensitivity of these states to phase-space displacements. In the present work, we introduce compass-like states that may have a connection to contemporary experiments, obtained by either adding photons to or subtracting photons from the superposition of two squeezed-vacuum states. We show that, when a significant quantity of photons is added (or subtracted), the Wigner functions of these states develop phase-space structures with an area substantially smaller than the Planck scale. In addition, these states exhibit sensitivity to displacements that is much higher than the standard quantum limit. Finally, we show that both the size of the sub-Planck structures and the sensitivity of our states are strongly influenced by the average photon number, with the photon-addition case having a higher average photon number, leading to smaller sub-Planck structures and, consequently, greater sensitivity to displacement than the photon-subtraction case. Our states offer unprecedented resolution to external perturbations, making them suitable for quantum sensing applications.
## I Introduction
Quantum mechanical states can be visualized in the phase space via the Wigner quasiprobability distribution [1; 2; 3; 4; 5]. The term "Gaussian state" refers to a state having a Gaussian Wigner function [6; 7]. The coherent state [8] is an example of a Gaussian state. The Wigner function of the coherent state exhibits the Planck limit [9; 10] in the phase space, which is also known as the standard quantum limit (SQL) or shot-noise limit. The Wigner function of certain non-Gaussian states may attain negative values [11; 12; 13; 14], indicating that these states are nonclassical. Quantum superposition is the source of intriguing non-classical properties of quantum states, such as quantum coherence [11; 15], squeezing [16], and entanglement [17; 18; 19]. Non-classical quantum states play a significant role in quantum-information processing [20], tests of fundamental physics [21; 22; 23], and applications in sensing and metrology [24; 25].
Non-classical states are not always non-Gaussian; in some cases a non-classical state can be Gaussian. For example, a squeezed-vacuum state (SVS) is a common non-classical state, but it possesses a Gaussian Wigner function [12; 13]. The Wigner function of a superposition of SVSs is non-Gaussian and may have negative amplitudes [26; 27]. Squeezed quantum states play an important role in performing enhanced quantum metrology [16; 28]. Squeezed light has been utilized experimentally to carry out improved measurements [29; 30].
Superpositions of two coherent states with opposite phases (cat states) also possess non-Gaussian Wigner functions [31; 32]. Moreover, the superposition of four coherent states, known as the "compass state" [33], exhibits nonclassical features in the Wigner function with dimensions far smaller than the SQL. Quantum states with sub-Planck structures are found to be very sensitive to environmental decoherence [34] and have received prominent theoretical attention in quantum metrology [34; 35; 36; 37; 38]. The connection between sub-Planck structures and teleportation fidelity has been established [39]. Sub-Fourier sensitivity is a classical analogue of the sub-Planck structures [40]. Compass-like states have been thoroughly investigated in several situations [41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56]. Both theoretical [57; 58; 59; 60; 61; 62] and experimental [63; 64; 65; 66] studies have been undertaken to achieve the controlled generation of such states.
In recent years, there has been a lot of focus on subtracting photons from or adding photons to the quantum states [67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79]. A non-Gaussian state can also be generated by adding or subtracting photons from a Gaussian state. For example, when photons are added to or subtracted from the Gaussian SVS, one may obtain two non-Gaussian squeezed states that have non-positive Wigner functions [72; 73; 74; 75; 77; 79; 80]: photon
added squeezed-vacuum states (PASVS) [69] and photon-subtracted squeezed-vacuum states (PSSVS) [75]. Both PASVS and PSSVS have attracted theoretical interest in quantum metrology [81; 82; 83].
The first theoretical investigation into the photon-addition operation was accomplished by adding photons to coherent states [84]. Later, the photon-addition operation was successfully demonstrated in experiments using a non-degenerate parametric amplifier with weak coupling [85]. The PSSVS is currently the most successfully observed non-Gaussian SVS in quantum optics experiments [86; 87].
Schrodinger cat-like states with higher amplitude can be used as qubits in quantum computing or as resources for quantum error-correcting coding [88; 89]. Conventional methods are unable to produce Schrodinger cat-like states with the necessary amplitudes [90; 91; 92]. Numerous theoretical [93; 94; 95] and experimental [96; 97] studies involving photon addition or subtraction from the SVS have been carried out to achieve such states.
In the present work, we introduce a few non-Gaussian SVSs that may also hold the properties of the compass state. In particular, we show that the Wigner functions of the superpositions of two PASVSs (or PSSVSs) exhibit phase-space structures whose area varies inversely with the number of photons added (or subtracted). When a large number of photons are added to or subtracted from our states, the support area of these structures is substantially smaller than that found for coherent states. Similar sub-Planck structures are also found in the phase space of the mixed states related to the PASVSs and PSSVSs. We demonstrate that the average photon number in the states significantly influences the size of these sub-Planck structures, with the photon-addition case having a higher average photon number and hence smaller sub-Planck structures in the phase space than the photon-subtraction case.
To investigate the potential applications of these non-Gaussian states in quantum metrology, we analyze the overlap between these states and their slightly shifted analogues [98]. The degree to which a state is sensitive to perturbations in the phase space can be determined from this overlap. The sensitivity associated with coherent states cannot be improved by increasing the number of photons; techniques using probes prepared in such states have a sensitivity at the SQL [99; 100]. Here, we show that the sensitivity of our states is much higher than the SQL when the quantity of added (or subtracted) photons is relatively high. Furthermore, our superpositions exhibit this enhanced sensitivity in all phase-space directions, whereas the mixtures only do so for specific displacements. The varying average photon number in the states also contributes to the variation in the sensitivities between the photon-addition and photon-subtraction cases; it is shown that the photon-addition cases have higher sensitivity than the photon-subtraction ones.
The structure of our paper is as follows. In §II, we review the concept of the sub-Planck structures associated with the compass state. In §III, we review the Wigner functions of the PASVS and PSSVS. In §IV, we introduce our states and analyze their phase space by using the Wigner function. Here, we also discuss the sensitivity of our states against phase-space perturbations. In §V, we provide our conclusion.
## II Theory of sub-Planck structures
This section provides the background of the sub-Planck structures and is organized as follows. §II.1 introduces the basic concepts that will be used in this article. In §II.2, we review the sub-Planck structures that arise in the phase space of the compass state. §II.3 explains the sensitivity to phase-space displacements associated with this compass state.
### Basic concepts
The position operator \(\hat{x}\) and the momentum operator \(\hat{p}\) act on an infinite-dimensional Hilbert space, forming the so-called Heisenberg-Weyl (HW) algebra \(\mathfrak{hpr}(1)\)[101; 102; 103] for a single degree of freedom. The quantum uncertainty principle [9; 10], arising from the commutation relation \([\hat{x},\hat{p}]=\mathrm{i}\) (with \(\hbar\) being scaled to unity throughout), limits the size of a phase-space structure [10], for example, represented by the Wigner function [1] for the \(\mathfrak{hpr}(1)\) algebra and, more generally, by Moyal symbols [104] for other symmetries [102]. For convenience, we use the vector
\[\mathbf{\zeta}:=(x,p)^{\top}, \tag{1}\]
to represent the position-momentum pair in the following.
A Schrodinger coherent state is a non-spreading wave packet of the quantum harmonic oscillator [8] and is an eigenstate of the annihilation operator: \(\hat{a}\ket{\alpha}=\alpha\ket{\alpha}\) with \(\alpha\in\mathbb{C}\). The coherent states are obtained by displacing the vacuum state \(\ket{0}\), i.e.,
\[\ket{\alpha}=\hat{D}(\alpha)\ket{0}, \tag{2}\]
where
\[\hat{D}(\alpha):=\exp(\alpha\hat{a}^{\dagger}-\alpha^{*}\hat{a}), \tag{3}\]
is the displacement operator [103].
The overlap between two coherent states \(\ket{\alpha}\) and \(\ket{\beta}\) is [105]
\[|\langle\alpha|\beta\rangle|^{2}=\mathrm{e}^{-|\alpha|^{2}-|\beta|^{2}+2\,\mathrm{Re}(\beta^{*}\alpha)}=\mathrm{e}^{-|\alpha-\beta|^{2}}, \tag{4}\]
which implies that two different coherent states are not orthogonal.
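As a quick numerical cross-check of the final equality in Eq. (4), the overlap can be computed from the Fock-basis expansion of the coherent states; the snippet below is an illustrative sketch (the truncation dimension and the test values of \(\alpha\) and \(\beta\) are arbitrary choices, not taken from the text).

```python
# Numerical check of Eq. (4): coherent-state amplitudes in the Fock basis are
# c_m = exp(-|alpha|^2/2) alpha^m / sqrt(m!); the squared overlap should match
# exp(-|alpha - beta|^2).  Truncation dimension and test values are arbitrary.
import numpy as np
from math import factorial

def coherent(alpha, dim=60):
    m = np.arange(dim)
    norms = np.sqrt([float(factorial(int(k))) for k in m])
    return np.exp(-abs(alpha)**2 / 2) * alpha**m / norms

a, b = 1.2 + 0.3j, 0.4 - 0.5j
print(abs(np.vdot(coherent(a), coherent(b)))**2)   # numerical overlap
print(np.exp(-abs(a - b)**2))                      # closed form of Eq. (4)
```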
The Wigner function for a generic quantum state \(\hat{\rho}\) is written as an expectation value of the parity kernel [4; 6]
\[W_{\hat{\rho}}\left(\mathbf{\zeta}\right):=\mathrm{tr}\left[\hat{\rho}\hat{ \Delta}(\alpha)\right], \tag{5}\]
where
\[\hat{\Delta}(\alpha):=2\hat{D}(\alpha)\hat{\Pi}\hat{D}^{\dagger}(\alpha),\,\hat{ \Pi}:=(-1)^{\hat{a}^{\dagger}\hat{a}}\,, \tag{6}\]
being the displaced parity operator.
The Wigner function for a coherent state is a strictly positive function and appears as a Gaussian of the form [5] (we omit the normalization of states and their Wigner functions throughout the paper)
\[G(\mathbf{\zeta};\pm x_{0},\pm p_{0})=\mathrm{e}^{-(x\mp x_{0})^{2}-(p\mp p_{0})^{ 2}}, \tag{7}\]
where \((x_{0},p_{0})\) is the location of the coherent state in phase space. The product of uncertainties of position and momentum for a coherent state has a lower limit \(\Delta x\Delta p=\nicefrac{{1}}{{2}}\)[5; 9; 10; 105], which is also known as the _Planck action_ in the phase space.
It is a common belief that phase-space structures with areas smaller than the Planck scale either do not exist or have no observational consequences for physical quantum states. In fact, this is true for all Gaussian states (coherent, squeezed, thermal, etc.) [6; 7] and even for other non-Gaussian states like cat states [31; 32] that exhibit rapid oscillations in one direction of phase space but an infinite Gaussian profile in the orthogonal direction [54]. However, this notion was refuted by Zurek [33], who demonstrated that the Wigner function of compass states develops phase-space structures with dimensions far smaller than the Planck scale, arguing that these structures play a vital role in determining the sensitivity of these states against perturbations.
### Zurek compass state
The Zurek compass state [33] is obtained from the superposition of the following four coherent states
\[\ket{\psi}:=\ket{x_{0}/\sqrt{2}}+\ket{-x_{0}/\sqrt{2}}+\ket{\mathrm{i}x_{0}/\sqrt{2}}+\ket{-\mathrm{i}x_{0}/\sqrt{2}}\,, \tag{8}\]
with \(x_{0}\in\mathbb{R}\). Fig. 1 depicts the Wigner function for this compass state for the cases of \(x_{0}=4,8\) and \(12\). Note that we normalize the Wigner functions throughout by using their maximum amplitudes, \(|W_{\hat{\rho}}(0)|\).
Figure 1: Wigner distribution of the compass state with (a) \(x_{0}=4\) (b) \(x_{0}=8\) and (c) \(x_{0}=12\). Insets represent the central interference pattern of each case.
Figure 2: Variation of the area of the central phase-space structure of the compass state versus \(x_{0}\).
The Wigner function of the compass state (8) can be represented as follows
\[W_{|\psi\rangle}(\mathbf{\zeta})=W_{\circ}(\mathbf{\zeta})+W_{\Xi}(\mathbf{\zeta})+W_{\boxplus}(\mathbf{\zeta}), \tag{9}\]
where the first term
\[W_{\circ}(\mathbf{\zeta}):= G(\mathbf{\zeta};x_{0},0)+G(\mathbf{\zeta};-x_{0},0)+G(\mathbf{\zeta};0,x_{0})+\] \[G(\mathbf{\zeta};0,-x_{0}), \tag{10}\]
represents the Wigner function of four coherent states that appear in the phase space as Gaussian lobes. The second term in Eq. (9) is
\[W_{\Xi}(\mathbf{\zeta}):=\frac{1}{2}\sum_{i_{1},i_{2}=\pm 1}I(i_{1}x,i_{2}p), \tag{11}\]
with
\[I(\mathbf{\zeta}):=G(\mathbf{\zeta};\nicefrac{{x_{0}}}{{2}},\nicefrac{{x_{0}}}{{2}}) \cos\Big{[}x_{0}\left(x+p-\frac{x_{0}}{2}\right)\Big{]}, \tag{12}\]
reflecting the Gaussian-modulated oscillations that appear far away from the phase-space origin.
The central pattern resembles a chessboard as shown in the insets of Fig. 1 and is generated by
\[W_{\boxplus}(\mathbf{\zeta}):=\frac{1}{2}G(\mathbf{\zeta};0,0)\Big{[}\cos(2x_{0}x)+\cos(2x_{0}p)\Big{]}. \tag{13}\]
This pattern consists of tiles with alternating signs (the central chessboardlike pattern). The extension of each tile can be roughly estimated by calculating the zeros of Eq. (13), and it is found to be proportional to \(x_{0}^{-1}\) in all directions of phase space. As a result, the support area of each tile in the chessboardlike pattern is proportional to \(x_{0}^{-2}\), as shown in the log-log plot of the central support area versus \(x_{0}\) in Fig. 2, and is much smaller than the area of the coherent state for \(x_{0}\gg 1\). Note that the mixture of two cat states also contains the same sub-Planck structures that are found in the compass state [54].
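Eqs. (7)-(13) are simple enough to evaluate numerically; the sketch below (plain numpy, with the value of \(x_{0}\) and the probe points chosen only for illustration) assembles the three contributions of Eq. (9), shows the sign alternation of the central chessboard one tile away along the diagonal, and prints the rough tile half-width \(\pi/(4x_{0})\) given by the first zero of \(\cos(2x_{0}x)\), consistent with the \(x_{0}^{-2}\) scaling of the tile area in Fig. 2.

```python
# Minimal numerical sketch of Eqs. (7)-(13): the compass-state Wigner function
# assembled from its Gaussian lobes, the outer interference term, and the
# central chessboard.  x0 is an illustrative choice; evaluating W at the
# origin and one chessboard spacing away along the diagonal exposes the sign
# alternation, and the tile half-width scales as 1/x0 (cf. Fig. 2).
import numpy as np

def G(x, p, x0, p0):                                    # Eq. (7)
    return np.exp(-(x - x0)**2 - (p - p0)**2)

def wigner_compass(x, p, x0):
    lobes = (G(x, p, x0, 0) + G(x, p, -x0, 0)
             + G(x, p, 0, x0) + G(x, p, 0, -x0))        # Eq. (10)
    outer = sum(0.5 * G(i1*x, i2*p, x0/2, x0/2)
                * np.cos(x0*(i1*x + i2*p - x0/2))
                for i1 in (1, -1) for i2 in (1, -1))    # Eqs. (11)-(12)
    centre = 0.5 * G(x, p, 0, 0) * (np.cos(2*x0*x) + np.cos(2*x0*p))  # Eq. (13)
    return lobes + outer + centre                       # Eq. (9)

x0 = 8.0
d = np.pi / (2 * x0)                # one chessboard spacing along each axis
print(wigner_compass(0.0, 0.0, x0), wigner_compass(d, d, x0))
print("tile half-width ~", np.pi / (4 * x0))
```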
The sub-Planck structures also emerge in the Wigner functions of non-Gaussian states with the SU(1,1) [53] and SU(2) symmetries [54]. In particular, it has been found that the Wigner function of the superposition of four SU(1,1) (or SU(2)) coherent states also have sub-Planck structures similar to the compass state when represented on the Poincare disk [53] (or the sphere [54]). The two-mode bosonic realization of the SU(1,1) implies that the sub-Planck structures in the phase space of the SU(1,1) compass state can be associated to the number of photons added to one of the modes of the two-mode squeezed number states [53], and they arise at greater numbers of these added photons. The existence of the sub-Planck structures in the Wigner function of the SU(2) compass state can similarly be linked to the angular momentum; the higher value of the angular momentum causes sub-Planck structures in the phase space [54]. In the subsequent sections, we will demonstrate how adding or subtracting photons from superpositions related to the one-mode non-Gaussian SVS can also cause the emergence of sub-Planck structures in the phase space of those states.
### Sensitivity of compass state
The sensitivity of a quantum state to displacements can be determined by calculating the overlap between it and its slightly displaced version [98]. The overlap between a state \(\hat{\rho}\) and its displaced version \(\hat{D}(\delta\alpha)\hat{\rho}\hat{D}^{\dagger}(\delta\alpha)\) is
\[O_{\hat{\rho}}(\delta\alpha):=\mathrm{tr}\{\hat{\rho}\hat{D}(\delta\alpha)\hat {\rho}\hat{D}^{\dagger}(\delta\alpha)\}=\Big{|}\langle\psi|\hat{D}(\delta \alpha)|\psi\rangle\Big{|}^{2}, \tag{14}\]
where \(\delta\alpha\in\mathbb{C}\) is an arbitrary displacement. Note that the last equality of the above expression holds when the state is pure, \(\hat{\rho}=|\psi\rangle\langle\psi|\). The smaller the displacement \(\delta\alpha\) needs to be in order to bring the overlap to zero, the more sensitive the state is said to be to displacements [37].
This overlap results in
\[O_{|\alpha\rangle}(\delta\alpha)=\mathrm{e}^{-|\delta\alpha|^{2}}, \tag{15}\]
for a coherent state \(|\alpha\rangle\), indicating that the smallest noticeable displacement that makes this overlap vanish is above
the Planck scale, \(|\delta\alpha|>1\). It is interesting to note that the sensitivity to displacements of coherent states is independent of the quantity of quanta contained in the state, \(\bar{n}=\langle\hat{a}^{\dagger}\hat{a}\rangle=|\alpha|^{2}\). Therefore, increasing \(\bar{n}\) will not improve the sensitivity, which is solely limited by the _shot noise_ introduced by vacuum fluctuations [99; 100].
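For later reference, the overlap (14) can also be checked numerically for any pure state expressed in a truncated Fock basis; the sketch below is an illustrative implementation (the truncation dimension and the test values of \(\alpha\) and \(\delta\alpha\) are arbitrary) that builds the displacement operator as a matrix exponential and reproduces the coherent-state result of Eq. (15).

```python
# Numerical version of the overlap (14) for a pure state in a truncated Fock
# basis, checked against the coherent-state result (15).  The truncation
# dimension and the test values of alpha and delta are arbitrary.
import numpy as np
from scipy.linalg import expm

dim = 60
a = np.diag(np.sqrt(np.arange(1, dim)), 1)            # annihilation operator

def displacement(z):                                   # Eq. (3)
    return expm(z * a.conj().T - np.conj(z) * a)

def overlap(psi, delta):                               # Eq. (14), pure state
    return np.abs(np.vdot(psi, displacement(delta) @ psi))**2

alpha, delta = 2.0, 0.3 + 0.2j
coh = displacement(alpha) @ np.eye(dim)[:, 0]          # |alpha> = D(alpha)|0>, Eq. (2)
print(overlap(coh, delta), np.exp(-abs(delta)**2))     # both should match Eq. (15)
```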
We now discuss the sensitivity of the compass state (8) to phase-space displacements. Assuming \(x_{0}\gg 1\) and \(|\delta\alpha|\ll 1\), the overlap (14) for this compass state results in
\[O_{|\psi\rangle}(\delta\alpha)=\frac{1}{4}\mathrm{e}^{-\frac{1}{2}|\delta \alpha|^{2}}\big{[}\cos\left(x_{0}\delta_{x}\right)+\cos\left(x_{0}\delta_{p} \right)\big{]}^{2}, \tag{16}\]
Figure 4: Wigner distribution of the PASVS with (a) \(n=10\) (b) \(n=15\) and (c) \(n=20\). In all cases \(r=0.5\). Insets represent the central interference pattern of each case.
Figure 5: Wigner distribution of the PSSVS with (a) \(n=10\) (b) \(n=15\) and (c) \(n=20\). In all cases \(r=0.5\). Insets represent the central interference pattern of each case.
with
\[\delta\alpha=\delta_{x}+\mathrm{i}\delta_{p},\ \ \delta_{j}\in\mathbb{R}. \tag{17}\]
It can be concluded that \(O_{\left|\psi\right>}(\delta\alpha)\) becomes zero when either of the conditions is satisfied
\[\delta_{x}\pm\delta_{p}=\frac{2m+1}{x_{0}}\pi,\ m\in\mathbb{Z}. \tag{18}\]
As illustrated in Fig. 3 for \(O_{\left|\psi\right>}(\delta\alpha)\) with \(x_{0}\) increased from 4 to 12, the overlap vanishes for displacements \(\left|\delta\alpha\right|\sim x_{0}^{-1}\) along arbitrary directions in the phase space. As a result, it can be inferred that this sensitivity is proportional to \(x_{0}^{-1}\) and is tied to the number of excitations through \(\bar{n}=\nicefrac{{x_{0}^{2}}}{{2}}\). Therefore, in comparison to coherent states, a compass state with \(\bar{n}\) excitations shows a \(\sqrt{\bar{n}}\)-enhanced sensitivity to displacements along any direction in the phase space. Weak force measurements have been performed with Heisenberg-limited sensitivity using compass states [36; 37]. In contrast, cat states show sensitivity to displacements only along a specific direction in the phase space [54]. It has been found that cat-state mixtures likewise exhibit this enhanced sensitivity only for displacements along particular phase-space directions [53; 54]. Hence, cat-state mixtures with sub-Planck structures in the Wigner function do not have the metrological potential of compass states, for which the additional quantum coherence of the cat-state superposition provided by the second term in Eq. (9) plays a crucial role.
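A short numerical reading of Eq. (16) makes this enhancement concrete; in the snippet below (illustrative values of \(x_{0}\) and of the displacement), a displacement of order \(1/x_{0}\) already satisfies the zero condition (18), while the coherent-state overlap (15) for the same displacement is still close to unity.

```python
# Quick numerical reading of Eq. (16): for x0 >> 1 a displacement of order
# 1/x0 already drives the compass-state overlap to zero (condition (18)),
# while the coherent-state overlap (15) for the same displacement stays close
# to one.  The value of x0 is an illustrative choice.
import numpy as np

def overlap_compass(dx, dp, x0):          # Eq. (16)
    return 0.25*np.exp(-0.5*(dx**2 + dp**2))*(np.cos(x0*dx) + np.cos(x0*dp))**2

x0 = 12.0
d = np.pi / (2*x0)                        # dx + dp = pi/x0, i.e. Eq. (18) with m = 0
print(overlap_compass(d, d, x0))          # ~ 0
print(np.exp(-2*d**2))                    # coherent-state overlap, Eq. (15), ~ 1
```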
The SU(1,1) and SU(2) compass states have shown the same sensitivity to displacements as their HW counterparts [53; 54]. The sensitivity of the SU(1,1) compass state can be connected with the number of photons added to one of the modes of the two-mode squeezed number state, and this sensitivity improves as the quantity of added photons increases [53]. Similarly, the sensitivity of the SU(2) compass state improves as the angular momentum grows [54]. This addition of photons increases the average photon number in the states, and it can be understood in a way similar to that for compass states of the harmonic oscillator, i.e., injecting more photons into the states improves their sensitivity.
## III Non-Gaussian SVS
The non-Gaussian Wigner functions of the PASVS and PSSVS illustrate the non-Gaussian nature of these states [77; 79; 72; 73; 74; 75]. In this section, we provide a brief review of two non-Gaussian SVSs, the PASVS and the PSSVS, in §III.1 and §III.2, respectively. These two states are heavily used in our construction of the non-Gaussian states that manifest the sub-Planck structures in §IV. In the following subsections, the Wigner functions of both the PASVS and PSSVS are discussed in relation to the number of photons added or subtracted.
### PASVS
First, we review the Wigner function of the PASVS. The creation operator \(\hat{a}^{\dagger}\) is repeatedly applied to SVS \(\hat{S}(\pm r)\left|0\right>\) to obtain a single-mode PASVS [69]
\[\left|\psi_{\mathrm{PA}}^{\pm}\right>:=\hat{a}^{\dagger n}\hat{S}(\pm r) \left|0\right>\ \mathrm{with}\,n\in\mathbb{N}. \tag{19}\]
The subscript "PA" is shorthand for "PASVS", and we introduce "\(\pm\)" in the squeezing operator \(\hat{S}(\pm r)\) with the definition [106]
\[\hat{S}(\pm r):=\!\exp\bigg{[}\pm\frac{r}{2}\left(\hat{a}^{\dagger 2}-\hat{a}^{ 2}\right)\bigg{]}. \tag{20}\]
which allows us to preserve part of the expressions introduced in this section for later use in §IV.
Using Eq. (5), the Wigner function of PASVS is easily found to be [77; 79; 72; 73]
\[W_{\left|\psi_{\mathrm{PA}}^{\pm}\right>}(\mathbf{\zeta})= \frac{\exp\left(\chi_{\pm}\right)\left[\pm\sinh(2r)\right]^{n}}{ \pi 4^{n}}\sum_{l=0}^{n}\frac{\left(n!\right)^{2}\left[\mp 2\coth(r)\right]^{l}}{l! \left[(n-l)!\right]^{2}}\] \[\left|H_{n-l}\left[-\mathrm{i}\sqrt{\pm 2\coth(r)}\bar{\alpha}_{ \pm}\right]\right|^{2}, \tag{21}\]
where \(H_{m}\) represents the Hermite polynomial, and
\[\chi_{\pm}:=\pm\sinh(2r)\left(\alpha^{*2}+\alpha^{2}\right)-2|\alpha|^{2} \cosh(2r), \tag{22}\]
with
\[\bar{\alpha}_{\pm}:=\alpha\cosh(r)\mp\alpha^{*}\sinh(r). \tag{23}\]
The non-Gaussian shape of the Wigner function \(W_{\left|\psi_{\mathrm{PA}}^{+}\right>}(\mathbf{\zeta})\), shown in Fig. 4 for the cases when \(n\) is chosen as 10, 15 and 20, indicates that the PASVS is a non-Gaussian state. We can clearly see the interference pattern that emerges in the form of oscillations along the \(p\) direction in the phase space. As the number of photons \(n\) rises, this pattern becomes more pronounced (the frequency of the oscillations increases). Moreover, the existence of negative peaks in the Wigner function shows that the PASVS is a non-classical state as well. Another indication of the nonclassicality of this state is the squeezing effect in one of the quadratures, which is visible in the plots. Note that the PASVS reduces to the Gaussian SVS when \(n=0\).
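Eq. (21) lends itself to a direct numerical evaluation; the sketch below (plain numpy, the "+" branch only, normalization omitted as in the text, and parameter values chosen arbitrarily) evaluates the Hermite polynomials of complex argument with numpy.polynomial.hermite.hermval. The analogous PSSVS expression in the next subsection follows from the same routine by replacing \(\coth(r)\) with \(\tanh(r)\).

```python
# Illustrative evaluation of Eq. (21) ("+" branch) with plain numpy;
# normalization is omitted, as in the text, and the parameter values are
# arbitrary.  Hermite polynomials of complex argument are evaluated with
# numpy.polynomial.hermite.hermval.
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

def hermite(m, z):
    c = np.zeros(m + 1); c[m] = 1.0
    return hermval(z, c)

def wigner_pasvs_plus(alpha, n, r):
    coth = 1.0 / np.tanh(r)
    chi = np.sinh(2*r)*(np.conj(alpha)**2 + alpha**2) - 2*np.abs(alpha)**2*np.cosh(2*r)  # Eq. (22)
    abar = alpha*np.cosh(r) - np.conj(alpha)*np.sinh(r)                                  # Eq. (23)
    s = 0.0
    for l in range(n + 1):
        coef = factorial(n)**2 * (-2*coth)**l / (factorial(l) * factorial(n - l)**2)
        s += coef * np.abs(hermite(n - l, -1j*np.sqrt(2*coth)*abar))**2
    return np.real(np.exp(chi) * np.sinh(2*r)**n / (np.pi * 4**n) * s)

# Example: a few points along the p axis for r = 0.5, n = 10 (cf. Fig. 4)
for p in (0.0, 0.2, 0.4):
    print(p, wigner_pasvs_plus(1j*p, 10, 0.5))
```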
### PSSVS
Now, we review the Wigner function of the PSSVS. The PSSVS [74; 75] is obtained by repeatedly applying the annihilation operator \(\hat{a}\) to the SVS as
\[\left|\psi_{\mathrm{PSV}}^{\pm}\right>:=\hat{a}^{n}\hat{S}(\pm r)\left|0\right>, \tag{24}\]
where the subscript "PS" is the shorthand for "PSSVS". The Wigner function of this state is also in a non-Gaussian form, and is written as
\[W_{|\psi_{\mathrm{PSVS}}^{\pm}\rangle}(\mathbf{\zeta})= \frac{\exp\left(\chi_{\pm}\right)[\pm\sinh(2r)]^{n}}{\pi 4^{n}}\sum_{l=0}^{n}\frac{\left(n! \right)^{2}[\mp 2\tanh(r)]^{l}}{l!\left[\left(n-l\right)!]^{2}}\] \[\left|H_{n-l}\left[-\mathrm{i}\sqrt{\pm 2\tanh(r)}\bar{\alpha}_{ \pm}\right]\right|^{2}. \tag{25}\]
We plot the Wigner function \(W_{|\psi_{\mathrm{PSVS}}^{\pm}\rangle}(\mathbf{\zeta})\) in Fig. 5 with \(r\) being \(0.5\) and \(n\) being \(10\), \(15\), and \(20\). This Wigner function exhibits an interference pattern around the origin of the phase space that oscillates along the \(p\) direction. As \(n\) grows, the frequency of this oscillating pattern increases. The simplest case, \(n=0\), of the PSSVS corresponds to the Gaussian SVS. The squeezing effect in one of the quadratures is another indicator of the non-classical nature of this state.
In summary, for non-zero values of \(n\), the Wigner functions of the PASVS and PSSVS remain non-Gaussian. It is interesting to note that both the PASVS and PSSVS exhibit phase-space features similar to cat-like states. The addition or subtraction of photons from the Gaussian SVS has been employed both theoretically [93; 94; 95] and experimentally [96; 97] to produce cat-like states. Both the PASVS and PSSVS have been found very useful for quantum metrology [81; 82; 83]. When photons are added to or subtracted from the Gaussian SVS, the average photon number of the resulting state grows [82; 83]. It has been shown that, for the same number of photons applied to the Gaussian SVS, the resulting PASVS has a higher average photon number than the PSSVS [82; 83]. This means that the PASVS has a better potential for metrology than the PSSVS [82].
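The claim about the average photon numbers can be illustrated with a small truncated-Fock-basis computation; the sketch below (arbitrary choices \(r=0.5\), \(n=10\), and truncation dimension 200, not values prescribed by the text) constructs \(\hat{S}(r)\ket{0}\) numerically, applies \(\hat{a}^{\dagger n}\) or \(\hat{a}^{n}\) as in Eqs. (19) and (24), and compares \(\langle\hat{a}^{\dagger}\hat{a}\rangle\) for the two normalized states.

```python
# Toy truncated-Fock-basis check of the average-photon-number statement:
# build S(r)|0>, apply the creation or annihilation operator n times as in
# Eqs. (19) and (24), and compare <a^dag a>.  r, n and the truncation
# dimension are arbitrary illustrative choices.
import numpy as np
from scipy.linalg import expm

dim, r, n = 200, 0.5, 10
a = np.diag(np.sqrt(np.arange(1, dim)), 1)        # annihilation operator
adag = a.conj().T
S = expm((r/2) * (adag @ adag - a @ a))           # squeezing operator, Eq. (20)
vac = np.zeros(dim); vac[0] = 1.0
svs = S @ vac                                     # Gaussian SVS

def mean_n(psi):
    psi = psi / np.linalg.norm(psi)
    return np.real(np.vdot(psi, adag @ a @ psi))

pasvs = np.linalg.matrix_power(adag, n) @ svs     # Eq. (19), up to normalization
pssvs = np.linalg.matrix_power(a, n) @ svs        # Eq. (24), up to normalization
print("PASVS <n> =", mean_n(pasvs), " PSSVS <n> =", mean_n(pssvs))
```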
## IV Superposition of non-Gaussian SVS
In this section, we introduce the quantum states of our interest and present their phase-space analysis by using the Wigner function [1; 2; 3; 4; 5]. The photon-number distribution [27] and the Wigner function [26] have both been used to discuss the nonclassicality in the superpositions
Figure 6: Wigner distribution of the pure SPASVS with (a) \(n=10\) (b) \(n=15\) and (c) \(n=20\). In all cases \(r=0.5\).
Insets represent the central interference pattern of each case.
Figure 7: Variation of the area of the central phase-space structure versus the photon number \(n\) of the SPASVS state with \(n\) chosen from \(5\) to \(100\).
of two SVSs. Theoretically, a non-linear harmonic oscillator can be used to create some specified superpositions of two SVSs [27]. Here, we focus on the superposition of two Gaussian SVSs with opposite phases given by
\[\ket{\psi_{\text{SSV}}}:=\hat{S}(r)\ket{0}+\hat{S}(-r)\ket{0}, \tag{26}\]
with "SSV" in the subscript standing for the "superposition of SVSs". The Wigner function corresponding to this state exhibits non-Gaussian and non-classical properties [26; 27].
The addition of \(n\) photons to the superposed state (26) leads to the superposition of two photon-added squeezed-vacuum states (SPASVS), that is,
\[\ket{\psi_{\text{SPA}}}:=\hat{a}^{\dagger n}\ket{\psi_{\text{SSV}}}=\ket{\psi_ {\text{PA}}^{+}}+\ket{\psi_{\text{PA}}^{-}} \tag{27}\]
with the subscript "SPA" a shorthand for "SPASVS". Similarly, the subtraction of \(n\) photons from the superposition (26) results in the superposition of two photon-subtracted squeezed-vacuum states (SPSVS) of the following form:
\[\ket{\psi_{\text{SPS}}}:=\hat{a}^{n}\ket{\psi_{\text{SSV}}}=\ket{\psi_{\text{ PS}}^{+}}+\ket{\psi_{\text{PS}}^{-}}, \tag{28}\]
where the subscript "SPS" is the short form of "SPSSVS".
The following subsections discuss these two superpositions and are structured as follows. In §IV.1, we discuss the Wigner functions corresponding to \(\ket{\psi_{\text{SPA}}}\) and \(\ket{\psi_{\text{SPS}}}\). Here, we describe how the addition and subtraction of photons lead to sub-Planck structures in the phase space. In §IV.2, we discuss the sensitivity to displacements associated with these two superpositions.
### Photon addition versus photon subtraction
The Wigner function of the SPASVS (27) can be obtained by using Eq. (5) as (see Appendix A for detailed derivations)
\[W_{\left|\mathrm{SPA}\right\rangle}(\mathbf{\zeta})=2\operatorname{Re}\left[I_{\Xi}(\mathbf{\zeta})\right]+W_{\boxplus}(\mathbf{\zeta}), \tag{29}\]
and is shown in Fig. 6 for the cases when \(n=10,15\) and \(20\). The first term in (29)
\[I_{\Xi}(\mathbf{\zeta}):= \frac{\exp\left(\xi\right)\left[-\mathrm{i}\tanh(2r)\right]^{n}}{\pi 4^{n}\cosh(r)\sqrt{1+\tanh^{2}(r)}}\sum_{l=0}^{n}\frac{(n!)^{2}\left[-2\mathrm{i}\coth(r)\right]^{l}}{[(n-l)!]^{2}}\] \[H_{n-l}\left[\mathrm{i}\Omega\alpha_{-}\right]H_{n-l}\left[-\Omega\alpha_{+}^{*}\right], \tag{30}\]
provides the interference pattern that appears far away from the phase-space origin, where
\[\Omega:=\frac{\sqrt{\tanh(2r)}}{\sinh(r)}, \tag{31}\]
and
\[\xi:=-\tanh(2r)\left(\alpha^{2}-\alpha^{2*}\right)-2|\alpha|^{2}\operatorname {sech}(2r), \tag{32}\]
with
\[\alpha_{\pm}:=\alpha^{*}\sinh(r)\pm\alpha\cosh(r) \tag{33}\]
being the hyperbolic-rotated \(\alpha\).
For our purposes, we concentrate on the second term in Eq. (29), which contributes to the chessboardlike pattern that is visible at the phase-space origin for \(n\gg 1\). This
Figure 8: Wigner distribution of the mixed-state SPASVS with (a) \(n=10\) (b) \(n=15\) and (c) \(n=20\). In all cases \(r=0.5\). Insets represent the central interference pattern of each case.
central interference pattern is equal to the sum of the individual Wigner functions of PASVSs as
\[W_{\boxplus}(\mathbf{\zeta}):=W_{\left|\psi_{\mathrm{PA}}^{+}\right\rangle}(\mathbf{\zeta} )+W_{\left|\psi_{\mathrm{PA}}^{-}\right\rangle}(\mathbf{\zeta}). \tag{34}\]
The extension of a single tile in the chessboardlike pattern is inversely proportional to \(n\) along any direction in the phase space. This is demonstrated in Fig. 6, where we see that as \(n\) increases, the support area of a fundamental tile in the chessboardlike pattern decreases. The area of a fundamental tile can be roughly estimated by calculating the zeros of Eq. (34). For \(n\gg 1\), the support area of a fundamental tile may be considerably smaller than the area of the coherent state. This is depicted in Fig. 7, where we plot the area of the central tile against \(n\) using a log-log plot. Thus, the sub-Planck structures that Zurek discovered for the compass state [33] are also present in the SPASVS.
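As a hedged numerical companion to Eqs. (21) and (34) (the parameter values and grid are illustrative, and the distance to the nearest sign change of \(W_{\boxplus}\) is used only as a rough proxy for the tile size, not as a reproduction of Fig. 7), the sketch below evaluates the central term on a small grid around the origin and shows how this scale behaves as \(n\) grows.

```python
# Hedged numerical companion to Eqs. (21) and (34): the central term of the
# SPASVS Wigner function is the sum of the two PASVS branches.  We evaluate it
# on a small grid around the origin and use the distance to the nearest sign
# change as a rough proxy for the tile size.  Grid span, r and the list of n
# values are illustrative choices only.
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

def hermite(m, z):
    c = np.zeros(m + 1); c[m] = 1.0
    return hermval(z, c)

def wigner_pasvs(alpha, n, r, sign):
    # Eq. (21); sign = +1 / -1 selects the S(+r) / S(-r) branch
    coth = 1.0 / np.tanh(r)
    chi = sign*np.sinh(2*r)*(np.conj(alpha)**2 + alpha**2) - 2*np.abs(alpha)**2*np.cosh(2*r)
    abar = alpha*np.cosh(r) - sign*np.conj(alpha)*np.sinh(r)
    s = 0.0
    for l in range(n + 1):
        coef = factorial(n)**2 * (-sign*2*coth)**l / (factorial(l)*factorial(n - l)**2)
        s = s + coef*np.abs(hermite(n - l, -1j*np.sqrt(sign*2*coth + 0j)*abar))**2
    return np.real(np.exp(chi)*(sign*np.sinh(2*r))**n/(np.pi*4**n)*s)

def tile_scale(n, r=0.5, span=1.5, steps=121):
    xs = np.linspace(-span, span, steps)
    X, P = np.meshgrid(xs, xs)
    W = wigner_pasvs(X + 1j*P, n, r, +1) + wigner_pasvs(X + 1j*P, n, r, -1)   # Eq. (34)
    flips = np.sign(W[:, :-1]) != np.sign(W[:, 1:])
    R = np.sqrt(X[:, :-1]**2 + P[:, :-1]**2)
    return R[flips].min() if flips.any() else None

for n in (10, 15, 20):
    print(n, tile_scale(n))
```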
The same sub-Planck structures are also contained in the following mixed state
\[\hat{\rho}_{\mathrm{PA}}:=|\psi_{\mathrm{PA}}^{+}\rangle\langle\psi_{\mathrm{ PA}}^{-}|+|\psi_{\mathrm{PA}}^{-}\rangle\langle\psi_{\mathrm{PA}}^{+}|\,, \tag{35}\]
and its Wigner function, which is the same as the one given in Eq. (34), is shown in Fig. 8.
Similarly, for the SPSSVS (28), the Wigner function is also written into two terms (see Appendix A for detailed derivations):
\[W_{\left|\mathrm{SPS}\right\rangle}(\mathbf{\zeta})=2\operatorname{Re}\left[I_{ \Xi}(\mathbf{\zeta})\right]+W_{\boxplus}(\mathbf{\zeta}), \tag{36}\]
which is plotted in Fig. 9. With
\[\omega:=\frac{\sqrt{\tanh(2r)}}{\cosh(r)}, \tag{37}\]
the term that contains
\[I_{\Xi}(\mathbf{\zeta}):= \frac{\exp\left(\xi\right)\left[\mathrm{i}\tanh(2r)\right]^{n}}{ \pi 4^{n}\cosh(r)\sqrt{1+\tanh^{2}(r)}}\sum_{l=0}^{n}\frac{(n!)^{2}\left[2\mathrm{i }\tanh(r)\right]^{l}}{[(n-l)!]^{2}}\] \[H_{n-l}\left[-\omega\alpha_{-}\right]H_{n-l}\left[-\mathrm{i} \omega\alpha_{+}^{*}\right] \tag{38}\]
causes the interference pattern that manifests as oscillating peaks far from the phase-space origin. Again, we concentrate on the second term of Eq. (36), which results in the chessboardlike pattern at the phase-space origin.
Figure 10: Variation of the area of the central phase-space structure versus the photon number \(n\) of the SPSSVS state with \(n\) chosen from \(5\) to \(100\).
Figure 9: Wigner distribution of the pure SPSSVS with (a) \(n=10\) (b) \(n=15\) and (c) \(n=20\). In all cases \(r=0.5\).
Insets represent the central interference pattern of each case.
This term is the sum of the Wigner functions (25) of PSSVSs, that is,
\[W_{\boxplus}(\mathbf{\zeta}):=W_{|\psi^{+}_{\rm PS}}(\mathbf{\zeta})+W_{|\psi^{-}_{\rm PS }}(\mathbf{\zeta}). \tag{39}\]
This pattern manifests sub-Planck oscillations around the origin of the phase space and is similar to the pattern identified for the SPASVS. Again, the extension of a fundamental tile can be estimated by calculating the zeros of Eq. (39). As shown in Fig. 9, the extensions of these alternating-sign tiles decrease isotropically as \(n\) increases, and the support area of these tiles in the phase space is significantly smaller than that of the coherent state for \(n\gg 1\). This is also shown in Fig. 10, which is a log-log plot of the extension of the central tile. It is interesting to note that, for given \(r\) and \(n\), the central tile in the chessboardlike pattern is larger than that of the SPASVS.
Additionally, we demonstrate that the following mixed state likewise has the same sub-Planck structures
\[\hat{\rho}_{\rm PS}:=|\psi^{+}_{\rm PS}\rangle\langle\psi^{-}_{\rm PS}|+|\psi^ {-}_{\rm PS}\rangle\langle\psi^{+}_{\rm PS}|\,. \tag{40}\]
Similar to the case of the mixture of PASVSs (35), the Wigner function of \(\hat{\rho}_{\rm PS}\) shown in Fig. 11 is identical to Eq. (39).
Thus, the photon-addition or photon-subtraction operations on the superpositions of the Gaussian SVS may produce sub-Planck structures in the phase space. The average photon number of the resulting states is increased by either adding photons to or subtracting photons from the superposition of the Gaussian SVS [82], with the addition of photons producing a higher average photon number than the subtraction of photons. This difference in photon number also affects the size of the sub-Planck structures of such states. As an illustration, the photon-addition case has been demonstrated to produce smaller sub-Planck structures in the phase space than the photon-subtraction case for the same number of photons applied to the Gaussian SVS. Hence, the behavior of the sub-Planck structures connected to our states can be understood in a manner similar to that of the compass state [33], i.e., a higher average photon number in the states leads to smaller tiles in the chessboardlike pattern.
In summary, the sub-Planck structures of the compass state are present in the Wigner functions of both the SPASVS and SPSSVS. The mixtures associated with PASVSs or PSSVSs likewise contain the same sub-Planck structures. Additionally, for the available average photon number, the SPASVS and its associated mixture show smaller sub-Planck structures than their counterparts in the photon-subtracted case.
### Sub-shot noise sensitivity of our states
In this subsection, we discuss the susceptibility of our proposed states to phase-space displacements. Let us first consider the SPASVS (27). The overlap (14) for this state under the approximation \(|\delta\alpha|\ll 1\) and \(n\gg 1\) leads to (see Appendix B for the detailed derivations)
\[O_{\rm SPA}(\delta\alpha)=\left[\langle\psi^{+}_{\rm PA}|\,\hat{D}(\delta \alpha)\,|\psi^{+}_{\rm PA}\rangle+\langle\psi^{-}_{\rm PA}|\,\hat{D}(\delta \alpha)\,|\psi^{-}_{\rm PA}\rangle\right]^{2}. \tag{41}\]
Figure 11: Wigner distribution of the mixed-state SPSSVS with (a) \(n=10\) (b) \(n=15\) and (c) \(n=20\). In all cases \(r=0.5\). Insets represent the central interference pattern of each case.
Each term of this overlap is calculated as
\[\langle\psi_{\rm PA}^{\pm}|\,\hat{D}(\delta\alpha)\,|\psi_{\rm PA}^{ \pm}\rangle= \frac{[\mp\sinh(2r)]^{n}\,{\rm e}^{-|\eta_{\pm}|^{2}/2}}{4^{n}}\sum _{l=0}^{n}\frac{(n!)^{2}}{l![(n-l)!]^{2}}\] \[[\mp 2{\rm coth}(r)]^{l}H_{n-l}\left[\Theta_{\pm}\right]H_{n-l} \left[\Theta_{\pm}^{*}\right], \tag{42}\]
where
\[\Theta_{\pm}={\rm i}\sqrt{\pm\frac{{\rm coth}(r)}{2}}\eta_{\pm}, \tag{43}\]
and
\[\eta_{\pm}=\delta\alpha{\rm cosh}(r)\mp\delta\alpha^{*}{\rm sinh}(r), \tag{44}\]
with
\[\delta\alpha=\delta x+{\rm i}\delta p. \tag{45}\]
In Fig. 12, we plot this overlap for \(n\)=10, 15, and 20. For large \(n\), a small displacement \(|\delta\alpha|\ll 1\) can turn the SPASVS into a state orthogonal to the original one, and this orthogonality occurs in all phase-space directions. We have normalized the overlaps to their maximum amplitudes, \(O_{\hat{\rho}}(0)\).
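The overlap (41) can be evaluated directly from the Hermite-sum expression (42); the sketch below is an illustrative numpy implementation (arbitrary parameter values; the modulus squared is taken in the spirit of Eq. (14), and the cross terms between the two branches are dropped, as argued in the appendix for \(n\gg 1\) and \(|\delta\alpha|\ll 1\)).

```python
# Illustrative evaluation of the SPASVS overlap, Eqs. (41)-(44), from the
# Hermite-sum expression (42).  Cross terms between the two branches are
# dropped (negligible for n >> 1 and |delta| << 1), the modulus squared is
# taken in the spirit of Eq. (14), and all parameter values are arbitrary.
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

def hermite(m, z):
    c = np.zeros(m + 1); c[m] = 1.0
    return hermval(z, c)

def branch_term(delta, n, r, sign):
    coth = 1.0 / np.tanh(r)
    eta = delta*np.cosh(r) - sign*np.conj(delta)*np.sinh(r)      # Eq. (44)
    theta = 1j*np.sqrt(sign*coth/2 + 0j)*eta                     # Eq. (43)
    s = 0.0 + 0.0j
    for l in range(n + 1):
        coef = factorial(n)**2/(factorial(l)*factorial(n - l)**2)*(-sign*2*coth)**l
        s += coef*hermite(n - l, theta)*hermite(n - l, np.conj(theta))
    return (-sign*np.sinh(2*r))**n*np.exp(-abs(eta)**2/2)/4**n*s  # Eq. (42)

def overlap_spasvs(delta, n, r):
    return np.abs(branch_term(delta, n, r, +1) + branch_term(delta, n, r, -1))**2

n, r = 20, 0.5
o0 = overlap_spasvs(0.0, n, r)
print(overlap_spasvs(0.05 + 0.05j, n, r) / o0)   # normalized as in Fig. 12
```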
Let us now consider the mixture of PASVSs (19), for which the overlap (14) is calculated as
\[O_{\hat{\rho}_{\rm PA}}(\delta\alpha)=\left|\langle\psi_{\rm PA}^ {+}|\,\hat{D}(\delta\alpha)\,|\psi_{\rm PA}^{+}\rangle\right|^{2}+\left| \langle\psi_{\rm PA}^{-}|\,\hat{D}(\delta\alpha)\,|\psi_{\rm PA}^{-}\rangle \right|^{2}. \tag{46}\]
We plot this overlap with \(n\)=10, 15, and 20 in Fig. 13. Again, we see that the overlap \(O_{\hat{\rho}_{\rm PA}}(\delta\alpha)\) vanishes for displacements \(|\delta\alpha|\ll 1\), but unlike the SPASVS, this orthogonality now takes place when \(\delta x=\pm\delta p\) in the phase space.
Similarly, overlap (14) for SPSSVS is obtained as
\[O_{\rm SPS}(\delta\alpha)=\left[\langle\psi_{\rm PS}^{+}|\,\hat{D}(\delta \alpha)\,|\psi_{\rm PS}^{+}\rangle+\langle\psi_{\rm PS}^{-}|\,\hat{D}(\delta \alpha)\,|\psi_{\rm PS}^{-}\rangle\right]^{2}, \tag{47}\]
where
\[\langle\psi_{\rm PS}^{\pm}|\,\hat{D}(\delta\alpha)\,|\psi_{\rm PS }^{\pm}\rangle= \frac{[\mp\sinh(2r)]^{n}\,{\rm e}^{-|\eta_{\pm}|^{2}/2}}{4^{n}} \sum_{l=0}^{n}\frac{(n!)^{2}}{l![(n-l)!]^{2}}\] \[[\mp 2{\rm tanh}(r)]^{l}H_{n-l}\left[\theta_{\pm}\right]H_{n-l} \left[\theta_{\pm}^{*}\right], \tag{48}\]
with
\[\theta_{\pm}={\rm i}\sqrt{\pm\frac{{\rm tanh}(r)}{2}}\eta_{\pm}. \tag{49}\]
In Fig. 14, we plot this overlap with \(n\) equal to \(10\), \(15\), and \(20\). The SPSSVS overlap plot exhibits the same behavior as the SPASVS: the overlap disappears in any direction in phase space when \(|\delta\alpha|\ll 1\). The only distinction is that, for the same \(n\) and \(r\), the central pattern of the overlap of the SPSSVS is larger than that of the SPASVS, showing that the SPSSVS is less sensitive than the SPASVS.
Finally, we consider mixtures of PSSVSs (24). The overlap (14) for this state leads to
\[O_{\hat{\rho}_{\mathrm{PS}}}(\delta\alpha)=\left|\bra{\psi_{\text{PS}}^{+}}\hat{D}(\delta\alpha)\ket{\psi_{\text{PS}}^{+}}\right|^{2}+\left|\bra{\psi_{\text{PS}}^{-}}\hat{D}(\delta\alpha)\ket{\psi_{\text{PS}}^{-}}\right|^{2}. \tag{50}\]
In Fig. 15, we plot the overlap \(O_{\hat{\rho}_{\mathrm{PS}}}(\delta\alpha)\) using the same parameter values as for the SPSSVS. We observe that the overlap of the mixture of PSSVSs looks similar to that of the mixture of PASVSs, with the distinction that the mixture of PSSVSs appears to be less sensitive for given \(n\) and \(r\), which is manifested as a larger chessboardlike pattern in the center of the phase space than that of the mixture of PASVSs.
In summary, we have demonstrated that the sensitivity of our proposed states depends on the quantity of photons added (or subtracted): the smallest detectable displacement drops considerably below that of the coherent state when a large number of photons is added (or subtracted). For both the SPASVS and SPSSVS, the enhanced sensitivity is unaffected by the direction of the phase-space displacement. However, for the mixtures related to PASVSs or PSSVSs, this enhancement only takes place in particular phase-space directions. This implies that, compared to the mixed states, our superposition states have more potential for quantum sensing applications. Moreover, it has been found that the quantum states associated with the photon-addition cases are more sensitive than their counterparts in the photon-subtraction cases.
## V Summary and outlook
We have shown that the Wigner function of the SPASVS (or SPSSVS) contains a chessboardlike pattern around the origin of the phase space. A similar chessboardlike pattern also emerges for the mixtures related to PASVSs and PSSVSs. The support area of the phase-space structures contained in this chessboardlike pattern varies inversely with the number of photons added (or subtracted). When a sizable number of photons are added (or subtracted), the support area of these structures is noticeably smaller than that of the coherent state, and these are the same sub-Planck structures as exhibited by compass states.
The average photon numbers of our states, which are increased either by photon-addition or photon
subtraction operations on the Gaussian SVS, have an impact on the size of the sub-Planck structures in the phase space. The sub-Planck structures associated with the SPASVS are smaller than those of the SPSSVS for the same number of photons added or subtracted. This is because the photon-addition operation always leads to a higher average photon number in the resultant states. The association of the average photon number with the sub-Planck structures in our states is much like that of the compass states, i.e., a higher average photon number in the compass state corresponds to smaller sub-Planck structures in the phase space.
We have demonstrated that the sensitivity of our proposed states is noticeably higher than that of the coherent state when a significant number of photons are added (or subtracted). Both the SPASVS and SPSSVS exhibit the enhanced sensitivity, which is independent of the phase-space direction, indicating that they hold more promise for quantum metrology. In addition, the difference in the sensitivities between the photon-addition and photon-subtraction cases arises from the different average photon numbers in the states; the photon-addition cases are demonstrated to have greater sensitivity than the photon-subtraction ones.
It is exciting that sub-Planck structures can be built by adding photons to or subtracting photons from quantum states. As a result, it may be possible to apply a variety of approaches to engineer compass-like states in connection with contemporary experiments [94; 95; 96; 97]. For example, the theoretical research in [94] shows that subtracting a photon from the Gaussian SVS results in an odd Schrodinger cat state. Another subsequent theoretical approach introduces the idea of photon subtraction from the Gaussian SVS to produce compass-like states [61]. Additionally, the theoretical scheme to construct cat-like states advocated in [94] was subsequently applied in actual experiments [96; 97]. These illustrations demonstrate that it is possible, both in theory and in experiments, to add or subtract photons from states to produce compass-like states. It may be possible that some of these techniques can be modified to produce the SPASVS and SPSSVS, opening entirely new research avenues that can be pursued in the future.
###### Acknowledgements.
NA acknowledges the support of postdoctoral fund of the Zhejiang Normal University under Grant No. ZC304021937. GX acknowledges support by the National Natural Science Foundation of China (Grants Nos. 11835011 and No. 12174346). WML acknowledges the support from the National Key R&D Program of China under grants No. 2021YFA1400900, 2021YFA0718300, 2021YFA1402100, NSFC under grants Nos. 61835013, 12174461, 12234012, and Space Application System of China Manned Space Program.
## Appendix A Wigner functions of SPASVS and SPSSVS
This section provides the main steps to derive the Wigner functions of the SPASVS and SPSSVS.
### Derivations of the Wigner function of SPASVS
Let us first consider the Wigner function of the SPASVS given by Eq. (29). The first term of this Wigner function is given by
\[I_{\Xi}(\mathbf{\zeta})=\left\langle\psi_{\rm PA}^{+}\left|\,\hat{\Delta}(\alpha) \,\right|\psi_{\rm PA}^{-}\right\rangle, \tag{100}\]
where the alternative form of the displaced parity operator \(\hat{\Delta}(\alpha)\) is [77]
\[\hat{\Delta}(\alpha):= \frac{1}{\pi^{2}}{\rm e}^{2|\alpha|^{2}}\int_{-\infty}^{\infty} {\rm d}^{2}\beta{\rm e}^{-2\alpha^{*}\beta+2\alpha\beta^{*}}\left|\beta \right\rangle\langle-\beta|\,. \tag{101}\]
Eq. (100) can be rewritten as
\[I_{\Xi}(\mathbf{\zeta})=\frac{(-1)^{n}{\rm e}^{2|\alpha|^{2}}}{\pi^{2}\cosh(r)} \int_{-\infty}^{\infty}{\rm d}^{2}\beta|\beta|^{2n}\exp\bigg{[}-|\beta|^{2}- \frac{\tanh(r)}{2}\big{(}\beta^{2}+\beta^{*2}\big{)}-2\beta\alpha^{*}+2\beta^{ *}\alpha\bigg{]}. \tag{102}\]
We incorporate the factor \(|\beta|^{2n}\) into a differential equation as
\[I_{\Xi}(\mathbf{\zeta})=\frac{(-1)^{n}{\rm e}^{2|\alpha|^{2}}}{\pi^{2}\cosh(r)} \frac{\partial^{2n}}{\partial s^{n}\partial t^{n}}\int_{-\infty}^{\infty}{\rm d }^{2}\beta\exp\bigg{[}-|\beta|^{2}-\frac{\tanh(r)}{2}\big{(}\beta^{2}-\beta^{ *2}\big{)}-2\beta\alpha^{*}+2\beta^{*}\alpha+s\beta+t\beta^{*}\bigg{]}\bigg{|}_ {s=t=0}. \tag{103}\]
Consider the integral formula [107]
\[\int_{-\infty}^{\infty}\mathrm{d}^{2}\beta\exp\bigg{[}a|\beta|^{2}+b\beta+c\beta^ {*}+d\beta^{2}+k\beta^{*2}\bigg{]}=\frac{\pi}{\sqrt{a^{2}-4dk}}\exp\bigg{[}\frac {-abc+b^{2}k+c^{2}d}{a^{2}-4dk}\bigg{]}, \tag{100}\]
whose convergent conditions are \(\mathrm{Re}\left[a\pm d\pm k\right]<0\) and \(\mathrm{Re}\left[(a^{2}-4dk)/a\pm d\pm k\right]<0\). By using this integral Eq. (101) leads to
\[I_{\Xi}(\mathbf{\zeta})= \frac{(-1)^{n}\mathrm{e}^{\xi}}{\pi\cosh(r)\sqrt{1+\tanh^{2}(r)} }\frac{\partial^{2n}}{\partial s^{n}\partial t^{n}}\exp\bigg{[}\frac{\tanh(2r )}{4}s^{2}-\frac{\tanh(2r)}{4}t^{2}+\left(1+\tanh^{2}(r)\right)^{-1}st-2\cosh (r)\mathrm{sech}(2r)\alpha_{-}s\] \[-2\cosh(r)\mathrm{sech}(2r)\alpha_{+}^{*}t\bigg{]}\bigg{|}_{s=t=0}, \tag{101}\]
with
\[\xi:=-(\alpha^{2}-\alpha^{*2})\tanh(2r)-2|\alpha|^{2}\mathrm{sech}(2r). \tag{102}\]
It is challenging to solve Eq. (101) because it contains \(\mathrm{e}^{\gamma st}\) terms. We employ the following sum series [73] to eliminate them.
\[\exp(Cs+Dt+Est)=\sum_{l=0}^{\infty}\frac{E^{l}}{l!}\frac{\partial^{2l}}{ \partial C^{l}\partial D^{l}}\left[\exp\left(Cs+Dt\right)\right]. \tag{103}\]
Using this formula Eq. (101) modifies as
\[I_{\Xi}(\mathbf{\zeta})= \frac{(-1)^{n}\mathrm{e}^{\xi}}{\pi\cosh(r)\sqrt{1+\tanh^{2}(r)} }\sum_{l=0}^{\infty}\frac{1}{l!}\frac{1}{2^{2l}}\frac{\left[1+\tanh^{2}(r) \right]^{-l}}{\cosh^{2l}(r)\mathrm{sech}^{2l}(2r)}\frac{\partial^{2l}}{ \partial\alpha_{+}^{*l}\alpha_{-}^{*}}\frac{\partial^{2n}}{\partial s^{n} \partial t^{n}}\exp\bigg{[}\frac{\tanh(2r)}{4}s^{2}-\frac{\tanh(2r)}{4}t^{2}\] \[-2\cosh(r)\mathrm{sech}(2r)(\alpha_{-}s+\alpha_{+}^{*}t)\bigg{]} \bigg{|}_{s=t=0}. \tag{104}\]
Noticing the generating function of Hermite polynomial
\[H_{n}(x)=\frac{\partial^{n}}{\partial s^{n}}\exp\left(2xs-s^{2}\right)\big{|} _{s=0}, \tag{105}\]
and its recursive relation
\[\frac{\mathrm{d}^{l}}{\mathrm{d}x^{l}}H_{n}(x)=\frac{2^{l}n!}{(n-l)!}H_{n-l}(x). \tag{106}\]
The preceding equation can then be simplified into the form of Eq. (30) by applying the relations (105) and (106).
Let us now calculate the second term of the Wigner function (29). This term can be written as
\[W_{\mathbb{H}}(\mathbf{\zeta})=\left\langle\psi_{\mathrm{PA}}^{+}\,\middle|\,\hat{ \Delta}(\alpha)\,\middle|\,\psi_{\mathrm{PA}}^{+}\right\rangle+\left\langle \psi_{\mathrm{PA}}^{-}\,\middle|\,\hat{\Delta}(\alpha)\,\middle|\,\psi_{ \mathrm{PA}}^{-}\right\rangle, \tag{107}\]
where
\[\left\langle\psi_{\mathrm{PA}}^{\pm}\,\middle|\,\hat{\Delta}( \alpha)\,\middle|\,\psi_{\mathrm{PA}}^{\pm}\right\rangle= \frac{(-1)^{n}}{\pi^{2}}\frac{\mathrm{e}^{2|\alpha|^{2}}}{\cosh(r) }\frac{\partial^{2n}}{\partial s^{n}\partial t^{n}}\int_{-\infty}^{\infty} \mathrm{d}^{2}\beta\exp\big{[}-|\beta|^{2}\pm\frac{1}{2}\tanh(r)(\beta^{2}+ \beta^{*2})-2\beta\alpha^{*}+2\beta^{*}\alpha+s\beta+t\beta^{*}\big{]}\big{|}_{ s=t=0}.\]
Using the integral (100), we get
\[\left\langle\psi_{\mathrm{PA}}^{\pm}\,\middle|\,\hat{\Delta}( \alpha)\,\middle|\,\psi_{\mathrm{PA}}^{\pm}\right\rangle= \frac{(-1)^{n}}{\pi}\exp\big{[}\pm\sinh(2r)(\alpha^{*2}+\alpha^{ 2})-4\cosh^{2}(r)|\alpha|^{2}\big{]}\frac{\partial^{2n}}{\partial s^{n} \partial t^{n}}\exp\bigg{[}\pm\frac{1}{4}\sinh(2r)(s^{2}+t^{2})\] \[+2\cosh(r)(\bar{\alpha}_{\pm}s-\bar{\alpha}_{\pm}^{*}t)+\cosh^{2}( r)st\bigg{]}\bigg{|}_{s=t=0}. \tag{108}\]
Again, use of sum series (103) eliminates the factors \(\mathrm{e}^{\gamma st}\), that is,
\[\left\langle\psi_{\mathrm{PA}}^{\pm}\,\middle|\,\hat{\Delta}(\alpha)\,\middle| \,\psi_{\mathrm{PA}}^{\pm}\right\rangle=\sum_{l=0}^{\infty}\frac{(-1)^{l}}{2^{2 l}l!}\frac{\partial^{2l}}{\partial\bar{\alpha}_{\pm}^{l}\partial\bar{\alpha}_{\pm}^{ *l}}\left.\frac{\partial^{2n}}{\partial s^{n}\partial t^{n}}\exp\bigg{[}\pm \frac{\sinh(2r)}{4}\left(s^{2}+t^{2}\right)+2\cosh r\left(\bar{\alpha}_{\pm}s- \bar{\alpha}_{\pm}^{*}t\right)\bigg{]}\bigg{|}_{s=t=0}. \tag{109}\]
Then, by using the relations (105) and (106), the expression (34) is obtained.
### Derivations of the Wigner function of SPSSVS
This section presents the detailed derivation of Eq. (36), for which the first term takes the form below
\[I_{\Xi}(\mathbf{\zeta})= \left\langle\psi_{\mathrm{PS}}^{+}\right|\hat{\Delta}(\alpha)\left| \,\psi_{\mathrm{PS}}^{-}\right\rangle. \tag{101}\]
This term is calculated as
\[I_{\Xi}(\mathbf{\zeta})= \frac{1}{\pi^{2}}\frac{\mathrm{e}^{2|\alpha|^{2}}}{\cosh(r)} \frac{\partial^{2n}}{\partial s^{n}\partial t^{n}}\exp\bigg{[}-\frac{\tanh(r)} {2}\big{(}t^{2}-s^{2}\big{)}\bigg{]}\int_{-\infty}^{\infty}\mathrm{d}^{2}\beta \exp\bigg{[}-|\beta|^{2}-\big{(}\tanh(r)t+2\alpha^{*}\big{)}\beta-\big{(}\tanh (r)s\] \[-2\alpha\big{)}\beta^{*}-\frac{\tanh(r)}{2}(\beta^{2}-\beta^{*2} \big{)}\bigg{]}\bigg{|}_{s=t=0}. \tag{102}\]
Using the integral (100), we obtain
\[I_{\Xi}(\mathbf{\zeta})= \frac{\mathrm{e}^{\xi}}{\pi\cosh(r)\sqrt{1+\tanh^{2}(r)}}\frac{ \partial^{2n}}{\partial s^{n}\partial t^{n}}\exp\bigg{[}\frac{\tanh(2r)}{4}s ^{2}-\frac{\tanh(2r)}{4}t^{2}+2\mathrm{sech}(2r)\sinh(r)(\alpha_{+}^{*}s- \alpha_{-}t)\] \[+\mathrm{sech}(2r)\sinh^{2}(r)st\bigg{]}\bigg{|}_{s=t=0}. \tag{103}\]
Now, we eliminate \(\mathrm{e}^{\gamma st}\) terms by using Eq. (100)
\[I_{\Xi}(\mathbf{\zeta})= \frac{\mathrm{e}^{\xi}}{\pi\cosh(r)\sqrt{1+\tanh^{2}(r)}}\sum_{l= 0}^{\infty}\frac{1}{l!2^{2l}\mathrm{sech}^{l}(2r)}\frac{\partial^{2l}}{ \partial\alpha_{+}^{*l}\partial\alpha_{-}^{l}}\frac{\partial^{2}}{\partial s^ {n}\partial t^{n}}\exp\bigg{[}\frac{\tanh(2r)}{4}s^{2}-\frac{\tanh(2r)}{4}t^ {2}\] \[+2\mathrm{sech}(2r)\sinh(r)(\alpha_{+}^{*}s+\alpha_{-}t)\bigg{]} \bigg{|}_{s=t=0}. \tag{104}\]
Then, using the relations (102) and (103) we obtain expression (38).
Finally, we derive the second term of the Eq. (36). This term can be written as
\[W_{\Xi}(\mathbf{\zeta})=\left\langle\psi_{\mathrm{PS}}^{+}\left|\,\hat{\Delta}( \alpha)\left|\,\psi_{\mathrm{PS}}^{+}\right\rangle+\left\langle\psi_{\mathrm{ PS}}^{-}\left|\,\hat{\Delta}(\alpha)\left|\,\psi_{\mathrm{PS}}^{-}\right\rangle,\right.\right.\right. \tag{105}\]
where
\[\left\langle\psi_{\mathrm{PS}}^{\pm}\left|\,\hat{\Delta}(\alpha) \left|\,\psi_{\mathrm{PS}}^{\pm}\right\rangle= \frac{1}{\pi}\exp\big{[}\pm\sinh(2r)(\alpha^{2}+\alpha^{*2})-2 \cosh(2r)|\alpha|^{2}\big{]}\frac{\partial^{2n}}{\partial s^{n}t^{n}}\exp \bigg{[}\pm\frac{1}{4}\sinh(2r)(s^{2}+t^{2})\] \[\left.\pm 2\sinh(r)(\bar{\alpha}_{\pm}t+\bar{\alpha}_{\pm}^{*}s)- \sinh^{2}(r)st\bigg{]}\right|_{s,t=0}. \tag{106}\]
Again, we use Eq. (100) to get rid of all \(\mathrm{e}^{\gamma st}\) factors, obtaining
\[\left\langle\psi_{\mathrm{PS}}^{\pm}\left|\,\hat{\Delta}(\alpha) \left|\,\psi_{\mathrm{PS}}^{\pm}\right\rangle= \frac{1}{\pi}\exp\big{[}\pm\sinh(2r)(\alpha^{2}+\alpha^{*2})-2 \cosh(2r)|\alpha|^{2}\big{]}\sum_{l=0}^{\infty}\frac{(-1)^{l}}{2^{2l}l!}\frac{ \partial^{2l}}{\partial\bar{\alpha}_{\pm}^{l}\bar{\alpha}_{\pm}^{*l}}\frac{ \partial^{2n}}{\partial s^{n}t^{n}}\exp\bigg{[}\pm\frac{\sinh(2r)}{4}\big{(}s ^{2}+t^{2}\big{)}\] \[\left.\pm 2\sinh(r)\big{(}\bar{\alpha}_{\pm}t+\bar{\alpha}_{\pm}^{*}s \big{)}\bigg{]}\right|_{s=t=0}. \tag{107}\]
Finally, this equation can be simplified to expression (39) by utilizing the relations (102) and (103).
## Appendix B Overlaps of SPASVS and SPSSVS
In this section, we calculate the overlap (14) of SPASVS and SPSSVS. Note that, for \(n\gg 1\) and \(|\delta\alpha|\ll 1\), the contribution of the cross terms between the states to the overlap is negligible, that is,
\[\left\langle\psi_{\mathrm{PA}}^{+}\right|\hat{D}(\delta\alpha)\left|\psi_{ \mathrm{PA}}^{-}\right\rangle=0\,\mathrm{and}\,\left\langle\psi_{\mathrm{PS}}^ {+}\right|\hat{D}(\delta\alpha)\left|\psi_{\mathrm{PS}}^{-}\right\rangle=0. \tag{108}\]
First, we derive each term of Eq. (41). The PASVS (19) can be rewritten as [73]
\[\ket{\psi_{\rm PA}^{\pm}}=\hat{S}(\pm r)\left[\hat{a}^{\dagger}\cosh(r)\pm\hat{a} \sinh(r)\right]^{n}\ket{0}. \tag{101}\]
Then, considering relation given by [73]
\[(f\hat{a}+g\hat{a}^{\dagger})^{n}=\bigg{(}-{\rm i}\sqrt{\frac{fg}{2}}\bigg{)}^{n}H_{n}\bigg{(}{\rm i}\sqrt{\frac{f}{2g}}\hat{a}+{\rm i}\sqrt{\frac{g}{2f}}\hat{a}^{\dagger}\bigg{)}, \tag{102}\]
which leads to
\[[\hat{a}^{\dagger}\cosh(r)+\hat{a}\sinh(r)]^{n}=\bigg{[}-{\rm i} \sqrt{\frac{\sinh(2r)}{4}}\bigg{]}^{n}H_{n}\bigg{[}{\rm i}\sqrt{\frac{\tanh(r) }{2}}\hat{a}+{\rm i}\sqrt{\frac{\coth(r)}{2}}\hat{a}^{\dagger}\bigg{]}, \tag{103}\] \[[\hat{a}\cosh(r)+\hat{a}^{\dagger}\sinh(r)]^{n}=\bigg{[}-{\rm i} \sqrt{\frac{\sinh(2r)}{4}}\bigg{]}^{n}H_{n}\bigg{[}{\rm i}\sqrt{\frac{\tanh(r )}{2}}\hat{a}^{\dagger}+{\rm i}\sqrt{\frac{\coth(r)}{2}}\hat{a}\bigg{]}. \tag{104}\]
By using these relations, we obtain
\[\bra{\psi_{\rm PA}^{\pm}}\hat{D}(\delta\alpha)\ket{\psi_{\rm PA}^ {\pm}}=\] \[=\] \[H_{n}\bigg{(}{\rm i}\sqrt{\pm\frac{\coth(r)}{2}}(\alpha^{*}-\eta _{\pm}^{*})\bigg{)}, \tag{105}\]
where
\[\hat{D}(\eta_{\pm})=\hat{S}^{\dagger}(\pm r)\hat{D}(\delta\alpha)\hat{S}(\pm r )\ {\rm with}\ \eta_{\pm}=\delta\alpha\cosh(r)\mp\delta\alpha^{*}\sinh(r). \tag{106}\]
By using (100), we get
\[\bra{\psi_{\rm PA}^{\pm}}\hat{D}(\delta\alpha)\ket{\psi_{\rm PA}^ {\pm}}= \bigg{[}\mp\frac{\sinh(2r)}{4}\bigg{]}^{n}\frac{\partial^{2n}}{ \partial\tau^{n}\partial t^{n}}\exp\big{(}-{\rm i}\sqrt{\pm 2\coth(r)}\ \eta_{\pm}^{*}\tau \big{)}\exp\big{(}-\tau^{2}-t^{2}\big{)}\] \[\int_{-\infty}^{\infty}\frac{{\rm d}^{2}\alpha}{\pi}\exp\bigg{(}- \frac{|\alpha|^{2}}{2}-\frac{\alpha}{2}\eta_{\pm}^{*}+\frac{\alpha^{*}}{2}\eta _{\pm}-\frac{|\alpha-\eta_{\pm}|^{2}}{2}+{\rm i}\sqrt{\pm 2\coth(r)}\ \alpha t+{\rm i} \sqrt{\pm 2\coth(r)}\ \alpha^{*}\tau\bigg{)}\bigg{|}_{\tau=t=0}. \tag{107}\]
Using the integral (100), the previous equation yields
\[\bra{\psi_{\rm PA}^{\pm}}\hat{D}(\delta\alpha)\ket{\psi_{\rm PA}^ {\pm}}= \bigg{[}\mp\frac{\sinh(2r)}{4}\bigg{]}^{n}\exp\bigg{(}-\frac{| \eta_{\pm}|^{2}}{2}\bigg{)}\frac{\partial^{2n}}{\partial\tau^{n}\partial t^{n }}\exp\bigg{(}-t^{2}+{\rm i}\sqrt{\pm 2\coth(r)}\ \eta_{\pm}t-\tau^{2}-{\rm i} \sqrt{\pm 2\coth(r)}\ \eta_{\pm}^{*}\tau \tag{108}\] \[\mp 2\coth(r)\ t\tau\bigg{)}\bigg{|}_{t=\tau=0}. \tag{109}\]
First, we remove the factors \({\rm e}^{\gamma t\tau}\) from the above equation by using (109). Then, by using (100) and (102), the preceding equation is simplified to (42).
Similarly, PSSVS can be rewritten as [73]
\[\ket{\psi_{\rm PS}^{\pm}}=\hat{S}(\pm r)\left[\hat{a}\cosh(r)\pm\hat{a}^{ \dagger}\sinh(r)\right]^{n}\ket{0}. \tag{110}\]
The overlap
\[\bra{\psi_{\rm PS}^{\pm}}\hat{D}(\delta\alpha)\ket{\psi_{\rm PS}^ {\pm}}= \bigg{[}\mp\frac{\sinh(2r)}{4}\bigg{]}^{n}\bra{0}H_{n}\bigg{(}{ \rm i}\sqrt{\pm\frac{\tanh(r)}{2}}\hat{a}\bigg{)}\hat{D}(\eta_{\pm})H_{n} \bigg{(}{\rm i}\sqrt{\pm\frac{\tanh(r)}{2}}\hat{a}^{\dagger}\bigg{)}\ket{0},\] \[= \bigg{[}\mp\frac{\sinh(2r)}{4}\bigg{]}^{n}\int_{-\infty}^{\infty} \frac{{\rm d}^{2}\alpha}{\pi}\exp\bigg{(}-\frac{|\alpha|^{2}}{2}-\frac{\alpha}{2 }\eta_{\pm}^{*}+\frac{\alpha^{*}}{2}\eta_{\pm}-\frac{|\alpha-\eta_{\pm}|^{2}}{ 2}\bigg{)}H_{n}\bigg{(}{\rm i}\sqrt{\pm\frac{\tanh(r)}{2}}\alpha\bigg{)}\] \[H_{n}\bigg{(}{\rm i}\sqrt{\pm\frac{\tanh(r)}{2}}(\alpha^{*}-\eta _{\pm}^{*})\bigg{)}, \tag{111}\]
can be easily simplified to (48).
|
2309.05624 | An exact algorithm for linear optimization problem subject to
max-product fuzzy relational inequalities with fuzzy constraints | Fuzzy relational inequalities with fuzzy constraints (FRI-FC) are the
generalized form of fuzzy relational inequalities (FRI) in which fuzzy
inequality replaces ordinary inequality in the constraints. Fuzzy constraints
enable us to attain optimal points (called super-optima) that are better
solutions than those resulted from the resolution of the similar problems with
ordinary inequality constraints. This paper considers the linear objective
function optimization with respect to max-product FRI-FC problems. It is proved
that there is a set of optimization problems equivalent to the primal problem.
Based on the algebraic structure of the primal problem and its equivalent
forms, some simplification operations are presented to convert the main problem
into a more simplified one. Finally, by some appropriate mathematical
manipulations, the main problem is transformed into an optimization model whose
constraints are linear. The proposed linearization method not only provides a
super-optimum (that is better solution than ordinary feasible optimal
solutions) but also finds the best super-optimum for the main problem. The
current approach is compared with our previous work and some well-known
heuristic algorithms by applying them to random test problems in different
sizes. | Amin Ghodousian, Romina Omidi | 2023-09-11T17:13:34Z | http://arxiv.org/abs/2309.05624v1 | An exact algorithm for linear optimization problem subject to max-product fuzzy relational inequalities with fuzzy constraints
###### Abstract
Fuzzy relational inequalities with fuzzy constraints (FRI-FC) are the generalized form of fuzzy relational inequalities (FRI) in which fuzzy inequality replaces ordinary inequality in the constraints. Fuzzy constraints enable us to attain optimal points (called super-optima) that are better solutions than those resulted from the resolution of the similar problems with ordinary inequality constraints. This paper considers the linear objective function optimization with respect to max-product FRI-FC problems. It is proved that there is a set of optimization problems equivalent to the primal problem. Based on the algebraic structure of the primal problem and its equivalent forms, some simplification operations are presented to convert the main problem into a more simplified one. Finally, by some appropriate mathematical manipulations, the main problem is transformed into an optimization model whose constraints are linear. The proposed linearization method not only provides a super-optimum (that is better solution than ordinary feasible optimal solutions) but also finds the best super-optimum for the main problem. The current approach is compared with our previous work and some well-known heuristic algorithms by applying them to random test problems in different sizes.
Keywords: Fuzzy relational inequalities, Fuzzy relational equations Fuzzy constraints, Product t-norm, Linear optimization.
## 1 Introduction
The theory of fuzzy relational equations (FRE), as a generalized version of Boolean relation equations, was first proposed by Sanchez and applied to problems related to medical diagnosis [41]. Pedrycz categorized and extended two ways of generalizing FRE in terms of the sets under discussion and the various operations that are taken into account [36]. Since then, FRE has been applied in many other fields such as fuzzy control, prediction of fuzzy systems,
fuzzy decision making, fuzzy pattern recognition, image compression and reconstruction, fuzzy clustering and so on. Generally, when rules of inference are applied and their corresponding consequences are known, the problem of determining the antecedents is simplified and mathematically reduced to solving an FRE [34]. Nowadays, it is well known that many of the issues associated with this body of knowledge can be treated as FRE problems [35].
The solvability identification and finding set of solutions are the primary, and the most fundamental, matter concerning the FRE problems. Di Nola et al. proved that the solution set of FRE (if it is nonempty), defined by continuous max-t-norm composition is often a non-convex set. This non-convex set is completely determined by one maximum solution and a finite number of minimal solutions [5]. Such non-convexity property is one of two bottlenecks making a major contribution towards an increase in complexity of FRE-related problems, particularly, in the optimization problems subjected to a system of fuzzy relations. Another bottleneck point is concerned with detecting the minimal solutions for FREs. Chen and Wang [2] presented an algorithm for obtaining the logical representation of all minimal solutions and deduced that a polynomial-time algorithm with the ability to find all minimal solutions of FRE (with max-min composition) may not exist. Also, Markovskii showed that solving max-product FRE is closely related to the covering problem which is a type of NP-hard problem [33]. In fact, the same result holds true for a more general t-norms instead of the minimum and product operators [3, 29, 30]. Over the past decades, the solvability of FRE which is defined using different max-t compositions have been investigated by many researchers [15, 16, 19, 39, 43, 46, 47, 50, 52]. Moreover, some other researchers have worked on introducing novel concept and at times improving some of the existing theoretical aspects and applications of fuzzy relational inequalities (FRI) [13, 17, 18, 20, 21, 27, 53]. Li and Yang [27] studied FRI with addition-min composition and presented an algorithm to search for minimal solutions. They applied FRI to data transmission mechanism in a BitTorrent-like Peer-to-Peer file sharing systems. In [13], the authors focused on the algebraic structure of two fuzzy relational inequalities \(A\varphi x\leq b^{1}\) and \(D\varphi x\geq b^{2}\). Their research focuses on the study of a mixed fuzzy system formed by two of the earlier FRIs, where \(\varphi\) is an operator with (closed) convex solutions. Generally, if \(\varphi\) is an operator with closed convex solutions, the set of solutions for \(D\varphi x\geq b^{2}\) is determined by a finite number of maximal solutions as well as the same number for minimal ones. In particular, if \(\varphi\) is a continuous non-decreasing function (specially, a continuous t-norm), all maximal solutions overlap each other [13]. Guo et al. [21] investigated a type of FRI problems and the relationship between minimal solutions and FRI paths. They also introduced some rules for reducing the problems and presented an algorithm for solving optimization problems using FRI constraints.
The optimization problem subject to FRE and FRI is one of the most interesting and on-going research topics amongst similar problems [1, 8, 12, 13, 15-23, 26, 31, 40, 44, 48, 53]. Many methods were designed based on the translation of the main problem into an integer linear programming problem which is
then solved using well-developed techniques. On the contrary, other algorithms benefit the resolution of the feasible region, some necessary and sufficient conditions for the optimality and simplification processes. The most methods of this category are based on analytical results provided mainly by Sanchez [42] and Pedrycz [37]. Fang and Li converted a linear optimization problem subjected to FRE constraints with max-min operation into an integer programming problem and solved it by a branch-and-bound method using jump-tracking technique [9]. Wu et al. worked on improvement of the method employed by Fang and Li; this was done by decreasing the search domain and presented a simplification process by three rules which resulted from a necessary condition [49]. Chang and Shieh presented new theoretical results concerning the linear optimization problem constrained by fuzzy max-min relation equations [1]. They improved an upper bound on the optimal objective value, some rules for simplifying the problem and proposed a rule for reducing the solution tree. In [25], an application of optimizing the linear objective with max-min composition was employed for the streaming media provider seeking a minimum cost while fulfilling the requirements assumed by a three-tier framework. Linear optimization problem was further investigated by numerous scholars focusing on max-product operation [23, 32]. Loetamonphong and Fang defined two sub-problems by separating negative from non-negative coefficients in the objective function, and then obtained an optimal solution by combining the optimal solutions of the two sub-problems [32]. The maximum solution of FRE is the optimum for the sub-problem having negative coefficients. Another sub-problem was converted into a binary programming problem and was solved using a branch-and-bound method. Also, in [23] some necessary conditions to test the feasibility and simplification techniques were presented in order to solve FRE with max-product composition. Moreover, generalizations of the linear optimization problem with respect to FRE have been studied; this was done through replacement of max-min and max-product compositions with different fuzzy compositions such as max-average composition [48] or max-t-norm composition [15, 16, 19, 22, 26, 44]. For example, Li and Fang solved the linear optimization problem subjected to a system of sup-t equations by reducing it to a 0-1 integer optimization problem [26]. In [22], a method was presented for solving linear optimization problems with the max-Archimedean t-norm fuzzy relation equation constraint. In [44], the authors solved the same problem whit continuous Archimedean t-norm, and to obtain some optimal variables, they used the covering problem rather than the branch-and-bound methods.
Recently, many interesting forms of generalizations of the linear programming applied to the system of fuzzy relations have been introduced, and developed based on composite operations used in FRE, fuzzy relations used in the definition of the constraints, some developments on the objective function of the problems and other ideas [4, 6, 10, 28, 31, 51]. For example, Wu et al. represented an efficient method to optimize a linear fractional programming problem under FRE with max-Archimedean t-norm composition [51]. Dempe and Ruziyeva generalized the fuzzy linear optimization problem by considering
fuzzy coefficients [4]. In addition, Dubey et al. studied linear programming problems involving interval uncertainty modeled using intuitionistic fuzzy set [6]. The linear optimization of bipolar FRE was also the focus of study carried out by some researchers where FRE was defined with max-min composition [10] and max-Lukasiewicz composition [28, 31]. For example, in [28], the authors introduced a linear optimization problem subjected to a system of bipolar FRE defined as \(X(A^{+},A^{-},b)=\{x\in[0,1]^{m}:x\circ A^{+}\vee\tilde{x}\circ A^{-}=b\}\) where \(\tilde{x}_{i}=1-x_{i}\) for each component of \(\tilde{x}=(\tilde{x}_{i})_{1\times m}\) and the notations "\(\vee\)" and "\(\circ\)" denote max operation and the max-Lukasiewicz composition, respectively. They translated the original problem into a 0-1 integer linear problem which is then solved using well-developed techniques. In a separate, the foregoing bipolar linear optimization problem was solved by an analytical method based on the resolution and some structural properties of the feasible region (using a necessary condition for characterizing an optimal solution and a simplification process for reducing the problem) [31].
The optimization problem subjected to various versions of FRI is widely available in the literature as well [12, 13, 17, 18, 20, 21, 53, 54]. In [13], the authors studied the linear optimization with constraints formed by \(X(A,D,b^{1},b^{2})=\{x\in[0,1]^{n}:A\varphi x\leq b^{1},D\varphi x\geq b^{2}\}\) where \(\varphi\) represents an operator with convex solutions (e.g., non-decreasing or non-increasing operator). They showed that the feasible region can be expressed as the union of a finite number of convex sets. In particular, if \(\varphi\) is an operator with closed convex solutions such as continuous non-decreasing (non-increasing) operator, the preceding convex sets become closed as well. Therefore, since each t-norm is a non-decreasing function (resulted directly from the property of the monotonicity in t-norms), continuous t-norms introduce important especial examples of operators with closed convex sets. For this reason, the authors proved that the feasible solutions set defined by a continuous t-norm can be formed as the union of a finite number of compact convex sets. Additionally, because of the identity law of t-norms, it was concluded that the feasible region actually consists of points being between one maximum solution and a finite number of minimal solutions. Yang studied the optimal solution of minimizing a linear objective function subject to a FRI where the constraints defined as \(a_{i1}\wedge x_{1}+a_{i2}\wedge x_{2}+...+a_{in}\wedge x_{n}\geq b_{i}\) for \(i=1,...,m\) and \(a\wedge b=min\{a,b\}\)[54]. He presented an algorithm based on some properties of the minimal solutions of the FRI. Also, in [53], the authors introduced the latticized linear programming problem subject to max-product fuzzy relation inequalities with application in the optimization management model for wireless communication emission base stations. The latticized linear programming problem was defined by minimizing the objective function \(z(x)=x_{1}\lor x_{2}\vee...\lor x_{n}\) subject to the feasible region \(X(A,b)=\{x\in[0,1]^{n}:A\circ x\geq b\}\) where "\(\circ\)" denotes fuzzy max-product composition. They also presented an algorithm based on the resolution of the feasible region.
The FRI-FC problem was introduced in [12] as the following mathematical model in which \(\varphi\) is the minimum t-norm:
\[\begin{array}{c}\min\;\;c^{T}x\\ A\varphi x\circ b\\ x\in[0,1]^{n}\end{array} \tag{1}\]
where \(A=(a_{ij})_{m\times n}\) is a fuzzy matrix and \(b=(b_{i})_{m\times 1}\) is a fuzzy vector such that \(0\leq a_{ij}\leq 1\) and \(0\leq b_{i}\leq 1\) for each \(i\in I=\{1,2,...,m\}\) and each \(j\in J=\{1,2,...,n\}\), the constant vector \(c=(c_{j})_{n\times 1}\) and the unknown vector \(x=(x_{j})_{n\times 1}\) are in \(\mathbb{R}^{n}\), \(A\varphi x\circ b\) denotes a fuzzy max-\(\varphi\) composition and "\(\circ\)" denotes the relaxed or fuzzy version of the ordinary inequality "\(\leq\)". If \(a_{i}\) denotes the \(i\)'th row of matrix \(A\), then the \(i\)'th constraint of problem (1) can be expressed as \(a_{i}\varphi x\circ b_{i}\), which means \(\max_{j=1}^{n}\{\varphi(a_{ij},x_{j})\}\circ b_{i}\), \(\forall i\in I\). So, problem (1) can also be interpreted as a generalization of the following problem with the ordinary inequality [13]:
\[\begin{array}{c}\min\;\;c^{T}x\\ A\varphi x\leq b\\ x\in[0,1]^{n}\end{array} \tag{2}\]
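Throughout, the max-product composition simply means \((A\varphi x)_{i}=\max_{j}a_{ij}x_{j}\). As a quick reference, a minimal NumPy sketch of this composition and of the crisp feasibility test of problem (2) is given below; the function names are ours and not part of the original formulation.

```python
import numpy as np

def max_product(A, x):
    """Max-product composition: (A o x)_i = max_j a_ij * x_j."""
    return np.max(A * x[None, :], axis=1)

def is_crisp_feasible(A, b, x):
    """Crisp feasibility for problem (2): A o x <= b and x in [0,1]^n."""
    return bool(np.all(max_product(A, x) <= b) and np.all((0.0 <= x) & (x <= 1.0)))
```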
In [12], the authors presented an analytical constructive algorithm to find a super-optimum for problem (1), and used this fuzzy system to optimize the quality of school education (with minimum cost) in a setting where schools were to be selected by parents. In [14], the authors presented a modified PSO algorithm for solving the most general version of problem (1) where \(\varphi\) is an arbitrary continuous t-norm. They showed that the modified PSO can produce high quality solutions with low average error (good accuracy) and low standard deviation (good stability). Also, it was shown that the modified PSO generates better solutions compared with the solutions obtained by the max-min algorithm [12] and the original PSO [24] when \(\varphi\) is considered as the minimum t-norm and an arbitrary t-norm, respectively.
Unlike most optimization algorithms using only the feasible solutions as the search domain, the FRI-FC problem benefits from infeasible points as well as the feasible ones. For this reason, the relaxed or perturbed inequalities (denoted by "\(\circ\)") replace the ordinary ones (denoted by "\(\leq\)") in the constraints of problem (1). Obviously, it is not reasonable to consider all the infeasible points with the same evaluation. For example, an infeasible point (however, with a good objective value) may be so far from the feasible region. Consequently, such a point cannot be reasonably considered as a solution for the problem. Thus, to evaluate the feasibility (or infeasibility) magnitude of the points, the feasibility property is defined as a fuzzy set so that closer points to the feasible set attain larger membership values. Figure 1(a) illustrates the notion of the fuzzy set "feasibility" by using grayscale colors varying from white to black in which the brighter parts include the points with larger membership values for "feasibility" (the white square - including all points between zero vector 0 and maximum solution \(\bar{x}\) - depicts the feasible region of problem (2)). The consideration of such infeasible points as admissible selections for decision makers can be also viewed as perturbing some constraint(s) such that
the feasible solutions set becomes greater and includes new points with likely better objective function values. In practice, the applied aspects of the problems sometimes allow some restrictions or limitations to be perturbed while preserving the purposes of the decision makers.
Similarly, the quality level or satisfaction amount of the objective value is defined as a fuzzy set. To define the fuzzy set "satisfaction", the optimal objective value of problem (2), say \(z^{*}=c^{T}x^{*}\), is assumed to be the initial or current optimal value. Also, we consider a pre-determined value \(z_{0}\) (\(z_{0}<z^{*}\)) as the best case and set its degree of membership (equivalently, its satisfaction amount) equal to one. Actually, the best case \(z_{0}\) is an approximate value we wish to achieve at some near-feasible point (since for each feasible solution \(x\) of problem (2), we have \(z_{0}<z^{*}\leq z=c^{T}x\)). The value \(z_{0}\) may be determined from some previous experiments or by human experts. Anyway, the satisfaction amount of each objective value \(z=c^{T}x\) (especially of \(z^{*}\)) is obtained by considering the difference between \(z_{0}\) and \(z\) (see Figure 1(b)). Ultimately, total values are obtained by evaluating the feasibility amount of the points as well as the quality level of their objective values, and a point with the highest total value is introduced as an optimal solution. Briefly, the target of the FRI-FC problem is to find an infeasible point with acceptable infeasibility and a better objective value than \(z^{*}\) by taking advantage of the flexibility of the constraint(s). Similar to [12,14], we refer to such solutions as Super-Optima in this paper. If there is no super-optimum, then the algorithm gives the same optimal solution obtained by other algorithms using only the feasible region as the search domain. In this case, in order to find a super-optimum, a decision maker has to allow more freedom for the constraints to be perturbed (if this is possible based on the problem structure and the view of the human experts).
Figure 1: (a) Fuzzy evaluation of feasibility. (b) Fuzzy evaluation of objective values.

In this paper, we present a linearization approach for solving problem (1) in which \(\varphi\) is the product t-norm. In the proposed approach, the problem is initially converted into an equivalent linear optimization model. By taking advantage of the linearity of the equivalent model, we can use many efficient methods such as Dantzig's simplex algorithm, Karmarkar's algorithm and interior-point methods for solving the problem. It is shown that the proposed linearization approach is an efficient and fast method that can find better solutions than those obtained by other related algorithms. In contrast to the max-min algorithm [12], which stops once the first super-optimum is found, the proposed algorithm can find the best super-optimum. Moreover, in contrast to the modified PSO [14], the current approach provides exact solutions for the problem. The rest of the paper is organized as follows. Section 2 takes a brief look at some basic results on the feasible solutions set of problem (2). In Section 3, the fuzzy constraints of problem (1) are precisely defined by employing some membership functions. These membership functions are used for evaluating the feasibility and optimality amounts of points. Subsequently, problem (1) is transformed into an equivalent problem in which the constraints are linear. Two simplification rules are presented in Section 4, and the experimental results are reported in Section 5.
## 2 Some previous theoretical results
Consider problem (2) in which \(\varphi\) is the product t-norm and let \(x^{*}\) be its optimal solution with objective value \(z^{*}=c^{T}x^{*}\). Additionally, suppose there exists at least one flexible constraint in the problem, i.e., there are some points that violate the constraints and are still permissible to be solutions for the problem based on the decision maker's view. Based on these assumptions, we can define problem (1) and find a better solution than \(x^{*}\) for problem (2) by taking advantage of the flexibility of some constraints. However, if the flexibility of the constraints is not sufficient, \(x^{*}\) is given again as an optimal solution. In this case, in order to find a super-optimum, a decision maker has to consider more freedom for constraints to be perturbed. At first, we mention some previously obtained results that are used throughout the paper. For the sake of simplicity, we let \(S(A,B)\) denote the feasible solutions set of problem (2), that is, \(S(A,B)=\{x\in[0,1]^{n}:A\varphi x\leq b\}\). So, similar to problem (1), we can rewrite \(S(A,B)\) in terms of the rows \(a_{i}\) (\(i\in I\)) of matrix \(A\) as \(S(A,B)=\{x\in[0,1]^{n}:a_{i}\varphi x\leq b_{i},\,i\in I\}\), where the constraints mean \(a_{i}\varphi x=\max_{j=1}^{n}\{\varphi(a_{ij},x_{j})\}=\max_{j=1}^{n}\{a_{ij}x_{j}\}\leq b_{i},\forall i\in I\).
**Definition 1. (a)** For each \(j\in J\) let \(I(j)=\{i\in I:a_{ij}>b_{i}\}\). **(b)** Let \(\tilde{x}=(\tilde{x}_{j})_{1\times n}\) be an \(n\)-dimensional vector whose components are defined as \(\tilde{x}_{j}=\underset{i\in I(j)}{min}\{(b_{i}/a_{ij})\},\forall j\in J\) (with \(\tilde{x}_{j}=1\) if \(I(j)=\emptyset\)).
**Theorem 1. (a)** \(S(A,B)=[0,\tilde{x}]\). In other words, \(S(A,B)\) is a cube including all points \(x\in[0,1]^{n}\) such that \(0\leq x_{j}\leq\tilde{x}_{j},\forall j\in J\). **(b)** Solution \(x^{*}=({x_{1}}^{*},{x_{2}}^{*},...,{x_{n}}^{*})\) as defined below is the optimal solution for problem (2).
\[x_{j}^{*}=\begin{cases}\tilde{x}_{j}&,c_{j}<0\\ 0&,c_{j}\geq 0\end{cases} \tag{3}\]
**Proof.** See Theorems 3 and 4 in [14].
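Definition 1 and Theorem 1 translate directly into code. The sketch below is ours (not from the paper); following the usual convention, we set \(\tilde{x}_{j}=1\) when \(I(j)\) is empty, so that an unconstrained component is only bounded by the unit interval.

```python
import numpy as np

def maximum_solution(A, b):
    """x~ from Definition 1: x~_j = min_{i in I(j)} b_i / a_ij with I(j) = {i : a_ij > b_i}."""
    m, n = A.shape
    x_tilde = np.ones(n)                      # convention: x~_j = 1 when I(j) is empty
    for j in range(n):
        I_j = np.where(A[:, j] > b)[0]
        if I_j.size > 0:
            x_tilde[j] = np.min(b[I_j] / A[I_j, j])
    return x_tilde

def crisp_optimum(A, b, c):
    """Optimal solution x* of problem (2) according to Theorem 1(b) and (3)."""
    x_tilde = maximum_solution(A, b)
    return np.where(c < 0, x_tilde, 0.0)
```

On the data of Test Problem A.1 in Appendix A, for instance, this yields the solution \(x^{*}\) and the objective value \(c^{T}x^{*}\approx-0.8741\) reported in Tables 1 and 2.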
**Theorem 2** (first simplification). Consider problem (2) and let \(A^{\prime}=(a^{\prime}_{ij})_{m\times n}\) be the matrix resulting from matrix \(A=(a_{ij})_{m\times n}\) as follows. For \(i\in I\) and \(j_{0}\in J\), if there exists \(k\in I\) such that \(a_{ij_{0}}\geq b_{i}\), \(a_{kj_{0}}>b_{k}\) and \(b_{i}/a_{ij_{0}}>b_{k}/a_{kj_{0}}\), we set \(a^{\prime}_{ij_{0}}=0\); otherwise, \(a^{\prime}_{ij_{0}}=a_{ij_{0}}\). Then, \(S(A^{\prime},B)=S(A,B)\).
**Proof.** Suppose that \(a_{ij_{0}}\geq b_{i}\), \(a_{kj_{0}}>b_{k}\) and \(b_{i}/a_{ij_{0}}>b_{k}/a_{kj_{0}}\). We show that "resetting \(a_{ij_{0}}\) to zero" has no effect on \(S(A,B)\); that is, \(a_{i}\varphi x=\max_{j=1}^{n}\{a_{ij}x_{j}\}\leq b_{i}\) is equivalent to \(\max_{j=1,j\neq j_{0}}^{n}\{a_{ij}x_{j}\}<b_{i},\forall x\in S(A,B)\). To this end, it is sufficient to prove \(a_{ij_{0}}x_{j_{0}}<b_{i},\forall x\in S(A,B)\). From \(\max_{j=1}^{n}\{a_{ij}x_{j}\}\leq b_{i}\), it is obvious that \(a_{ij_{0}}x_{j_{0}}>b_{i}\) never holds. So, assume that \(a_{ij_{0}}x_{j_{0}}=b_{i}\). Then, \(x_{j_{0}}=b_{i}/a_{ij_{0}}>b_{k}/a_{kj_{0}}\). Therefore, \(a_{kj_{0}}x_{j_{0}}>b_{k}\), which implies \(\max_{j=1}^{n}\{a_{kj}x_{j}\}>b_{k}\). This contradicts \(x\in S(A,B)\) and the proof is complete.
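The first simplification of Theorem 2 can likewise be sketched as a straightforward scan over the entries of \(A\); the function name and the guard against zero entries below are our own choices, not the paper's.

```python
import numpy as np

def first_simplification(A, b):
    """Theorem 2: set a_ij0 to zero whenever a_ij0 >= b_i and some row k satisfies
    a_kj0 > b_k and b_i / a_ij0 > b_k / a_kj0."""
    A_prime = A.copy()
    m, n = A.shape
    for j0 in range(n):
        for i in range(m):
            if A[i, j0] >= b[i] and A[i, j0] > 0:      # guard against a_ij0 = b_i = 0
                for k in range(m):
                    if A[k, j0] > b[k] and b[i] / A[i, j0] > b[k] / A[k, j0]:
                        A_prime[i, j0] = 0.0
                        break
    return A_prime
```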
## 3 A mathematical model for Fuzzy constraints and equivalent problems
In this section, we consider the constraints as well as the objective function of problem (1) as fuzzy sets by defining their associated membership functions. Via this approach, all points (whether feasible or not) in problem (2) are treated as feasible ones in (1) with different degrees of membership in the interval \([0,1]\). Actually, a point being in the feasible set of (2) belongs to that of (1) with the degree of membership equal to one, and a farther point from the feasible set of (2) gets a lower degree of membership. To treat the \(i\)'th fuzzy inequality of problem (1), we employ the same membership functions used in [12] as follows:
\[\mu(a_{i}\varphi x)=\begin{cases}1&,a_{i}\varphi x\leq b_{i}\\ 1-\frac{a_{i}\varphi x-b_{i}}{d_{i}}&,b_{i}\leq a_{i}\varphi x\leq b_{i}+d_{i} \\ 0&,a_{i}\varphi x\geq b_{i}+d_{i}\end{cases},i=1,2,...m \tag{4}\]
where each \(d_{i}\) (\(i=1,2,...,m\)) is initially determined as the limit of the admissible violation of the \(i\)'th inequality. From relation (4), the \(i\)'th membership function is equal to \(1\) if the \(i\)'th constraint of problem (2) is satisfied, \(0\) if the constraint is violated beyond its admissible limit \(d_{i}\), and decreases linearly from \(1\) to \(0\) in between.
**Definition 2**.: For each \(x\in[0,1]^{n}\) the crisp constraints violation vector at point \(x\) is an \(m\times 1\) vector \(CCV(x)=(CCV(x)_{i})_{m\times 1}\) whose \(i\)'th component is defined
as follows:
\[CCV(x)_{i}=max\{0,a_{i}\varphi x-b_{i}\} \tag{5}\]
Also, we define the fuzzy constraints violation vector at point \(x\) as an \(m\times 1\) vector \(FCV(x)=(FCV(x)_{i})_{m\times 1}\) whose \(i\)'th component is
\[FCV(x)_{i}=max\{0,a_{i}\varphi x-(b_{i}+d_{i})\} \tag{6}\]
From the above-mentioned definition, relations (5) and (6) show how much point \(x\) violates \(i\)'th constraint of problem (2) (i.e., \(a_{i}\varphi x\leq b_{i}\)) and that of problem (1) (i.e., \(a_{i}\varphi x\circ b_{i}\)), respectively. Similar to (4), for the objective function of problem (1), we define
\[\mu(c^{T}x)=\begin{cases}1&,c^{T}x\leq z_{0}\\ 1-\frac{c^{T}x-z_{0}}{d_{0}}&,z_{0}\leq c^{T}x\leq z_{0}+d_{0}\\ 0&,c^{T}x\geq z_{0}+d_{0}\end{cases} \tag{7}\]
where \(z_{0}=c^{T}x^{*}-vd_{0}\) for parameters \(v\in(0,1)\) and \(d_{0}\geq 0\). From relation (7), we have \(\mu(z_{0})=1\), \(\mu(c^{T}x^{*})=1-v\) and \(\mu(z_{0}+d_{0})=0\). The parameters \(v\) and \(d_{0}\) determine the symmetry and the length of the interval \([z_{0},z_{0}+d_{0}]=[c^{T}x^{*}-vd_{0},c^{T}x^{*}+(1-v)d_{0}]\), respectively. If \(v\approx 0\) (\(v\approx 1\)), then \(\mu(c^{T}x^{*})\approx 1\) (\(\mu(c^{T}x^{*})\approx 0\)), which means the optimal solution of problem (2), \(x^{*}\), has a high (low) degree of satisfaction. If \(v=0.5\), this interval becomes the closure of the symmetric \(\varepsilon\)-neighborhood around \(c^{T}x^{*}\) with radius \(\varepsilon=0.5d_{0}\). In this case, point \(x^{*}\) is interpreted as a solution whose objective value has a mediocre satisfaction amount. Also, parameter \(d_{0}\) determines the length of the interval in which we want to examine the attainability (or non-attainability) of a better objective function value than \(c^{T}x^{*}\); that is, we want to find a solution with a better objective function value than \(c^{T}x^{*}\) in this interval. Obviously, if \(d_{0}\) is selected very small (\(d_{0}\approx 0\)), then \(x^{*}\) can be considered as the best solution. In other words, this interval is actually an approximate domain in which we expect to be able to find a better objective function value than \(c^{T}x^{*}\) by partly perturbing some constraints. In [14], a discussion was provided on the initial setup of the parameters \(v\) and \(d_{0}\) and their influence on the convergence of the algorithm. Also, we refer the reader to [12] for a more detailed analysis of the influence of these parameters and some theoretical and experimental aspects that should be considered in practice.
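The membership functions (4) and (7), together with the violation vector (5) and the total value introduced in Definition 3 below, are easy to evaluate numerically. The following sketch uses the clipped affine form directly; the helper names are ours and not the paper's.

```python
import numpy as np

def mu_constraints(A, b, d, x):
    """Relation (4): membership of every fuzzy constraint a_i (phi) x <~ b_i."""
    lhs = np.max(A * x[None, :], axis=1)              # max-product composition
    return np.clip(1.0 - (lhs - b) / d, 0.0, 1.0)

def mu_objective(c, x, z0, d0):
    """Relation (7): satisfaction of the objective value c^T x."""
    return float(np.clip(1.0 - (c @ x - z0) / d0, 0.0, 1.0))

def ccv(A, b, x):
    """Relation (5): crisp constraints violation vector CCV(x)."""
    return np.maximum(0.0, np.max(A * x[None, :], axis=1) - b)

def total_value(A, b, d, c, x, z0, d0):
    """Definition 3 below: mu_T(x) = min{ mu_0(x), min_i mu(a_i (phi) x) }."""
    return min(mu_objective(c, x, z0, d0), float(np.min(mu_constraints(A, b, d, x))))
```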
By considering relations (4) and (7), we can express both the feasibility and optimality amounts of a point by one variable as defined in the following definition.
**Definition 3**.: For each \(x\in[0,1]^{n}\), let \(\mu_{0}(x)=\mu(c^{T}x)\) (i.e., the optimality value of \(x\)) and \(\mu_{F}(x)=\min_{i=1}^{m}\{\mu(a_{i}\varphi x)\}\) (i.e., the feasibility value of \(x\)). Then, the total value of \(x\) is defined by \(\mu_{T}(x)=\min\{\mu_{0}(x),\mu_{F}(x)\}=\min\{\mu(c^{T}x),\min_{i=1}^{m}\{\mu(a_{i}\varphi x)\}\}\).
Definition 3 defines the whole space \(S=[0,1]^{n}\) as a fuzzy set by associating the membership value \(\mu_{T}(x)\) to each \(x\in S\). Based on this fact, our purpose becomes equivalent to finding a point with the greatest total value among all \(x\in[0,1]^{n}\). In other words, we can express problem (1) as the following problem:
\[\underset{x\in S}{max}\{\mu_{T}(x)\} \tag{8}\]
or equivalently
\[\underset{x\in[0,1]^{n}}{max}\{min\{\mu(c^{T}x),\underset{i=1}{min}\{\mu(a_{i }\varphi x)\}\}\} \tag{9}\]
Therefore, by considering relations (4) and (7), problem (9) is rewritten as
\[\underset{x\in[0,1]^{n}}{\max}\{\min\{B_{0}-D_{0}(c^{T}x),\ \min_{i=1}^{m}\{B_{i}-D_{i}(a_{i}\varphi x)\}\}\} \tag{10}\]
where \(D_{i}=\frac{1}{d_{i}}\) for \(i\in I\cup\{0\}\), \(B_{0}=1+\frac{z_{0}}{d_{0}}\) and \(B_{i}=1+\frac{b_{i}}{d_{i}}\) for \(i\in I\). Now, by introducing the auxiliary variable \(\lambda\), problem (10) can be transformed into the following equivalent programming problem:
\[\begin{array}{l}\max\;\;\lambda\\ \lambda\leq B_{0}-D_{0}(c^{T}x)\\ \lambda\leq B_{i}-D_{i}(a_{i}\varphi x),\;\;i\in I\\ x\in[0,1]^{n}\end{array} \tag{11}\]
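Problem (11) is not yet linear because of the inner maximum in \(a_{i}\varphi x=\max_{j}a_{ij}x_{j}\); however, each constraint \(\lambda\leq B_{i}-D_{i}(a_{i}\varphi x)\) is equivalent to the family of linear inequalities \(\lambda+D_{i}a_{ij}x_{j}\leq B_{i}\), \(j\in J\) (Lemma 1 of Section 4 states the same fact in terms of \(\lambda_{ij}\)). The sketch below, which is our reconstruction rather than the paper's own code, assembles this linear program and solves it with SciPy's `linprog` for illustration (the paper itself mentions the simplex algorithm for the same purpose); all helper and variable names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def solve_linearized_model(A, b, c, d, z0, d0):
    """max lambda  s.t.  D0*c^T x + lambda <= B0,
                         Di*a_ij*x_j + lambda <= Bi  (all i, j),
                         x in [0,1]^n."""
    m, n = A.shape
    D, B = 1.0 / d, 1.0 + b / d
    D0, B0 = 1.0 / d0, 1.0 + z0 / d0

    obj = np.zeros(n + 1)
    obj[-1] = -1.0                                    # linprog minimizes, so use -lambda

    rows = [np.append(D0 * c, 1.0)]                   # objective-membership constraint
    rhs = [B0]
    for i in range(m):                                # constraint memberships
        for j in range(n):
            row = np.zeros(n + 1)
            row[j], row[-1] = D[i] * A[i, j], 1.0
            rows.append(row)
            rhs.append(B[i])

    bounds = [(0.0, 1.0)] * n + [(None, None)]        # x in [0,1]^n, lambda free
    res = linprog(obj, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=bounds, method="highs")
    return res.x[:n], res.x[-1]                       # candidate super-optimum and lambda*
```

The simplifications of Section 4 would further remove the columns with \(c_{j}>0\) and the redundant pairs \((i,j)\) with \(a_{ij}\leq b_{i}\) before building the constraint rows.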
As mentioned before, the modified PSO algorithm [14] was presented for solving problem (1) defined by an arbitrary continuous t-norm. Furthermore, by converting problem (1) into equivalent problem (8), we can use many heuristic algorithms which have been proposed for solving unconstrained optimization problems or those problems with some lower and upper bounds on the variables as their constraints. In Section 5, we apply the simplex algorithm to problem (13) and compare the generated results to those obtained by the modified PSO [14] and some well-known meta-heuristic methods which have been applied to many practical optimization problems.
## 4 Simplification Rules
As mentioned above, problems (12) and (13) are mathematically equivalent. So, by simplifying problem (12), we will have a more simplified linear optimization model (problem (13)) for solving. In this section, two simplification rules are presented to convert problem (12) into a more simplified equivalent form.
**Definition 4.** Let \(\lambda_{0}(x)=B_{0}-D_{0}(c^{T}x)\), \(\lambda_{i}(x)=B_{i}-D_{i}(\max_{j=1}^{n}a_{ij}x_{j})\), \(\forall i\in I\), \(\lambda_{ij}(x_{j})=B_{i}-D_{i}a_{ij}x_{j}\), \(\forall i\in I\) and \(\forall j\in J\), and \(\Lambda(x)=\min_{i=0}^{m}\lambda_{i}(x)\).
**Corollary 1.** Functions \(\lambda_{i}(x)\), \(\forall i\in I\), are non-increasing continuous functions. Functions \(\lambda_{ij}(x_{j})\), \(\forall i\in I\) and \(\forall j\in J\), are strictly decreasing and function \(\lambda_{0}(x)\) is strictly decreasing with respect to component \(x_{j}\) if \(c_{j}>0\) and strictly increasing if \(c_{j}<0\).
By Definition 4, problem (12) can be expressed as follows:
\[\underset{x\in[0,1]^{n}}{\max}\{\min_{i=0}^{m}\{\lambda_{i}(x)\}\} \tag{14}\]
or equivalently,
\[\underset{x\in[0,1]^{n}}{max}\{\Lambda(x)\} \tag{15}\]
Theorem 3 provides a simplification rule for solving problem (14) by finding some components of the optimal solution.
**Theorem 3.** Suppose that \(x^{*}\) is an optimal solution of problem (14). Then, \(x_{j}{}^{*}=0\) for each \(j\in J\) such that \(c_{j}>0\).
**Proof.** Since (12) and (15) are equivalent problems, \(x^{*}\) is also optimal for (15). Thus, \(\Lambda(x^{*})\geq\Lambda(x)\) for each \(x\in[0,1]^{n}\). By contradiction, suppose that \({x_{j_{0}}}^{*}>0\) for some \(j_{0}\in J\) such that \(c_{j_{0}}>0\). Let \(x^{\prime}\in[0,1]^{n}\) be such that \(x_{j}^{\prime}={x_{j}}^{*}\) if \(c_{j}<0\) and \(x_{j}^{\prime}=0\) if \(c_{j}>0\). Then, from Corollary 1, we have \(\lambda_{i}(x^{\prime})\geq\lambda_{i}(x^{*})\), \(\forall i\in I\cup\{0\}\), with strict inequality for \(i=0\) (since \(c_{j_{0}}>0\) and \({x_{j_{0}}}^{*}>0\)), which contradicts the optimality of \(x^{*}\).
**Corollary 2**.: (second simplification). For each \(j\in J\) such that \(c_{j}>0\), we can assign \({x_{j}}^{*}=0\) and remove the \(j\)'th column of matrix \(A\). Therefore, in order to solve problem (12) (or problem (13)), it is sufficient to consider only the columns of matrix \(A\) that belong to \(J^{\prime}=\{j\in J:c_{j}<0\}\).
**Lemma 1**.: Let \(i\in I\). Then, \(\lambda_{i}(x)=\min_{j\in J}\{\lambda_{ij}(x_{j})\}\), \(\forall x\in[0,1]^{n}\).
**Proof.** Let \(x\in[0,1]^{n}\) and \(\lambda_{ij_{0}}(x_{j_{0}})=\min_{j\in J}\{\lambda_{ij}(x_{j})\}\). So, from Definition 4 we have \(B_{i}-D_{i}a_{ij_{0}}x_{j_{0}}\leq B_{i}-D_{i}a_{ij}x_{j}\), \(\forall j\in J\). Therefore, \(a_{ij}x_{j}\leq a_{ij_{0}}x_{j_{0}}\), \(\forall j\in J\), which together with Definition 4 implies that \(\lambda_{i}(x)=B_{i}-D_{i}(\max_{j=1}^{n}a_{ij}x_{j})=B_{i}-D_{i}a_{ij_{0}}x_{j_{0}}=\lambda_{ij_{0}}(x_{j_{0}})\).
From Corollary 2 and Lemma 1, problem (14) is converted into the following simplified problem:
\[\underset{x\in[0,1]^{n}}{\max}\{\min\{\lambda_{0}(x),\underset{i\in I,\,j\in J^{\prime}}{\min}\{\lambda_{ij}(x_{j})\}\}\} \tag{16}\]
**Lemma 2**.: Let \(i\in I\) and \(j\in J^{\prime}\). If \(a_{ij}\leq b_{i}\), then \(\lambda_{ij}(x_{j})\geq 1\), \(\forall x_{j}\in[0,1]\).
**Proof.** Suppose that \(a_{ij}\leq b_{i}\). Thus, \(a_{ij}x_{j}\leq b_{i}\) and therefore \(1-\frac{a_{ij}x_{j}-b_{i}}{d_{i}}\geq 1\), \(\forall x_{j}\in[0,1]\). Now, by substituting \(D_{i}=\frac{1}{d_{i}}\) and \(B_{i}=1+\frac{b_{i}}{d_{i}}\) in Definition 4, we have \(\lambda_{ij}(x_{j})=1-\frac{a_{ij}x_{j}-b_{i}}{d_{i}}\geq 1\).
**Theorem 4**.: Suppose the second simplification (Corollary 2) is done and \(i\in I\). Then, \(\lambda_{i}(x)=min_{j\in J^{\prime}_{i}}\{\lambda_{ij}(x_{j})\}\) where \(J^{\prime}_{i}=\{j\in J^{\prime}:a_{ij}>b_{i}\}\).
**Proof.** From Corollary 2 and Lemma 1, \(\lambda_{i}(x)=\min_{j\in J^{\prime}}\{\lambda_{ij}(x_{j})\}\), \(\forall i\in I\). Hence, \(\lambda_{i}(x)=\min\{\min_{j\in J^{\prime}_{i}}\{\lambda_{ij}(x_{j})\},\min_{j\in J^{\prime}-J^{\prime}_{i}}\{\lambda_{ij}(x_{j})\}\}\), \(\forall i\in I\). Since \(j\in J^{\prime}-J^{\prime}_{i}\) implies \(a_{ij}\leq b_{i}\), from Lemma 2 we obtain \(\lambda_{ij}(x_{j})\geq 1\), \(\forall j\in J^{\prime}-J^{\prime}_{i}\) and \(\forall x_{j}\in[0,1]\). Therefore, \(\min_{j\in J^{\prime}-J^{\prime}_{i}}\{\lambda_{ij}(x_{j})\}\geq 1\) and then \(\lambda_{i}(x)=\min_{j\in J^{\prime}_{i}}\{\lambda_{ij}(x_{j})\}\).
From Theorem 4, problem (16) is converted into the more simplified form (17) as follows:
\[\underset{x\in[0,1]^{n}}{\max}\{\min\{\lambda_{0}(x),\underset{i\in I,\,j\in J^{\prime}_{i}}{\min}\{\lambda_{ij}(x_{j})\}\}\} \tag{17}\]
**Corollary 3** (third simplification). **(a)** Suppose that \(i_{0}\in I\) and \(j_{0}\not\in J^{\prime}_{i_{0}}\). Then, "resetting \(a_{i_{0}j_{0}}\) to zero" has no effect on the optimal solution of problem (14) (or that of problem (12)). As a result, if \(j_{0}\not\in J^{\prime}_{i}\), \(\forall i\in I\), then we can remove the \(j_{0}\)'th column of matrix \(A\). **(b)** Let \(x^{*}\) be an optimal solution of problem (14). If \(j_{0}\not\in J^{\prime}_{i}\), \(\forall i\in I\), then \({x_{j_{0}}}^{*}=1\).
**Proof.** **(a)** Since \(j_{0}\not\in J_{i_{0}}^{\prime}\), \(\lambda_{i_{0}j_{0}}(x_{j_{0}})\) does not appear in (17) and therefore we can take \(\lambda_{i_{0}j_{0}}(x_{j_{0}})\) out of consideration. So, the value of \(a_{i_{0}j_{0}}\) plays no role in (17) and can be assigned to zero. **(b)** By contradiction, suppose \({x_{j_{0}}}^{*}<1\). Let \(x^{\prime}\in[0,1]^{n}\) be such that \(x^{\prime}_{j_{0}}=1\) and \(x^{\prime}_{j}=x_{j}{}^{*}\), \(\forall j\in J-\{j_{0}\}\). Then, \(\lambda_{0}(x^{\prime})>\lambda_{0}(x^{*})\) (since \(c_{j_{0}}<0\) for \(j_{0}\in J^{\prime}\)) and \(\underset{i\in I,\,j\in J^{\prime}_{i}}{\min}\{\lambda_{ij}(x^{\prime}_{j})\}=\underset{i\in I,\,j\in J^{\prime}_{i}}{\min}\{\lambda_{ij}(x^{*}_{j})\}\), which contradicts the optimality of \(x^{*}\).
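Operationally, the second and third simplifications amount to fixing variables and pruning index sets before the linear model is assembled. The sketch below is our reading of Corollaries 2 and 3 (it assumes, as the definition of \(J^{\prime}\) implicitly does, that no \(c_{j}\) is exactly zero):

```python
import numpy as np

def simplify_indices(A, b, c):
    """Corollaries 2 and 3: fix x_j = 0 when c_j > 0, keep J' = {j : c_j < 0},
    build J'_i = {j in J' : a_ij > b_i}, and fix x_j = 1 for every j in J'
    that belongs to no J'_i."""
    m, n = A.shape
    x_fixed = np.full(n, np.nan)                      # NaN marks a still-free variable
    x_fixed[c > 0] = 0.0                              # second simplification
    J_prime = [j for j in range(n) if c[j] < 0]
    J_prime_i = [[j for j in J_prime if A[i, j] > b[i]] for i in range(m)]
    active = {j for Ji in J_prime_i for j in Ji}
    for j in J_prime:
        if j not in active:                           # third simplification, part (b)
            x_fixed[j] = 1.0
    free = [j for j in J_prime if j in active]
    return x_fixed, free, J_prime_i
```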
## 5 Comparisons with other works and some numerical examples
In this section, we present the experimental results for evaluating the performance of our algorithm. In Section 5.1, the proposed algorithm is applied for solving the transformed equivalent problem (problem (13)) for the test problems described in Appendix A. The test problems have been randomly generated in different sizes by using product t-norm. For each test problem, we have a pair of problems that are associated with each other; one FRI-FC problem (problem (1)) and one FRI problem with ordinary (crisp) inequalities (problem (2)). From Theorem 1(a), we know that problem (2) is feasible and \(S(A,B)=[0,\tilde{x}]\) where the maximum solution \(\tilde{x}\) is obtained from Definition 1. Using this result, the feasible optimal solution \(x^{*}\) of problem (2) is directly given by Theorem 1(b). To perform a fair comparison, we use the same parameters for each experiment, that is, we set \(v=0.5\) and \(d_{i}=0.1\) for \(i=0,1,...,m\). Therefore, from (4) and (7) we have \(\mu_{T}(z_{0})=1\) and \(\mu_{T}(c^{T}x^{*})=1-v=0.5\). The equality \(\mu_{T}(c^{T}x^{*})=0.5\) means that \(x^{*}\) (the optimal solution of problem (2)) has a mediocre objective value for us. Actually, the target of the methods is to find a solution \(x^{**}\) such that \(c^{T}x^{**}\in[z_{0},z_{0}+d_{0}]\) and \(c^{T}x^{**}\) is as close as possible to \(z_{0}\) (the best case) by perturbing the constraints within the range determined by \(d_{i}=0.1\), \(i=0,1,...,m\). Finally, a comparison is made between the current method, the Modified PSO (MPSO) [14], Original PSO (OPSO) [24], Continuous Ant Colony Optimization (CACO) [45], Differential Evolution (DE) [38] and Harmony Search (HS) [11] algorithms. For this purpose, in Section 5.2, we apply these methods to the ten test problems described in Appendix A.
### Results of the linearization approach
In this section, the proposed algorithm is applied to the test problems described in Appendix A. Table 1 includes the feasibility values \(\mu_{F}(x^{**})\), optimality values \(\mu_{0}(x^{**})\) and total values \(\mu_{T}(x^{**})\) for the best solutions \(x^{**}\) found by the linearization algorithm. In each case, to simplify the comparison between the current optimum \(x^{*}\) (the optimal solution for problem (2)) and the best super-optimum \(x^{**}\) found by the proposed algorithm, the values \(c^{T}x^{*}\) and \(c^{T}x^{**}\) have been reported. Also, we report the admissible ranges \([z_{0},z_{0}+d_{0}]\) for each test problem to determine whether the proposed method can find a super-optimum \(x^{**}\) with \(c^{T}x^{**}\in[z_{0},z_{0}+d_{0}]\) such that \(c^{T}x^{**}<c^{T}x^{*}\). Additionally, for each test problem, Table 2 presents the optimal solutions \(x^{*}\) of problem (2), the solutions \(x^{**}\) found by the current algorithm and the crisp constraints violation vectors at the points \(x^{**}\).
As shown in Table 1, we have \(\mu_{T}(x^{**})>\mu_{T}(c^{T}x^{*})=0.5\) in all the cases, that means the proposed algorithm can find solutions with higher quality than \(x^{*}\). According to Table 1, objective values \(c^{T}x^{**}\) belong to the admissible intervals \([z_{0},z_{0}+d_{0}]\) (and very close to the best cases \(z_{0}\)) and are strictly less than \(c^{T}x^{*}\) for all the test problems. Therefore, the linearization method produces better solutions (i.e., solutions with less objective values) compared to the optimal solutions of problem (2). Table 2 includes solutions \(x^{*}\), \(x^{**}\) and the crisp constraints violation vectors \(CCV(x^{**})\) for each test problem. The results, in Table 2, show that the proposed method produces optimal solutions with admissible infeasibilities; more precisely, for each \(i\in I\), we have \(CCV(x^{**})_{i}<d_{i}=0.1\). As a key result, we see that the best super-optimum \(x^{**}\) satisfies \(\mu_{F}(x^{**})=\mu_{0}(x^{**})\) for each test problem. As it will be shown in the next section, this equality does not hold true for each super-optimum.
### Linearization approach versus the other related methods
In this section, a comparison is made between the current linearization method and the modified PSO algorithm [14] proposed for solving the FRI-FC problems. As mentioned before, since problem (1) is equivalent to problem (9), many heuristic algorithms may be used for solving the problem. So, the generated solutions for the linearization method and modified PSO are also compared with some well-known meta-heuristics such as Original PSO (OPSO) [24], Continuous Ant Colony Optimization (CACO) [45], Differential Evolution (DE) [38] and Harmony Search (HS) [11] algorithms. For the heuristic algorithms, 30 experiments are performed for each test problem. The maximum number of iterations is equal to 100. The parameters of the PSO algorithms that are used in each case are as follows. Swarm size is set to 10, \(c_{1}=C_{2}=2\) and inertia factor \(w\) is decreasing linearly from 1 to 0 by a damping factor of [14,24]. For CACO, \(m=10\) (number of ants used in an iteration), \(\xi=0.85\)
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Test problems & \(\mu_{F}(x^{**})\) & \(\mu_{0}(x^{**})\) & \(\mu_{T}(x^{**})\) & \(c^{T}x^{**}\) & \(c^{T}x^{*}\) & \([z_{0},z_{0}+d_{0}]\) \\ \hline A.1 & 0.9910 & 0.9910 & 0.9910 & - 0.9232 & - 0.8741 & [- 0.9241, - 0.8241] \\ \hline A.2 & 0.9933 & 0.9933 & 0.9933 & - 11.3722 & - 11.3228 & [- 11.3728, - 11.2728] \\ \hline A.3 & 0.9765 & 0.9765 & 0.9765 & - 1.3501 & - 1.3024 & [- 1.3524, - 1.2524] \\ \hline A.4 & 0.9916 & 0.9916 & 0.9916 & - 9.7886 & - 9.7395 & [- 9.7895, - 9.6895] \\ \hline A.5 & 0.9793 & 0.9793 & 0.9793 & - 1.4395 & - 1.3916 & [- 1.4416, - 1.3416] \\ \hline A.6 & 0.9809 & 0.9809 & 0.9809 & - 0.1638 & - 0.1157 & [- 0.1657, - 0.0657] \\ \hline A.7 & 0.9647 & 0.9647 & 0.9647 & - 0.0820 & - 0.0356 & [- 0.0856, 0.0144] \\ \hline A.8 & 0.9373 & 0.9373 & 0.9373 & - 0.1337 & - 0.0899 & [- 0.14, - 0.04] \\ \hline A.9 & 0.9907 & 0.9907 & 0.9907 & - 1.0552 & - 1.0061 & [- 1.0561, - 0.9561] \\ \hline A.10 & 0.9914 & 0.9914 & 0.9914 & - 1.7222 & - 1.6731 & [- 1.7231, - 1.6231] \\ \hline \end{tabular}
\end{table}
Table 1: A comparison between the quality of the best super-optima (i.e., solutions \(x^{**}\)) found by the proposed linearization approach and that of the optimal solutions of problem (2) (i.e., solutions \(x^{*}\)) for the test problems A.1 - A.10.
(the speed of convergence), \(q=10^{-4}\) (locality of the search process) and \(k=50\) (archive size) [45], and for DE, \(c_{T}=0.95\) (crossover rate) and \(F=0.5\) (mutation scaling factor) [38], and for HS, \(HMS=10\) (size of harmony memory), \(HMCR=0.825\) (harmony memory consideration rate) and \(PAR=0.35\) (Pitch Adjustment Rate) [11].
Table 3 presents the total values of the best solutions obtained by the algorithms. As shown in this table, CACO, DE and HS have the worst results (i.e., solutions \(x\) with the least total values \(\mu_{T}(x)=0\)) in most cases (70% of the test problems). Moreover, for the remaining 30% of the cases, they could not necessarily find a super-optimum (i.e., solutions \(x\) such that \(\mu_{T}(x)>\mu_{T}(x^{*})\) ); A.3 for CACO, and A.4 for DE and HS. Also, OPSO produced the worst solutions in half of the cases and could find super-optima only for 50% of the test problems. On the other hand, the modified PSO could find a super-optimum for each test problem that is very close to the best super-optimum \(x^{**}\) obtained by the linearization approach. Particularly, for the test problem A.8, we have \(\mu_{T}(x^{*}_{MPSO})=\mu_{T}(x^{**})\). However, for test problem A.4, although \(\mu_{T}(x^{*}_{MPSO})>\mu_{T}(x^{*})\), the distance between \(\mu_{T}(x^{*}_{MPSO})\) and \(\mu_{T}(x^{**})\) is very large in average.
In Table 4, the results have been averaged over 30 runs and the average best-so-far (Avg), median best-so-far (Mdn) in the last iterations and the standard deviations (Sd) are reported for the heuristic algorithms. The results in Table 4 show that the modified PSO produces better solutions with a higher convergence rate when compared against the other heuristic algorithms. As this table illustrates, the modified PSO also attains considerably smaller standard deviations than the other heuristic algorithms on most of the test problems.
\begin{table}
\begin{tabular}{|c|l|l|} \hline A.1 & \(x^{**}=[0.1859,0.115,0.0165,0,0,0]\) \\ & \(x^{**}=[0.1964,0.1215,0.0174,0,0,0]\) & \(CCV(x^{**})=[0,0.0009,0,0]\) \\ \hline A.2 & \(x^{**}=[0,0.8702,0,0,0.0165,0.5835]\) & \\ & \(x^{**}=[0,0.8731,0,0,0,6506,0.5854]\) & \(CCV(x^{**})=[0,0.0007,0.0007,0,0]\) \\ \hline A.3 & \(x^{**}=[0,0.0979,0,1.334,0,0.1532]\) & \\ & \(x^{**}=[0,0.1015,0,0.1383,0,0.1588]\) & \(CCV(x^{**})=[0,0,0,0,0.0023,0]\) \\ \hline A.4 & \(x^{**}=[0.2069,0,0,0.0339,0,0,0.8705]\) & \\ & \(x^{**}=[0.2123,0,0,0.0348,0,0,0.8717]\) & \(CCV(x^{**})=[0.0008,0,0,0.0008,0]\) \\ \hline A.5 & \(x^{**}=[0.3282,0,0,0.1228,0,0,1.1221,0]\) & \\ & \(x^{**}=[0.3319,0,0,0.1302,0,0,1.294,0]\) & \(CCV(x^{**})=[0,0.0021,0.0021,0,0,0]\) \\ \hline A.6 & \(x^{**}=[0,0,0,0,0.0058,0.0067,0.0056]\) & \(CCV(x^{**})=[0,0,0,0.0019,0,0,0]\) \\ & \(x^{**}=[0,0,0,0.0082,0.0094,0.0079]\) & \(CCV(x^{**})=[0,0,0,0,0.0019,0,0,0]\) \\ \hline A.7 & \(x^{**}=[0.0057,0,0,0,0,0,0,0,0.0043]\) & \\ & \(x^{**}=[0.0131,0,0,0,0,0,0,0,0]\) & \(CCV(x^{**})=[0,0,0,0,0,0,0.0035]\) \\ \hline A.8 & \(x^{**}=[0.0613,0,0,0,0,0.0163,0,0,0]\) & \\ & \(x^{**}=[0.0910,0,0,0,0,0.0242,0,0,0,0]\) & \(CCV(x^{**})=[0,0,0,0,0.0063,0,0,0]\) \\ \hline A.9 & \(x^{**}=[0,0.0234,0.1036,0,0,0.0235,0.0228,0,0,0]\) & \\ & \(x^{**}=[0,0.0245,0.1087,0,0,0.246,0.0240,0,0,0]\) & \(CCV(x^{**})=[0,0.0009,0,0,0,0,0,0,0]\) \\ \hline A.10 & \(x^{**}=[0,0.0295,0,0,0,0.0335,0.0633,0,0,0,0,1846]\) & \\ & \(x^{**}=[0,0.0304,0,0,0.0345,0.0652,0,0,0,0,1900]\) & \(CCV(x^{**})=[0.0009,0,0,0,0,0,0,0]\) \\ \hline \end{tabular}
\end{table}
Table 2: The best super-optima, \(x^{**}\), and their crisp constraints violation vectors \(CCV(x^{**})\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Test problems & \(\mu_{T}(x^{**})\) & \(\mu_{T}(x_{\text{MPSO}}^{*})\) & \(\mu_{T}(x_{\text{OPSO}}^{*})\) & \(\mu_{T}(x_{\text{CACO}}^{*})\) & \(\mu_{T}(x_{\text{DE}}^{*})\) & \(\mu_{T}(x_{\text{HS}}^{*})\) \\ \hline A.1 & 0.9910 & 0.9904 & 0.9608 & 0.9632 & 0.5638 & 0.7519 \\ \hline A.2 & 0.9933 & 0.9924 & 0.9929 & 0.9895 & 0.9480 & 0.9480 \\ \hline A.3 & 0.9765 & 0.9717 & 0.6582 & 0.0834 & 0 & 0 \\ \hline A.4 & 0.9916 & 0.9901 & 0.9494 & 0 & 0.2619 & 0.1217 \\ \hline A.5 & 0.9793 & 0.9749 & 0 & 0 & 0 & 0 \\ \hline A.6 & 0.9809 & 0.9788 & 0.9390 & 0 & 0 & 0 \\ \hline A.7 & 0.96472 & 0.96471 & 0 & 0 & 0 & 0 \\ \hline A.8 & 0.9373 & 0.9373 & 0 & 0 & 0 & 0 \\ \hline A.9 & 0.9907 & 0.9870 & 0 & 0 & 0 & 0 \\ \hline A.10 & 0.9914 & 0.9875 & 0 & 0 & 0 & 0 \\ \hline \end{tabular}
\end{table}
Table 3: Total values of the best solutions \(x^{**}\), \(x^{*}_{\text{MPSO}}\), \(x^{*}_{\text{OPSO}}\), \(x^{*}_{\text{CACO}}\), \(x^{*}_{\text{DE}}\) and \(x^{*}_{\text{HS}}\) found by the linearization approach, MPSO, OPSO, CACO, DE and HS algorithms, respectively, for the test problems of Appendix A.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Test problems & & MPSO & OPSO & CACO & DE & HS \\ \hline \multirow{3}{*}{A.1} & Avg & 0.9759 & 0.091549 & 0.059136 & 0.018792 & 0.045539 \\ \cline{2-6} & Mdn & 0.9787 & 0 & 0 & 0 & 0 \\ \cline{2-6} & Sd & 0.0035 & 2.7014 & 1.3873 & 1.3413 & 1.5425 \\ \hline \hline \multirow{3}{*}{A.2} & Avg & 0.93161 & 0.34901 & 0.6167 & 0.49248 & 0.49248 \\ \cline{2-6} & Mdn & 0.97558 & 0 & 0.84622 & 0.57185 & 0.57185 \\ \cline{2-6} & Sd & 0.58775 & 2.8787 & 2.5854 & 0.89195 & 0.89195 \\ \hline \hline \multirow{3}{*}{A.3} & Avg & 0.83368 & 0.038722 & 0.0027787 & 0 & 0 \\ \cline{2-6} & Mdn & 0.87382 & 0 & 0 & 0 & 0 \\ \cline{2-6} & Sd & 0.62053 & 2.7763 & 1.885 & 1.9039 & 1.7758 \\ \hline \hline \multirow{3}{*}{A.4} & Avg & 0.5177 & 0.031648 & 0 & 0.0087283 & 0.0040561 \\ \cline{2-6} & Mdn & 0.50478 & 0 & 0 & 0 & 0 \\ \cline{2-6} & Sd & 0.40083 & 2.1511 & 1.8494 & 1.8822 & 2.1892 \\ \hline \hline \multirow{3}{*}{A.5} & Avg & 0.92709 & 0 & 0 & 0 & 0 \\ \cline{2-6} & Mdn & 0.96284 & 0 & 0 & 0 & 0 \\ \cline{2-6} & Sd & 0.092625 & 4.3403 & 2.0897 & 1.9291 & 2.5145 \\ \hline \hline \multirow{3}{*}{A.6} & Avg & 0.93838 & 0.031301 & 0 & 0 & 0 \\ \cline{2-6} & Mdn & 0.95304 & 0 & 0 & 0 & 0 \\ \cline{2-6} & Sd & 0.0024304 & 1.6968 & 1.4244 & 1.505 & 1.68 \\ \hline \hline \multirow{3}{*}{A.7} & Avg & 0.96435 & 0 & 0 & 0 & 0 \\ \cline{2-6} & Mdn & 0.96422 & 0 & 0 & 0 & 0 \\ \cline{2-6} & Sd & 1.2739e-05 & 4.0588 & 2.4427 & 2.0715 & 2.7796 \\ \hline \hline \multirow{3}{*}{A.8} & Avg & 0.93015 & 0 & 0 & 0 & 0 \\ \cline{2-6} & Mdn & 0.93718 & 0 & 0 & 0 & 0 \\ \cline{2-6} & Sd & 0.0020195 & 5.0862 & 3.454 & 3.3177 & 2.7028 \\ \hline \hline \multirow{3}{*}{A.9} & Avg & 0.94184 & 0 & 0 & 0 & 0 \\ \cline{2-6} & Mdn & 0.92721 & 0 & 0 & 0 & 0 \\ \cline{2-6} & Sd & 0.0027109 & 2.4319 & 1.4109 & 1.1276 & 0.90631 \\ \hline \hline \multirow{3}{*}{A.10} & Avg & 0.91627 & 0 & 0 & 0 & 0 \\ \cline{2-6} & Mdn & 0.95904 & 0 & 0 & 0 & 0 \\ \cline{2-6} & Sd & 0.0082584 & 2.3339 & 1.7099 & 1.796 & 1.788 \\ \hline \end{tabular}
\end{table}
Table 4: A comparison between the results found by the MPSO, OPSO, CACO, DE and HS algorithms for the test problems of Appendix A. The results have been averaged over 30 runs. Maximum number of iterations=100.
As illustrated in Table 7, the p-values are mostly lower than \(10^{-7}\), which indicates a significant difference between the algorithms. These results show that the differences between the optimal solutions found by the OPSO, CACO, DE and HS algorithms and the best super-optima found by the linearization approach are statistically significant. Moreover, for the modified PSO, the results indicate the closeness between these solutions and the best super-optima.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Test problems & Proposed Method & MPSO & OPSO & CACO & DE & HS \\ \hline A.1 & 0.036123 & 0.28818 & 0.15882 & 0.19304 & 0.14393 & 0.20819 \\ \hline A.2 & 0.099543 & 0.31729 & 0.17306 & 0.2198 & 0.1677 & 0.1677 \\ \hline A.3 & 0.071229 & 0.33334 & 0.18697 & 0.23123 & 0.17436 & 0.2457 \\ \hline A.4 & 0.063235 & 0.35112 & 0.19826 & 0.23182 & 0.17636 & 0.25039 \\ \hline A.5 & 0.092431 & 0.3857 & 0.21215 & 0.2583 & 0.19873 & 0.28562 \\ \hline A.6 & 0.088895 & 0.39704 & 0.21742 & 0.26146 & 0.20715 & 0.28364 \\ \hline A.7 & 0.041282 & 0.55216 & 0.25899 & 0.31076 & 0.24577 & 0.34284 \\ \hline A.8 & 0.026400 & 0.516 & 0.28584 & 0.33663 & 0.27376 & 0.3705 \\ \hline A.9 & 0.042752 & 0.5221 & 0.28127 & 0.33833 & 0.27599 & 0.36654 \\ \hline A.10 & 0.035890 & 0.59621 & 0.32267 & 0.38421 & 0.32517 & 0.41314 \\ \hline \end{tabular}
\end{table}
Table 6: Time involved in the execution of the methods.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Test problems & Proposed Method & MPSO & OPSO & CACO & DE & HS \\ \hline A.1 & 0.00056521 & 0.0022895 & 2.9873 & 2.1724 & 2.6526 & 2.3742 \\ \hline A.2 & 0.00047085 & 0.12277 & 2.8629 & 1.4587 & 0.52682 & 0.52682 \\ \hline A.3 & 0.0013702 & 0.12296 & 3.6869 & 3.3445 & 3.5706 & 3.9572 \\ \hline A.4 & 0.00058783 & 0.31098 & 3.3696 & 3.0948 & 3.1164 & 3.6959 \\ \hline A.5 & 0.0013822 & 0.019544 & 7.9953 & 7.3276 & 7.544 & 7.7492 \\ \hline A.6 & 0.0010926 & 0.0035921 & 2.3294 & 1.6659 & 2.3062 & 2.6638 \\ \hline A.7 & 0.0020163 & 0.0020349 & 8.5333 & 8.5898 & 8.7956 & 9.5084 \\ \hline A.8 & 0.0035273 & 0.0038748 & 10.7297 & 13.4616 & 12.9967 & 12.9733 \\ \hline A.9 & 0.00051485 & 0.003506 & 2.9832 & 2.0228 & 1.9556 & 1.7978 \\ \hline A.10 & 0.00047489 & 0.0061162 & 3.6554 & 2.6794 & 2.7964 & 2.9454 \\ \hline \hline MSE & 2.2866e-06 & 0.012737 & 32.1624 & 35.035 & 34.9926 & 37.3003 \\ \hline \end{tabular}
\end{table}
Table 5: Errors averaged over the last iterations of 30 runs.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Test & Difference between & Difference between & Difference between & Difference between & Difference between \\ problems & proposed method & proposed method & proposed method & proposed method & proposed method \\ & and MPSO & and OPSO & and CACO & and DE & and HS \\ \hline A-1 & 0.0151 & 0.89941 & 0.93182 & 0.97217 & 0.94542 \\ \hline A.2 & 0.06166 & 0.64426 & 0.37657 & 0.50079 & 0.50079 \\ \hline A.3 & 0.14283 & 0.93779 & 0.97373 & 0.97651 & 0.97651 \\ \hline A.4 & 0.4739 & 0.95995 & 0.9916 & 0.98287 & 0.98754 \\ \hline A.5 & 0.05218 & 0.97927 & 0.97927 & 0.97927 & 0.97927 \\ \hline A.6 & 0.0425 & 0.94958 & 0.98088 & 0.98088 & 0.98088 \\ \hline A.7 & 0.00037 & 0.96472 & 0.96472 & 0.96472 & 0.96472 \\ \hline A.8 & 0.00714 & 0.93729 & 0.93729 & 0.93729 & 0.93729 \\ \hline A.9 & 0.04889 & 0.99073 & 0.99073 & 0.99073 & 0.99073 \\ \hline A.10 & 0.0751 & 0.99137 & 0.99137 & 0.99137 & 0.99137 \\ \hline Average & 0.091967 & 0.92544 & 0.9118 & 0.92766 & 0.92545 \\ \hline Sd & 0.14025 & 0.10266 & 0.18927 & 0.15079 & 0.15035 \\ \hline SEM & 0.044352 & 0.032465 & 0.059853 & 0.047683 & 0.047546 \\ \hline p-value & 0.067972 & 3.9164e-10 & 9.861e-08 & 1.1585e-08 & 1.1534e-08 \\ \hline
95\% CI & (-0.00836,0.1923) & (0.852,0.99888) & (0.7764,1.0472) & (0.81979,1.0355) & (0.8179,1.033) \\ \hline \end{tabular}
\end{table}
Table 7: Paired t-test results based on the output values, with 7 degrees of freedom.
Figure 2: The results (averaged over 30 runs) of the MPSO, OPSO, CACO, DE and HS algorithms on test problem A.1 and their differences from the best super-optimum found by the linearization approach.
Figure 4: The results (averaged over 30 runs) of the MPSO, OPSO, CACO, DE and HS algorithms on test problem A.3 and their differences from the best super-optimum found by the linearization approach.
Figure 3: The results (averaged over 30 runs) of the MPSO, OPSO, CACO, DE and HS algorithms on test problem A.2 and their differences from the best super-optimum found by the linearization approach.
Figure 5: The results (averaged over 30 runs) of the MPSO, OPSO, CACO, DE and HS algorithms on test problem A.4 and their differences from the best super-optimum found by the linearization approach.
Figure 6: The results (averaged over 30 runs) of the MPSO, OPSO, CACO, DE and HS algorithms on test problem A.6 and their differences from the best super-optimum found by the linearization approach.
Figure 8: The results (averaged over 30 runs) of the MPSO, OPSO, CACO, DE and HS algorithms on test problem A.8 and their differences from the best super-optimum found by the linearization approach.
Figure 7: The results (averaged over 30 runs) of the MPSO, OPSO, CACO, DE and HS algorithms on test problem A.7 and their differences from the best super-optimum found by the linearization approach.
## Conclusion
In this paper, a new algorithm was presented for solving the fuzzy relational inequalities with fuzzy constraints (FRI-FC) defined by the product t-norm. The linear optimization problem with respect to the regions formed as FRI-FC was studied, and a set of equivalent optimization problems was introduced using the theoretical properties of the fuzzy relational inequalities. It was shown that the main problem can be converted into an equivalent linear model that can be solved by many efficient methods such as the simplex algorithm. Moreover, three simplification operations were presented to convert the problem into a more simplified one. A comparison was made between the proposed method and the modified PSO, which solves linear optimization problems subjected to FRI-FC regions. Furthermore, since the main problem can be transformed into an unconstrained optimization problem, the solutions of the linearization method and the modified PSO were compared to the results obtained by some well-known heuristic algorithms such as original PSO (OPSO), continuous Ant Colony Optimization (CACO), Differential Evolution (DE) and Harmony Search (HS). In contrast to the OPSO, CACO, DE and HS algorithms, the solutions found by the modified PSO were mostly very close to the best super-optima generated by the linearization method. From the errors and execution times of the proposed method, we can observe that our algorithm finds the best super-optima with the least constraint violation and finishes an experiment in less than 0.996 seconds. As future work, we aim to test our algorithm on other types of FRI-FC problems.
## Appendix A
### Test Problem A.1
\(c^{T}=[\begin{array}{ccccc}-1.7005&-4.3370&-3.5848&7.7951&0.4787&9.3360 \end{array}]\)
\(A=\left[\begin{array}{ccccc}0.1359&0.8372&0.1439&0.8102&0.8317&0.0801\\ 0.0866&0.1400&0.9757&0.1262&0.7061&0.8810\\ 0.7325&0.0336&0.3292&0.6075&0.6609&0.6825\\ 0.8851&0.3458&0.6788&0.5612&0.8223&0.3284\end{array}\right]\)
\(b^{T}=[\begin{array}{ccccc}0.2014&0.0161&0.6792&0.8360\end{array}]\)
### Test Problem A.2
\(c^{T}=[\begin{array}{ccccc}6.1322&-1.5004&4.2041&0.1303&-7.0502&-9.3533 \end{array}]\)
\(A=\left[\begin{array}{ccccc}0.5847&0.1338&0.4806&0.2675&0.3970&0.3215\\ 0.1819&0.1038&0.0074&0.0476&0.1761&0.0927\\ 0.2044&0.2334&0.1171&0.3456&0.2832&0.3481\\ 0.0659&0.4362&0.5850&0.3090&0.7723&0.4813\\ 0.8992&0.0619&0.7021&0.0030&0.8549&0.8145\end{array}\right]\)
\(b^{T}=[\ 0.9366\ 0.1139\ 0.2031\ 0.8282\ 0.8752\ ]\)
**Test Problem A.3**
\(c^{T}=[\ 6.8552\ -1.8291\ 8.6740\ -6.4465\ 5.9379\ -1.7184\ ]\)
\(A=\left[\begin{array}{cccccc}0.3248&0.1426&0.1232&0.8051&0.1083&0.2499\\ 0.7076&0.2252&0.7964&0.0571&0.6141&0.0879\\ 0.1289&0.7975&0.0480&0.0046&0.0144&0.1980\\ 0.5316&0.8095&0.8496&0.1967&0.5240&0.0389\\ 0.6753&0.6560&0.3397&0.4811&0.3146&0.4192\\ 0.0151&0.7319&0.4089&0.6728&0.6220&0.7515\end{array}\right]\)
\(b^{T}=[\ 0.6034\ 0.4401\ 0.5971\ 0.1162\ 0.0642\ 0.8811\ ]\)
### Test Problem A.4
\(c^{T}=[\ -5.9968\ 1.6453\ 7.2174\ -6.3971\ 2.9091\ 1.7317\ -9.5134\ ]\)
\(A=\left[\begin{array}{ccccccc}0.1556&0.7437&0.3357&0.9491&0.8195&0.7906&0.0144\\ 0.8460&0.1222&0.4980&0.7749&0.7375&0.4606&0.3457\\ 0.6406&0.5979&0.9258&0.2752&0.2663&0.4620&0.6144\\ 0.5399&0.3409&0.4983&0.9553&0.7804&0.4636&0.7174\\ 0.1448&0.1067&0.8405&0.8741&0.6756&0.8831&0.6736\end{array}\right]\)
\(b^{T}=[\ 0.0322\ 0.4687\ 0.8801\ 0.6245\ 0.8080\ ]\)
### Test Problem A.5
\(c^{T}=[\ -2.2349\ 9.8684\ 6.0145\ -0.1409\ 4.4755\ 5.1849\ -5.2513\ 7.4044\ ]\)
\(A=\left[\begin{array}{cccccccc}0.6685&0.8277&0.7281&0.1500&0.0963&0.9766&0.0665&0.0789\\ 0.1002&0.7566&0.3847&0.2817&0.6514&0.7538&0.2835&0.4277\\ 0.5461&0.8736&0.7027&0.3276&0.9032&0.3744&0.5540&0.5274\\ 0.5924&0.2766&0.5094&0.3399&0.3381&0.9006&0.5930&0.4494\\ 0.9426&0.0540&0.7771&0.8831&0.1892&0.4538&0.4534&0.3912\\ 0.3466&0.2082&0.2217&0.2299&0.7907&0.0611&0.3830&0.9836\end{array}\right]\)
\(b^{T}=[\ 0.9642\ 0.0346\ 0.1792\ 0.7077\ 0.9931\ 0.3542\ ]\)
### Test Problem A.6
\(c^{T}=[\ 3.9815\ 7.8181\ 9.1858\ 0.9443\ -7.2275\ -7.0141\ -4.8498\ ]\)
\[A=\left[\begin{array}{ccccccc}0.8407&0.2511&0.9172&0.0540&0.0119&0.6020&0.2290\\ 0.2543&0.6160&0.2858&0.5308&0.3371&0.2630&0.9133\\ 0.8143&0.4733&0.7572&0.7792&0.1622&0.6541&0.1524\\ 0.2435&0.3517&0.7537&0.9340&0.7943&0.6892&0.8258\\ 0.9293&0.8308&0.3804&0.1299&0.3112&0.7482&0.5383\\ 0.3500&0.5853&0.5678&0.5688&0.5285&0.4505&0.9961\\ 0.1966&0.5497&0.0759&0.4694&0.1656&0.0838&0.0782\end{array}\right]\]
\(b^{T}=[\begin{array}{ccccccc}0.4427&0.1067&0.9619&0.0046&0.7749&0.8173&0.8687\end{array}]\)
### Test Problem A.7
\(c^{T}=[\begin{array}{cccccccccc}-6.2250&7.3391&7.0215&6.5683&1.2437&5.2404&4.1114&1.4945&7.7537&-0.0706\end{array}]\)
\(A=\left[\begin{array}{cccccccccc}0.8376&0.5607&0.4054&0.9161&0.7397&0.0716&0.3919&0.2039&0.2089&0.5333\\ 0.0516&0.9841&0.0849&0.2607&0.1426&0.3386&0.1199&0.4257&0.5544&0.3795\\ 0.4259&0.7705&0.8464&0.3001&0.4591&0.6629&0.3465&0.5807&0.0754&0.7783\\ 0.2721&0.0729&0.5261&0.4080&0.6138&0.5058&0.5298&0.0238&0.5426&0.8281\\ 0.7275&0.3988&0.5304&0.8283&0.3916&0.9934&0.1053&0.7491&0.2099&0.8877\\ 0.8102&0.0718&0.8724&0.4534&0.4194&0.4759&0.3741&0.0360&0.7298&0.2837\\ 0.4767&0.1241&0.7166&0.9912&0.3778&0.8198&0.6201&0.2147&0.1651&0.6314\end{array}\right]\)
\(b^{T}=[\begin{array}{ccccccc}0.1699&0.3880&0.1850&0.2983&0.0652&0.7516&0.0027\end{array}]\)
### Test Problem A.8
\(c^{T}=[\begin{array}{cccccccccc}-0.6133&9.5958&8.9277&2.5328&4.5288&-3.2146&8.4352&9.0807&9.8011&2.4952\end{array}]\)
\(A=\left[\begin{array}{cccccccccc}0.9576&0.1524&0.4591&0.1542&0.6114&0.3348&0.5124&0.5708&0.5980&0.4008\\ 0.0317&0.1542&0.0775&0.4483&0.4208&0.2367&0.2025&0.2892&0.7608&0.7968\\ 0.4176&0.9340&0.5365&0.5206&0.3109&0.6503&0.9856&0.7474&0.0019&0.3848\\ 0.5935&0.2486&0.9793&0.3124&0.2278&0.9957&0.1159&0.8994&0.6739&0.7614\\ 0.2106&0.3407&0.2622&0.4472&0.3391&0.7915&0.3352&0.6066&0.1036&0.2750\\ 0.2554&0.2953&0.2070&0.3073&0.8167&0.2889&0.1124&0.7823&0.5541&0.2769\\ 0.6388&0.2688&0.6292&0.7254&0.9561&0.5072&0.0170&0.4677&0.3709&0.7724\\ 0.5010&0.2386&0.5260&0.3181&0.2825&0.2465&0.6896&0.1970&0.4726&0.8429\end{array}\right]\)
\(b^{T}=[\begin{array}{cccccccc}0.9750&0.0237&0.5693&0.5947&0.0129&0.7476&0.8917&0.9195\end{array}]\)
### Test Problem A.9
\(c^{T}=[\begin{array}{ccccccccc}2.7276&-9.1572&-7.1452&0.9314&-1.9806&-0.2452&8.1128&2.4546&2.85\end{array}]\)
\[A=\left[\begin{array}{ccccccccc}0.6637&0.7432&0.8188&0.5660&0.6163&0.6756&0.8136&0.9008&0.1373\\ 0.4567&0.8137&0.1834&0.5090&0.8100&0.8336&0.0360&0.6520&0.7940\\ 0.9142&0.6858&0.3362&0.4732&0.9350&0.3233&0.1749&0.3490&0.6930\\ 0.3936&0.8700&0.9014&0.4516&0.8772&0.2706&0.2962&0.7862&0.8683\\ 0.5089&0.8362&0.9459&0.6415&0.7242&0.3912&0.0342&0.8280&0.1411\\ 0.1390&0.9381&0.4557&0.4086&0.4804&0.7904&0.2785&0.5207&0.4494\\ 0.1239&0.4953&0.3745&0.3644&0.8924&0.5274&0.5470&0.6684&0.5228\\ 0.6215&0.6577&0.2243&0.2977&0.8589&0.8018&0.0108&0.4651&0.9415\\ 0.6028&0.2612&0.0686&0.7647&0.0017&0.2003&0.2626&0.1225&0.5671\end{array}\right]\]
\(b^{T}=[\begin{array}{ccccccccc}0.6412&0.0190&0.1702&0.6046&0.2470&0.8351&0.7981&0.4645&0.6098\end{array}]\)
### Test Problem A.10
\(c^{T}=[\begin{array}{cccccccccc}0.1117&-5.4486&4.4966&2.1297&-0.4456&-6.9081&6.4627&4.4522&5.6057&-5.7421\end{array}]\)
\(A=\left[\begin{array}{cccccccccc}0.7906&0.9957&0.0597&0.5569&0.8766&0.4642&0.7950&0.4590&0.5371&0.1593\\ 0.4117&0.3598&0.0890&0.7960&0.6433&0.7929&0.9148&0.2328&0.5097&0.8664\\ 0.8080&0.8941&0.6725&0.9067&0.6509&0.5584&0.0008&0.0250&0.5247&0.0786\\ 0.0878&0.9908&0.1796&0.9319&0.1598&0.0141&0.8120&0.0548&0.7654&0.8247\\ 0.0338&0.7684&0.8985&0.1536&0.8718&0.2336&0.8460&0.5478&0.6549&0.7402\\ 0.8519&0.1464&0.0081&0.5379&0.8072&0.3497&0.9215&0.9925&0.1810&0.1014\\ 0.2547&0.9694&0.0436&0.7910&0.0878&0.7878&0.4435&0.6274&0.8116&0.3670\\ 0.1954&0.8315&0.7230&0.8740&0.4403&0.2030&0.0607&0.2924&0.2905&0.8977\\ 0.7841&0.2572&0.5130&0.9844&0.5482&0.5547&0.2540&0.7913&0.8254&0.0586\\ 0.1628&0.6455&0.8100&0.2105&0.3994&0.9538&0.9343&0.3079&0.1052&0.5025\end{array}\right]\)
\(b^{T}=[\begin{array}{cccccccccc}0.0294&0.9041&0.9020&0.9824&0.8485&0.1962&0.4210&0.2162&0.8787&0.3932\end{array}]\)
### Acknowledgment
We are very grateful to the anonymous referees for their comments and suggestions, which were very helpful in improving the paper.
|
2309.11651 | Drift Control of High-Dimensional RBM: A Computational Method Based on
Neural Networks | Motivated by applications in queueing theory, we consider a stochastic
control problem whose state space is the $d$-dimensional positive orthant. The
controlled process $Z$ evolves as a reflected Brownian motion whose covariance
matrix is exogenously specified, as are its directions of reflection from the
orthant's boundary surfaces. A system manager chooses a drift vector
$\theta(t)$ at each time $t$ based on the history of $Z$, and the cost rate at
time $t$ depends on both $Z(t)$ and $\theta(t)$. In our initial problem
formulation, the objective is to minimize expected discounted cost over an
infinite planning horizon, after which we treat the corresponding ergodic
control problem. Extending earlier work by Han et al. (Proceedings of the
National Academy of Sciences, 2018, 8505-8510), we develop and illustrate a
simulation-based computational method that relies heavily on deep neural
network technology. For test problems studied thus far, our method is accurate
to within a fraction of one percent, and is computationally feasible in
dimensions up to at least $d=30$. | Baris Ata, J. Michael Harrison, Nian Si | 2023-09-20T21:32:58Z | http://arxiv.org/abs/2309.11651v4 | # Drift Control of High-Dimensional RBM: A Computational Method Based on Neural Networks
###### Abstract
Motivated by applications in queueing theory, we consider a stochastic control problem whose state space is the \(d\)-dimensional positive orthant. The controlled process \(Z\) evolves as a reflected Brownian motion whose covariance matrix is exogenously specified, as are its directions of reflection from the orthant's boundary surfaces. A system manager chooses a drift vector \(\theta(t)\) at each time \(t\) based on the history of \(Z\), and the cost rate at time \(t\) depends on both \(Z(t)\) and \(\theta(t)\). In our initial problem formulation, the objective is to minimize expected discounted cost over an infinite planning horizon, after which we treat the corresponding ergodic control problem. Extending earlier work by Han et al. (Proceedings of the National Academy of Sciences, 2018, 8505-8510), we develop and illustrate a simulation-based computational method that relies heavily on deep neural network technology. For test problems studied thus far, our method is accurate to within a fraction of one percent, and is computationally feasible in dimensions up to at least \(d=30\).
## 1 Introduction
Beginning with the seminal work of Iglehart and Whitt [26, 27], there has developed over the last 50+ years a large literature that justifies the use of reflected Brownian motions as approximate models of queueing systems under "heavy traffic" conditions. In particular, a limit theorem proved by Reiman [39] justifies the use of \(d\)-dimensional reflected Brownian motion (RBM) as an approximate model of a \(d\)-station queueing network. Reiman's theory is restricted to networks of the generalized Jackson type, also called single-class networks, or networks with homogeneous customer populations, but it has been extended to more complex multi-class networks under certain restrictions, most notably by Peterson [37] and Williams [45]. The survey papers by Williams [44] and by Harrison and Nguyen [20] provide an overview of heavy traffic limit theory through its first 25 years.
Many authors have commented on the compactness and simplicity of RBM as a mathematical model, at least in comparison with the conventional discrete-flow models that it replaces. For example, in the preface to Kushner [32]'s book on heavy traffic analysis one finds the following passage:
"These approximating [Brownian] models have the basic structure of the original problem, but are significantly simpler. Much inessential detail is eliminated... They greatly simplify analysis, design, and optimization, [yielding] good approximations to problems that would otherwise be intractable..."
Of course, having adopted RBM as a system model, one still confronts the question of how to do performance analysis, and in that regard there has been an important recent advance: Blanchet et al. [10] have developed a simulation-based method to estimate steady-state performance measures for RBM in dimensions up to 200, and those estimates come with performance guarantees.
**Descriptive performance analysis versus optimal control.** Early work on heavy traffic approximations, including the papers cited above, focused on descriptive performance analysis under fixed operating policies. Harrison [18, 19] expanded the framework to include consideration of dynamic control, using informal arguments to justify Brownian approximations for queueing network models where a system manager can make sequencing, routing and/or input control decisions. Early papers in that vein by Harrison and Wein [22, 23] and by Wein [43] dealt with Brownian models simple enough that their associated control problems could be solved analytically. But for larger systems and/or more complex decisions, the Brownian control problem that approximates an original queueing control problem may only be solvable numerically. Such stochastic control problems may be of several different types, depending on context.
At one end of the spectrum are drift control problems, in which the controlling agent can effect changes in system state only at bounded finite rates. At the other end of the spectrum are impulse control problems, in which the controlling agent can effect instantaneous jumps in system state, usually with an associated fixed cost. In between are singular control problems, in which the agent can effect instantaneous state changes of any desired size, usually at a cost proportional to the size of the displacement; see for example, Karatzas [28]. In this paper we develop a computational method for the first of those three problem classes, and then illustrate its use on selected test problems. Our method is a variant of the one developed by Han et al. [17] for solution of semi-linear partial differential equations, and in its implementation we have re-used substantial amounts of the code provided by Han et al. [17] and Zhou et al. [49].
**Literature Review.** Two of the most relevant streams of literature are _i_) drift rate control problems, and _ii_) solving PDEs using deep learning. Ata et al. [5] considers a one-dimensional drift rate control problem on a bounded interval under a general cost of control but no state costs. The authors characterize the optimal policy in closed form; and they discuss the application of their model to a power control problem in wireless communication. Ormeci Matoglu and Vande Vate [36] consider a drift rate control problem where a system controller incurs a fixed cost to change the drift rate. The authors prove that a deterministic, non-overlapping control band policy is optimal; also
see Vande Vate [42]. Ghosh and Weerasinghe [15, 16] extend Ata et al. [5] by incorporating state costs, abandonments and optimally choosing the interval where the process lives.
Drift control problems arise in a broad range of applications in practice. Rubino and Ata [40] studies a dynamic scheduling problem for a make-to-order manufacturing system. The authors model order cancellations as abandonments from their queueing system. This model feature gives rise to a drift rate control problem in the heavy traffic limit. Ata et al. [6] uses a drift control model to study a dynamic staffing problem in order to determine the number of volunteer gleaners, who sign up to help but may not show up, for harvesting leftover crops donated by farmers for the purpose of feeding food-insecure individuals. Bar-Ilan et al. [7] use a drift control model to study international reserves.
All of the papers mentioned above study one-dimensional drift-rate control problems. To the best of our knowledge, there have not been any papers studying such problems in high dimensions. One exception to this is the recent working paper Ata and Kasikaralar [4] that studies dynamic scheduling of a multiclass queue motivated by call center industry. Focusing on the Halfin-Whitt asymptotic regime, the authors derive a (limiting) drift rate control problem whose state space is \(\mathbb{R}^{d}\), where \(d\) is the number of buffers in their queueing model. Similar to us, the authors build on Han et al. [17] to solve their (high-dimensional) drift rate control problem. However, our work differs from theirs significantly, because their control problem has no state space constraints.
As mentioned earlier, our work builds on the seminal paper Han et al. [17]. In the last five years, there have been many papers written on solving PDEs using deep neural networks. We refer the reader to the recent survey Beck et al. [8]; also see E et al. [14].
**The remainder of this paper.** Section 2 recapitulates essential background knowledge from RBM theory, after which Section 3 states in precise mathematical terms the discounted control and ergodic control problems that are the object of our study. In each case, the problem statement is expressed in probabilistic terms initially, and then re-expressed analytically in the form of an equivalent Hamilton-Jacobi-Bellman equation (hereafter abbreviated to HJB equation). Section 4 derives key identities, that significantly contribute to the subsequent development of our computational method. Section 5 describes our computational method in detail.
Section 6 specifies three families of drift control test problems, each of which has members of dimensions \(d=1,2,\ldots\). The first two families arise as heavy traffic limits of certain queueing network control problems, and we explain that motivation in some detail. Drift control problems in the third family have a separable structure that allows them to be solved exactly by analytical means, which is of obvious value for assessing the accuracy of our computational method. Section 7 presents numerical results obtained with our method for all three families of test problems. In that admittedly limited context, our computed solutions are accurate to within a fraction of one percent, and our method remains computationally feasible up to at least dimension \(d=30\), and in some cases up to dimension \(100\) or more. In Section 8 we describe variations and generalizations of the problems formulated in Section 3 that are of interest for various purposes, and which we expect to be addressed in future work. Finally, there are a number of appendices that contain proofs or other
technical elaboration for arguments or procedures that have only been sketched in the body of the paper.
## 2 RBM preliminaries
We consider here a reflected Brownian motion \(Z=\left\{Z(t),t\geq 0\right\}\) with state space \(\mathbb{R}_{+}^{d},\) where \(d\geq 1.\) The data of \(Z\) are a (negative) drift vector \(\mu\in\mathbb{R}^{d},\) a \(d\times d\) positive-definite covariance matrix \(A=(a_{ij}),\) and a \(d\times d\) reflection matrix \(R\) of the form
\[R=I-Q,\text{ where }Q\text{ has non-negative entries and spectral radius }\rho(Q)<1. \tag{1}\]
The restriction to reflection matrices of the form (1) is not essential for our purposes, but it simplifies the technical development and is consistent with usage in the related earlier paper by Blanchet et al. [10]. Denoting by \(W=\left\{W(t),t\geq 0\right\}\) a \(d\)-dimensional Brownian motion with zero drift, covariance matrix \(A,\) and \(W(0)=0,\) we then have the representation
\[Z(t)=Z(0)+W(t)-\mu t+RY(t),\text{ }t\geq 0,\text{ where} \tag{2}\] \[Y_{i}(\cdot)\text{ is continuous and non-decreasing with }Y_{i}(0)=0\text{ }(i=1,2,\ldots,d),\text{ and}\] (3) \[Y_{i}(\cdot)\text{ only increases at those times }t\text{ when }Z_{i}(t)=0\text{ }(i=1,2,\ldots,d). \tag{4}\]
Harrison and Reiman [21] showed that the relationships (1) to (4) determine \(Y\) and \(Z\) as pathwise functionals of \(W,\) and that the mapping \(W\to(Y,Z)\) is continuous in the topology of uniform convergence. We interpret the \(i^{\text{th}}\) column of \(R\) as the direction of reflection on the boundary surface \(\left\{z\in\mathbb{R}_{+}^{d}:z_{i}=0\right\}\), and call \(Y_{i}=\left\{Y_{i}(t),t\geq 0\right\}\) the "pushing process" on that boundary surface.
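To make the pathwise construction (2)-(4) concrete, the following NumPy snippet (an illustration of ours, not code from the paper) simulates a one-dimensional reflected Brownian motion, where the reflection map has a simple closed form; here `mu` plays the role of the negative drift in (2), and the pushing process \(Y\) increases only at times when \(Z\) sits at zero.

```
import numpy as np

def rbm_1d(z0, mu, sigma2, T, h, rng):
    """Illustrative 1-D analog of (2)-(4): reflect a drifted Brownian path at zero.
    In one dimension the pushing process has the closed form
    Y(t) = max(0, sup_{s<=t} -(z0 + W(s) - mu*s))."""
    N = int(round(T / h))
    t = h * np.arange(N + 1)
    W = np.concatenate([[0.0], np.cumsum(np.sqrt(sigma2 * h) * rng.standard_normal(N))])
    X = z0 + W - mu * t                                # free (unreflected) process
    Y = np.maximum.accumulate(np.maximum(-X, 0.0))     # pushing process, nondecreasing
    Z = X + Y                                          # reflected process, Z >= 0
    return t, Z, Y

# Example: t, Z, Y = rbm_1d(1.0, 0.5, 1.0, T=10.0, h=1e-3, rng=np.random.default_rng(0))
```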
In preparation for future developments, let \(f\) be an arbitrary \(C^{2}\) (that is, twice continuously differentiable) function \(\mathbb{R}^{d}\rightarrow\mathbb{R},\) and let \(\nabla f\) denote its gradient vector as usual. Also, we define a second-order differential operator \(\mathcal{L}\) via
\[\mathcal{L}f=\frac{1}{2}\sum_{i=1}^{d}\sum_{j=1}^{d}a_{ij}\frac{\partial^{2}} {\partial z_{i}\partial z_{j}}f, \tag{5}\]
and a first-order differential operator \(\mathcal{D}=(\mathcal{D}_{1},\ldots,\mathcal{D}_{d})^{\top}\) via
\[\mathcal{D}f=R^{\top}\nabla f, \tag{6}\]
where \(\top\) in (6) denotes transpose. Thus \(\mathcal{D}_{i}f(\cdot)\) is the directional derivative of \(f\) in the direction of reflection on the boundary surface \(\left\{z_{i}=0\right\}.\) With these definitions, an application of Ito's formula now gives the following identity, cf. Harrison and Reiman [21], Section 3:
\[\mathrm{d}f(Z(t))=\nabla f(Z(t))\cdot\mathrm{d}W(t)+(\mathcal{L}f-\mu\cdot\nabla f)(Z(t))\,\mathrm{d}t+\mathcal{D}f(Z(t))\cdot\mathrm{d}Y(t),\text{ }t\geq 0. \tag{7}\]
In the obvious way, the first inner product on the right side of (7) is shorthand for a sum of \(d\) Ito differentials, while the last one is shorthand for a sum of \(d\) Riemann-Stieltjes differentials.
## 3 Problem statements and HJB equations
Let us now consider a stochastic control problem whose state space is \(\mathbb{R}^{d}_{+}\) (\(d\geq 1\)). The controlled process \(Z\) has the form
\[Z(t)=Z(0)+W(t)-\int_{0}^{t}\theta(s)\mathrm{d}s+RY(t),\ t\geq 0, \tag{8}\]
where (i) \(W=\{W(t),t\geq 0\}\) is a \(d\)-dimensional Brownian motion with zero drift, covariance matrix \(A\), and \(W(0)=0\) as in Section 2, (ii) \(\theta=\{\theta(t),t\geq 0\}\) is a non-anticipating control, or non-anticipating drift process, chosen by a system manager and taking values in a bounded set \(\Theta\subset\mathbb{R}^{d}\), and (iii) \(Y=\{Y(t),t\geq 0\}\) is a \(d\)-dimensional pushing process with components \(Y_{i}\) that satisfy (3) and (4). Note that our sign convention on the drift in the basic system equation (8) is _not_ standard. That is, we denote by \(\theta(t)\) the _negative_ drift vector at time \(t\).
The control \(\theta\) is chosen to optimize an economic objective (see below), and attention will be restricted to _stationary_ Markov controls, or stationary control policies, by which we mean that
\[\theta(t)=u(Z(t)),\ t\geq 0\ \text{for some measurable policy function}\ u:\mathbb{R}^{d}_{+}\to\Theta. \tag{9}\]
Hereafter the set \(\Theta\) of drift vectors available to the system manager will be referred to as the _action space_ for our control problem, a function \(u:\mathbb{R}^{d}_{+}\to\Theta\) will simply be called a _policy_, and we denote by \(Z^{u}\) the controlled RBM defined via (8) and (9). With regard to the system manager's objective, we take as given a continuous cost function \(c:\mathbb{R}^{d}_{+}\times\Theta\to\mathbb{R}\) with polynomial growth (see below for the precise meaning of that phrase), and assume that the cumulative cost incurred over the time interval \([0,t]\) under policy \(u\) is
\[C^{u}(t)\equiv\int_{0}^{t}c(Z^{u}(s),u(Z^{u}(s)))\,\mathrm{d}s,\ t\geq 0. \tag{10}\]
To be more specific, for \(m,n\geq 1\), a function \(g:D\subset\mathbb{R}^{m}\to\mathbb{R}^{n}\) is said to have polynomial growth if there exist constants \(\alpha_{1}\), \(\beta_{1}>0\) such that
\[|g(z)|\leq\alpha_{1}\left(1+|z|^{\beta_{1}}\right),\ z\in D.\]
Because the action space \(\Theta\) is bounded, the polynomial growth assumption on \(c\) reduces to the following:
\[|c(z,\theta)|\leq\alpha_{2}\left(1+|z|^{\beta_{2}}\right)\ \text{for all}\ z\in\mathbb{R}^{d}_{+}\ \text{and}\ \theta\in\Theta, \tag{11}\]
where \(\alpha_{2}\), \(\beta_{2}\) are positive constants.
Because our action space \(\Theta\) is bounded by assumption, the controlled RBM \(Z^{u}\) has bounded drift under any policy \(u\), from which one can prove the following mild but useful property; see Appendix A for its proof.
**Proposition 1**.: _Under any policy \(u\) and for any integer \(n=1,2,\ldots\) the function_
\[g_{n}(z,t)=\mathbb{E}_{z}\left\{|Z^{u}(t)|^{n}\right\},\ t\geq 0,\]
_has polynomial growth in \(t\) for each fixed \(z\in\mathbb{R}_{+}^{d}\)._
### Discounted control
In our first problem formulation, an interest rate \(r>0\) is taken as given, and we adopt the following discounted cost objective: choose a policy \(u\) to minimize
\[V^{u}(z)\equiv\mathbb{E}_{z}\left[\int_{0}^{\infty}e^{-rt}\mathrm{d}C^{u}(t) \right]=\mathbb{E}_{z}\left[\int_{0}^{\infty}e^{-rt}c(Z^{u}(t),u(Z^{u}(t))) \,\mathrm{d}t\right], \tag{12}\]
where \(\mathbb{E}_{z}\left(\cdot\right)\) denotes a conditional expectation given that \(Z(0)=z.\) Given the polynomial growth condition (11), it follows from Proposition 1 that the moments of \(Z(t)\) are polynomially bounded as functions of \(t\) for each fixed initial state \(z\). Given the assumed positivity of the interest rate \(r\), the expectation in (12) is therefore well defined and finite for each \(z\in\mathbb{R}_{+}^{d}\).
Hereafter we refer to \(V^{u}(\cdot)\) as the _value function_ under policy \(u\), and define the _optimal value function_
\[V(z)=\min_{u\in\mathcal{U}}V^{u}(z)\ \text{for each}\ z\in\mathbb{R}_{+}^{d}, \tag{13}\]
where \(\mathcal{U}\) is the set of stationary Markov control policies. To solve for the value function \(V^{u}(\cdot)\) under an arbitrary policy \(u\), a standard argument gives the following PDE with boundary conditions, where \(\mathcal{L}\) and \(\mathcal{D}_{i}\) are the differential operators defined via (5) and (6), respectively:
\[\mathcal{L}V^{u}(z)-u(z)\cdot\nabla V^{u}(z)+c(z,u(z))=rV^{u}(z),\ \ z\in \mathbb{R}_{+}^{d}, \tag{14}\]
with boundary conditions
\[\mathcal{D}_{i}V^{u}(z)=0\ \text{if}\ z_{i}=0\ (i=1,2,\ldots,d). \tag{15}\]
The corresponding HJB equation, to be solved for the _optimal_ value function \(V(\cdot)\), is
\[\mathcal{L}V(z)-\max_{\theta\in\Theta}\left\{\theta\cdot\nabla V(z)-c(z, \theta)\right\}=rV(z),z\in\mathbb{R}_{+}^{d}, \tag{16}\]
with boundary conditions
\[\mathcal{D}_{i}V(z)=0\ \text{if}\ z_{i}=0\ (i=1,2,\ldots,d). \tag{17}\]
Moreover, the policy
\[u^{*}(z)=\arg\max_{\theta\in\Theta}\{\theta\cdot\nabla V(z)-c(z,\theta)\} \tag{18}\]
is optimal, meaning that \(V^{u^{*}}(z)=V(z)\) for \(z\in\mathbb{R}^{d}_{+}\).
There will be no attempt here to prove existence of \(C^{2}\) solutions, but our computational method proceeds as if that were the case, striving to compute a \(C^{2}\) function \(V\) that satisfies (16)-(17) as closely as possible in a certain sense. In Appendix B.1 we use (7) to verify that a sufficiently regular solution of the PDE (14)-(15) does in fact satisfy (12) as intended, and similarly, that a sufficiently regular solution of (16)-(17) does in fact satisfy (13).
### Ergodic control
For our second problem formulation, it is assumed that
\[c(z,\theta)\geq 0\text{ for all }(z,\theta)\in\mathbb{R}^{d}_{+}\times\Theta. \tag{19}\]
Readers will see that our analysis can be extended to cost functions that take on negative values in at least some states, but to do so one must deal with certain irritating technicalities. To be specific, the issue is whether the expected values involved in our formulation are well defined.
In preparation for future developments, let us recall that a square matrix \(R\) of the form (1), called a _Minkowski matrix_ in linear algebra (or just _M-matrix_ for brevity), is non-singular, and its inverse is given by the Neumann expansion
\[R^{-1}=I+Q+Q^{2}+\ldots\geq 0.\]
Hereafter, we assume that
\[\text{there exists at least one }\theta\in\Theta\text{ such that }R^{-1}\theta>0. \tag{20}\]
It is known that an RBM with a non-singular covariance matrix, reflection matrix \(R\), and negative drift vector \(\theta\) has a stationary distribution if and only if the inequality in (20) holds, cf. Section 6 of Harrison and Williams [24]. Of course, our statement of this "stability condition" reflects the non-standard sign convention used in this paper. That is, \(\theta\) denotes the _negative_ drift vector of the RBM under discussion.
For our ergodic control problem, a policy function \(u:\mathbb{R}^{d}_{+}\rightarrow\Theta\) is said to be _admissible_ if, first, the corresponding controlled RBM \(Z^{u}\) has a unique stationary distribution \(\pi^{u}\), and if, moreover,
\[\int_{\mathbb{R}^{d}_{+}}\left|f(z)\right|\pi^{u}(dz)<\infty \tag{21}\]
for any function \(f:\mathbb{R}^{d}_{+}\rightarrow\mathbb{R}\) with polynomial growth. Our assumption (20) ensures the existence of at least one admissible policy \(u\), as follows. Let \(\theta\in\Theta\) be a negative drift vector satisfying (20),
and consider the constant policy \(u(\cdot)\equiv\theta\). The corresponding controlled process \(Z^{u}\) is then an RBM having a unique stationary distribution \(\pi^{u}\), as noted above. It has been shown in Budhiraja and Lee [11] that the moment generating function of \(\pi^{u}\) is finite in a neighborhood of the origin, from which it follows that \(\pi^{u}\) has finite moments of all orders. Thus \(\pi^{u}\) satisfies (21) for any function \(f\) with polynomial growth, so \(u\) is admissible.
Because our cost function \(c(z,\theta)\) has polynomial growth and our action space \(\Theta\) is bounded, the steady-state average cost
\[\xi^{u}\equiv\int_{\mathbb{R}^{d}_{+}}c(z,u(z))\,\pi^{u}(dz) \tag{22}\]
is well defined and finite under any admissible policy \(u\). The objective in our ergodic control problem is to find an admissible policy \(u\) for which \(\xi^{u}\) is minimal.
To solve for the steady-state average cost \(\xi^{u}\) and the corresponding _relative value function_, denoted by \(v^{u}(\cdot)\), under an admissible policy \(u\), a standard argument gives the following PDE:
\[\mathcal{L}v^{u}(z)-u(z)\cdot\nabla v^{u}(z)+c(z,u(z))=\xi^{u}\text{ for each }z\in\mathbb{R}^{d}_{+}, \tag{23}\]
with boundary conditions
\[\mathcal{D}_{i}v^{u}(z)=0\text{ if }z_{i}=0\text{ }(i=1,2,\ldots,d). \tag{24}\]
The HJB equation for ergodic control is again of a standard form, involving a constant \(\xi\) (interpreted as the minimum achievable steady-state average cost) and a relative value function \(v:\mathbb{R}^{d}_{+}\to\mathbb{R}\). To be specific, the HJB equation is
\[\mathcal{L}v(z)-\max_{\theta\in\Theta}\left\{\theta\cdot\nabla v(z)-c(z, \theta)\right\}=\xi\text{ for each }z\in\mathbb{R}^{d}_{+}, \tag{25}\]
with boundary conditions
\[\mathcal{D}_{i}v(z)=0\text{ if }z_{i}=0\text{ }(i=1,2,\ldots,d). \tag{26}\]
Paralleling the previous development for discounted control, we show the following in Appendix B.2: if a \(C^{2}\) function \(v\) and a constant \(\xi\) jointly satisfy (25)-(26), then
\[\xi=\inf_{u\in\mathcal{U}}\xi^{u}, \tag{27}\]
where \(\mathcal{U}\) denotes the set of admissible controls for the ergodic cost formulation. Moreover, the policy
\[u^{*}(z)=\arg\max_{\theta\in\Theta}\left\{\theta\cdot\nabla v(z)-c(z,\theta) \right\},\ z\in\mathbb{R}^{d}_{+}, \tag{28}\]
is optimal, meaning that \(\xi^{u^{*}}=\xi.\) Again paralleling the previous development for discounted control, there is no attempt to prove that such a solution for (25)-(26) exists. In Appendix B.2 we use (7) to verify that a sufficiently regular solution of the PDE (23)-(24) does in fact satisfy (22) as intended,
and similarly, that a sufficiently regular solution of (25)-(26) does in fact satisfy (27).
## 4 Equivalent SDEs
In this section we prove two key identities, Equations (31) and (40) below, that are closely patterned after results used by Han et al. [17] to justify their "deep BSDE method" for solution of certain non-linear PDEs. That earlier work provided both inspiration and detailed guidance for our study, but we include these derivations to make the current account as nearly self-contained as possible. Sections 4.1 and 4.2 treat the discounted and ergodic cases, respectively.
Our method begins by specifying what we call a _reference policy_. This is a nominal or default policy, specified at the outset but possibly revised in light of computational experience, that we use to generate sample paths of the controlled RBM \(Z\). Roughly speaking, one wants to choose the reference policy so that its paths tend to occupy parts of the state space thought to be most frequently visited by an optimal policy.
### Discounted control
Our reference policy for the discounted case chooses a constant action \(u(z)=\tilde{\theta}>0\) in every state \(z\in\mathbb{R}_{+}^{d}\). (Again we stress that, given the non-standard sign convention embodied in (8) and (19), this means that \(\tilde{Z}\) has a constant drift vector \(-\tilde{\theta}\), with all components negative.) Thus the corresponding _reference process_\(\tilde{Z}\) is a \(d\)-dimensional RBM which, in combination with its \(d\)-dimensional pushing process \(\tilde{Y}\) and the \(d\)-dimensional Brownian motion \(W\) defined in Section 2, satisfies
\[\tilde{Z}(t)=\tilde{Z}(0)+W(t)-\tilde{\theta}\,t+R\,\tilde{Y}(t),\ t\geq 0, \tag{29}\]
plus the obvious analogs of Equations (3) and (4). For the key identity (31) below, let
\[F(z,x)=\tilde{\theta}\cdot x-\max_{\theta\in\Theta}\left\{\theta\cdot x-c(z, \theta)\right\}\text{ for }z\in\mathbb{R}_{+}^{d}\text{ and }x\in\mathbb{R}^{d}. \tag{30}\]
**Proposition 2**.: _If \(V\left(\cdot\right)\) satisfies the HJB equation (16) - (17), then it also satisfies the following identity almost surely for any \(T>0\):_
\[e^{-rT}V(\tilde{Z}(T))-V(\tilde{Z}(0))=\int_{0}^{T}e^{-rt}\nabla V(\tilde{Z}( t))\cdot\mathrm{d}W(t)-\int_{0}^{T}e^{-rt}F(\tilde{Z}(t),\nabla V(\tilde{Z}(t))) \mathrm{d}t. \tag{31}\]
Proof.: Applying Ito's formula to \(e^{-rt}V(\tilde{Z}(t))\) and using Equation (7) yield
\[e^{-rT}V(\tilde{Z}(T))-V(\tilde{Z}(0)) = \int_{0}^{T}e^{-rt}\nabla V(\tilde{Z}(t))\cdot\mathrm{d}W(t)+\int_{0}^{T}e^{-rt}\mathcal{D}V(\tilde{Z}(t))\cdot\mathrm{d}\tilde{Y}(t)\] \[+\int_{0}^{T}e^{-rt}\left(\mathcal{L}V(\tilde{Z}(t))-\tilde{\theta}\cdot\nabla V(\tilde{Z}(t))-rV(\tilde{Z}(t))\right)\mathrm{d}t. \tag{32}\]
Using the HJB boundary condition (17), plus the complementarity condition (4) for \(\tilde{Y}\) and \(\tilde{Z},\) one has
\[\int_{0}^{T}e^{-rt}\mathcal{D}V(\tilde{Z}(t))\cdot\mathrm{d}\tilde{Y}(t)=0.\]
Furthermore, substituting \(z=\tilde{Z}(t)\) in the HJB equation (16), multiplying both sides by \(e^{-rt},\) rearranging the terms, and integrating over \([0,T]\) yields
\[\int_{0}^{T}e^{-rt}\left(\mathcal{L}V(\tilde{Z}(t))-rV(\tilde{Z}(t))\right) \mathrm{d}t=\int_{0}^{T}e^{-rt}\max_{\theta\in\Theta}\left(\theta\cdot\nabla V (\tilde{Z}(t))-c(\tilde{Z}(t),\theta)\right)\mathrm{d}t. \tag{33}\]
Substituting Equation (33) into Equation (32) gives Equation (31).
Proposition 2 provides the motivation for the loss function that we strive to minimize in our computational method (see Section 5). Before developing that approach, we prove the following, which can be viewed as a converse of Proposition 2.
**Proposition 3**.: _Suppose that \(V:\mathbb{R}_{+}^{d}\rightarrow\mathbb{R}\) is a \(C^{2}\) function, \(G:\mathbb{R}_{+}^{d}\rightarrow\mathbb{R}^{d}\) is continuous, and \(V,\)\(\nabla V\), and \(G\) all have polynomial growth. Also assume that the following identity holds almost surely for some fixed \(T>0\) and every \(Z(0)=z\in\mathbb{R}_{+}^{d}\):_
\[e^{-rT}V(\tilde{Z}(T))-V(\tilde{Z}(0))=\int_{0}^{T}e^{-rt}G(\tilde{Z}(t))\cdot \mathrm{d}W(t)-\int_{0}^{T}e^{-rt}F(\tilde{Z}(t),G(\tilde{Z}(t)))\,\mathrm{d}t. \tag{34}\]
_Then \(G(\cdot)=\nabla V(\cdot)\) and \(V\) satisfies the HJB equation (16) - (17)._
**Remark 1**.: The surprising conclusion that (34) implies \(\nabla V(\cdot)=G(\cdot),\) without any _a priori_ relationship between \(G\) and \(\nabla V\) being assumed, motivates the "double parametrization" method in Section 5.
Proof of Proposition 3.: Because \(\tilde{Z}\) is a time-homogeneous Markov process, we can express (34) equivalently as follows for any \(k=0,1,\ldots\) :
\[e^{-rT}V(\tilde{Z}((k+1)T)-V(\tilde{Z}(kT) =\int_{kT}^{(k+1)T}e^{-r(t-kT)}G(\tilde{Z}(t))\cdot dW(t)\] \[-\int_{kT}^{(k+1)T}e^{-r(t-kT)}F(\tilde{Z}(t),G(\tilde{Z}(t)))\,dt. \tag{35}\]
Now multiply both sides of (35) by \(e^{-rkT},\) then add the resulting relationships for \(k=0,1,\ldots,n-1\) to arrive at the following:
\[e^{-rnT}V(\tilde{Z}(nT))=V(\tilde{Z}(0))+\int_{0}^{nT}e^{-rt}G(\tilde{Z}(t)) \cdot dW(t)-\int_{0}^{nT}e^{-rt}F(\tilde{Z}(t),G(\tilde{Z}(t)))dt. \tag{36}\]
Because \(G\) has polynomial growth, one can show that
\[\mathbb{E}_{z}\left[\int_{0}^{nT}e^{-2rt}\left\|G(\tilde{Z}(t))\right\|^{2}\mathrm{d}t\right]<\infty\]
for all \(n\geq 1\). Thus, when we take \(\mathbb{E}_{z}\) of both sides of (36), the stochastic integral (that is, the second term) on the right side vanishes, and then rearranging terms gives the following:
\[V(z)=e^{-rnT}\mathbb{E}_{z}\left[V(\tilde{Z}(nT))\right]+\mathbb{E}_{z}\left[ \int_{0}^{nT}e^{-rt}F(\tilde{Z}(t),G(\tilde{Z}(t)))\mathrm{d}t\right],\]
for arbitrary positive integer \(n\).
By Proposition 1 and the polynomial growth condition of \(V\), we have \(e^{-rnT}\mathbb{E}_{z}[V(\tilde{Z}(nT))]\to 0\) as \(n\to\infty\). Therefore,
\[V(z)=\lim_{n\to\infty}\mathbb{E}_{z}\left[\int_{0}^{nT}e^{-rt}F\left(\tilde{Z} (t),G(\tilde{Z}(t))\right)\mathrm{d}t\right]\text{ for }z\in\mathbb{R}_{+}^{d}.\]
Similarly, since \(F\) and \(G\) have polynomial growth, we conclude that
\[\mathbb{E}_{z}\left[\int_{0}^{\infty}e^{-rt}\left|F\left(\tilde{Z }(t),G(\tilde{Z}(t))\right)\right|\mathrm{d}t\right]<+\infty\text{ for }z\in\mathbb{R}_{+}^{d},\text{ and}\] \[\int_{0}^{nT}e^{-rt}F\left(\tilde{Z}(t),G(\tilde{Z}(t))\right) \mathrm{d}t\leq\int_{0}^{\infty}e^{-rt}\left|F\left(\tilde{Z}(t),G(\tilde{Z}(t ))\right)\right|\mathrm{d}t<+\infty\text{ for }z\in\mathbb{R}_{+}^{d}.\]
Thus, by dominated convergence, we have
\[V(z)=\mathbb{E}_{z}\left[\int_{0}^{\infty}e^{-rt}F\left(\tilde{Z}(t),G(\tilde{ Z}(t))\right)\mathrm{d}t\right]\text{ for }z\in\mathbb{R}_{+}^{d}. \tag{37}\]
In other words, \(V(z)\) can be viewed as the expected discounted cost associated with the RBM under the reference policy starting in state \(\tilde{Z}(0)=z\), where \(F\left(\cdot,G(\cdot)\right)\) is the state-cost function. Therefore, it follows from Equations (14) - (15) with \(u(z)=\tilde{\theta}\) and \(c(z,u(z))=c(z,\tilde{\theta})=F(z,G(z))\) for \(z\in\mathbb{R}_{+}^{d}\) that \(V\) satisfies the following PDE:
\[\mathcal{L}V(z)-\tilde{\theta}\cdot\nabla V(z)+F\left(z,G(z)\right)=rV(z),\ z \in\mathbb{R}_{+}^{d}, \tag{38}\]
with boundary conditions (15), and that it has polynomial growth.
Suppose that \(G(\cdot)=\nabla V(\cdot)\) (which we will prove later). Substituting this into Equation (38) and using the definition of \(F\), it follows that
\[\mathcal{L}V(z)-\max_{\theta\in\Theta}\left\{\theta\cdot\nabla V(z)-c(z, \theta)\right\}=rV(z),\ z\in\mathbb{R}_{+}^{d},\]
which along with the boundary condition (15) gives the desired result.
To complete the proof, it remains to show that \(G(\cdot)=\nabla V(\cdot).\) By applying Ito's formula to
\(e^{-rt}V(\tilde{Z}(t))\) and using Equations (3)-(4) and (17), we conclude that
\[e^{-rT}V(\tilde{Z}(T))-V(\tilde{Z}(0))= \int_{0}^{T}e^{-rt}\left(\mathcal{L}V(\tilde{Z}(t))-\tilde{\theta} \cdot\nabla V(\tilde{Z}(t))-rV(\tilde{Z}(t))\right)\mathrm{d}t\] \[+\int_{0}^{T}e^{-rt}\nabla V(\tilde{Z}(t))\cdot\mathrm{d}W(t).\]
Then, using Equation (38), we rewrite the preceding equation as follows:
\[e^{-rT}V(\tilde{Z}(T))-V(\tilde{Z}(0))=\int_{0}^{T}e^{-rt}\nabla V(\tilde{Z}(t ))\cdot\mathrm{d}W(t)-\int_{0}^{T}e^{-rt}F\left(\tilde{Z}(t),G(\tilde{Z}(t)) \right)\mathrm{d}t.\]
Comparing this with Equation (34) yields
\[\int_{0}^{T}e^{-rt}\left(G(\tilde{Z}(t))-\nabla V(\tilde{Z}(t))\right)\cdot \mathrm{d}W(t)=0,\]
which yields the following:
\[\mathbb{E}_{z}\left[\left(\int_{0}^{T}e^{-rt}\left(G(\tilde{Z}(t))-\nabla V( \tilde{Z}(t))\right)\cdot\mathrm{d}W(t)\right)^{2}\right]=0. \tag{39}\]
Thus, provided that \(e^{-rt}\left(G(\tilde{Z}(t))-\nabla V(\tilde{Z}(t))\right)\) is square integrable, Ito's isometry [48, Lemma D.1] yields the following:
\[\mathbb{E}_{z}\left[\left(\int_{0}^{T}e^{-rt}\left(G(\tilde{Z}(t))-\nabla V( \tilde{Z}(t))\right)\cdot\mathrm{d}W(t)\right)^{2}\right]=\mathbb{E}_{z} \left[\int_{0}^{T}\left\|e^{-rt}\left(G(\tilde{Z}(t))-\nabla V(\tilde{Z}(t)) \right)\right\|_{A}^{2}\mathrm{d}t\right]=0,\]
where \(\|x\|_{A}^{2}:=x^{\top}Ax\). The square integrability of \(e^{-rt}\left(G(\tilde{Z}(t))-\nabla V(\tilde{Z}(t))\right)\) follows because \(G\) and \(\nabla V\) have polynomial growth and the action space \(\Theta\) is bounded. Because \(A\) is a positive definite matrix, we then have \(\nabla V(\tilde{Z}(t))=G(\tilde{Z}(t))\) almost surely. By the continuity of \(\nabla V\left(\cdot\right)\) and \(G(\cdot)\), we conclude that \(\nabla V(\cdot)=G(\cdot)\).
### Ergodic control
Again we use a reference policy with constant (negative) drift vector \(\tilde{\theta}\), and now we assume that \(R^{-1}\tilde{\theta}>0\), which ensures that the reference policy is admissible for our ergodic control formulation. For the following analogs of Propositions 2 and 3, let
\[f(z,x)=\tilde{\theta}\cdot x-\max_{\theta\in\Theta}\left\{\theta\cdot x-c(z, \theta)\right\}\text{ for }x\in\mathbb{R}^{d},\,z\in\mathbb{R}^{d}_{+}.\]
**Proposition 4**.: _If \(v\left(\cdot\right)\) and \(\xi\) solve the HJB equation (25) - (26), then we have_
\[v(\tilde{Z}(T))-v(\tilde{Z}(0))=\int_{0}^{T}\nabla v(\tilde{Z}(t))\cdot\mathrm{d }W(t)+T\xi-\int_{0}^{T}f(\tilde{Z}(t),\nabla v(\tilde{Z}(t)))\,\mathrm{d}t. \tag{40}\]
Proof.: Applying Ito's formula to \(v(z)\) yields
\[v(\tilde{Z}(T)) -v(\tilde{Z}(0)) \tag{41}\] \[=\int_{0}^{T}\nabla v(\tilde{Z}(t))\cdot\mathrm{d}W(t)+\int_{0}^{ T}\mathcal{D}v(\tilde{Z}(t))\cdot\mathrm{d}\tilde{Y}(t)+\int_{0}^{T}\left( \mathcal{L}v(\tilde{Z}(t))-\tilde{\theta}\cdot\nabla v(\tilde{Z}(t))\right) \mathrm{d}t.\]
Recall the boundary condition of the HJB equation is \(\mathcal{D}_{j}v(z)=0\) if \(z_{j}=0\). Thus Equations (3)-(4) jointly imply
\[\int_{0}^{T}\mathcal{D}v(\tilde{Z}(t))\cdot\mathrm{d}\tilde{Y}(t)=0.\]
Then, substituting the HJB equation (25) into Equation (41) gives (40).
**Proposition 5**.: _Suppose that \(v:\mathbb{R}^{d}_{+}\to\mathbb{R}\) is a \(C^{2}\) function, \(g:\mathbb{R}^{d}_{+}\to\mathbb{R}^{d}\) is continuous, and \(v,\,\nabla v,\,g\) all have polynomial growth. Also assume that the following identity holds almost surely for some fixed \(T>0\), a scalar \(\xi\) and every \(Z(0)=z\in\mathbb{R}^{d}_{+}\):_
\[v(\tilde{Z}(T))-v(\tilde{Z}(0))=\int_{0}^{T}g(\tilde{Z}(t))\cdot\mathrm{d}W(t )+T\xi-\int_{0}^{T}f(\tilde{Z}(t),g(\tilde{Z}(t)))\,\mathrm{d}t. \tag{42}\]
_Then, \(g(\cdot)=\nabla v(\cdot)\) and \((v,\xi)\) satisfies the HJB equation (25) - (26)._
Proof.: Let \(\tilde{\pi}\) be the stationary distribution of the RBM \(\tilde{Z}\) under the reference policy and \(\tilde{Z}(\infty)\) be a random variable with the distribution \(\tilde{\pi}\). Then, assuming the initial distribution of the RBM under the reference policy is \(\tilde{\pi}\), i.e. \(\tilde{Z}(0)\sim\tilde{\pi}\), its marginal distribution at time t is also \(\tilde{\pi}\), i.e. \(\tilde{Z}(t)\sim\tilde{\pi}\) for every \(t\geq 0\).
Because \(g\) has polynomial growth, one can show that the expectation of the stochastic integral (that is, the first term) on the right side of (42) vanishes. Then, by taking the expectation over \(\tilde{Z}(0)\sim\tilde{\pi}\), Equation (42) implies
\[\mathbb{E}_{\tilde{\pi}}\left[v(\tilde{Z}(0))\right]=\mathbb{E}_{\tilde{\pi}} \left[v(\tilde{Z}(T))\right]+\mathbb{E}_{\tilde{\pi}}\left[\int_{0}^{T}f( \tilde{Z}(t),g(\tilde{Z}(t)))\mathrm{d}t\right]-T\xi. \tag{43}\]
By observing that \(\mathbb{E}_{\tilde{\pi}}[v(\tilde{Z}(0))]=\mathbb{E}_{\tilde{\pi}}[v(\tilde{Z }(T))]\) and
\[\mathbb{E}_{\tilde{\pi}}\left[f(\tilde{Z}(t),g(\tilde{Z}(t)))\right] = \mathbb{E}\left[f(\tilde{Z}(\infty),g(\tilde{Z}(\infty)))\right] \text{ for }t\geq 0,\]
we conclude that \(\xi=\mathbb{E}[f(\tilde{Z}(\infty),g(\tilde{Z}(\infty)))]\). In other words, \(\xi\) can be viewed as the expected steady-state cost associated with the RBM under the reference policy, where \(f\left(\cdot,g(\cdot)\right)\) is the state-cost function. Therefore, it follows (by assumption) from Equations (23) - (24) with \(u(z)=\tilde{\theta}\) and
\(c(z,u(z))=c(z,\tilde{\theta})=f(z,g(z))\) for \(z\in\mathbb{R}_{+}^{d}\) that there exists a \(C^{2}\)_relative value function_ \(\tilde{v}\) with polynomial growth that satisfies the following PDE:
\[\mathcal{L}\tilde{v}(z)-\tilde{\theta}\cdot\nabla\tilde{v}(z)+f\left(z,g(z) \right)=\xi,\ \ z\in\mathbb{R}_{+}^{d}, \tag{44}\]
with boundary conditions \(\mathcal{D}_{i}\tilde{v}(z)=0\) if \(z_{i}=0\ \ (i=1,\ldots,d)\). Furthermore, applying Ito's formula to \(\tilde{v}(\tilde{Z}(t))\) yields
\[\tilde{v}(\tilde{Z}(T))- \tilde{v}(\tilde{Z}(0)) \tag{45}\] \[=\int_{0}^{T}\nabla\tilde{v}(\tilde{Z}(t))\cdot\mathrm{d}W(t)+ \int_{0}^{T}\mathcal{D}\tilde{v}(\tilde{Z}(t))\cdot\mathrm{d}\tilde{Y}(t)+ \int_{0}^{T}\left(\mathcal{L}\tilde{v}(\tilde{Z}(t))-\tilde{\theta}\cdot \nabla\tilde{v}(\tilde{Z}(t))\right)\mathrm{d}t.\]
Since \(\tilde{v}(z)\) also satisfies the boundary conditions \(\mathcal{D}_{i}\tilde{v}(z)=0\) if \(z_{i}=0\ \ (i=1,\ldots,d)\), it follows from Equations (3)-(4) that
\[\int_{0}^{T}\mathcal{D}\tilde{v}(\tilde{Z}(t))\cdot\mathrm{d}\tilde{Y}(t)=0.\]
Then, substituting Equation (44) into Equation (45) gives
\[\tilde{v}(\tilde{Z}(T))-\tilde{v}(\tilde{Z}(0))=\int_{0}^{T}\nabla\tilde{v}( \tilde{Z}(t))\cdot\mathrm{d}W(t)+T\xi-\int_{0}^{T}f(\tilde{Z}(t),g(\tilde{Z}(t )))\,\mathrm{d}t. \tag{46}\]
In the proof of Proposition 3, we first showed that, because \(\tilde{Z}\) is a time-homogeneous Markov process, the assumed stochastic relationship (34) can be extended to the more general form (36) with \(n\) an arbitrary positive integer. In the current context one can argue in exactly the same way to establish the following. First, the assumed stochastic relationship (42) actually holds in the more general form where \(T\) is replaced by \(nT\), with \(n\) an arbitrary positive integer. As a consequence, the derived stochastic relationship (46) also holds with \(nT\) in place of \(T\). And finally, after taking expectations on both sides of those generalized versions of (42) and (46) we arrive at the following:
\[v(z) = \mathbb{E}_{z}\left[v(\tilde{Z}(nT))\right]-nT\xi+\mathbb{E}_{z} \left[\int_{0}^{nT}f(\tilde{Z}(t),g(\tilde{Z}(t)))\mathrm{d}t\right],\ \text{and} \tag{47}\] \[\tilde{v}(z) = \mathbb{E}_{z}\left[\tilde{v}(\tilde{Z}(nT))\right]-nT\xi+ \mathbb{E}_{z}\left[\int_{0}^{nT}f(\tilde{Z}(t),g(\tilde{Z}(t)))\mathrm{d}t \right], \tag{48}\]
for \(z\in\mathbb{R}_{+}^{d}\) and an arbitrary positive integer \(n.\) Note that the expectation of the stochastic integral vanishes because \(\nabla\tilde{v}\) has polynomial growth. Subtracting (48) from (47) further yields
\[v(z)-\tilde{v}(z)=\mathbb{E}_{z}\left[v(Z(nT))\right]-\mathbb{E}_{z}\left[ \tilde{v}(Z(nT))\right].\]
Without loss of generality, we assume \(\mathbb{E}\left[\tilde{v}(\tilde{Z}(\infty))\right]=\mathbb{E}\left[v(\tilde{ Z}(\infty))\right]\). Since \(v(\cdot)\) and \(\tilde{v}(\cdot)\) have polynomial growth and the reference policy is admissible, we have that \(\sup_{n>0}\mathbb{E}_{z}\left[\left(v(\tilde{Z}(nT))\right)^{2}\right]<\infty\) and \(\sup_{n>0}\mathbb{E}_{z}\left[\left(\tilde{v}(\tilde{Z}(nT))\right)^{2}\right]<\infty\) by Equation (21). Then, by the Vitali convergence theorem, we
have
\[\lim_{n\rightarrow+\infty}\mathbb{E}_{z}\left[v(Z(nT))\right] = \mathbb{E}\left[v(\tilde{Z}(\infty))\right]\text{ and }\] \[\lim_{n\rightarrow+\infty}\mathbb{E}_{z}\left[\tilde{v}(Z(nT))\right] = \mathbb{E}\left[\tilde{v}(\tilde{Z}(\infty))\right].\]
Therefore, we have
\[v(z)-\tilde{v}(z)=\lim_{n\rightarrow+\infty}\left(\mathbb{E}_{z}\left[v(Z(nT) )\right]-\mathbb{E}_{z}\left[\tilde{v}(Z(nT))\right]\right)=0\text{ for }z\in\mathbb{R}_{+}^{d},\]
which means \(v(\cdot)\) also satisfies the PDE (44) and the associated boundary conditions. That is,
\[\mathcal{L}v(z)-\tilde{\theta}\cdot\nabla v(z)+f\left(z,g(z) \right)=\xi,\text{ for }z\in\mathbb{R}_{+}^{d}. \tag{49}\] \[\mathcal{D}_{i}v(z)=0\text{ if }z_{i}=0\text{ \ }(i=1,\ldots,d). \tag{50}\]
Suppose that \(g(\cdot)=\nabla v(\cdot)\) (which we will prove later). Substituting this into Equation (49) and using the definition of \(f\), it follows that
\[\mathcal{L}v(z)-\max_{\theta\in\Theta}\left\{\theta\cdot\nabla v(z)-c(z, \theta)\right\}=\xi,\text{ \ }z\in\mathbb{R}_{+}^{d},\]
which along with the boundary condition (50) gives the desired result.
To complete the proof, it remains to show that \(g(\cdot)=\nabla v(\cdot).\) By applying Ito's formula to \(v(\tilde{Z}(t))\) and using Equations (3)-(4) and (50), we conclude that
\[v(\tilde{Z}(T))-v(\tilde{Z}(0))=\int_{0}^{T}\left(\mathcal{L}v(\tilde{Z}(t))- \tilde{\theta}\cdot\nabla v(\tilde{Z}(t))\right)\mathrm{d}t+\int_{0}^{T}\nabla v (\tilde{Z}(t))\cdot\mathrm{d}W(t).\]
Then, using Equation (49), we rewrite the preceding equation as follows:
\[v(\tilde{Z}(T))-v(\tilde{Z}(0))=T\xi-\int_{0}^{T}f\left(\tilde{Z}(t),g(\tilde {Z}(t))\right)\mathrm{d}t+\int_{0}^{T}\nabla v(\tilde{Z}(t))\cdot\mathrm{d}W (t).\]
Comparing this with Equation (42) yields
\[\int_{0}^{T}\left(g(\tilde{Z}(t))-\nabla v(\tilde{Z}(t))\right)\cdot\mathrm{d }W(t)=0,\]
which yields the following:
\[\mathbb{E}_{z}\left[\left(\int_{0}^{T}\left(g(\tilde{Z}(t))-\nabla v(\tilde{Z }(t))\right)\cdot\mathrm{d}W(t)\right)^{2}\right]=0. \tag{51}\]
Thus, provided \(g(\tilde{Z}(t))-\nabla v(\tilde{Z}(t))\) is square integrable, Ito's isometry [48, Lemma D.1] gives the
following:
\[\mathbb{E}_{z}\left[\left(\int_{0}^{T}\left(g(\tilde{Z}(t))-\nabla v(\tilde{Z}(t)) \right)\cdot\mathrm{d}W(t)\right)^{2}\right]=\mathbb{E}_{z}\left[\int_{0}^{T} \left\|g(\tilde{Z}(t))-\nabla v(\tilde{Z}(t))\right\|_{A}^{2}\mathrm{d}t \right]=0.\]
The square integrability of \(g(\tilde{Z}(t))-\nabla v(\tilde{Z}(t))\) follows because \(g\) and \(\nabla v\) have polynomial growth, and \(\mathbb{E}_{z}\left(|\tilde{Z}(nT)|^{k}\right)\) is finite for all \(k\) because our action space \(\Theta\) is bounded. Then, since \(A\) is positive definite, \(\nabla v(\tilde{Z}(t))=g(\tilde{Z}(t))\) almost surely. By the continuity of \(\nabla v\left(\cdot\right)\) and \(g(\cdot)\), we conclude that \(\nabla v(\cdot)=g(\cdot)\).
## 5 Computational method
We follow in the footsteps of Han et al. [17], who developed a computational method to solve semilinear parabolic partial differential equations (PDEs). Those authors focused on a backward stochastic differential equation (BSDE) associated with their PDE, and in similar fashion, we focus on the stochastic differential equations (34) and (42) that are associated with our two stochastic control formulations (see Section 4). Our method differs from that of Han et al. [17], because they consider PDEs on a finite-time interval with an unbounded state space and a specified terminal condition, whereas our stochastic control problem has an infinite time horizon and state space constraints. As such, it leads to a PDE on a polyhedral domain with oblique derivative boundary conditions. We modify the approach of [17] to incorporate those additional features, treating the discounted and ergodic formulations in Sections 5.1 and 5.2, respectively.
### Discounted control
We approximate the value function \(V(\cdot)\) and its gradient \(\nabla V(\cdot)\) by deep neural networks \(V_{w_{1}}(\cdot)\) and \(G_{w_{2}}(\cdot)\), respectively, with associated parameter vectors \(w_{1}\) and \(w_{2}\). Seeking an approximate solution of the stochastic equation (34), we define the loss function
\[\ell(w_{1},w_{2}) = \mathbb{E}\left[\left(e^{-rT}\,V_{w_{1}}(\tilde{Z}(T))-V_{w_{1}}(\tilde{Z}(0))\right.\right.\] \[\left.\left.-\int_{0}^{T}e^{-rt}G_{w_{2}}(\tilde{Z}(t))\cdot\mathrm{d}W(t)+\int_{0}^{T}e^{-rt}F(\tilde{Z}(t),G_{w_{2}}(\tilde{Z}(t)))\,\mathrm{d}t\right)^{2}\right]. \tag{52}\]
Here the expectation is taken with respect to the sample path distribution of our reference process \(\tilde{Z}\), which will be specified in Algorithm 3 below. Our definition (52) of the loss function does not explicitly enforce the consistency requirement \(\nabla V_{w_{1}}(\cdot)=G_{w_{2}}(\cdot)\), but Proposition 3 provides the justification for this separate parametrization. This type of double parametrization has also been implemented by Zhou et al. [49].
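For concreteness, the sketch below shows one way the two networks and a discretized, empirical version of the loss (52) could be set up in PyTorch; the architecture, tensor layout, and the callable `F` implementing (30) are our assumptions, not the authors' implementation.

```
import math
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small fully connected network; used both for V_{w1} (scalar output)
    and for G_{w2} (d-dimensional output). Widths and activations are assumptions."""
    def __init__(self, d_in, d_out, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, width), nn.ELU(),
            nn.Linear(width, width), nn.ELU(),
            nn.Linear(width, d_out))
    def forward(self, z):
        return self.net(z)

def discounted_loss(V, G, Z, dW, F, r, h):
    """Empirical analog of the loss (52).
    Z : (B, N+1, d) discretized reference paths, dW : (B, N, d) Brownian increments,
    F : callable returning F(z, x) of (30) with shape (B, N), r : interest rate, h : step size."""
    B, N, d = dW.shape
    t = h * torch.arange(N, dtype=torch.float32)
    disc = torch.exp(-r * t)                                      # e^{-rt} on the time grid
    Gz = G(Z[:, :-1].reshape(-1, d)).reshape(B, N, d)
    stoch = (disc[None, :, None] * Gz * dW).sum(dim=(1, 2))      # ~ int e^{-rt} G . dW
    run = (disc[None, :] * F(Z[:, :-1], Gz) * h).sum(dim=1)      # ~ int e^{-rt} F dt
    resid = (math.exp(-r * N * h) * V(Z[:, -1]).squeeze(-1)
             - V(Z[:, 0]).squeeze(-1) - stoch + run)
    return (resid ** 2).mean()
```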
Our computational method seeks a neural network parameter combination \((w_{1},w_{2})\) that minimizes an approximation of the loss defined via (52). Specifically, we first simulate multiple discretized
paths of the reference RBM \(\tilde{Z}\), restricted to a fixed and finite time domain \([0,T]\). To do that, we sample discretized paths of the underlying Brownian motion \(W\), and then solve a discretized Skorohod problem for each path of \(W\) (this is the purpose of Subroutine 2) to obtain the corresponding path of \(\tilde{Z}\). Thereafter, our method computes a discretized version of the loss (52), summing over sampled paths to approximate the expectation and over discrete time steps to approximate the integral over \([0,T]\), and minimizes it using stochastic gradient descent; see Algorithm 3. In Subroutine 2, given the index set \(B\), \(R_{B,B}\) is the submatrix derived by deleting the rows and columns of \(R\) with indices in \(\{1,\ldots,d\}\backslash B\). Similarly, \(R_{:,B}\) is the matrix that one arrives at by deleting the columns of \(R\) whose indices are in the set \(\{1,\ldots,d\}\backslash B\).
```
0: A vector \(x\in\mathbb{R}^{d}\) and the reflection matrix \(R\).
0: A solution to the Skorokhod problem \(y\in\mathbb{R}^{d}_{+}\).
1: Set \(\epsilon=10^{-8}\);
2: function Skorokhod(\(x\))
3:  \(y=x\);
4:  while there exists \(y_{i}<-\epsilon\) do
5:   Compute the set \(B=\{i:y_{i}<\epsilon\}\);
6:   Compute \(L_{B}=-R_{B,B}^{-1}x_{B}\);
7:   Compute \(y=x+R_{:,B}\times L_{B}\);
8:  end while
9:  return \(y\).
10: end function
```
**Subroutine 2** Solving the discretized Skorokhod problem
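The Euler discretization routine (Subroutine 1 in the paper's numbering, invoked as Discretize in Algorithm 4) is not reproduced above. The NumPy sketch below indicates how it might be implemented under our reading of the text: each step takes an unconstrained Euler step with the reference (negative) drift \(\tilde{\theta}\) and is then mapped back into the orthant by the Skorokhod subroutine. Function names, arguments, and the random-number interface are illustrative assumptions.

```
import numpy as np

def skorokhod(x, R, eps=1e-8):
    """Block-solve of the discretized Skorokhod problem, as in the pseudocode above."""
    y = x.copy()
    while np.any(y < -eps):
        B = np.where(y < eps)[0]                       # coordinates to be pushed
        L_B = -np.linalg.solve(R[np.ix_(B, B)], x[B])  # L_B = -R_{B,B}^{-1} x_B
        y = x + R[:, B] @ L_B                          # y = x + R_{:,B} L_B
    return y

def discretize(T, h, z0, theta_ref, A, R, rng):
    """Hypothetical sketch of Discretize(T, h, z): one discretized path of the
    reference RBM together with the Brownian increments that drive it."""
    N = int(round(T / h))
    d = len(z0)
    chol = np.linalg.cholesky(A)                       # A = chol @ chol.T
    Z = np.empty((N + 1, d)); Z[0] = z0
    dW = np.empty((N, d))
    for j in range(N):
        dW[j] = np.sqrt(h) * chol @ rng.standard_normal(d)
        x = Z[j] + dW[j] - theta_ref * h               # unconstrained Euler step
        Z[j + 1] = skorokhod(x, R)                     # reflect back into the orthant
    return Z, dW
```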
After the parameter values \(w_{1}\) and \(w_{2}\) have been determined, our proposed policy is as follows:
\[\theta_{w_{2}}(z)=\arg\max_{\theta\in\Theta}\left(\theta\cdot G_{w_{2}}(z)-c(z, \theta)\right),\ z\in\mathbb{R}^{d}_{+}. \tag{54}\]
**Remark 2**.: One can also consider the policy using \(\nabla V_{w_{1}}(\cdot)\) instead of \(G_{w_{2}}(\cdot).\) That is,
\[\arg\max_{\theta\in\Theta}\left(\theta\cdot\nabla V_{w_{1}}(z)-c(z,\theta)\right),\ z\in\mathbb{R}_{+}^{d}. \tag{55}\]
However, our numerical experiments suggest that this policy is inferior to (54).
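For completeness, a hypothetical training loop in the spirit of Algorithm 3 (which is referenced in the text but not reproduced here) could combine the empirical loss sketched above with a stochastic-gradient optimizer; the optimizer choice, batch size, and path-simulation interface below are assumptions.

```
import torch

def train_discounted(V, G, simulate_paths, F, r, h, steps=2000, batch=256, lr=1e-3):
    """simulate_paths(batch) is assumed to return torch tensors (Z, dW) of discretized
    reference paths and Brownian increments, e.g. built on the discretize sketch above."""
    opt = torch.optim.Adam(list(V.parameters()) + list(G.parameters()), lr=lr)
    for _ in range(steps):
        Z, dW = simulate_paths(batch)
        loss = discounted_loss(V, G, Z, dW, F, r, h)   # empirical version of (52)
        opt.zero_grad(); loss.backward(); opt.step()
    return V, G
```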
### Ergodic control
We parametrize \(v(\cdot)\) and \(\nabla v(\cdot)\) using deep neural networks \(v_{w_{1}}(\cdot)\) and \(g_{w_{2}}(\cdot)\) with parameters \(w_{1}\) and \(w_{2}\), respectively, and then use Equation (42) to define an auxiliary loss function
\[\tilde{\ell}(w_{1},w_{2},\xi) = \mathbb{E}\left[\left(v_{w_{1}}(\tilde{Z}(T))-v_{w_{1}}(\tilde{Z} (0))\right.\right.\] \[\left.\left.-\int_{0}^{T}g_{w_{2}}(\tilde{Z}(t))\mathrm{d}W(t)-T \xi+\int_{0}^{T}f\left(\tilde{Z}(t),g_{w_{2}}\left(\tilde{Z}(t)\right)\right) \mathrm{d}t\right)^{2}\right].\]
Then, defining the loss function \(\ell(w_{1},w_{2})=\min_{\xi}\tilde{\ell}(w_{1},w_{2},\xi)\) and noting that
\[\mathrm{Var}\left(X\right)=\min_{\xi}\mathbb{E}[\left(X-\xi\right)^{2}],\]
we arrive at the following expression for the loss function
\[\ell(w_{1},w_{2})=\mathrm{Var}\left(v_{w_{1}}(\tilde{Z}(T))-v_{w_{1}}(\tilde{Z}(0))-\int_{0}^{T}g_{w_{2}}(\tilde{Z}(t))\cdot\mathrm{d}W(t)+\int_{0}^{T}f\left(\tilde{Z}(t),g_{w_{2}}\left(\tilde{Z}(t)\right)\right)\mathrm{d}t\right). \tag{57}\]
We present our method for the ergodic control case formally in Algorithm 4.
```
0: The number of iteration steps \(M\), a batch size \(B\), a learning rate \(\alpha\), a time horizon \(T\), a discretization step-size \(h\) (for simplicity, we assume \(N\triangleq T/h\) is an integer), a starting point \(z\), and an optimization solver (SGD, ADAM, RMSProp, etc).
0: A neural network approximation of the value function \(v_{w_{1}}\) and the gradient function \(g_{w_{2}}\).
1: Initialize the neural networks \(v_{w_{1}}\) and \(g_{w_{2}}\); set \(z_{0}^{(i)}=z\) for \(i=1,2,...,B\).
2: for \(k\gets 0\) to \(M-1\) do
3: Simulate \(B\) discretized RBM paths and the Brownian increments \(\{\tilde{Z}^{(i)},\delta^{(i)}\}\) with a time horizon \(T\) and a discretization step-size \(h\) starting from \(\tilde{Z}^{(i)}(0)=z_{k}^{(i)}\) by invoking Discretize\((T,h,z_{k}^{(i)})\), for \(i=1,2,...,B\).
4: Compute the empirical loss \[\hat{\ell}(w_{1},w_{2}) = \widehat{\mathrm{Var}}\left(v_{w_{1}}(\tilde{Z}^{(i)}(T))-v_{w_{ 1}}(\tilde{Z}^{(i)}(0))-\sum_{j=0}^{N-1}g_{w_{2}}(\tilde{Z}^{(i)}(hj))\cdot \delta_{j}^{(i)}\right.\] \[\left.+\sum_{j=0}^{N-1}f\left(\tilde{Z}^{(i)}(hj),g_{w_{2}}\left( \tilde{Z}^{(i)}(hj)\right)\right)h\right).\]
5: Compute the gradient \(\partial\hat{\ell}(w_{1},w_{2})/\partial w_{1},\partial\hat{\ell}(w_{1},w_{2} )/\partial w_{2}\) and update \(w_{1},w_{2}\) using the chosen optimization solver.
6: Update \(z_{k+1}^{(i)}\) as the end point of the path \(\tilde{Z}^{(i)}\): \(z_{k+1}^{(i)}\leftarrow\tilde{Z}^{(i)}(T)\).
7: end for
8: return Functions \(v_{w_{1}}(\cdot)\) and \(g_{w_{2}}(\cdot)\).
```
**Algorithm 4** Method for the ergodic control case
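A hedged PyTorch-style rendering of the empirical variance loss computed in step 4 of Algorithm 4 might look as follows, with the same assumed tensor layout as the discounted sketch in Section 5.1; the callable `f` implementing \(f(z,x)\) is again an assumption.

```
def ergodic_loss(v, g, Z, dW, f, h):
    """Empirical analog of the loss (57): the variance is taken across the batch of
    simulated reference paths; the minimizing xi is then the batch mean of X."""
    B, N, d = dW.shape
    gz = g(Z[:, :-1].reshape(-1, d)).reshape(B, N, d)
    stoch = (gz * dW).sum(dim=(1, 2))                  # ~ int g . dW
    run = (f(Z[:, :-1], gz) * h).sum(dim=1)            # ~ int f dt
    X = v(Z[:, -1]).squeeze(-1) - v(Z[:, 0]).squeeze(-1) - stoch + run
    return X.var()
```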
After the parameter values \(w_{1}\) and \(w_{2}\) have been determined, our proposed policy is the following:
\[\bar{\theta}_{w_{2}}(z)=\arg\max_{\theta\in\Theta}\left(\theta\cdot g_{w_{2}}( z)-c(z,\theta)\right),\ z\in\mathbb{R}_{+}^{d}.\]
## 6 Three families of test problems
Here we specify three families of test problems for which numerical results will be presented later (see Section 7). Each family consists of RBM drift control problems indexed by \(d=1,2,\dots\), where \(d\) is the dimension of the orthant that serves as the problem's state space. The first of the three problem families, specified in Section 6.1, is characterized by a feed-forward network structure and linear cost of control. Recapitulating earlier work by Ata [2], Section 6.2 explains the interpretation of such problems as "heavy traffic" limits of input control problems for certain feed-forward queueing
networks.
Our second family of test problems is identical to the first one except that now the cost of control is quadratic rather than linear. The exact meaning of that phrase will be spelled out in Section 6.3, where we also explain the interpretation of such problems as heavy traffic limits of dynamic pricing problems for queueing networks. In Section 6.4, we describe two parametric families of policies with special structure that will be used later for comparison purposes in our numerical study. Finally, Section 6.5 specifies our third family of test problems, which have a separable structure that allows them to be solved exactly by analytical means. Such problems are of obvious value for evaluating the accuracy of our computational method.
### Main example with linear cost of control
We consider a family of test problems with parameters \(K=0,1,\ldots\), attaching to each such problem the index \(d\) (mnemonic for _dimension_) \(=K+1\). Problem \(d\) has state space \(\mathbb{R}^{d}_{+}\) and the \(d\times d\) reflection matrix
\[R=\left[\begin{array}{cccc}1&&&\\ -p_{1}&1&&\\ \vdots&&\ddots&\\ -p_{K}&&&1\end{array}\right], \tag{59}\]
where \(p_{1},\ldots,p_{K}>0\) and \(p_{1}+\cdots+p_{K}=1\). Also, the set of drift vectors available in each state is
\[\Theta=\prod_{k=0}^{K}\left[\underline{\theta}_{k},\overline{\theta}_{k}\right], \tag{60}\]
where the lower limit \(\underline{\theta}_{k}\) and upper limit \(\overline{\theta}_{k}\) are as specified in Section 6.2 below. Similarly, the \(d\times d\) covariance matrix \(A\) for problem \(d\) is as specified in Section 6.2. Finally, the cost function for problem \(d\) has the linear form
\[c(z,\theta)=h^{\top}z+c^{\top}\theta\ \ \mbox{where}\ \ h,\,c\in\mathbb{R}^{d}_{+}. \tag{61}\]
That is, the cost rate \(c(Z(t),u(Z(t)))\) that the system manager incurs under policy \(u\) at time \(t\) is linear in both the state vector \(Z(t)\) and the chosen drift rate \(u(Z(t))\).
In either the discounted control setting or the ergodic control setting, inspection of the HJB equation displayed earlier in Section 3 shows that, given this linear cost structure, there exists an optimal policy \(u^{*}(\cdot)\) such that
\[\mbox{either}\ \ u^{*}_{k}(z)=\underline{\theta}_{k}\ \ \mbox{or}\ \ u^{*}_{k}(z)=\overline{\theta}_{k} \tag{62}\]
for each state \(z\ \in\mathbb{R}^{K+1}_{+}\) and each component \(k=0,1,\ldots,K\).
In the next section we explain how drift control problems of the form specified here arise as heavy traffic limits in queueing theory. Strictly speaking, however, that interpretation of the test problems
is inessential to the main subject of this paper: the computational results presented in Section 7 can be read without reference to the queueing theoretic interpretations of our test problems.
### Interpretation as heavy traffic limits of queueing network control problems
Let us consider the feed-forward queueing network model of a make-to-order production system portrayed in Figure 1. There are \(d=K+1\) buffers, represented by the open-ended rectangles, indexed by \(k=0,1,\ldots,K.\) Each buffer has a dedicated server, represented by the circles in Figure 1. Arriving jobs wait in their designated buffer if the server is busy. There are two types of jobs arriving to the system: regular versus thin streams. Thin stream jobs have the same service time distributions as the regular jobs, but they differ from the regular jobs in two important ways: First, thin stream jobs can be turned away upon arrival. That is, a system manager can exercise admission control in this manner, but in contrast, she must admit all regular jobs arriving to the system. Second, the volume of thin stream jobs is smaller than that of the regular jobs; see Assumption 1.
Regular jobs enter the system only through buffer zero, as shown by the solid arrow pointing to buffer zero in Figure 1. A renewal process \(E=\{E(t):t\geq 0\}\) models the cumulative number of regular jobs arriving to the system over time. We let \(\lambda\) denote the arrival rate and \(a^{2}\) denote the squared coefficient of variation of the interarrival times for the regular jobs. The thin stream jobs arrive to buffer \(k\) (as shown by the dashed arrows in Figure 1) according to the renewal process \(A_{k}=\{A_{k}(t):t\geq 0\}\) for \(k=0,1,\ldots,K\). We let \(\eta_{k}\) denote the arrival rate and \(b_{k}^{2}\) denote the squared coefficient of variation of the interarrival times for renewal process \(A_{k}\).
Jobs in buffer \(k\) have i.i.d. general service time distributions with mean \(m_{k}\) and squared coefficient of variation \(s_{k}^{2}\geq 0,\ k=0,1,\ldots,K\); \(\mu_{k}=1/m_{k}\) is the corresponding service rate. We let \(S_{k}=\{S_{k}(t):t\geq 0\}\) denote the renewal process associated with the service completions by server \(k\) for \(k=0,1,\ldots,K\). To be specific, \(S_{k}(t)\) denotes the number of jobs server \(k\) processes by time \(t\) if it incurs no idleness during \([0,t]\). The jobs in each buffer are served on a first-come-first-served (FCFS) basis, and servers work continuously unless their buffer is empty. After receiving service, jobs in buffer zero join buffer \(k\) with probability \(p_{k},\ k=1,2,\ldots,K\), independently of other events. This probabilistic
Figure 1: A feedforward queueing network with thin arrival streams.
routing structure is captured by a vector-valued process \(\Phi(\cdot)\) where \(\Phi_{k}(\ell)\) denotes the total number of jobs routed to buffer \(k\) among the first \(\ell\) jobs served by server zero for \(k=1,\ldots,K\) and \(\ell\geq 1\). We let \(p=(p_{k})\) denote the \(K\)-dimensional vector of routing probabilities. Jobs in buffers \(1,\ldots,K\) leave the system upon receiving service.
As stated earlier, the system manager makes admission control decisions for thin stream jobs. Turning away a thin stream job arriving to buffer \(k\) (externally) results in a penalty of \(c_{k}\). For mathematical convenience, we model admission control decisions as if the system manager can simply "turn off" each of the thin stream arrival processes as desired. In particular, we let \(\Delta_{k}(t)\) denote the cumulative amount of time that the (external) thin stream input to buffer \(k\) is turned off during the interval \([0,t]\). Thus, the vector-valued process \(\Delta=(\Delta_{k})\) represents the admission control policy. Similarly, we let \(T_{k}(t)\) denote the cumulative amount of time server \(k\) is busy during the time interval \([0,t]\), and \(I_{k}(t)=t-T_{k}(t)\) denotes the cumulative amount of idleness that server \(k\) incurs during \([0,t]\).
Letting \(Q_{k}(t)\) denote the number of jobs in buffer \(k\) at time \(t\), the vector-valued process \(Q=(Q_{k})\) will be called the queue-length process. Given a control \(\Delta=(\Delta_{k})\), assuming \(Q(0)=0\), it follows that
\[Q_{0}(t) = E(t)+A_{0}(t-\Delta_{0}(t))-S_{0}(T_{0}(t))\geq 0,\ \ t\geq 0, \tag{63}\] \[Q_{k}(t) = A_{k}(t-\Delta_{k}(t))+\Phi_{k}\left(S_{0}(T_{0}(t))\right)-S_{k }(T_{k}(t))\geq 0,\ \ t\geq 0,\ \ k=1,\ldots,K. \tag{64}\]
Moreover, the following must hold:
\[I(\cdot)\text{ is continuous and nondecreasing with }I(0)=0, \tag{65}\] \[I_{k}(\cdot)\text{ only increases at those times }t\text{ when }Q_{k}(t)=0,\ \ k=0,1,\ldots,K, \tag{66}\] \[\Delta_{k}(t)-\Delta_{k}(s)\leq t-s,\ \ 0\leq s\leq t<\infty,\ \ k=0,1,\ldots,K, \tag{67}\] \[I,\Delta\text{ are non-anticipating.} \tag{68}\]
The system manager also incurs a holding cost at rate \(h_{k}\) per job in buffer \(k\) per unit of time. We use the process \(\xi=\{\xi(t),t\geq 0\}\) as a proxy for the cumulative cost under a given admission control policy \(\Delta\left(\cdot\right),\) where
\[\xi(t)=\sum_{k=0}^{K}c_{k}\eta_{k}\Delta_{k}(t)+\sum_{k=0}^{K}\int_{0}^{t}h_{k }Q_{k}(s)\,\mathrm{d}s,\ \ t\geq 0.\]
This is an approximation of the realized cost because the first term on the right-hand side replaces the admission control penalties actually incurred with their means.
In order to derive the approximating Brownian control problem, we consider a sequence of systems indexed by a system parameter \(n=1,2,\ldots\); we attach a superscript of \(n\) to various quantities of interest. Following the approach used by Ata [2], we assume that the sequence of systems satisfies the following heavy traffic assumption.
**Assumption 1**.: _For \(n\geq 1,\) we have that_
\[\lambda^{n}=n\lambda,\eta_{k}^{n}=\eta_{k}\sqrt{n}\text{ and }\mu_{k}^{n}=n\mu_{k}+ \sqrt{n}\beta_{k},\ \ k=0,1,\ldots,K,\]
_where \(\lambda,\)\(\mu_{k},\eta_{k}\) and \(\beta_{k}\) are nonnegative constants. Moreover, we assume that_
\[\lambda=\mu_{0}=\frac{\mu_{k}}{p_{k}}\ \text{ for }\ k=1,\ldots,K.\]
One starts the approximation procedure by defining suitably centered and scaled processes. For \(n\geq 1,\) we define
\[\hat{E}^{n}(t)=\frac{E^{n}(t)-\lambda^{n}t}{\sqrt{n}}\ \text{ and }\ \hat{\Phi}^{n}(q)=\frac{\Phi\left([nq]\right)-p([nq])}{\sqrt{n}},\ \ t\geq 0,\ \ q\geq 0,\] \[\hat{A}_{k}^{n}(t)=\frac{A_{k}^{n}(t)-\eta_{k}^{n}t}{\sqrt{n}}\ \text{ and }\ \hat{S}_{k}^{n}(t)=\frac{S_{k}^{n}(t)-\mu_{k}^{n}t}{\sqrt{n}},\ t\geq 0,\ k=0,1,\ldots,K,\] \[\hat{Q}^{n}(t)=\frac{Q^{n}(t)}{\sqrt{n}}\ \text{ and }\ \hat{\xi}^{n}(t)=\frac{\xi^{n}(t)}{\sqrt{n}},\ t\geq 0.\]
In what follows, we assume
\[T_{k}^{n}(t)=t-\frac{1}{\sqrt{n}}I_{k}(t)+o\left(\frac{1}{\sqrt{n}}\right),\ t \geq 0,\ k=0,1,\ldots,K, \tag{69}\]
where \(I_{k}(\cdot)\) is the limiting idleness process for server \(k;\) see [18] for an intuitive justification of (69).
Then, defining
\[\chi_{0}^{n}(t) = \hat{E}^{n}(t)+\hat{A}_{0}^{n}(t-\Delta_{0}(t))-\hat{S}_{0}^{n}(T_{0}^{n}(t)),\ t\geq 0,\] \[\chi_{k}^{n}(t) = \hat{A}_{k}^{n}(t-\Delta_{k}(t))+\hat{\Phi}_{k}^{n}\left(\frac{1}{n}S_{0}^{n}(T_{0}^{n}(t))\right)+p_{k}\hat{S}_{0}^{n}(T_{0}^{n}(t))\] \[\ \ \ -\hat{S}_{k}^{n}(T_{k}^{n}(t)),\ t\geq 0,\ k=1,2,\ldots,K,\]
and using Equations (63) - (64) and (69), it is straightforward to derive the following for \(t\geq 0\) and \(k=1,\ldots,K:\)
\[\hat{Q}_{0}^{n}(t) = \chi_{0}^{n}(t)+\left(\eta_{0}-\beta_{0}\right)t-\eta_{0}\Delta_{ 0}(t)+\mu_{0}I_{0}(t)+o(1), \tag{70}\] \[\hat{Q}_{k}^{n}(t) = \chi_{k}^{n}(t)+\left(\eta_{k}+p_{k}\beta_{0}-\beta_{k}\right)t- \eta_{k}\Delta_{k}(t)+\mu_{k}I_{k}(t)-p_{k}\mu_{0}I_{0}(t)+o(1). \tag{71}\]
Moreover, it follows from Equation (67) that \(\Delta_{k}(t)\) is absolutely continuous. We denote its density by \(\delta_{k}(\cdot)\), i.e.,
\[\Delta_{k}(t)=\int_{0}^{t}\delta_{k}(s)\mathrm{d}s,\ t\geq 0,\ k=0,1,\ldots,K,\]
where \(\delta_{k}(t)\in[0,1].\) Using this, we write
\[\hat{\xi}^{n}(t)=\sum_{k=0}^{K}\int_{0}^{t}c_{k}\eta_{k}\delta_{k}(s)\mathrm{d}s+\sum_{k=0}^{K}\int_{0}^{t}h_{k}\hat{Q}_{k}^{n}(s)\mathrm{d}s,\ t\geq 0. \tag{72}\]
Then passing to the limit formally as \(n\rightarrow\infty,\) and denoting the weak limit of \((\hat{Q}^{n},\chi^{n},\hat{\xi}^{n})\) by \(\left(Z,\chi,\xi\right),\) where \(\chi\) is a \((K+1)\)-dimensional driftless Brownian motion with covariance matrix (see Appendix C for its derivation)
\[A=\mu_{0}\left[\begin{array}{cccc}s_{0}^{2}+a^{2}&-p_{1}s_{0}^{2}&\cdots& \cdots&-p_{K}s_{0}^{2}\\ -p_{1}s_{0}^{2}&p_{1}(1-p_{1})+p_{1}^{2}s_{0}^{2}+p_{1}s_{1}^{2}&p_{1}p_{2} \left(s_{0}^{2}-1\right)&\cdots&p_{1}p_{K}\left(s_{0}^{2}-1\right)\\ \vdots&p_{1}p_{2}\left(s_{0}^{2}-1\right)&\ddots&&\vdots\\ \vdots&\vdots&&\ddots&p_{K-1}p_{K}\left(s_{0}^{2}-1\right)\\ -p_{K}s_{0}^{2}&p_{1}p_{K}\left(s_{0}^{2}-1\right)&\cdots&\cdots&p_{K}(1-p_{K} )+p_{K}^{2}s_{0}^{2}+p_{K}s_{K}^{2}\end{array}\right],\]
we deduce from (70) - (71) and (72) that
\[Z_{0}(t) = \chi_{0}(t)+(\eta_{0}-\beta_{0})t-\int_{0}^{t}\eta_{0}\delta_{0}(s)\mathrm{d}s+\mu_{0}I_{0}(t),\] \[Z_{k}(t) = \chi_{k}(t)+(\eta_{k}+p_{k}\beta_{0}-\beta_{k})\,t-\int_{0}^{t}\eta_{k}\delta_{k}(s)\mathrm{d}s+\mu_{k}I_{k}(t),\ k=1,\ldots,K,\] \[\xi(t) = \sum_{k=0}^{K}\int_{0}^{t}c_{k}\eta_{k}\delta_{k}(s)\mathrm{d}s+\sum_{k=0}^{K}\int_{0}^{t}h_{k}Z_{k}(s)\mathrm{d}s.\]
In order to streamline the notation, we make the following change of variables:
\[Y_{k}(t) = \mu_{k}I_{k}(t),\ k=0,\ldots,K,\] \[\theta_{0}(t) = \eta_{0}\delta_{0}(t)-(\eta_{0}-\beta_{0}),\ t\geq 0,\] \[\theta_{k}(t) = \eta_{k}\delta_{k}(t)-(\eta_{k}+p_{k}\beta_{0}-\beta_{k})\]
and let
\[\underline{\theta}_{0} = \beta_{0}-\eta_{0}\ \ \mbox{and}\ \ \overline{\theta}_{0}=\beta_{0},\] \[\underline{\theta}_{k} = \beta_{k}-\eta_{k}-p_{k}\beta_{0}\ \ \mbox{and}\ \ \overline{\theta}_{k}=\beta_{k}-p_{k}\beta_{0},\ k=1,\ldots,K.\]
Lastly, we define the set of negative drift vectors available to the system manager as in Equation (60). As a result, we arrive at the following Brownian system model:
\[Z_{0}(t) = \chi_{0}(t)-\int_{0}^{t}\theta_{0}(s)\mathrm{d}s+Y_{0}(t),\ \ t\geq 0, \tag{73}\]
\[Z_{k}(t) = \chi_{k}(t)-\int_{0}^{t}\theta_{k}(s)\mathrm{d}s-Y_{k}(t)+p_{k}Y_{0} (t),\ \ k=1,\ldots,K, \tag{74}\]
which can be written as in Equation (2) with \(d=K+1\), where the reflection matrix \(R\) is given by Equation (59). Moreover, the processes \(Y,Z\) inherit properties in Equation (63) - (73) from their pre-limit counterparts in the queueing model, cf. Equations (65) - (66).
To minimize technical complexity, we restrict attention to stationary Markov control policies as done in Section 3. That is, \(\theta(t)=u(Z(t))\) for \(t\geq 0\) for some policy function \(u:\mathbb{R}_{+}^{d}\to\Theta.\) Then, defining \(c=(c_{0},c_{1},\ldots,c_{K})^{\top},\ h=(h_{0},h_{1},\ldots,h_{K})^{\top}\) and
\[c(z,\theta)=h^{\top}z+c^{\top}\theta,\]
as in Equation (61), the cumulative cost incurred over the time interval \([0,t]\) under policy \(u\) can be written as in Equation (10). Note that \(C^{u}(t)\) and \(\xi(t)\) differ only by a term that is independent of the control. Given \(C^{u}(t)\), one can formulate the discounted control problem as done in Section 3.1. Similarly, the ergodic control problem can be formulated as done in Section 3.2.
**Interpreting the solution of the drift control problem in the context of the queueing network formulation.** Because the instantaneous cost rate \(c(z,\theta)\) is linear in the control, inspection of the HJB equation reveals that the optimal control is of bang-bang nature. That is, \(\theta_{k}(t)\in\{\underline{\theta}_{k},\overline{\theta}_{k}\}\) for all \(k,t\) as stated in Equation (62). This can be interpreted in the context of the queueing network displayed in Figure 1 as follows: For \(k=0,1,\ldots,K\), whenever \(\theta_{k}(t)=\overline{\theta}_{k}\), the system manager turns away the thin stream jobs arriving to buffer \(k\) externally, i.e., she shuts off the renewal process \(A_{k}(\cdot)\) at time \(t\). Otherwise, she admits them to the system. Of course, the optimal policy is determined by the gradient \(\nabla V(z)\) of the value function through the HJB equation, which we solve for using the method described in Section 5.
### Related example with quadratic cost of control
Celik and Maglaras [12] and Ata and Barjesteh [3] advance formulations where a system manager controls the arrival rate of customers to a queueing system by exercising dynamic pricing. One can follow a similar approach for the feed-forward queueing networks displayed in Figure 1 with suitable modifications, e.g., the dashed arrows also correspond to arrivals of regular jobs. This ultimately results in a problem of drift control for RBM with the cost of control
\[c(\theta,z)=\sum_{k=0}^{K}\alpha_{k}(\theta_{k}-\underline{\theta}_{k})^{2}+ \sum_{k=0}^{K}h_{k}z_{k}, \tag{75}\]
where \(\underline{\theta}\) is the drift rate vector corresponding to a nominal price vector.
### Two parametric families of benchmark policies
Recall that the optimal policy can be characterized as
\[u^{*}(z)=\arg\max_{\theta\in\Theta}\left\{\theta\cdot\nabla V(z)-c(z,\theta) \right\},\ z\in\mathbb{R}_{+}^{d}. \tag{76}\]
**The benchmark policy for the main test problem.** In our main test problem (see Section 6.1), we have \(c(z,\theta)=h^{\top}z+c^{\top}\theta\). Therefore, it follows from (76) that for \(k=0,1,\ldots,K\),
\[u_{k}^{*}(z)=\left\{\begin{array}{ll}\overline{\theta}_{k}&\text{ if }\left(\nabla V(z)\right)_{k}\geq c_{k},\\ \underline{\theta}_{k}&\text{ otherwise.}\end{array}\right.\]
Namely, the optimal policy is of bang-bang type. Therefore, we consider the following linear-boundary policies as our benchmark policies: For \(k=0,1,\ldots,K\),
\[u_{k}^{\text{lbp}}(z)=\left\{\begin{array}{ll}\overline{\theta}_{k}&\text{ if }\beta_{k}^{\top}z\geq c_{k},\\ \underline{\theta}_{k}&\text{ otherwise,}\end{array}\right.\]
where \(\beta_{0},\beta_{1}\ldots,\beta_{K}\in\mathbb{R}^{K+1}\) are vectors of policy parameters to be tuned.
In our numerical study, we focus attention on the symmetric case where
\[h_{0}>h_{1}=\ldots=h_{K},\] \[c_{0}=c_{1}=\ldots=c_{K},\] \[p_{1}=\ldots=p_{K}=\frac{1}{K},\] \[\underline{\theta}_{1}=\ldots=\underline{\theta}_{K},\] \[\overline{\theta}_{1}=\ldots=\overline{\theta}_{K}.\]
Due to this symmetry, the downstream buffers look identical. As such, we restrict attention to parameter vectors of the following form:
\[\beta_{0} = \left(\phi_{1},\phi_{2},\ldots,\phi_{2}\right),\text{ and}\] \[\beta_{i} = \left(\phi_{3},\phi_{4},\ldots\phi_{4},\phi_{5},\phi_{4},\ldots, \phi_{4}\right)\text{ where }\phi_{5}\text{ is the }i+1^{\text{st}}\text{ element of }\beta_{i}\text{ for }i=1,\ldots,K.\]
The parameter vector \(\beta_{0}\), which is used to determine the benchmark policy for buffer zero, has two distinct parameters: \(\phi_{1}\) and \(\phi_{2}\). In considering the policy for buffer zero, \(\phi_{1}\) captures the effect of its own queue length, whereas \(\phi_{2}\) captures the effects of the downstream buffers \(1,\ldots,K\). We use a common parameter for the downstream buffers because they look identical from the perspective of buffer zero. Similarly, the parameter vector \(\beta_{i}\) (\(i=1,\ldots,K\)) has three distinct parameters: \(\phi_{3}\), \(\phi_{4}\) and \(\phi_{5}\), where \(\phi_{3}\) is used as the multiplier for buffer zero (the upstream buffer), \(\phi_{5}\) is used to capture the effect of buffer \(i\) itself and \(\phi_{4}\) is used for all other downstream buffers. Note that all \(\beta_{i}\) use the same three parameters \(\phi_{3}\), \(\phi_{4}\) and \(\phi_{5}\) for \(i=1,\ldots,K\). They only differ with respect to the position
of \(\phi_{5}\), i.e., it is in the \(i+1^{\text{st}}\) position for \(\beta_{i}\).
In summary, the benchmark policy uses five distinct parameters in the symmetric case. This allows us to do a brute-force search via simulation on a five-dimensional grid regardless of the number of buffers.
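To make the parameterization explicit, here is a small sketch (hypothetical helper names, not taken from the paper's code) that assembles \(\beta_{0},\ldots,\beta_{K}\) from the five parameters \(\phi_{1},\ldots,\phi_{5}\) and evaluates the corresponding linear-boundary policy.

```python
# Assembling the five-parameter symmetric benchmark and evaluating the policy (sketch).
import numpy as np

def benchmark_betas(phi, K):
    phi1, phi2, phi3, phi4, phi5 = phi
    betas = [np.array([phi1] + [phi2] * K, dtype=float)]      # beta_0
    for i in range(1, K + 1):
        beta_i = np.array([phi3] + [phi4] * K, dtype=float)   # phi3 for buffer zero, phi4 elsewhere
        beta_i[i] = phi5                                       # phi5 sits in the (i+1)-st position
        betas.append(beta_i)
    return betas                                               # K+1 vectors in R^{K+1}

def linear_boundary_policy(z, betas, lo, hi, c):
    """theta_k = hi_k if beta_k . z >= c_k, and lo_k otherwise."""
    z = np.asarray(z, dtype=float)
    return np.array([hi[k] if betas[k] @ z >= c[k] else lo[k] for k in range(len(betas))])
```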
**The benchmark policy for the test problem with the quadratic cost of control.** In this case, substituting Equation (75) into Equation (76) gives the following characterization of the optimal policy:
\[u_{k}^{*}(z)=\underline{\theta}_{k}+\frac{(\nabla V(z))_{k}}{2\alpha_{k}},\ k=0,1, \ldots,K. \tag{77}\]
Namely, the optimal policy is affine in the gradient. Therefore, we consider the following affine-rate policies as our benchmark policies: For \(k=0,1,\ldots,K\),
\[u_{k}^{\text{arp}}(z)=\underline{\theta}_{k}+\beta_{k}^{\top}z,\]
where \(\beta_{0},\beta_{1}\ldots,\beta_{K}\in\mathbb{R}^{K+1}\) are vectors of policy parameters to be tuned. We truncate this at the upper bound \(\overline{\theta}_{k}\) if needed.
We focus attention on the symmetric case for this problem formulation too. To be specific, we assume
\[h_{0}>h_{1}=\ldots=h_{K},\] \[\alpha_{0}=\alpha_{1}=\ldots=\alpha_{K},\] \[p_{1}=\ldots=p_{K}=\frac{1}{K},\] \[\underline{\theta}_{1}=\ldots=\underline{\theta}_{K},\] \[\overline{\theta}_{1}=\ldots=\overline{\theta}_{K}.\]
Due to this symmetry, the downstream buffers look identical. As such, we restrict attention to parameter vectors of the following form:
\[\beta_{0} = (\phi_{1},\phi_{2},\ldots,\phi_{2})\,,\text{ and}\] \[\beta_{i} = (\phi_{3},\phi_{4},\ldots\phi_{4},\phi_{5},\phi_{4},\ldots,\phi_{ 4})\text{ where }\phi_{5}\text{ is the }i+1^{\text{st}}\text{element for }i=1,\ldots,K.\]
As done for the first benchmark policy above, this particular form of the parameter vectors can be justified using the symmetry as well.
### Parallel-server test problems
In this section, we consider a problem whose solution can be derived analytically by considering a one-dimensional problem. To be specific, we consider the parallel-server network that consists of \(K\) identical single-server queues as displayed in Figure 2. Clearly, this network can be decomposed into \(K\) separate single-server queues, leading to \(K\) separate one-dimensional problem formulations, which
can be solved analytically, see Appendix D for details. For this example we have that \(R=I_{d\times d}\) and \(A=I_{d\times d}\). In addition, we assume that the action space \(\Theta\) and the cost function \(c(z,\theta)\) are the same as above.
## 7 Computational results
For the test problems introduced in Section 6, we now compare the performance of policies derived using our method (see Section 5) with the best benchmark we could find. The results show that our method performs well, and it remains computationally feasible up to at least dimension \(d=30\). We implement our method using three-layer or four-layer neural networks with the elu activation function [38] in Tensorflow 2 [1], and using code adapted from that of Han et al. [17] and Zhou et al. [50]; see Appendix E for further details of our implementation.1
Footnote 1: Our code is available in [https://github.com/nian-si/RBMSSolver](https://github.com/nian-si/RBMSSolver).
For our main test problem with linear cost of control (introduced previously in Section 6.1), and also for its variant with quadratic cost of control (Section 6.3), the following parameter values are assumed: \(h_{0}=2\), \(h_{k}=1.9\) for \(k=1,\ldots,K\), \(c_{k}=1\) for \(k=0,\ldots,K\), and \(p_{k}=1/K\) for \(k=1,\ldots,K\). Also, the reflection matrix \(R\) and the covariance matrix \(A\) for those families of problems are as follows:
\[R=\left[\begin{array}{cccc}1&&&\\ -1/K&1&&\\ \vdots&&\ddots&\\ -1/K&&&1\end{array}\right]\text{ and }A=\left[\begin{array}{ccccc}1&0&\cdots&\cdots&0\\ 0&1&-\frac{1}{K^{2}}&\cdots&-\frac{1}{K^{2}}\\ \vdots&-\frac{1}{K^{2}}&\ddots&&\vdots\\ \vdots&\vdots&&\ddots&-\frac{1}{K^{2}}\\ 0&-\frac{1}{K^{2}}&\cdots&\cdots&1\end{array}\right].\]
However, as stated previously in Section 6.5, the reflection matrix and covariance matrix for our parallel-server test problems are \(R=I_{d\times d}\) and \(A=I_{d\times d}\). Problems in that third class have \(K=d\) buffers indexed by \(k=1,\ldots,K\), and we set \(h_{1}=2\) and \(h_{k}=1.9\) for \(k=2,\ldots,K\).
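For concreteness, a short sketch (not from the authors' repository) that builds these matrices for a given number of downstream buffers \(K\geq 1\), or a given dimension \(d\) in the parallel-server case, could read as follows.

```python
# Reflection and covariance matrices for the test problems (illustrative sketch).
import numpy as np

def feedforward_matrices(K):
    d = K + 1
    R = np.eye(d)
    R[1:, 0] = -1.0 / K                   # entries -p_k = -1/K in the first column
    A = np.eye(d)
    A[1:, 1:] -= 1.0 / K**2               # -1/K^2 between distinct downstream buffers
    np.fill_diagonal(A, 1.0)              # restore the unit diagonal
    return R, A

def parallel_server_matrices(d):
    return np.eye(d), np.eye(d)           # R = A = I for the separable test problems
```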
### Main test problem with linear cost of control
For our main test problem with linear cost of control (Section 6.1), we take
\[\underline{\theta}_{k}=0\text{ and }\overline{\theta}_{k}=b\in\{2,10\}\text{ for all }k.\]
Figure 2: A decomposable parallel-server queueing network.
Also, interest rates \(r=0.1\) and \(r=0.01\) will be considered in the discounted case.
To begin, let us consider the simple case where \(K=0\) (that is, there are no downstream buffers in the queueing network interpretation of the problem) and hence \(d=1\). In this case, one can solve the HJB equation analytically; see Appendix D for details. For the discounted formulation with \(r=0.1\), Figure 3 compares the derivative of the value function computed using the analytical solution, shown in blue, with the neural network approximation for it that we computed using our method, shown in red. (The comparisons for \(r=0.01\) and the ergodic control case are similar.)
Combining Figure 3 with Equation (76), one sees that the policy derived using our method is close to the optimal policy. Table 1 reports the simulated performance with standard errors of these two policies based on four million sample paths and using the same discretization of time as in our computational method. Specifically, we report the long-run average cost under each policy in the ergodic control case, and report the simulated value \(V(0)\) in the discounted case. To repeat, the benchmark policy in this case is the optimal policy determined analytically, but not accounting for the discretization of the time scale. Of course, all the performance figures reported in the table are subject to simulation errors. Finally, it is worth noting that our method took less than one hour to compute its policy recommendations using a 10-CPU core computer.
Let us consider now the two-dimensional case (\(K=1\)), where the optimal policy is unknown. Therefore, we compare our method with the best benchmark we could find: the linear boundary policy described in Section 6.4. In the two-dimensional case, the linear boundary policy reduces to the following:
\[\theta_{0}(z)=b\mathbb{I}\left\{\beta_{0}^{\top}z\geq 1\right\}\;\text{and} \;\;\theta_{1}(z)=b\mathbb{I}\left\{\beta_{1}^{\top}z\geq 1\right\}.\]
Through simulation, we perform a brute-force search to identify the best values of \(\beta_{0}\) and \(\beta_{1}\). The policies for \(b=2\) and \(b=10\) are shown in Figures 4 and 5, respectively, for the discounted control
Figure 3: Comparison between the derivative \(G_{w}(\cdot)\) learned from neural networks and the derivative of the optimal value function for the case of \(d=1\) and \(r=0.1\). The dotted line indicates the cost \(c_{0}=1\). When the value function gradient is above this dotted line, the optimal control is \(\theta=b\), and otherwise it is \(\theta=0\).
case with \(r=0.1\). Our proposed policy sets the drift to \(b\) in the red region and to zero in the blue region, whereas the best-linear boundary policy is represented by the white-dotted line. That is, the benchmark policy sets the drift to \(b\) in the region above and to the right of the dotted line, and sets it to zero below and left of the line. Table 2 presents the costs with standard errors of the benchmark policy and our proposed policy obtained in a simulation study. The two policies have similar performance. Our method takes about one hour to compute policy recommendations using a 10-CPU core computer.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline \multirow{2}{*}{\(b=2\)} & Our policy & 1.455 \(\pm\) 0.0006 & 145.3 \(\pm\) 0.05 & 14.29 \(\pm\) 0.004 \\ & Benchmark & 1.456 \(\pm\) 0.0006 & 145.3 \(\pm\) 0.05 & 14.29 \(\pm\) 0.004 \\ \hline \multirow{2}{*}{\(b=10\)} & Our policy & 1.375 \(\pm\) 0.0007 & 137.2 \(\pm\) 0.06 & 13.56 \(\pm\) 0.005 \\ & Benchmark & 1.374 \(\pm\) 0.0007 & 137.2 \(\pm\) 0.06 & 13.56 \(\pm\) 0.005 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance comparison of our proposed policy with the benchmark policy in the one-dimensional case (\(K=0\)).
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline \multirow{2}{*}{\(b=2\)} & Our policy & 2.471 \(\pm\) 0.0008 & 246.6 \(\pm\) 0.08 & 24.28 \(\pm\) 0.006 \\ & Benchmark & 2.473 \(\pm\) 0.0008 & 246.8 \(\pm\) 0.08 & 24.29 \(\pm\) 0.006 \\ \hline \multirow{2}{*}{\(b=10\)} & Our policy & 2.338 \(\pm\) 0.0009 & 233.3 \(\pm\) 0.09 & 23.10 \(\pm\) 0.006 \\ & Benchmark & 2.338 \(\pm\) 0.0009 & 233.6 \(\pm\) 0.09 & 23.10 \(\pm\) 0.006 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison of our proposed policy with the benchmark policy in the two-dimensional case (\(K=1\)).
Finally, let us consider the six-dimensional case (\(K=5\)), where the linear boundary policy reduces to
\[\theta_{i}(z)=b\mathbb{I}\left\{\beta_{i}^{\top}z\geq 1\right\}\text{ for }i=0,1,2,\ldots,5.\]
Although there appear to be 36 parameters to be tuned, recall that we reduced the number of parameters to five in Section 6.4 by exploiting symmetry. This makes the brute-force search computationally feasible. Table 3 compares the performance with standard errors of our proposed policies with the benchmark policies. They have similar performance. In this case, the running time for our method is several hours using a 10-CPU computer.
Figure 4: Graphical representation of the policy learned from neural networks and the benchmark policy for the case \(b=2,d=2\) and \(r=0.1\)
Figure 5: Graphical representation of the policy learned from neural networks and the benchmark policy for the case \(b=10,d=2\) and \(r=0.1\)
### Test problems with quadratic cost of control
In this section, we consider the test problem introduced in Section 6.3, for which we set \(\alpha_{k}=1\) and \(\underline{\theta}_{k}=1\) for all \(k\). As in the previous treatment of our main test example, we report results for the cases of \(d=1,2,6\) in Tables 4, 5, 6, respectively, where the benchmark policies are the affine-rate policies discussed in Section 6.4, with policy parameters optimized via simulation and brute-force search. We observe that our proposed policies outperform the best affine-rate policies by very small margins in all cases.
In the one-dimensional ergodic control case (\(K=0\)), we obtain analytical solutions to the RBM control problem in closed form by solving the HJB equation directly, which reduces to a first-order ordinary differential equation in this case; see Appendix D for details. Figure 6 compares the derivative of the optimal value function (derived in closed form) with its approximation via neural networks in the ergodic case. Combining Figure 6 with Equation (77) reveals that our proposed policy is close to the optimal policy.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline Our policy & 0.757 \(\pm\) 0.0004 & 75.53 \(\pm\) 0.03 & 7.415 \(\pm\) 0.003 \\ Benchmark & 0.758 \(\pm\) 0.0004 & 75.67 \(\pm\) 0.03 & 7.427 \(\pm\) 0.003 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance comparison of our proposed policy with the benchmark policy in the case of quadratic cost of control and \(d=1\).
\begin{table}
\begin{tabular}{c l c c c} \hline \hline & & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline \multirow{2}{*}{\(b=2\)} & Our policy & 7.927 \(\pm\) 0.001 & 791.0 \(\pm\) 0.1 & 77.83 \(\pm\) 0.01 \\ & Benchmark & 7.927 \(\pm\) 0.001 & 791.3 \(\pm\) 0.1 & 77.83 \(\pm\) 0.01 \\ \hline \multirow{2}{*}{\(b=10\)} & Our policy & 7.565 \(\pm\) 0.0016 & 754.8 \(\pm\) 0.15 & 74.61 \(\pm\) 0.01 \\ & Benchmark & 7.525 \(\pm\) 0.0016 & 751.7 \(\pm\) 0.15 & 74.32 \(\pm\) 0.01 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison of our proposed policy with the benchmark policy in the six-dimensional case \(d=6\) (\(K=5\)).
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline Our policy & 1.216 \(\pm\) 0.0005 & 121.3 \(\pm\) 0.04 & 11.94 \(\pm\) 0.003 \\ Benchmark & 1.219 \(\pm\) 0.0005 & 121.7 \(\pm\) 0.05 & 11.96 \(\pm\) 0.003 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance comparison of our proposed policy with the benchmark policy in the case of quadratic cost of control and \(d=2\) (\(K=1\))
In the two-dimensional case, our proposed policy is shown in Figure 7 for the ergodic case, with contour lines showing the state vectors \((z_{0},z_{1})\) for which the policy chooses successively higher drift rates. The white dotted lines similarly show the states \((z_{0},z_{1})\) for which our benchmark policy (that is, the best affine-rate policy) chooses the drift rate \(\theta_{k}=1.5\) (for \(k=0\) in the left panel and \(k=1\) in the right panel).
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline Our policy & 3.863 \(\pm\) 0.0008 & 385.7 \(\pm\) 0.08 & 37.92 \(\pm\) 0.006 \\ Benchmark & 3.874 \(\pm\) 0.0008 & 386.9 \(\pm\) 0.08 & 38.04 \(\pm\) 0.006 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Performance comparison of our proposed policy with the benchmark policy in the case of quadratic cost of control and \(d=6\) (\(K=5\))
Figure 6: Comparison of the gradient approximation \(G_{w}(\cdot)\) learned from neural networks with the derivative of the optimal value function for the ergodic control case with quadratic cost of control in the one-dimensional case (\(d=1\)).
Figure 7: Graphical representation of the policy learned from neural networks and the benchmark policy for the ergodic case with \(d=2\).
### Parallel-server test problems
This section focuses on parallel-server test problems (see Section 6.5) to demonstrate our method's scalability. As illustrated in Figure 2, the parallel-server networks are essentially \(K\) independent copies of the one-dimensional case. We present the results in Table 7 for \(d=30\) and linear cost of control. When \(b=2\), our policies perform essentially as well as the optimal policy, while for \(b=10\), our policies perform within \(1\%\) of the optimal policy. The run-time for our method is about one day in this case using a 20-CPU core computer.
For quadratic cost of control, we are able to solve the test problems up to at least 100 dimensions. The results for \(d=100\) are given in Table 8, where the benchmark policies are the best affine-rate policies (see Section 6.4). The performance of our policy is within \(1\%\) of the benchmark performance. The run-time for our method is several days in this case using a 30-CPU core computer.
## 8 Concluding remarks
Consider the general drift control problem formulated in Section 3, assuming specifically that the instantaneous cost rate \(c(z,\theta)\) is linear in \(\theta\), and further assuming that the set of available drift vectors is a rectangle \(\Theta=[0,b_{1}]\times\cdots\times[0,b_{d}]\). If one relaxes such a problem by letting \(b_{i}\uparrow\infty\) for one or more \(i\), then one obtains what is called a _singular_ control problem, cf. Kushner and Martins [31]. Optimal policies for such problems typically involve the imposition of _endogenous_ reflecting barriers, that is, reflecting barriers imposed by the system controller in order to minimize cost, in addition to exogenous reflecting barriers that may be imposed to represent physical constraints in
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & & Ergodic & r = 0.01 & r = 0.1 \\ \hline \multirow{2}{*}{\(b=2\)} & Our policy & 42.56 \(\pm\) 0.003 & 4247 \(\pm\) 0.3 & 417.3 \(\pm\) 0.02 \\ & Benchmark & 42.52 \(\pm\) 0.003 & 4244 \(\pm\) 0.3 & 417.2 \(\pm\) 0.02 \\ \hline \multirow{2}{*}{\(b=10\)} & Our policy & 40.53 \(\pm\) 0.004 & 4054 \(\pm\) 0.4 & 399.4 \(\pm\) 0.026 \\ & Benchmark & 40.23 \(\pm\) 0.004 & 4018 \(\pm\) 0.4 & 396.7 \(\pm\) 0.024 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Performance comparison between our proposed policy and the benchmark policy for 30-dimensional parallel-server test problems with linear cost of control.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Ergodic & \(r=0.01\) & \(r=0.1\) \\ \hline Our policy & 72.74 \(\pm\) 0.003 & 7258.3 \(\pm\) 0.3 & 712.4 \(\pm\) 0.02 \\ Benchmark & 72.53 \(\pm\) 0.003 & 7237.3 \(\pm\) 0.3 & 710.2 \(\pm\) 0.02 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Performance comparison between our proposed policy and the benchmark policy for 100-dimensional parallel-server test problems with quadratic cost of control.
the motivating application.
There are many examples of queueing network control problems whose natural heavy traffic approximations involve singular control; see, for example, Krichagina and Taksar [30], Martins and Kushner [34], and Martins et al. [33]. Given that motivation, we intend to show in future work how the computational method developed in this paper for drift control can be extended in a natural way to treat singular control, and to illustrate that extension by means of queueing network applications.
Separately, the following are three desirable generalizations of the problem formulations propounded in Section 3 of this paper. Each of them is straightforward in principle, and we expect to see these extensions implemented in future work, perhaps in combination with mild additional restrictions on problem data. (a) Instead of requiring that the reflection matrix \(R\) have the Minkowski form (1), require only that \(R\) be a completely-\(\mathcal{S}\) matrix, which Taylor and Williams [41] showed is a necessary and sufficient condition for an RBM to be well defined. (b) Allow a more general state space for the controlled process \(Z\), such as the convex polyhedrons characterized by Dai and Williams [13]. (c) Remove the requirement that the action space \(\Theta\) be bounded.
## Appendix A Proof of Proposition 1
Proof.: Let \(f:\mathbb{R}_{+}\rightarrow\mathbb{R}^{d}\) be right continuous with left limits (rcll). Following Williams [46], we define the oscillation of \(f\) over an interval \([t_{1},t_{2}]\) as follows:
\[Osc(f,[t_{1},t_{2}])=\sup\left\{\left|f(t)-f(s)\right|:t_{1}\leq s<t\leq t_{2} \right\}, \tag{78}\]
where \(\left|a\right|=\max_{i=1,\ldots,d}\left|a_{i}\right|\) for any \(a\in\mathbb{R}^{d}\). Then for two rcll functions \(f,g\), the following holds:
\[Osc(f+g)\leq Osc(f)+Osc(g). \tag{79}\]
Also recall that the controlled RBM \(Z\) satisfies \(Z(t)=X(t)+RY(t)\), where
\[X(t)=W(t)-\int_{0}^{t}\theta(s)ds,\ t\geq 0. \tag{80}\]
Then it follows from Theorem 5.1 of Williams [46] that
\[Osc(Z,[0,t]) \leq C\,Osc(X,[0,t])\] \[\leq C\,Osc(W,[0,t])+C\bar{\theta}t\]
for some \(C>0\), where \(\bar{\theta}=\sum_{l=1}^{d}\left(\bar{\theta}_{l}-\underline{\theta}_{l}\right)\) and \(\underline{\theta}_{l},\bar{\theta}_{l}\) are the minimal and maximal values on each dimension, and the second inequality follows from (79).
Let \(\mathcal{O}=Osc(W,[0,t])\) and recall that we are interested in bounding the expectation \(\mathbb{E}\left[|Z(t)|^{n}\right]\). To that end, note that
\[\left|Z(t)-Z(0)\right|^{n} \leq C^{n}\left(\mathcal{O}+\bar{\theta}t\right)^{n} \tag{81}\] \[= C^{n}\,\sum_{k=0}^{n}\binom{n}{k}\mathcal{O}^{k}\bar{\theta}^{n -k}t^{n-k}.\]
To bound \(\mathbb{E}\left[\mathcal{O}^{k}\right],\) note that
\[\mathcal{O} = \sup\left\{\left|W(t_{2})-W(t_{1})\right|:0\leq t_{1}<t_{2}\leq t\right\}\] \[\leq \sup\left\{W(s):0\leq s\leq t\right\}-\inf\left\{W(s):0\leq s\leq t\right\}\] \[\leq 2\sup\left\{\left|W(s)\right|:0\leq s\leq t\right\}\] \[\leq 2\sum_{l=1}^{d}\sup\left\{\left|W_{l}(s)\right|:0\leq s\leq t \right\}.\]
So, by the union bound, we write
\[\mathbb{P}\left(\mathcal{O}>x\right) \leq \sum_{l=1}^{d}\mathbb{P}\left(\sup_{0\leq s\leq t}W_{l}(s)>\frac{x} {2d}\right)+\sum_{l=1}^{d}\mathbb{P}\left(\inf_{0\leq s\leq t}W_{l}(s)<-\frac{x }{2d}\right)\] \[\leq 4\sum_{l=1}^{d}\mathbb{P}\left(W_{l}(t)>\frac{x}{2d}\right),\]
where the last inequality follows from the reflection principle.
Thus,
\[\mathbb{E}[\mathcal{O}^{k}]=k\int_{0}^{\infty}x^{k-1}\mathbb{P}\left(\mathcal{O}>x\right)\mathrm{d}x\leq 4k\sum_{l=1}^{d}\int_{0}^{\infty}x^{k-1}\mathbb{P}\left(W_{l}(t)>\frac{x}{2d}\right)\mathrm{d}x.\]
By the change of variable \(y=x/(2d)\), we write
\[\mathbb{E}[\mathcal{O}^{k}] \leq 4k\sum_{l=1}^{d}\left(2d\right)^{k}\int_{0}^{\infty}y^{k-1}\mathbb{P}\left(W_{l}(t)>y\right)\mathrm{d}y\] \[\leq 4\sum_{l=1}^{d}\left(2d\right)^{k}\mathbb{E}[|W_{l}(t)|^{k}]\] \[= \frac{4\left(2d\right)^{k}2^{k/2}t^{k/2}\Gamma\left(\frac{k+1}{2}\right)}{\sqrt{\pi}}\sum_{l=1}^{d}\sigma_{ll}^{k},\]
where \(\Gamma\) is the Gamma function, and the last equality is a well-known result; see, for example, Equation (12) in [47]. Substituting this into (81) gives the following:
\[\mathbb{E}[|Z(t)-Z(0)|^{n}] \leq C^{n}\sum_{k=0}^{n}\frac{4\left(2d\right)^{k}\binom{n}{k}2^{k/2} t^{k/2}\Gamma\left(\frac{k+1}{2}\right)}{\sqrt{\pi}}\bar{\theta}^{n-k}t^{n-k} \left(\sum_{l=1}^{d}\sigma_{ll}^{k}\right) \tag{82}\] \[\leq \tilde{C}_{n}(t^{n}+1).\]
Letting \(z=Z(0)\), we write
\[|Z(t)|^{n} = |Z(t)-z+z|^{n}\leq(|Z(t)-z|+|z|)^{n}\] \[\leq \sum_{k=0}^{n}\binom{n}{k}\left|Z(t)-z\right|^{k}\left|z\right|^{n-k}.\]
Using (82), we can therefore write
\[\mathbb{E}\left[|Z(t)|^{n}\right] \leq \sum_{k=0}^{n}\binom{n}{k}\tilde{C}_{k}|z|^{n-k}\left(t^{k}+1\right)\] \[\leq \hat{C}_{n}(1+t^{n}).\]
## Appendix B Validity of HJB Equations
### Discounted Control
**Proposition 6**.: _Let \(u\in\mathcal{U}\) be an admissible policy and \(V^{u}\) a \(C^{2}\) solution of the associated PDE (14)-(15). If both \(V^{u}\) and its gradient have polynomial growth, then \(V^{u}\) satisfies (12)._
Proof.: Applying Ito's formula to \(e^{-rt}\,V^{u}(Z^{u}(t))\) and using Equation (7), we write
\[e^{-rT}\,V^{u}(Z^{u}(T))-V^{u}(z)= \int_{0}^{T}e^{-rt}\,(\mathcal{L}V^{u}(Z^{u}(t))-u(Z^{u}(t))\cdot\nabla V^{u}(Z^{u}(t))-rV^{u}(Z^{u}(t)))\,dt\] \[+\int_{0}^{T}e^{-rt}\,\mathcal{D}\,V^{u}(Z^{u}(t))\cdot dY(t)+\int_{0}^{T}e^{-rt}\,\nabla V^{u}(Z^{u}(t))\cdot dW(t).\]
Then using (3)-(4) and (14)-(15), we arrive at the following:
\[e^{-rT}\,V^{u}(Z^{u}(T))-V^{u}(z)=-\int_{0}^{T}e^{-rt}\,c(Z^{u}(t ),u(Z^{u}(t)))\,dt+\int_{0}^{T}e^{-rt}\,\nabla V^{u}(Z^{u}(t))\cdot dW(t). \tag{83}\]
Because \(\nabla V^{u}\) has polynomial growth and the action space \(\Theta\) is bounded, we have that
\[\mathbb{E}_{z}\left[\int_{0}^{T}e^{-rt}\nabla\,V^{u}(Z^{u}(t)) \cdot dW(t)\right]=0;\]
see, for example, Theorem 3.2.1 of Oksendal [35]. Thus, taking the expectation of both sides of (83) yields
\[V^{u}(z)=\mathbb{E}_{z}\left[\int_{0}^{T}e^{-rt}\,c(Z^{u}(t),u(Z ^{u}(t)))\,dt\right]+e^{-rT}\,\mathbb{E}_{z}\left[V^{u}(Z^{u}(T))\right].\]
Because \(V^{u}\) has polynomial growth and \(\Theta\) is bounded, the second term on the right-hand side vanishes as \(T\to\infty\). Then, because \(c\) has polynomial growth and \(\Theta\) is bounded, passing to the limit as \(T\to\infty\) completes the proof.
**Proposition 7**.: _If \(V\) is a \(C^{2}\) solution of the HJB equation (16)-(17), and if both \(V\) and its gradient have polynomial growth, then \(V\) satisfies (13)._
Proof.: First, consider an arbitrary admissible policy \(u\) and let \(V^{u}\) denote the solution of the associated PDE (14)-(15). By Proposition 6, we have that
\[V^{u}(z)=\mathbb{E}_{z}\left[\int_{0}^{\infty}e^{-rt}\,c(Z^{u}(t),u(Z^{u}(t)))\,dt\right],\ z\in\mathbb{R}_{+}^{d}. \tag{84}\]
On the other hand, because \(V\) solves (16)-(17) and
\[u(z)\cdot\nabla V(z)-c(z,u(z))\leq\max_{\theta\in\Theta}\left\{\theta\cdot\nabla V(z)-c(z,\theta)\right\},\ z\in\mathbb{R}_{+}^{d},\]
we conclude that
\[\mathcal{L}\,V(z)-u(z)\cdot\nabla V(z)+c(z,u(z))\geq r\,V(z). \tag{85}\]
Now applying Ito's formula to \(e^{-rt}\,V(Z^{u}(t))\) and using Equation (7) yields
\[e^{-rT}\,V(Z^{u}(T))-V(z)= \int_{0}^{T}e^{-rt}\left(\mathcal{L}V(Z^{u}(t))-u(Z^{u}(t))\cdot\nabla V(Z^{u}(t))-rV(Z^{u}(t))\right)dt\] \[+\int_{0}^{T}e^{-rt}\,\mathcal{D}\,V(Z^{u}(t))\cdot dY(t)+\int_{0}^{T}e^{-rt}\,\nabla V(Z^{u}(t))\cdot dW(t).\]
Combining this with Equations (3)-(4), (16)-(17) and (85) gives
\[e^{-rT}\,V(Z^{u}(T))-V(z)\geq-\int_{0}^{T}e^{-rt}\,c(Z^{u}(t),u(Z^{u}(t)))\,dt+\int_{0}^{T}e^{-rt}\nabla V(Z^{u}(t))\cdot dW(t). \tag{86}\]
Because \(\nabla V\) has polynomial growth and the action space \(\Theta\) is bounded, we have that
\[\mathbb{E}_{z}\left[\int_{0}^{T}e^{-rt}\,\nabla V(Z^{u}(t))\cdot dW (t)\right]=0;\]
see, for example, Theorem 3.2.1 of Oksendal [35]. Using this and taking the expectation of both sides of Equation (86) yields
\[V(z)\leq\mathbb{E}_{z}\left[\int_{0}^{T}e^{-rt}\,c(Z^{u}(t),u(Z^ {u}(t)))\,dt\right]+e^{-rT}\,\mathbb{E}\left[V(Z^{u}(T))\right].\]
Because \(V\) has polynomial growth and \(\Theta\) is bounded, the second term on the right-hand side vanishes as \(T\to\infty\). Then, because \(c\) has polynomial growth and \(\Theta\) is bounded, passing to the limit yields
\[V(z)\leq\mathbb{E}_{z}\left[\int_{0}^{\infty}e^{-rt}\,c(Z^{u}(t),u(Z^{u}(t)))\,dt\right]=V^{u}(z), \tag{87}\]
where the equality holds by Equation (84).
Now, consider the optimal policy \(u^{*}\), where \(u^{*}(z)=\arg\max_{\theta\in\Theta}\left\{\theta\cdot\nabla V(z)-c(z,\theta)\right\}\). For notational brevity, let \(Z^{*}=Z^{u^{*}}\) denote the RBM under policy \(u^{*}\). Note from Equation (16) that
\[\mathcal{L}V(z)-u^{*}(z)\cdot\nabla V(z)+c(z,u^{*}(z))=rV(z),\ z \in\mathbb{R}_{+}^{d}. \tag{88}\]
Repeating the preceding steps with \(u^{*}\) in place of \(u\) and replacing the inequality with an equality, cf. Equations (85) and (88), we conclude that
\[V(z)=\mathbb{E}_{z}\left[\int_{0}^{\infty}e^{-rt}\,c(Z^{*}(t),u^{*}(Z^{*}(t)))\,dt\right]=V^{u^{*}}(z).\]
Combining this with Equation (87) yields (13).
### Ergodic Control
**Proposition 8**.: _Let \(u\in\mathcal{U}\) be an admissible policy and \((\tilde{\xi},v^{u})\) a \(C^{2}\) solution of the associated PDE (23)-(24). Further assume that \(v^{u}\) and its gradient have polynomial growth. Then_
\[\tilde{\xi}=\xi^{u}=\int_{\mathbb{R}^{d}_{+}}c(z,u(z))\,\pi^{u}(dz).\]
Proof.: Let \(\pi^{u}\) denote the stationary distribution of RBM under policy \(u\), and let \(Z^{u}\) denote the RBM under policy \(u\) that is initiated with \(\pi^{u}\). That is,
\[\mathbb{P}(Z^{u}(0)\in B)=\pi^{u}(B)\ \ \text{for}\ B\subset\mathbb{R}^{d}_{+}.\]
Then applying Ito's formula to \(v^{u}(Z^{u}(t))\) and using Equation (7) yields
\[v^{u}(Z^{u}(T))-v^{u}(Z^{u}(0))= \int_{0}^{T}(\mathcal{L}v^{u}(Z^{u}(t))-u(Z^{u}(t))\cdot\nabla v^{u}(Z^{u}(t)))\,dt\] \[+\int_{0}^{T}\mathcal{D}v^{u}(Z^{u}(t))\cdot dY(t)+\int_{0}^{T}\nabla v^{u}(Z^{u}(t))\cdot dW(t).\]
Then using Equations (3)-(4) and (23)-(24), we arrive at the following:
\[v^{u}(Z^{u}(T))-v^{u}(Z^{u}(0))=\int_{0}^{T}\left[\tilde{\xi}-c(Z^{u}(t),u(Z^{ u}(t)))\right]dt+\int_{0}^{T}\nabla v^{u}(Z^{u}(t))\cdot dW(t). \tag{89}\]
Note that the marginal distribution of \(Z^{u}(t)\) is \(\pi^{u}\) for all \(t\geq 0\). Thus, we have that
\[\mathbb{E}_{\pi^{u}}[v^{u}(Z^{u}(T))]=\mathbb{E}_{\pi^{u}}[v^{u}(Z^{u}(0))].\]
Moreover, using Equation (21) and the polynomial growth of \(\nabla v^{u}\), we conclude that
\[\mathbb{E}_{\pi^{u}}\left[\int_{0}^{T}\left|\nabla v^{u}(Z^{u}(t))\right|^{2} \,dt\right]=T\,\int_{\mathbb{R}^{d}_{+}}\left|\nabla v^{u}(z)\right|^{2}\,\pi ^{u}(dz)<\infty.\]
Consequently, we have that \(\mathbb{E}\left[\int_{0}^{T}\nabla v^{u}(Z^{u}(t))\cdot dW(t)\right]=0\); see, for example, Theorem 3.2.1 of Oksendal [35]. Combining these and taking the expectation of both sides of (89) gives
\[\tilde{\xi}=\frac{1}{T}\int_{0}^{T}\mathbb{E}_{\pi^{u}}\left[c(Z^{u}(t),u(Z^{u}(t)))\right]\,dt=\int_{\mathbb{R}^{d}_{+}}c(z,u(z))\,\pi^{u}(dz)=\xi^{u}.\]
**Proposition 9**.: _Let \((v,\xi)\) be a \(C^{2}\) solution of the HJB equation (25)-(26), and further assume that both \(v\) and its gradient have polynomial growth. Then (27) holds, and moreover, \(\xi=\xi^{u^{*}}\) where the optimal policy \(u^{*}\) is defined by (28)._
Proof.: First, consider an arbitrary policy \(u\) and note that
\[\xi^{u}=\int_{\mathbb{R}^{d}_{+}}c(z,u(z))\,\pi^{u}(dz),\]
where \(\pi^{u}\) is the stationary distribution of RBM under policy \(u\). Let \(Z^{u}\) denote the RBM under policy \(u\) that is initiated with the stationary distribution \(\pi^{u}\). That is,
\[\mathbb{P}(Z^{u}(0)\in B)=\pi^{u}(B),\ \ B\subset\mathbb{R}^{d}_{+}.\]
On the other hand, because \((v,\xi)\) solves the HJB equation and
\[u(z)\cdot\nabla v(z)-c(z,u(z))\leq\max_{\theta\,\in\,\Theta}\left\{\theta \cdot\nabla v(z)-c(z,\theta)\right\},\]
we have that
\[\mathcal{L}v(z)-u(z)\cdot\nabla v(z)+c(z,u(z))\geq\xi. \tag{90}\]
Now, we apply Ito's formula to \(v(Z^{u}(t))\) and use Equation (7) to get
\[v(Z^{u}(T))-v(Z^{u}(0))= \int_{0}^{T}(\mathcal{L}v(Z^{u}(t))-u(Z^{u}(t))\cdot\nabla v(Z^{u}(t)))\,dt\] \[+\int_{0}^{T}\mathcal{D}v(Z^{u}(t))\cdot dY(t)+\int_{0}^{T}\nabla v(Z^{u}(t))\cdot dW(t).\]
Combining this with Equations (3)-(4), (26) and (90) gives
\[v(Z^{u}(T))-v(Z^{u}(0))\geq\int_{0}^{T}(\xi-c(Z^{u}(t),u(Z^{u}(t)))\,dt+\int_{ 0}^{T}\nabla v(Z^{u}(t))\cdot dW(t). \tag{91}\]
Note that the marginal distribution of \(Z^{u}(t)\) is \(\pi^{u}\) for all \(t\geq 0\). Thus, we have that
\[\mathbb{E}_{\pi^{u}}[v(Z^{u}(T))]=\mathbb{E}_{\pi^{u}}[v(Z^{u}(0))].\]
Moreover, using Equation (21) and the polynomial growth of \(\nabla v\), we conclude that
\[\mathbb{E}_{\pi^{u}}\left[\int_{0}^{T}\left|\nabla v(Z^{u}(t)) \right|^{2}\,dt\right]=T\,\int_{\mathbb{R}^{d}_{+}}\left|\nabla v(z)\right|^{2 }\,\pi^{u}(dz)<\infty.\]
Consequently, we have that \(\mathbb{E}\left[\int_{0}^{T}\nabla v(Z^{u}(t))\cdot dW(t)\right]=0\); see, for example, Theorem 3.2.1 of Oksendal [35]. Combining these and taking the expectation of both sides of (91) gives
\[\xi\,T\leq\int_{0}^{T}\mathbb{E}_{\pi^{u}}\left[c(Z^{u}(t),u(Z^{u}(t)))\right]\,dt=T\,\mathbb{E}_{\pi^{u}}\left[c(Z^{u}(0),u(Z^{u}(0)))\right],\]
which yields
\[\xi\leq\mathbb{E}_{\pi^{u}}\left[c(Z^{u}(0),u(Z^{u}(0)))\right]=\int_{\mathbb{R}_{ +}^{d}}c(z,u(z))\,\pi^{u}(dz)=\xi^{u}. \tag{92}\]
Now, consider policy \(u^{*}\). For notational brevity, let \(Z^{*}(t)=Z^{u^{*}}(t)\) denote the RBM under policy \(u^{*}\) that is initiated with the stationary distribution \(\pi^{u^{*}}\). In addition, note from (25) that
\[\mathcal{L}v(z)-u^{*}(z)\cdot\nabla v(z)+c(z,u^{*}(z))=\xi,\ \ z\in\mathbb{R}_{+}^{d}. \tag{93}\]
Repeating the preceding steps with \(u^{*}\) in place of \(u\) and replacing the inequality with an equality, cf. Equations (90) and (93), we conclude
\[\xi=\mathbb{E}_{\pi^{u^{*}}}\left[c(Z^{*}(0),u^{*}(Z^{*}(0)))\right]=\int_{\mathbb{R}_{+}^{d}}c(z,u^{*}(z))\,\pi^{u^{*}}(dz)=\xi^{u^{*}}.\]
Combining this with Equation (92) completes the proof.
## Appendix C Derivation of the covariance matrix of the feed-forward examples
By the functional central limit theorem for the renewal process [9], we have
\[\hat{E}^{n}(\cdot)\Rightarrow W_{E}\left(\cdot\right),\]
where \(W_{E}\left(\cdot\right)\) is a one-dimensional Brownian motion with drift zero and variance \(\lambda a^{2}=\mu_{0}a^{2}\). Furthermore, we have
\[\hat{S}_{k}^{n}(\cdot)\Rightarrow W_{k}\left(\cdot\right),\ \text{for}\ k=1,2,\ldots,K,\]
where \(W_{k}\left(\cdot\right)\) is a one-dimensional Brownian motion with drift zero and variance \(\mu_{0}p_{k}s_{k}^{2}.\) Now, we turn to \(\hat{S}_{0}^{n}(t)\) and \(\hat{\Phi}^{n}(t).\) By Harrison [18], we have
\[\text{Cov}\left(\left[\begin{array}{c}\hat{S}_{0}^{n}(t)\\ \hat{\Phi}^{n}(t)\end{array}\right]\right)=\mu_{0}\Omega^{0}+\mu_{0}s_{0}^{2}R ^{0}\left(R^{0}\right)^{\top},\]
where \(\Omega_{kl}^{0}=p_{k}(\mathbb{I}\{k=l\}-p_{l})\) for \(k,l=0,\ldots,K\) and \(R^{0}=[1,-p_{1},\ldots,-p_{K}]^{\top}.\) Therefore, we have
\[\text{Cov}\left(\left[\begin{array}{c}\hat{S}_{0}^{n}(t)\\ \hat{\Phi}^{n}(t)\end{array}\right]\right)=\mu_{0}\left[\begin{array}{cccc}s_ {0}^{2}&-p_{1}s_{0}^{2}&\ldots&\ldots&-p_{K}s_{0}^{2}\\ -p_{1}s_{0}^{2}&p_{1}(1-p_{1})+p_{1}^{2}s_{0}^{2}&p_{1}p_{2}\left(s_{0}^{2}-1 \right)&\ldots&p_{1}p_{K}\left(s_{0}^{2}-1\right)\\ \vdots&p_{1}p_{2}\left(s_{0}^{2}-1\right)&\ddots&&\vdots\\ \vdots&\vdots&&\ddots&p_{K-1}p_{K}\left(s_{0}^{2}-1\right)\\ -p_{K}s_{0}^{2}&p_{1}p_{K}\left(s_{0}^{2}-1\right)&\ldots&\ldots&p_{K}(1-p_{K} )+p_{K}^{2}s_{0}^{2}\end{array}\right].\]
Therefore, the covariance matrix of \(\chi\) is
\[A = diag(\lambda a^{2},\mu_{1}s_{1}^{2},\ldots,\mu_{K}s_{K}^{2})+\] \[\mu_{0}\left[\begin{array}{cccc}s_{0}^{2}&-p_{1}s_{0}^{2}&\ldots& \ldots&-p_{K}s_{0}^{2}\\ -p_{1}s_{0}^{2}&p_{1}(1-p_{1})+p_{1}^{2}s_{0}^{2}&p_{1}p_{2}\left(s_{0}^{2}-1 \right)&\cdots&p_{1}p_{K}\left(s_{0}^{2}-1\right)\\ \vdots&p_{1}p_{2}\left(s_{0}^{2}-1\right)&\ddots&&\vdots\\ \vdots&\vdots&&\ddots&p_{K-1}p_{K}\left(s_{0}^{2}-1\right)\\ -p_{K}s_{0}^{2}&p_{1}p_{K}\left(s_{0}^{2}-1\right)&\cdots&\cdots&p_{K}(1-p_{K} )+p_{K}^{2}s_{0}^{2}\end{array}\right]\] \[= \mu_{0}\left[\begin{array}{cccc}s_{0}^{2}+a^{2}&-p_{1}s_{0}^{2 }&\ldots&\ldots&-p_{K}s_{0}^{2}\\ -p_{1}s_{0}^{2}&p_{1}(1-p_{1})+p_{1}^{2}s_{0}^{2}+p_{1}s_{1}^{2}&p_{1}p_{2} \left(s_{0}^{2}-1\right)&\cdots&p_{1}p_{K}\left(s_{0}^{2}-1\right)\\ \vdots&p_{1}p_{2}\left(s_{0}^{2}-1\right)&\ddots&&\vdots\\ \vdots&\vdots&&\ddots&p_{K-1}p_{K}\left(s_{0}^{2}-1\right)\\ -p_{K}s_{0}^{2}&p_{1}p_{K}\left(s_{0}^{2}-1\right)&\cdots&\cdots&p_{K}(1-p_{K} )+p_{K}^{2}s_{0}^{2}+p_{K}s_{K}^{2}\end{array}\right].\]
In particular, if the arrival and service processes are Poisson processes, we have \(a=1\) and \(s_{k}=1\) for \(k=0,1,2,\ldots,K.\) Then, we have
\[A_{\text{Poisson}}=\mu_{0}\left[\begin{array}{ccccc}2&-p_{1}&\cdots&\cdots&-p _{K}\\ -p_{1}&2p_{1}&&\\ \vdots&&\ddots&&\\ \vdots&&\ddots&\\ -p_{K}&&&2p_{K}\end{array}\right]\]
Furthermore, if the service time for server zero is deterministic, i.e., \(s_{0}=0\), we have
\[A_{\text{deterministic}}=\mu_{0}\left[\begin{array}{ccccc}a^{2}&0&\cdots& \cdots&0\\ 0&p_{1}(1-p_{1})+p_{1}s_{1}^{2}&-p_{1}p_{2}&\cdots&-p_{1}p_{K}\\ \vdots&-p_{1}p_{2}&\ddots&&\vdots\\ \vdots&\vdots&&\ddots&-p_{K-1}p_{K}\\ 0&-p_{1}p_{K}&\cdots&\cdots&p_{K}(1-p_{K})+p_{K}s_{K}^{2}\end{array}\right]\]
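As a quick numerical check of the formula above, the covariance matrix can be assembled as in the following sketch (illustrative only; the argument names are ours).

```python
# Covariance matrix A of the limiting Brownian motion chi (illustrative sketch).
import numpy as np

def covariance_matrix(mu0, p, a2, s2):
    """p = (p_1, ..., p_K); a2 = a^2; s2 = (s_0^2, ..., s_K^2)."""
    p, s2 = np.asarray(p, dtype=float), np.asarray(s2, dtype=float)
    K = len(p)
    A = np.empty((K + 1, K + 1))
    A[0, 0] = s2[0] + a2
    A[0, 1:] = A[1:, 0] = -p * s2[0]
    for k in range(1, K + 1):
        for l in range(1, K + 1):
            if k == l:
                A[k, k] = p[k-1] * (1.0 - p[k-1]) + p[k-1]**2 * s2[0] + p[k-1] * s2[k]
            else:
                A[k, l] = p[k-1] * p[l-1] * (s2[0] - 1.0)
    return mu0 * A
```

With \(a^{2}=1\) and \(s_{k}^{2}=1\) for all \(k\), this reproduces the Poisson special case displayed above, and with \(s_{0}^{2}=0\) it reproduces the deterministic one.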
## Appendix D Analytical solution of one-dimensional test problems
### Ergodic control formulation with linear cost of control
We consider the one-dimensional control problem with the cost function
\[c(z,\theta)=hz+c\theta\text{ for }z\in\mathbb{R}_{+}\text{ and }\theta\in\Theta=[0,b].\]
In the ergodic control case, the HJB equation (25) - (26) is
\[\frac{a}{2}v^{\prime\prime}(z)-\max_{\theta\in[0,b]}\left\{\theta \cdot v^{\prime}(z)-hz-c\theta\right\}=\xi,\text{ and} \tag{94}\] \[v^{\prime}(0)=0\text{ and }v^{\prime}(z)\text{ having polynomial growth rate}, \tag{95}\]
where the covariance matrix \(A=a\) in this one-dimensional case. The HJB equation (94) - (95) is equivalent to
\[\frac{a}{2}v^{\prime\prime}(z)+hz-(v^{\prime}(z)-c)^{+}b=\xi,\]
and the solution is
\[\left(v^{*}\right)^{\prime}(z)=\left\{\begin{array}{cc}\frac{2}{\sqrt{a}} \sqrt{ch+\frac{ah^{2}}{4b^{2}}}z-\frac{h}{a}z^{2}&\text{ if }z<z^{*}\\ \frac{h}{b}z+\frac{ha}{2b^{2}}-\frac{\sqrt{a}}{b}\sqrt{ch+\frac{ah^{2}}{4b^{2} }}+c&\text{ if }z\geq z^{*}\end{array}\right.,\text{ with}\]
\[z^{*}=\frac{1}{h}\sqrt{a\left(ch+\frac{ah^{2}}{4b^{2}}\right)}-\frac{a}{2b} \text{ and }\xi^{*}=\sqrt{a\left(ch+\frac{ah^{2}}{4b^{2}}\right)},\]
and the optimal control is
\[\theta^{*}(z)=\left\{\begin{array}{cc}0&\text{ if }z<z^{*},\\ b&\text{ if }z\geq z^{*}.\end{array}\right.\]
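These closed-form expressions are straightforward to evaluate numerically; the following is a direct transcription (a sketch, with our own function name).

```python
# Closed-form solution of the one-dimensional ergodic problem with linear cost (sketch).
import numpy as np

def ergodic_linear_1d(a, h, c, b):
    xi_star = np.sqrt(a * (c * h + a * h**2 / (4.0 * b**2)))   # optimal long-run average cost
    z_star = xi_star / h - a / (2.0 * b)                       # threshold of the bang-bang policy
    theta_star = lambda z: 0.0 if z < z_star else b            # optimal drift choice at state z
    return xi_star, z_star, theta_star
```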
### Discounted formulation with linear cost of control
The cost function is still
\[c(z,\theta)=hz+c\theta\text{ for }z\in\mathbb{R}_{+}\text{ and }\theta\in\Theta=[0,b],\]
and in the discounted control case, the HJB equation (16) - (17) is
\[\frac{a}{2}V^{\prime\prime}(z)+hz-(V^{\prime}(z)-c)^{+}b = rV(z),\] \[V^{\prime}(0) = 0.\]
The solution is
\[V^{*}(z)=\left\{\begin{array}{cc}V_{1}(z)&\text{ if }z<z^{*}\\ V_{2}(z)&\text{ if }z\geq z^{*}\end{array}\right.,\]
and the optimal control is
\[\theta^{*}(z)=\left\{\begin{array}{cc}0&\text{ if }z<z^{*}\\ b&\text{ if }z\geq z^{*}\end{array}\right.,\]
where
\[V_{1}(z)=\frac{h\sqrt{a}e^{-\frac{\sqrt{2}\sqrt{r}z}{\sqrt{a}}}}{\sqrt{2}r^{3/2}}+\frac{hz}{r}+C_{1}e^{\frac{\sqrt{2}\sqrt{r}z}{\sqrt{a}}}+C_{1}e^{-\frac{\sqrt{2}\sqrt{r}z}{\sqrt{a}}},\text{ and}\]
\[V_{2}(z)=\frac{-bh+brc+hrz}{r^{2}}+C_{2}e^{z\left(\frac{b-\sqrt{b^{2}+2ra}}{a}\right)},\]
for some parameters \(z^{*},C_{1},C_{2},\) to be determined later.
**Case 1:**\(h\leq rc.\) Note that if \(C_{1}=0,\) then we have
\[V_{1}^{\prime}(z)=\frac{h}{r}\left(1-e^{-\frac{\sqrt{2}\sqrt{r}z}{\sqrt{a}}} \right)<\frac{h}{r}\leq c.\]
Therefore, we have
\[V^{*}(z)=\frac{h\sqrt{a}e^{-\frac{\sqrt{2}\sqrt{r}z}{\sqrt{a}}}}{\sqrt{2}r^{3/ 2}}+\frac{hz}{r}\]
for the case \(h\leq rc\) and the optimal control is always to set \(\theta^{*}(z)=0.\)
**Case 2:**\(h>rc.\) We have
\[z^{*} = \frac{a\log\left(\frac{(h-rc)a}{C_{2}r\left(\sqrt{b^{2}+2ra}-b\right)}\right)}{b-\sqrt{b^{2}+2ra}},\] \[V_{2}^{\prime}(z^{*}) = c,\text{ and }\] \[V_{2}^{\prime\prime}(z^{*}) = \frac{(h-rc)\left(\sqrt{b^{2}+2ra}-b\right)}{ra}.\]
At point \(z^{*},\) we must have
\[V_{1}^{\prime}(z^{*})=V_{2}^{\prime}(z^{*})\text{ and }V_{1}^{\prime\prime}(z^{ *})=V_{2}^{\prime\prime}(z^{*}).\]
Then we can numerically solve for \(C_{1}\) and \(C_{2}\) using the following equations:
\[V_{1}^{\prime}(z^{*}) = c,\] \[V_{1}^{\prime\prime}(z^{*}) = \frac{(h-rc)\left(\sqrt{b^{2}+2ra}-b\right)}{ra}.\]
Table 9 presents numerical values of \(z^{*}\) for different parameter combinations.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & & \(r=0.01\) & \(r=0.1\) \\ \hline \multirow{2}{*}{\(b=2\)} & \(h=2\) & 0.501671 & 0.517133 \\ & \(h=1.9\) & 0.519136 & 0.535753 \\ \hline \multirow{2}{*}{\(b=10\)} & \(h=2\) & 0.660354 & 0.674135 \\ & \(h=1.9\) & 0.678797 & 0.693707 \\ \hline \hline \end{tabular}
\end{table}
Table 9: The numerical values of \(z^{*}\) for different parameter combinations (\(a=c=1\)).
### Ergodic control formulation with quadratic cost of control
We consider the cost function
\[c(\theta,z)=\alpha(\theta-\underline{\theta})^{2}+hz.\]
The HJB equation (25) - (26) then becomes
\[\frac{a}{2}v^{\prime\prime}(z)-\max_{\theta}\left\{\theta\cdot v^{ \prime}(z)-hz-\alpha(\theta-\underline{\theta})^{2}\right\}=\xi,\text{ and} \tag{96}\] \[v^{\prime}(z)=0\text{ and }v^{\prime}(z)\text{ having polynomial growth rate,} \tag{97}\]
which is equivalent to
\[hz+\frac{a}{2}v^{\prime\prime}(z)-\frac{1}{4\alpha}\left(v^{ \prime}(z)\right)^{2}-\underline{\theta}v^{\prime}(z) = \xi,\] \[v^{\prime}(0) = 0.\]
Let \(f(z)=v^{\prime}(z)\) with \(f(0)=0.\) Then we have
\[\xi=hz+\frac{a}{2}f^{\prime}(z)-\frac{1}{4\alpha}\left(f(z)\right)^{2}- \underline{\theta}f(z),f(0)=0,\]
which is a Riccati equation. One can solve this equation numerically to find \(\xi\) such that \(f(\cdot)\) has polynomial growth. For example, if \(\alpha=\underline{\theta}=a=1,\) and \(h=2,\) we have \(\xi^{*}=0.8017.\)
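One concrete way to carry out this numerical search is a shooting scheme: integrate the Riccati equation forward from \(f(0)=0\) for a trial \(\xi\) and bisect on \(\xi\), using blow-up of the trajectory as the classification criterion. The sketch below is illustrative only; the integration horizon, step size, blow-up threshold, and bracket for \(\xi\) are ad hoc choices, and a production implementation would use a proper ODE solver.

```python
# Bisection/shooting sketch for the Riccati equation above (illustrative only).
def ergodic_quadratic_1d(a=1.0, h=2.0, alpha=1.0, theta_low=1.0,
                         z_max=20.0, dz=1e-4, blow_up=1e3, tol=1e-6):
    def blows_up(xi):
        f, z = 0.0, 0.0
        while z < z_max:
            f += dz * (2.0 / a) * (xi - h * z + f * f / (4.0 * alpha) + theta_low * f)
            z += dz
            if f > blow_up:
                return True                 # trajectory explodes upward: xi is too large
        return False

    lo, hi = 0.0, 10.0                      # assumed bracket for the optimal xi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if blows_up(mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For the parameter values quoted above (\(\alpha=\underline{\theta}=a=1\), \(h=2\)), this kind of search can be checked against the reported value \(\xi^{*}=0.8017\).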
## Appendix E Implementation Details of Our Method
**Neural network architecture.** We used a three or four-layer, fully connected neural network with 20 - 1000 neurons in each layer; see Tables 10 and 11 for details.
**Common hyperparameters.** Batch size \(B=256\); time horizon \(T=0.1\), discretization step-size \(0.1/64\); see Tables 10 and 11 for details.
**Learning rate.** The learning rate starts from \(0.0005\), and decays to \(0.0003\) and then \(0.0001\), following the schedules detailed in Tables 10 and 11.
**Optimizer.** We used the Adam optimizer [29].
**Reference policy.** The reference policy sets \(\tilde{\theta}=1\).
**Activation function.** We use the 'elu' activation function [38].
**Code.** Our code structure follows that of Han et al. [17] and Zhou et al. [50]. We implement two major changes: first, we separate the data generation and training processes to facilitate data reuse; second, we add the RBM simulation. We also integrate all the features discussed in this section.
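The following is a minimal PyTorch sketch, not the authors' implementation (which follows [17] and [50]), of the fully connected architecture, 'elu' activation, Adam optimizer, and piecewise-constant learning-rate schedule listed above; the problem dimension, module names, and loop skeleton are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, dim_in, dim_out, width=50, depth=4):
        super().__init__()
        layers, d = [], dim_in
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.ELU()]   # 'elu' activation [38]
            d = width
        layers.append(nn.Linear(d, dim_out))
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

d = 6                                  # problem dimension (illustrative)
value_net = MLP(d, 1)                  # approximates the value function
grad_net = MLP(d, d)                   # approximates its gradient G_{w2}
optimizer = torch.optim.Adam(
    list(value_net.parameters()) + list(grad_net.parameters()), lr=5e-4)

def learning_rate(iteration):
    # 0.0005 -> 0.0003 -> 0.0001, with boundaries as in the 6-dimensional column of Table 11
    if iteration < 3000:
        return 5e-4
    elif iteration < 6000:
        return 3e-4
    return 1e-4

for it in range(6000):
    for group in optimizer.param_groups:
        group["lr"] = learning_rate(it)
    # here one would draw a batch of B = 256 simulated paths over T = 0.1 with 64 steps,
    # evaluate the loss, and call optimizer.zero_grad(), loss.backward(), optimizer.step()
```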
### Decay loss in the test example with linear cost of control
Recall in our main test example with linear cost of control, the cost function is
\[c(z,\theta)=h^{\top}z+c^{\top}\theta.\]
In the discounted cost formulation, substituting this cost function into the \(F\) function defined in Equation (30) gives the following:
\[F(\tilde{Z}(t),G_{w_{2}}(\tilde{Z}(t)))=\tilde{\theta}\cdot x+h^{\top}z-b\sum_{ i=1}^{d}\max(G_{w_{2}}(\tilde{Z}(t))_{i}-c,0). \tag{98}\]
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Hyperparameters & \begin{tabular}{l} 1-dimensional \\ b=2 \\ \end{tabular} & \begin{tabular}{l} 2-dimensional \\ b=10 \\ \end{tabular} & \begin{tabular}{l} 6-dimensional \\ b=2 \\ \end{tabular} &
\begin{tabular}{l} 100-dimensional \\ b=10 \\ \end{tabular} \\ \hline \#Iterations & 6000 & 6000 & 6000 & 6000 & 6000 & \\ \#Epoches & 13 17 & 15 & 19 & 23 & 27 & 111 & 115 \\ \multirow{4}{*}{Learning rate scheme} & 0.0005 (0,2000) & 0.0005 (0,3000) & 0.0005 (0,3000) & 0.0005 (0,9500) \\ & 0.0003 (2000,4000) & 0.0003 (3000,6000) & 0.0003 (3000,6000) & 0.0003 (9500,22000) \\ & 0.0001 (4000,\(\infty\)) & 0.0001 (6000,\(\infty\)) & 0.0001 (6000,\(\infty\)) & 0.0001 (22000,\(\infty\)) \\ \#Hidden layers & 4 & 4 & 4 & 3 & \\ \#Neurons in each layer & 50 & 50 & 50 & 400 & \\ \(\tilde{c}_{0}\) & & 0.4 & 7 & 0.4 & 7 & 0.4 & 7 \\ \(\tilde{c}_{1}\) & & 800 & 2400 & & 16000 & \\ \hline \hline \end{tabular}
\end{table}
Table 10: Hyperparameters used in the test problems with linear costs
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Hyperparameters & 1-dimensional & 2-dimensional & 6-dimensional & 100-dimensional \\ \hline \#Iterations & 6000 & 6000 & 6000 & 12000 \\ \#Epoches & 12 & 14 & 22 & 110 \\ \multirow{4}{*}{Learning rate scheme} & 0.0005 (0,3000) & 0.0005 (0,3000) & 0.0005 (0,3000) & 0.0005 (0,9500) \\ & 0.0003 (3000,6000) & 0.0003 (3000,6000) & 0.0003 (3000,6000) & 0.0003 (9500,22000) \\ & 0.0001 (6000,\(\infty\)) & 0.0001 (6000,\(\infty\)) & 0.0001 (6000,\(\infty\)) & 0.0001 (22000,\(\infty\)) \\ \#Hidden layers & 3 & 4 & 4 & 3 \\ \#Neurons in each layer & 20 & 50 & 50 & 1000 \\ \hline \hline \end{tabular}
\end{table}
Table 11: Hyperparameters used in the test problems with quadratic costs
Note that if \(G_{w_{2}}(\tilde{Z}(t))<c\), we have
\[\frac{\partial F(\tilde{Z}(t),G_{w_{2}}(\tilde{Z}(t)))}{\partial w_{2}}=0,\]
which suggests that the algorithm may suffer from the gradient vanishing problem [25], well-known in the deep learning literature. To overcome this difficulty, we propose an alternative \(F\) function
\[\tilde{F}(\tilde{Z}(t),G_{w_{2}}(\tilde{Z}(t)))=\tilde{\theta}\cdot x+h^{\top}z -b\sum_{i=1}^{d}\max(G_{w_{2}}(\tilde{Z}(t))_{i}-c,0)-\tilde{b}\sum_{i=1}^{d} \min(G_{w_{2}}(\tilde{Z}(t))_{i}-c,0) \tag{99}\]
where \(\tilde{b}\) is a decaying function with respect to the training iteration. Specifically, we propose
\[\tilde{b}=\left(\tilde{c}_{0}-\frac{\text{iteration}}{\tilde{c}_{1}}\right)^{ +},\]
for some positive constants \(\tilde{c}_{0}\) and \(\tilde{c}_{1}\). The specific choices of \(\tilde{c}_{0}\) and \(\tilde{c}_{1}\) are shown in Table 10.
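The following is a minimal sketch (not the authors' code) of the per-coordinate contribution to the modified function \(\tilde{F}\) in Eq. (99) together with the decaying coefficient \(\tilde{b}\); the argument names and the example values are assumptions.

```python
def b_tilde(iteration, c0, c1):
    """(c0 - iteration / c1)^+ as defined above."""
    return max(c0 - iteration / c1, 0.0)

def F_tilde_component(G_i, c_i, b, bt):
    """Contribution of one coordinate i to Eq. (99), excluding the common
    theta_tilde . x + h^T z part shared with Eq. (98)."""
    return -b * max(G_i - c_i, 0.0) - bt * min(G_i - c_i, 0.0)

# Early in training the second term is active, so the function is no longer flat below c
# and the gradient with respect to G does not vanish there.
bt0 = b_tilde(iteration=0, c0=0.4, c1=800.0)                 # = 0.4
print(F_tilde_component(G_i=0.5, c_i=1.0, b=2.0, bt=bt0))    # -0.4 * (0.5 - 1.0) = 0.2
```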
We proceed similarly in the ergodic cost case using the function \(f\) defined in Equation (40).
### Variance loss function in discounted control
Let us parametrize the value function as \(V_{w_{1}}(z)=\tilde{V}_{w_{1}}(z)+\xi.\) Note that \(\partial\tilde{V}_{w_{1}}(z)/\partial z=\partial V_{w_{1}}(z)/\partial z\). Therefore, we can rewrite the loss function (52) as
\[\ell(w_{1},w_{2}) = \mathbb{E}\Big[\Big(e^{-rT}\big(\tilde{V}_{w_{1}}(\tilde{Z}(T))+\xi\big)-\big(\tilde{V}_{w_{1}}(\tilde{Z}(0))+\xi\big)\] \[-\int_{0}^{T}e^{-rt}G_{w_{2}}(\tilde{Z}(t))\cdot\mathrm{d}W(t)+\int_{0}^{T}e^{-rt}F(\tilde{Z}(t),G_{w_{2}}(\tilde{Z}(t)))\,\mathrm{d}t\Big)^{2}\Big],\]
By optimizing \(\xi\) first, we obtain the following variance loss function:
\[\tilde{\ell}(w_{1},w_{2}) = \mathrm{Var}\left[e^{-rT}\,\tilde{V}_{w_{1}}(\tilde{Z}(T))- \tilde{V}_{w_{1}}(\tilde{Z}(0))\right.\] \[\left.-\int_{0}^{T}e^{-rt}G_{w_{2}}(\tilde{Z}(t))\cdot\mathrm{d}W (t)+\int_{0}^{T}e^{-rt}F(\tilde{Z}(t),G_{w_{2}}(\tilde{Z}(t)))\,\mathrm{d}t \ \right].\]
We observe that this trick can accelerate the training when \(r\) is small, because \(\xi\) is then of the order \(O(1/r)\) while \(\tilde{V}_{w_{1}}(\cdot),G_{w_{2}}(\cdot)\) are of the order \(O(1)\).
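The following is a minimal sketch (not the authors' code) of the variance form of the loss, assuming the per-path terms have already been accumulated along simulated trajectories.

```python
import torch

def variance_loss(v_T, v_0, stochastic_integral, drift_integral):
    """v_T = e^{-rT} V~_{w1}(Z~(T)), v_0 = V~_{w1}(Z~(0)),
    stochastic_integral = int_0^T e^{-rt} G_{w2}(Z~(t)) . dW(t),
    drift_integral = int_0^T e^{-rt} F(Z~(t), G_{w2}(Z~(t))) dt,
    each a tensor of shape (batch,)."""
    residual = v_T - v_0 - stochastic_integral + drift_integral
    # optimizing xi in closed form and substituting it back leaves the empirical variance
    return residual.var(unbiased=False)
```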
|
2309.08300 | Subsystem symmetries, critical Bose surface, and immobile excitations in
an extended compass model | We propose an extended compass model that hosts subsystem symmetries and has
potential experimental relevance with 3d transition metal compounds. The
subsystem symmetries strongly constrain the mobility of spin excitations and
lead to profound consequences. At the quantum critical point we find the
presence of "critical Bose surface" along the entire $k_x$ and $k_y$ axis.
Across which we find a nodal-line spin liquid that undergoes nematic
instability at low temperatures. In the ferro-quadrupole phase, we find that
one excitation is immobile individually analogous to "fractons". | Zhidan Li, Chun-Jiong Huang, Changle Liu, Hai-Zhou Lu | 2023-09-15T10:42:27Z | http://arxiv.org/abs/2309.08300v3 | # Subsystem symmetries, critical Bose surface and immobile excitations in an extended compass model
###### Abstract
We propose an extended compass model that hosts subsystem symmetries and has potential experimental relevance with 3d transition metal compounds. The subsystem symmetries strongly constrain the mobility of spin excitations and lead to profound consequences. At the quantum critical point we find the presence of "critical Bose surface" along the entire \(k_{x}\) and \(k_{y}\) axis. Across which we find a nodal-line spin liquid that undergoes nematic instability at low temperatures. In the ferro-quadrupole phase, we find that one excitation is immobile individually analogous to "fractons".
_Introduction.--_ Symmetries lie at the heart of the fundamental principles in condensed matter physics. For example, global symmetries play an essential role in classification of matters and critical behaviors within and beyond the Landau paradigm [1; 2; 3; 4; 5; 6; 7; 8], while local symmetries are responsible for various emergent gauge structures with fractionalization in spin liquids [9; 10; 11; 12; 13; 14; 15] and fractional quantum Hall systems [16; 17]. Recently, there has been intense interest in symmetries that interpolate between global and local ones. These symmetries are called "subsystem symmetries" (or "quasi-local symmetries"), where symmetry operations are implemented only on subsets of the system [18; 19]. A well-known example is the "Bose metal" [20; 18; 21], that preserves \(U(1)\) boson number conservation within each row and each column. These subsystem symmetries strongly constrain the boson dynamics, resulting a peculiar critical phase where bosons are neither gapped nor condensed. More generally, subsystem symmetries have been shown to be indispensable in fracton topological orders [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35] and certain higher-order symmetry-protected topological phases [36; 37]. More intriguingly, they lead to exotic physical behaviors such as dimensional reduction [38] and UV-IR mixing [39; 40] that even challenge the renormalization group paradigm. However, concrete microscopic models with such symmetries are not common, and most of them contain multiple spin interactions [41; 42; 43; 44], which makes them difficult to realize in experiments.
In this Letter we propose an extended compass model [45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68] that hosts subsystem symmetries within each row and column. This model contains only bilinear spin interactions and single-ion anisotropy, and is potentially relevant to 3d transition metal compounds. We demonstrate that these subsystem symmetries strongly constrain the quantum dynamics and impose profound physical consequences: At the quantum critical point the system exhibits “critical Bose surface” excitations located along the entire \(k_{x}\) and \(k_{y}\) axis in the reciprocal space. Across the transition we find a peculiar liquid phase with the spin structural factor peaked along the entire \(k_{x}\) and \(k_{y}\) axis, which we dub the “nodal-line spin liquid”. At low temperatures, the strong spin fluctuations further lead to nematic instabilities via the order-by-disorder mechanism [69; 70; 71; 72; 73; 74; 75]. In addition, in the ferro-quadrupole phase we find an excitation
Figure 1: Mobility of excitations within (a)(b): the ferro-quadrupole and (c)(d): the quantum paramagnetic phase. In particular, the immobility of the \(\beta_{1}\) excitation in the ferro-quadrupolar phase (a) resembles “fractons”.
that is completely immobile individually but mobile once formed in pairs, with the mobility in analogue of "fractons". Finally, we discuss the relevance of our model with transition metal oxide systems.
_Extended compass model.--_ We propose an extended spin-1 compass model on the two-dimensional square lattice
\[\mathcal{H}=\sum_{\mathbf{r}}\left[J(S_{\mathbf{r}}^{x}S_{\mathbf{r}+\hat{e}_{x }}^{x}+S_{\mathbf{r}}^{y}S_{\mathbf{r}+\hat{e}_{y}}^{y})-D(S_{\mathbf{r}}^{z}) ^{2}\right] \tag{1}\]
where \(S_{\mathbf{r}}^{\alpha}\) (\(\alpha=x,y,z\)) denotes the spin-1 operator at site \(\mathbf{r}\), \(\mathbf{J}\) term represents the compass exchange coupling, and \(D\) is the single-ion anisotropy. We find that the \(\mathbb{Z}_{2}\) operation \(\mathcal{G}=\exp[\sum_{\mathbf{r}}i(\mathbf{M}\cdot\mathbf{r})S_{\mathbf{r}}^ {z}]\) with \(\mathbf{M}=(\pi,\pi)\) changes the sign of \(J\) while leaves \(D\) invariant. Without loss of generality we set a ferro-magnetic \(J=-1\) as the energy unit.
The extended compass model Eq. (1) hosts remarkable Ising subsystem symmetries defined on each row and column [55]: for each row \(j\) we define \(\mathcal{P}_{j}\) as the \(\pi\)-rotation about the \(y\) axis acting on this row,
\[\mathcal{P}_{j}=\prod_{\mathbf{r}^{\prime}\in j}e^{-i\pi S_{\mathbf{r}^{\prime }}^{y}}. \tag{2}\]
Similarly, for each column \(l\) we define \(\mathcal{Q}_{l}\) as \(\pi\)-rotation about the \(x\) axis acting on this column,
\[\mathcal{Q}_{l}=\prod_{\mathbf{r}^{\prime}\in l}e^{-i\pi S_{\mathbf{r}^{\prime }}^{x}}. \tag{3}\]
Note that \(\mathcal{P}\)'s and \(\mathcal{Q}\)'s are Ising symmetries of the Hamiltonian, \([\mathcal{P}_{j},\mathcal{H}]=[\mathcal{Q}_{l},\mathcal{H}]=0\), \(\mathcal{P}_{j}^{2}=\mathcal{Q}_{l}^{2}=1\), and are mutually commutative, \([\mathcal{P}_{j},\mathcal{P}_{j^{\prime}}]=[\mathcal{Q}_{l},\mathcal{Q}_{l^{\prime}}]=[\mathcal{P}_{j},\mathcal{Q}_{l}]=0\). Moreover, this system hosts time-reversal symmetry \(\Theta\) and a spin-orbit-coupled \(C_{4}\) rotational symmetry:
\[C_{4}:S_{\mathbf{r}}^{x}\to S_{\mathbf{r}^{\prime}}^{y},S_{\mathbf{r}}^{y} \to-S_{\mathbf{r}^{\prime}}^{x},S_{\mathbf{r}}^{z}\to S_{\mathbf{r}^{\prime}} ^{z}, \tag{4}\]
where \(\mathbf{r}^{\prime}\) is the image of \(\mathbf{r}\) under the \(C_{4}\) rotation. Note that the subsystem symmetries Eqs. (2)(3) were first discovered in Ref. [55] for the pure compass model \(D=0\), and we point out that they still hold with finite \(D\).
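The commutation of the row and column operators with the Hamiltonian can also be checked numerically; the following is a small exact-diagonalization sketch (not part of the paper) that verifies \([\mathcal{P}_{j},\mathcal{H}]=0\) on a \(2\times 2\) open cluster, with the cluster size and the value of \(D\) chosen arbitrarily for illustration.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

# spin-1 operators
sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex) / np.sqrt(2)
sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
idm = np.eye(3, dtype=complex)

Lx, Ly = 2, 2
sites = [(x, y) for y in range(Ly) for x in range(Lx)]
idx = {s: i for i, s in enumerate(sites)}
N = len(sites)

def op_at(op, i):
    """Embed a single-site operator at site i of the N-site cluster."""
    mats = [idm] * N
    mats[i] = op
    return reduce(np.kron, mats)

J, D = -1.0, 0.7
H = np.zeros((3**N, 3**N), dtype=complex)
for (x, y) in sites:
    H += -D * op_at(sz @ sz, idx[(x, y)])
    if x + 1 < Lx:  # x-bonds couple S^x within a row
        H += J * op_at(sx, idx[(x, y)]) @ op_at(sx, idx[(x + 1, y)])
    if y + 1 < Ly:  # y-bonds couple S^y within a column
        H += J * op_at(sy, idx[(x, y)]) @ op_at(sy, idx[(x, y + 1)])

def P_row(j):
    """Product of exp(-i pi S^y_r) over all sites r in row j, Eq. (2)."""
    P = np.eye(3**N, dtype=complex)
    for x in range(Lx):
        P = P @ expm(-1j * np.pi * op_at(sy, idx[(x, j)]))
    return P

P0 = P_row(0)
print(np.linalg.norm(H @ P0 - P0 @ H))   # ~1e-14: [P_j, H] = 0
```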
_Semi-classical phase diagram.--_ The presence of the \(D\) term allows quantum tuning of the model Eq. (1) while keeping all the symmetries intact, hence one can keep track of the effects of subsystem symmetries with varying tuning parameters. We first tackle the phase diagram with the semi-classical approximation [77], with details described in Supplemental Material (SM) [76]. This semi-classical treatment is a powerful tool that can faithfully describe various quadrupole orders of spin-1 systems. We simulate the finite-temperature phase diagram with Monte Carlo simulations and the result is shown in Fig. 2. We start with \(T\to 0\) case qualitatively. In the \(D\to-\infty\) limit where single-ion \(D\) term dominates, the system sits in the so-called "large-\(D\) quantum paramagnetic" phase, a trivial product state of \(S_{\mathbf{r}}^{z}=0\) at each site. The quantum paramagnetic state is protected by an energy gap \(D\), hence it remains as the ground state with finite \(J\). For the \(D\to+\infty\) limit the semi-classical approximation fails to predict the correct ground state due to ignorance of quantum entanglement [76]. However, degenerate perturbation theory predicts a two-fold ferro-quadrupole order at low temperature. In the intermediate \(D\) regime we find a nematic liquid [60; 61; 54; 66] which preserves time-reversal but breaks the spin-orbit-coupled \(C_{4}\) symmetry down to \(C_{2}\) [Fig. 4(c)]. The \(C_{4}\) symmetry is restored upon increasing temperatures via a phase transition, and in a temperature window we find the "nodal-line spin liquid" regime where the spin structural factors are sharply peaked along the entire \(k_{x}\) and \(k_{y}\) axis [Fig. 4(b)], much analogous to the spiral surface in "spiral spin liquids" [78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88].
_Restricted mobility excitations in the quantum paramagnetic phase.--_ To understand the implications of the subsystem symmetries, we first investigate the spin excitations of the quantum paramagnetic phase with the flavor-wave theory [89; 90; 91]. The details of the flavor-wave theory are shown in SM [76]. By rewriting the spin Hamiltonian of Eq. (1) in terms of flavor bosons \(\beta_{1}\) and \(\beta_{\bar{1}}\) and expanding up to the quadratic order, we obtain the
Figure 2: Semi-classical phase diagram of the extended compass model Eq. (1). Red solid lines correspond to continuous phase transitions while the green solid lines correspond to the first-order transitions. The blue-triangle points denote the crossover between nodal-line spin liquid and the paramagnet phase determined by the peak of \(\chi_{\mathcal{O}_{1}}\) (see SM [76]). The orange solid lines denote the schematic Ising phase transitions to the ferro-quadrupole order.
linear flavor-wave Hamiltonian
\[\mathcal{H}_{\rm QP} = \frac{1}{2}\!\sum_{\bf k}\psi^{\dagger}_{{\bf k},1}\!\left(\!\!\begin{array}{cc}-D+2J\cos k_{x}&2J\cos k_{x}\\ 2J\cos k_{x}&-D+2J\cos k_{x}\end{array}\!\right)\!\!\psi_{{\bf k},1}\] \[+ \frac{1}{2}\!\sum_{\bf k}\psi^{\dagger}_{{\bf k},\bar{1}}\!\left(\!\!\begin{array}{cc}-D+2J\cos k_{y}&-2J\cos k_{y}\\ -2J\cos k_{y}&-D+2J\cos k_{y}\end{array}\!\right)\!\!\psi_{{\bf k},\bar{1}},\]
where we denote \(\psi^{\dagger}_{{\bf k},m}=(\beta^{\dagger}_{{\bf k},m},\beta_{-{\bf k},m})\) for \(m=\bar{1},1\). We find that in Eq. (II) the \(\beta_{1}\) and \(\beta_{\bar{1}}\) branches are decoupled at the quadratic level for the reason that will be discussed later. The dispersions of the \(\beta_{1}\) and \(\beta_{\bar{1}}\) excitations can be directly obtained from Bogoliubov transformation: \(E_{{\bf k},1}=\sqrt{D^{2}-4DJ\cos k_{x}}\) and \(E_{{\bf k},\bar{1}}=\sqrt{D^{2}-4DJ\cos k_{y}}\), see Fig. 3(a).
The excitations acquire a gap \(\Delta=\sqrt{D^{2}-4|DJ|}\), dictating the discrete symmetries of the model. For the ferromagnetic \(J<0\) case the band minimum of the two modes are located at the entire \(k_{x}=0\) and \(k_{y}=0\) lines in the Brillouin zone, respectively, in contrast with usual models where the minimum locates only at some discrete points. Moreover, we note that both flavor-wave excitation become dispersionless along particular direction, which indicates that the excitations are mobile only along one direction, and becomes immobile along the other [Fig. 1(c)(d)]. We point out that this feature is not an artifact of the linear flavor-wave approximation, but deeply rooted in the subsystem symmetries of this system.
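The flat directions of the two branches can be made explicit by evaluating the dispersions on a grid; the following is a small numpy sketch (not from the paper) with an arbitrary choice of \(D\) inside the quantum paramagnetic regime.

```python
# Evaluate E_{k,1} = sqrt(D^2 - 4 D J cos kx) on a Brillouin-zone grid, confirm that it is
# independent of k_y, and that its minimum (the gap) sits on the whole k_x = 0 line.
import numpy as np

J, D = -1.0, -5.0
kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, 201), np.linspace(-np.pi, np.pi, 201))
E1 = np.sqrt(D**2 - 4 * D * J * np.cos(kx))              # beta_1 branch: disperses only along x
print(np.allclose(E1, E1[0:1, :].repeat(201, axis=0)))   # True: flat along k_y
print(E1.min(), np.sqrt(D**2 - 4 * abs(D * J)))          # both equal the gap sqrt(5)
```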
All flavor-wave excitations have definite parities under subsystem symmetries Eqs. (2)(3). Through symmetry analysis [76], it turns out that the \(\beta_{{\bf r},1}\) (\(\beta_{{\bf r},\bar{1}}\)) excitation is even under all subsystem symmetries except the \(\mathcal{P}\) at the same row \(\mathcal{P}_{\tau_{y}}\) (the \(\mathcal{Q}\) at the same column \(\mathcal{Q}_{\tau_{x}}\)). Now the immobility nature of flavor-wave excitations becomes clear: Subsystem symmetries strongly constrain the linear mixing of flavor bosons, as only bosons carrying exactly the same symmetry representations are allowed to hop or pair. We can see that all \(\beta_{1}\) along the same row (and all \(\beta_{\bar{1}}\) along the same column) carry exactly the same representation hence can be mixed linearly, while all other combinations are disallowed. This is precisely reflected in the flavor-wave Hamiltonian Eq. (II): the \(\beta_{1}\) excitation is mobile along the \(x\) direction and becomes immobile along the \(y\) direction; Similarly, the \(\beta_{\bar{1}}\) excitation is mobile along the \(y\) direction and becomes immobile along the \(x\) direction. As a result, a single flavor-wave excitation is effective one-dimensional that can only propagate along one direction. However, a pair of \(\beta_{1}\) excitations at the same row (or a pair of \(\beta_{\bar{1}}\) at the same column) commute with all subsystem symmetries hence can cooperatively propagate throughout the 2D plane, see Fig. 1(c)(d).
_Critical Bose surface and nodal-line spin liquid.--_ The symmetry-protected immobility of excitations has profound implications on the nature of the criticality and the proximate phase. Here we analyze the magnetic instabilities of the quantum paramagnetic state. As we turn on larger \(|J|/D\) in the quantum paramagnetic phase, the bands become more dispersive. When we reach a critical value \(D_{c}\), the excitations become gapless and the transition occurs. At the quantum critical point \(D_{c}\), the gapless modes constitute the “critical Bose surface” along the entire \(k_{x}\) and \(k_{y}\) axis [Fig. 3(b)], and are protected by the subsystem symmetries as well as the \(C_{4}\) symmetry. The existence of the Bose surface can be further illustrated by measuring the spin structural factor
\[\mathcal{S}({\bf Q})=\frac{1}{N}\sum_{{\bf rr^{\prime}}}\langle{\bf S}_{\bf r} \cdot{\bf S}_{\bf r^{\prime}}\rangle e^{i{\bf Q}\cdot({\bf r}-{\bf r^{\prime }})}. \tag{6}\]
From Fig. 4(a) we see that \(\mathcal{S}({\bf Q})\) is clearly peaked along the entire \(k_{x}\) and \(k_{y}\) axis, consistent with the Bose surface scenario. The direction-dependent immobility renders the excitations 1D-like, and implies the specific heat scaling \(C_{v}\sim T\ln(1/T)\) at low temperatures. More discussions about the nature of this transition will be given in the follow-up work [92].
The intermediate phase can be understood from proliferation of \(\beta_{1}\) and \(\beta_{\bar{1}}\) excitations in the quantum paramagnetic phase. Due to the nodal-line degeneracy, the structural factor should be peaked along the entire \(k_{x}\)
Figure 3: Dispersions of flavor-wave excitations (a) in the quantum paramagnetic phase \(D=-5.0\), (b) at the quantum critical point \(D_{c}=-4.0\) and (c) in the ferro-quadrupole phase \(D=5.0\).
and \(k_{y}\) axis, which signifies the absence of magnetic long-range order. In fact, the above scenario only holds in a finite temperature window, as shown by the “nodal-line spin liquid” regime in Fig. 2, with the spin structural factor shown in Fig. 4(b). Upon decreasing temperatures, the strong spin fluctuations spontaneously lift the degeneracy between the \(\beta_{1}\) and \(\beta_{\bar{1}}\) bands and develop an Ising nematic order. The nematic order parameter takes the form
\[\hat{\mathcal{O}}_{N}=\frac{1}{N}\sum_{\mathbf{r}}(S^{x}_{\mathbf{r}}S^{x}_{ \mathbf{r}+\hat{e}_{x}}-S^{y}_{\mathbf{r}}S^{y}_{\mathbf{r}+\hat{e}_{y}}) \tag{7}\]
that breaks the \(C_{4}\) symmetry down to \(C_{2}\). From the structural factor inside the nematic phase [Fig. 4(c)], we observe that spins are completely uncorrelated along the \(x\) or \(y\) direction, hence can be regarded as decoupled 1d chains.
_Fracton-like excitations above the ferro-quadrupole state.--_ Here we discuss the ferro-quadrupole phase which can be well understood from the limit of \(D\to+\infty\). In the large-\(D\) limit the \(S^{z}_{\mathbf{r}}=0\) state has a large energy penalty of \(D\) and the low energy subspace is spanned by \(S^{z}_{\mathbf{r}}=\pm 1\) states. One can thus define effective spin-1/2 operator \(\tau_{\mathbf{r}}\) acting on the \(S^{z}_{\mathbf{r}}=\pm 1\) subspace
\[\tau^{z}_{\mathbf{r}} =\frac{1}{2}\mathcal{P}_{\mathbf{r}}S^{z}_{\mathbf{r}}\mathcal{ P}_{\mathbf{r}}, \tag{8}\] \[\tau^{\pm}_{\mathbf{r}} =\frac{1}{2}\mathcal{P}_{\mathbf{r}}(S^{\pm}_{\mathbf{r}})^{2} \mathcal{P}_{\mathbf{r}}, \tag{9}\]
where \(\mathcal{P}_{\mathbf{r}}\) is a projection operator onto the low energy \(S^{z}_{\mathbf{r}}=\pm 1\) subspace.
The low-energy effective Hamiltonian can be obtained from second-order perturbation theory in the limit \(D\gg|J|\). With straightforward calculations, we find that it turns out to be a ferromagnetic Ising model of the \(\tau\) variables [76],
\[\mathcal{H}^{(2)}_{\text{eff}}=-\frac{J^{2}}{2D}\sum_{\mathbf{r}}(\tau^{x}_{ \mathbf{r}}\tau^{x}_{\mathbf{r}+\hat{e}_{x}}+\tau^{x}_{\mathbf{r}}\tau^{x}_{ \mathbf{r}+\hat{e}_{y}}). \tag{10}\]
Therefore, the ground-state should be \(\tau^{x}\sim(S^{x})^{2}-(S^{y})^{2}\) ferro-quadrupole ordered. The two-fold ferro-quadrupole order breaks the same symmetry as the Ising nematic phase, but described by an on-site order parameter \(\tau^{x}\).
Here we discuss the excitations above the ferro-quadrupole order. Without losing generality we choose the \(\tau^{x}=-1/2\) ground-state for our calculations. The resulting linear flavor-wave Hamiltonian of ferro-quadrupole order takes the form [76]:
\[\mathcal{H}_{\text{FQ}} =\frac{1}{2}\sum_{\mathbf{k}}\psi^{\dagger}_{\mathbf{k},0}\begin{pmatrix} M(\mathbf{k})&-2J\cos k_{y}\\ -2J\cos k_{y}&M(\mathbf{k})\end{pmatrix}\psi_{\mathbf{k},0}\] \[+\frac{J^{2}}{D}\sum_{\mathbf{k}}\beta^{\dagger}_{\mathbf{k},1} \beta_{\mathbf{k},1}, \tag{11}\]
where \(\psi^{\dagger}_{\mathbf{k},0}=(\beta^{\dagger}_{\mathbf{k},0},\beta_{-\mathbf{ k},0})\) and \(M(\mathbf{k})=D+\frac{J^{2}}{2D}+2J\cos k_{y}\). From the energy dispersions [Fig. 3(c)], we find that the excitations are spatially anisotropic, reflecting the nematic nature of the ferro-quadrupole phase. Surprisingly, we find that the \(\beta_{1}\) band is completely flat, indicating that a single \(\beta_{1}\) excitation is completely immobile individually. Such immobility feature, again, can be understood from the symmetries [76]. We find that a single \(\beta_{\mathbf{r},1}\) excitation is odd under the \(\mathcal{P}\) the same row \(\mathcal{P}_{r_{y}}\) and the \(\mathcal{Q}\) at the same column \(\mathcal{Q}_{r_{x}}\), while it commutes with all other subsystem symmetries. This means that the \(\beta_{1}\) excitation at different sites carry different representation under subsystem symmetries Eq. (2)(3), hence could not hop or pair between different site and become completely immobile. However, a pair of \(\beta_{1}\) excitations at the same row (or at the same column) can propagate along the direction transverse to the row (or column). The mobility of the \(\beta_{1}\) excitation is much analogous to
Figure 4: Spin structural factors measured (a) at the transition point \(D=D_{c}\) with the temperature \(T/|J|=0.01\), (b) inside the nodal-line spin liquid, \((D,T/|J|)=(-0.5,0.2049)\), and (c) inside the Ising nematic phase, \((D,T/|J|)=(-0.5,0.1016)\). The system parameters are \(L_{x}=L_{y}=48\).
the "type-I fractons". However, it is different from fractons since it belongs to non-topological excitations that can be created / annihilated individually.
_Discussions.--_In this paper, we propose an extended compass model that hosts subsystem symmetries on each row and column. The single-ion anisotropy \(D\) term offers extra tunability to the original compass model while respecting the subsystem symmetries, and leads to interesting physical consequences such as excitation immobility, a critical Bose surface, and fracton-like excitations. Subsystem symmetries have been regarded as indispensable to many interesting physical phenomena such as the Bose metal and fracton topological order. We hope that our work can shed light on the experimental realization of subsystem symmetries in cold atom and condensed matter systems.
Our model Eq. (1) is relevant to 3d transition metal compounds. Consider a layered perovskite structure where the transition metals are arranged on a layered square lattice, and each transition metal ion is surrounded by a distorted octahedron of O\({}^{2-}\) (like La\({}_{2}\)CuO\({}_{4}\)). Assume that the \(t_{2g}\) orbitals are filled with \(1/2/4/5\) electrons, so that the orbital angular momentum is active and can be described by an effective spin-1 operator \(\mathbf{S}\). The single-ion anisotropy \(D\) term arises from the energy splitting between the \(xy\) and \(xz/yz\) orbitals. Meanwhile, the compass term \(J\) would arise as the effective orbital-orbital interaction [76; 45]. Although the complete effective model [76; 45] also contains additional terms such as quadrupole-quadrupole orbital interactions, these terms do not violate the subsystem symmetries discussed here. Hence it will be interesting to investigate the implications of subsystem symmetries in such systems [92].
We thank Rong Yu for valuable discussions. This work was supported by the National Key R&D Program of China (2022YFA1403700), Innovation Program for Quantum Science and Technology (2021ZD0302400), the National Natural Science Foundation of China (11925402), Guangdong province (2020KCXTD001 and 2016ZT06D348), and the Science, Technology and Innovation Commission of Shenzhen Municipality (ZDSYS20170303165926217, JAY20170412152620376, and KYTDPPT20181011104202253). The numerical simulations were supported by Center for Computational Science and Engineering of SUSTech and Tianhe-2. Z.L. gratefully acknowledge research support from the National Natural Science Foundation of China (Grants No. 12104313, 12034014), Shenzhen Natural Science Fund (the Stable Support Plan Program 20220810161616001) and Foundation from Department of Science and Technology of Guangdong Province (No. 2021QN02L820). C.J.H. gratefully acknowledge research support from Gang Chen by the Research Grants Council of Hong Kong with General Research Fund Grant No. 17306520.
|
2301.13664 | Ambient FSK Backscatter Communications using LTE Cell Specific Reference
Signals | Long Term Evolution (LTE) signal is ubiquitously present in electromagnetic
(EM) background environment, which make it an attractive signal source for the
ambient backscatter communications (AmBC). In this paper, we propose a system,
in which a backscatter device (BD) introduces artificial Doppler shift to the
channel which is larger than the natural Doppler but still small enough such
that it can be tracked by the channel estimator at the User Equipment (UE).
Channel estimation is done using the downlink cell specific reference signals
(CRS) that are present regardless the UE being attached to the network or not.
FSK was selected due to its robust operation in a fading channel. We describe
the whole AmBC system, use two receivers. Finally, numerical simulations and
measurements are provided to validate the proposed FSK AmBC performance. | Jingyi Liao, Xiyu Wang, Kalle Ruttik, Riku Jantti, Phan-Huy Dinh-Thuy | 2023-01-31T14:32:23Z | http://arxiv.org/abs/2301.13664v1 | # Ambient FSK Backscatter Communications using LTE Cell Specific Reference Signals
###### Abstract
Long Term Evolution (LTE) signal is ubiquitously present in the electromagnetic (EM) background environment, which makes it an attractive signal source for ambient backscatter communications (AmBC). In this paper, we propose a system in which a backscatter device (BD) introduces an artificial Doppler shift to the channel which is larger than the natural Doppler but still small enough such that it can be tracked by the channel estimator at the User Equipment (UE). Channel estimation is done using the downlink cell specific reference signals (CRS) that are present regardless of whether the UE is attached to the network or not. FSK was selected due to its robust operation in a fading channel. We describe the whole AmBC system and study two receiver structures. Finally, numerical simulations and measurements are provided to validate the proposed FSK AmBC performance.
Ambient Backscatter Communications, LTE Cell Specific Reference Signals, Channel Estimation
## I Introduction
The introduction of ambient backscatter communications (AmBC) [1] in mobile networks [2] has recently been proposed for the sustainable development of asset tracking services [3], and to overcome the limitation of radio frequency identification (RFID) based solutions.
In RFID-based asset tracking [4] an energy-autonomous and passive RFID tag is illuminated by an RFID reader, with a radio frequency (RF) carrier-wave [5]. The tag reflects (backscatters) the wave in a modulated way to send its message, and the reader detects the tag's message in the variations of the backscattered signal. As the tag harvests RF energy to power itself, the reader-to-tag range is limited by the reader transmit power to several meters. Tags can therefore be tracked only in places where manual readers or portals are deployed. The short communication range would need to be compensated by increasing the number of readers of portals, but a massive deployment of such devices is not sustainable.
In comparison, AmBC systems [1] involve three communication nodes instead of only two: an ambient source of RF signals, a backscatter device (BD) and an AmBC receiver device. The BD is similar to a tag. The AmBC receiver reads the BD's message, without having to generate any RF carrier wave, as the BD is already illuminated by the ambient source. In practice, a BD can be implemented with an antenna connected to various matching impedances, through an RF switch driven by a micro-controller. The BD switches between impedances to modulate the reflection according to the message to be transmitted. In [3], it is proposed to use a cellular base station (BS) as an ambient source, and to use a user equipment (UE) as AmBC receiver, to develop a service of asset tracking with ubiquitous coverage. It is almost "out-of-thin-air": i.e. without generating additional waves, without additional energy, and without deploying massively new equipment such as portals. An energy-autonomous BD harvesting solar energy, called crowd-detectable zero-energy-devices (CD-ZED) is put on the asset to be tracked. Each time the BD (or CD-ZED) gets within few meters of a UE (connected to the cellular network and geo-localised), the BD is detected by the UE and this contact event is reported to the network. Thanks to the anonymous participation of the crowd of UEs, the localisation of the BD is tracked over the cellular network coverage area. Such CD-ZED concept is one example of the more general category of energy-autonomous devices called zero-energy devices (ZED) [6]. Such asset tracking service is one example of ambient internet of things (AIoT) applications, currently being discussed in standardisation for cellular networks [7]. Finally, ZED is one of the key technologies identified for the building of a future and sustainable 6G [8].
The CD-ZED concept is applicable to all generations of mobile networks. Ambient backscatters in 5G networks has been studied in [2] where it was shown that BD can be detected by a UE as long as the UE is in the BS coverage and the tag is close to the UE. This is confirmed by successful experiments of ambient backscattering communications conducted with ambient signals from a commercial 4th generation (4G), 5th generation (5G) networks in [3], in very few test locations, far from the BS. The previous works [9, 10] used power detector based receivers that have limited performance due to the high variability of the mobile downlink signals. Very recently, to improve 4G AmBC performance, [11] proposed to use knowledge about pilots of the ambient source (i.e. the BS) at the AmBC receiver (i.e. UE) side. Previously similar approach has been utilized in the context of Wi-Fi standard [12]. Unfortunately Wi-Fi pilot transmission is sporadic and thus sub-optimal for reading BD signals. In comparison, LTE pilot signals called cell specific reference signals (CRS) [11] are always broadcasted by LTE base station. The CRS structure is standardised and known and used by the UE to estimate the downlink channel. In this paper, we propose to use the UE channel estimator as a receiver for the BD messages.
Performance of AmBC receivers using LTE CRS knowledge is affected by BD modulation method. In [11], on-off keying (OOK) modulation was used. That is, BD switched between two load impedances. Unfortunately, a simple OOK signal
occupies frequencies where Doppler components from all the channel paths are also present, making it difficult to separate the scattered path from the direct path and causing the so-called direct path interference [13]. In addition, the symbol duration (i.e. switching period) used by the BD tends to be long compared to the channel coherence time, making the BD signal vulnerable to fast fading.
**Contributions.** In this paper, we propose to use FSK type modulation in the AmBC system. For the FSK backscatter signal, the backscattered path is separated from the direct path in the frequency domain, so that the direct path interference can be cancelled. The BD-introduced artificial frequency shift, which we refer to as the _frequency key_, is selected to be higher than the natural Doppler and lower than the channel estimation tracking speed. Also, the fact that CRS signals are present only in certain orthogonal frequency division multiplex (OFDM) symbols limits the frequency key selection. Moreover, FSK allows for noncoherent reception that does not depend on the channel parameters. The contributions are listed as follows.
* The BD signal is generated by the same OOK modulator as in [11], but the generated waveform is selected to approximate FSK. We also discuss square wave FSK that uses rectangular pulses instead of sinusoidal signals. We investigate the impact of non-uniform CRS sampling frequency. The FSK frequency keys are carefully selected to cooperate with non-uniform LTE CRS.
* The AmBC receiver directly utilizes the channel estimates obtained from the LTE CRS pilot signal, instead of full channel state information. Two types of receivers, coherent and noncoherent, are proposed. Simulations show that the coherent method outperforms the energy detector.
* Finally, the proposed system is validated by a proof-of-concept implementation and corresponding measurements.
The paper is structured as follows: Section I introduces AmBC and the motivation and contribution of this work. Section II describes the components of the proposed system. Section III derives the signal model of the backscatter signal. Section IV outlines the channel estimation algorithm at the AmBC receiver. Section V presents the proposed receiver structure. Section VI simulates the AmBC system performance and designs a measurement to validate it. Finally, a conclusion is drawn in Section VII.
## II System description
We consider an AmBC system consisting of an LTE BS (also referred to as NodeB) acting as an ambient source, a UE acting as an AmBC receiver, and a BD, as illustrated in Fig. 1. The UE uses the primary and secondary synchronization signals transmitted by the NodeB to achieve radio frame, subframe, and symbol synchronicity with the BS. It also identifies the center of the channel bandwidth and deduces the Physical Cell Identity (PCI). After this initial acquisition process, the UE uses the downlink CRS to estimate the downlink channel. Fig. 2 illustrates the CRS for antenna ports 0 to 3. As can be seen from Fig. 2, two channel estimates are obtained per 0.5 ms slot, leading to a 4 kHz non-uniform channel sampling rate.
The BD does not have a pulse shaping filter, so it is limited to square pulses, which causes it to have a very wide bandwidth. The square wave frequency shift keying is illustrated in the subplot of Fig. 1. Since our receiver is narrowband, aliasing is unavoidable, which causes a further challenge for the receiver operation. Furthermore, since we did not implement clock drift compensation, we needed to use a noncoherent FSK receiver for the backscatter symbols. A synchronization header is prefixed at the beginning of the data packet.
The impulse response of the channel from BS to the AmBC receiver is
\[\begin{split} h[\tau;t]&=x(t)\sum_{k\in\mathcal{K} _{0}}a_{k}[t]\delta[\tau-\tau_{k}(t)]\\ &+\sum_{k\in\mathcal{K}_{1}}a_{k}[t]\delta[\tau-\tau_{k}(t)], \end{split} \tag{1}\]
where \(a_{k}(t)\), \(\tau_{k}(t)\) are the time varying amplitude and delay of the \(k^{\text{th}}\) multipath component and \(\delta(\tau)\) denotes the Dirac's
Fig. 1: High level structure of the proposed system.
Fig. 2: LTE Release 8 Cell Specific Reference Signal for antenna ports 0,1,2, and 3.
delta function. The bandwidth of the channel tap gain \(a_{k}(t)\) is defined by the Doppler frequency shift \(f_{D}\) in the channel. In Fig. 1, the direct path components \(\mathcal{K}_{1}\) from an LTE NodeB to the UE are indicated by the thick arrow. The thin arrow from the BS to the UE via the BD represents the \(\mathcal{K}_{0}\) BD-modulated scattered components.
## III Back scattered signal
The BD performs load modulation on the incident signal illuminating its antenna, in Fig. 3. That is, it varies its complex antenna load impedance between two states \(Z_{0}\) and \(Z_{1}\). The BD reflection coefficient is given by
\[\Gamma_{x}=\frac{Z_{x}-Z_{a}^{*}}{Z_{x}+Z_{a}^{*}}\]
where \(Z_{x}\) denotes the load impedance in state \(x\in\{0,1\}\) and \(Z_{a}\) is the antenna impedance. In the on-off keying case, we would ideally have \(Z_{0}=Z_{a}^{*}\) and \(Z_{1}=0\), resulting in \(\Gamma_{0}=0\) and \(\Gamma_{1}=-1\). In a practical implementation, the load impedance is switched by a diode acting as the control circuit [14]. In Fig. 3, a micro controller unit (MCU) controls that diode switch, which introduces the artificial Doppler.
In the previous work [11], the OOK method was introduced for backscatter communication. The impulse response of the channel \(h[\tau;t]\) in Eq. (1) differs between the BD on (\(x(t)=1\)) and off (\(x(t)=0\)) states:
\[h_{\text{on}}[\tau;t] =\sum_{k\in\mathcal{K}_{0}}a_{k}[t]\delta[\tau-\tau_{k}(t)]+\sum_{k\in\mathcal{K}_{1}}a_{k}[t]\delta[\tau-\tau_{k}(t)]\] \[h_{\text{off}}[\tau;t] =\sum_{k\in\mathcal{K}_{1}}a_{k}[t]\delta[\tau-\tau_{k}(t)]\]
For OOK, the ambient signal and the Doppler effect strongly influence the channel estimation. Compared with OOK, FSK shifts the backscattered signal in the frequency spectrum and avoids the influence of the ambient LTE signal: the reflected RF signal is shifted by the selected frequency key. This is a useful feature, since the frequency keys can be specifically designed to stay above the Doppler components while avoiding the influence of the ambient signal.
The BD aims at causing artificial Doppler that is higher than the natural Doppler in the channel such that the receiver would be able to distinguish between the direct path components (multipath components in \(\mathcal{K}_{1}\)) and BD modulated scattered components (multipath component in \(\mathcal{K}_{0}\)). BD does this by generating periodic rectangular wave \(\tilde{x}_{k}(t)=\tilde{x}_{k}(t+T_{k})\):
\[\tilde{x}_{k}(t)=\sum_{n=-\infty}^{\infty}\text{rect}\left[\frac{2(t-nT_{k})} {T_{k}}\right],\quad k=0,1\]
where \(\text{rect}(t)\) is the unit rectangular pulse and the index \(k\) indicates whether bit 0 or 1 was transmitted. \(\tilde{x}_{0}(t)\) and \(\tilde{x}_{1}(t)\) are square waves of different periods with infinite extent. However, the BD symbol duration is \(T_{BC}\), shown as the red line in the subplot of Fig. 1. Hence the generated BD pulse is
\[x_{k}(t)=\text{rect}\left(\frac{t}{T_{BC}}\right)\tilde{x}_{k}(t),\quad k=0,1\]
In time domain, \(x_{0}(t)\) or \(x_{1}(t)\) look like blue line segments in the subplot of Fig. 1. The Fourier transform of the BD symbol is given by
\[X_{k}(f)=\frac{T_{BC}}{2}\sum_{l=-\infty}^{\infty}\mathrm{sinc}\left(\frac{1}{ 2}l\right)\mathrm{sinc}\left[\left(f-\frac{l}{T_{k}}\right)T_{BC}\right]\]
where \(\mathrm{sinc}(x)=\sin(\pi x)/(\pi x)\). The harmonics of the rectangular wave nominal frequency \(l\frac{1}{T_{k}}\), \(l=3,5,...\) attenuate slowly implying that the square wave has a wide bandwidth.
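The following is a small numpy sketch (not from the paper) that generates one square-wave BD symbol and inspects its spectrum; the symbol duration \(T_{BC}\) and the key frequency are assumed values chosen only to make the slowly decaying odd harmonics visible.

```python
import numpy as np

fs = 100_000          # dense "analog" sampling rate for the illustration, Hz
T_BC = 0.02           # assumed BD symbol duration, s
f_key = 300.0         # nominal square-wave frequency 1/T_k, Hz

t = np.arange(0, T_BC, 1 / fs)
x = (np.mod(t * f_key, 1.0) < 0.5).astype(float)    # 50% duty-cycle rectangular wave

X = np.fft.rfft(x) / len(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
for harmonic in (1, 3, 5, 7):
    bin_idx = np.argmin(np.abs(freqs - harmonic * f_key))
    print(harmonic * f_key, round(abs(X[bin_idx]), 4))   # magnitudes fall off only ~ 1/l
```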
## IV LTE channel estimation
In the LTE system, the BS transmits CRS in every subframe. Fig. 2 illustrates the CRS allocation for antenna ports 0 to 3 when the system is using normal cyclic prefix. LTE networks may operate using different CRS configurations. In a non-shifted CRS configuration, all cells use the same CRS time and frequency resources. In a shifted CRS configuration, different cells transmit CRSs on resources that are shifted in frequency. The non-shifted configuration avoids CRSs interference on data transmissions, but is also associated with a systematic CSI estimation error; especially noticeable at low traffic. Using the shifted configuration, the CRSs interfere with data transmissions, but the CSI estimation error is smaller [15]. In this paper, we consider the case of shifted CRS, and the AmBC receiver uses pilots transmitted from antenna port 0.
In the case of normal cyclic prefix, the OFDM symbol duration in LTE is \(T_{s}=71.4\)\(\mu\)s, except for symbol 0, which has a longer prefix. Also, since pilots are only present in symbols 0 and 4, we get irregular sampling of the channel. Let \(T_{slot}=0.5\)\(\mathrm{ms}\) denote the slot length and let \(\Delta T=4T_{s}-\frac{T_{slot}}{2}=35.6\)\(\mu\)s denote the offset of the second channel sampling instant compared to the regular sampling interval \(T_{r}=T_{slot}/2\). Sampling at the regular interval \(T_{r}=0.25\)\(\mathrm{ms}\) would correspond to a 4 kHz sampling frequency.
From the transmitted CRS, we obtain frequency domain channel estimates \(\hat{H}[n;t]\) for the time instants \(t\in\{0,T_{slot}/2+\Delta T,T_{slot},3/2T_{slot}+\Delta T,\cdots\}\) during which pilots were sent, where \(n\) is the discrete frequency index. Assuming that the channel stays approximately constant during the transmission of a single OFDM symbol, an inverse Fast Fourier Transform of \(\hat{H}[n;t]\) at time instants \(t\) yields the following channel taps
\[\hat{h}[l;t] = x(t)\sum_{k\in\mathcal{K}_{0}}a_{k}^{b}(t)\text{sinc}\left(l- \tau_{k}(t)W\right)\] \[+ \sum_{k\in\mathcal{K}_{1}}a_{k}^{b}(t)\text{sinc}\left(l-\tau_{k}( t)W\right)+z_{l}(t),\]
where \(W=\frac{1}{T_{s}}\) denotes the utilized bandwidth, \(a_{k}^{b}(t)=e^{-i2\pi f_{c}\tau_{k}(t)}a_{k}(t)\) denotes the baseband equivalent channel tap of the \(k^{\text{th}}\) multipath component, \(f_{c}\) is the carrier frequency, \(z_{l}(t)\) denotes the estimation noise, and the AmBC signal \(x(t)\) can be \(x_{0}(t)\) or \(x_{1}(t)\). The LTE system is synchronized
Fig. 3: Circuit diagram of the BD.
to the shortest path which appears in the first channel tap. Backscattered signal component is likely to be much smaller than the direct path component leading to very small signal-to-noise ratio (SNR). As a consequence, the distance between BD and receiver is short in most practical deployments and thus most of the BD scattered power would be in the first channel tap. Hence, in the receiver it is sufficient to find just
\[\hat{h}[0;t]=x(t)h_{0}(t)+h_{1}(t),\]
where \(h_{0}(t)\) and \(h_{1}(t)\) contain the scattered and direct path components that appear in the first channel tap after sampling.
Let \(s(t)=\delta(t)+\delta(t-T_{slot}/2-\Delta T)\) be periodic sampling signal \(s(t+T_{slot})=s(t)\) where \(T_{slot}\) denotes the slot length and \(\Delta T\) denotes the time offset of the second pilot in the slot compared to half of the slot time \(T_{slot}/2.\) Since \(s(t)\) is periodic, we can express it in terms of Fourier-series as
\[s(t)=\sum_{l=-\infty}^{\infty}s_{l}e^{i2\pi\frac{t}{T_{slot}}l}\]
where the Fourier series coefficients are given by
\[s_{l} = \frac{1}{T_{slot}}\int_{0}^{T_{slot}}s(t)e^{-i2\pi\frac{t}{T_{slot}}l}dt\] \[= \frac{1}{T_{slot}}\left(1+e^{-i\pi\left(1+2\frac{\Delta T}{T_{slot}}\right)l}\right)\] \[= \frac{1}{T_{slot}}\left(1+(-1)^{l}e^{-i2\pi\frac{\Delta T}{T_{slot}}l}\right)\] \[= \frac{2}{T_{slot}}\frac{1+(-1)^{l}}{2}+\frac{1}{T_{slot}}(-1)^{l}\left(e^{-i2\pi\frac{\Delta T}{T_{slot}}l}-1\right).\]
The sampled channel is \(h_{s}(t)=\hat{h}[0;t]s(t).\) Now using the Fourier series representation of \(s(t)\) and taking the Fourier-transform of \(h_{s}(t)\), we obtain the Discrete Time Fourier Transform (DTFT) of the sampled channel response:
\[H_{s}(f) = \frac{2}{T_{slot}}\sum_{l=-\infty}^{\infty}H\left(f-\frac{2l}{T_ {slot}}\right)\] \[+ \frac{1}{T_{slot}}\sum_{l=-\infty}^{\infty}\varepsilon_{l}H\left( f-\frac{l}{T_{slot}}\right)\]
where \(\varepsilon_{l}=(-1)^{l}\left(e^{-i2\pi\frac{\Delta T}{T_{slot}}l}-1\right)\).
The first (upper) sum corresponds to the spectrum of the channel sampled at rate \(\frac{2}{T_{slot}}=4\) kHz and the second (lower) sum contains additional aliased components due to the irregularity of the sampling \(\Delta T.\) Figure 5 shows that after sampling, the spectrum contains the desired FSK signal, its harmonic components, as well as aliased harmonics.
Even with the 4 kHz sampling frequency, we would experience severe aliasing of the harmonic components of the square waves. Due to the irregular sampling, we will see additional aliased components, but they are attenuated by the factor \(|\varepsilon_{l}|\). To be on the safe side, we select the square wave nominal frequencies to be in the range \(f_{k}\in[200,1000]\) Hz. The lower limit is selected to be larger than the natural Doppler in the channel such that the direct path \(h_{1}(t)\) can be filtered away using a high-pass filter. The upper frequency is selected to be small enough to avoid additional aliasing due to the irregular sampling.
Even if the two backscatter symbols \(x_{0}(t)\) and \(x_{1}(t)\) had been selected to be orthogonal, after sampling they will interfere with each other. Due to aliasing, it turns out that orthogonal choices \(f_{1}=Kf_{0}\) for integer \(K\) lead to high interference from aliased harmonics hitting the other symbol. It thus seems advantageous not to take \(K\) to be an integer.
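The following is a small sketch (not from the paper) of where the odd harmonics of the two keys land after sampling at roughly 4 kHz; the key values 300 Hz and 650 Hz are those used later in the measurements, and the folding ignores the additional terms caused by the irregular sampling offset.

```python
fs = 4000.0                       # approximate CRS-based channel sampling rate, Hz

def folded(f, fs):
    """Alias of frequency f under sampling at fs, mapped into [0, fs/2]."""
    return abs(((f + fs / 2) % fs) - fs / 2)

for f_key in (300.0, 650.0):
    harmonics = [(l, folded(l * f_key, fs)) for l in range(1, 14, 2)]
    print(f_key, harmonics)
# e.g. the 5th harmonic of 650 Hz (3250 Hz) folds back to 750 Hz, close to neither key,
# whereas integer-ratio key choices would place harmonics directly onto the other key.
```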
## V Receiver structure
The flow chart in Fig. 4 shows the algorithm steps of the proposed backscatter receiver. In this section, the purposes of some of the steps in the receiver are elaborated on.
### _Band-pass Filter_
We assume that the BD symbols remain frame-synchronous when received and demodulated by the UE. Let \(m\in\mathbb{N}\) index the \(m\)-th backscatter symbol, sent at time \(t=mT_{BC}.\)
The channel phase of the scattered path \(\arg\{h_{0}(mT_{BC})\}\) is ambiguous due to synchronization. The power of the first channel tap \(l=0\) does not contain this phase uncertainty. The receiver therefore only operates on the channel tap power \(y[m]=|\hat{h}[0;m]|^{2}.\)
If we do not consider noise in the channel, the channel power approximately satisfies the following relationship
\[y[m]\approx x[m]\beta[m]+\alpha[m],\]
where \(x[m]\) is the BD signal, \(\alpha[m]=|h_{1}(mT_{BC})|^{2}\), and \(\beta[m]=|h_{0}(mT_{BC})|^{2}+2\mathrm{Re}\{h_{1}^{*}(mT_{BC})h_{0}(mT_{BC})\}\), considering the fact that \(x^{2}[m]=x[m].\)
To separate these two components, a high-pass filter and a low-pass filter are required. The high-pass filter is designed to block \(\alpha[m]\), while the low-pass filter constrains the harmonic frequencies and other interference. In practice, they can be combined into a band-pass filter (BPF). Since the FSK frequency keys are only several hundred Hz, the Doppler effect is the principal threat to the proposed backscatter receiver. The Doppler effect and the frequency drift of the BS and UE cause \(\alpha[m]\) and \(\beta[m]\) to change on a small time scale. By switching the BD at a frequency higher than the maximum Doppler in the channel, the Doppler components are suppressed by the filter. With the help of a high-pass filter on \(y[m]\) that removes the direct path interference \(\alpha[m]\), the BD-modulated path component \(\beta[m]\) is distinguished in the frequency domain. This leaves the two frequency keys of the BD FSK symbols. In the baseband, the FSK symbol is designed with two frequency keys, namely \(f_{0}\) and \(f_{1}\).
\[f_{k}=1/T_{k},\quad k=0,1\]
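The following is a minimal sketch (not the receiver implementation) of the band-pass filtering step applied to the channel-tap power sequence \(y[m]\); the Butterworth design, the passband edges, and the synthetic test signal are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 4000.0                                  # approximate channel sampling rate, Hz
m = np.arange(4000)                          # one second of channel-tap power samples
alpha = 1.0 + 0.02 * np.sin(2 * np.pi * 5.0 * m / fs)       # slow direct-path term
beta = 0.05 * np.sign(np.sin(2 * np.pi * 300.0 * m / fs))    # BD-modulated term at f_0
noise = 0.01 * np.random.randn(m.size)
y = alpha + beta + noise

sos = butter(4, [200.0, 1000.0], btype="bandpass", fs=fs, output="sos")
y_f = sosfiltfilt(sos, y)                    # alpha[m] (and DC) removed, keys retained
print(np.std(y), np.std(y_f))
```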
Fig. 4: Flow chart of the proposed backscatter receiver.
### _FSK Demodulator_
After passing through the high-pass and low-pass filters, the received 2-FSK signal \(y_{f}[m]\) is demodulated. We propose both a coherent method and a noncoherent, power-detection-based method for this task, shown in Fig. 6.
Both methods share a first step: filtering \(y_{f}[m]\) at \(f_{0}\) and \(f_{1}\). As Fig. 5 illustrates, the two FSK keys suffer from aliasing effects: harmonic components of one FSK key can unfortunately hit the other FSK key. A BPF is applied to exclude the frequency leakage from the other key and to constrain interference. Denote the output of the BPF centered at \(f_{0}\) by \(y_{l0}[m]\), and the output of the BPF centered at \(f_{1}\) by \(y_{l1}[m]\).
The energy detector compares the power in the band \(f_{0}\pm\Delta f\) with the power in the band \(f_{1}\pm\Delta f\). An FSK symbol is decided in favor of the band containing the higher power. For the FSK symbol \(x[m]\), hypothesis \(\mathcal{H}_{0}\) denotes that the backscatter device sends symbol 0, and hypothesis \(\mathcal{H}_{1}\) denotes that \(x[m]\) is symbol 1. For the energy detector,
\[E\left[\left|y_{l1}[m]\right|^{2}\right]\;\overset{\mathcal{H}_{1}}{\underset{\mathcal{H}_{0}}{\gtrless}}\;E\left[\left|y_{l0}[m]\right|^{2}\right].\]
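The following is a minimal sketch (not the paper's receiver) of the energy-detector decision applied per BD symbol, measuring the power of the filtered sequence at the two keys by correlation with complex exponentials; the symbol length and key values are assumptions.

```python
import numpy as np

fs = 4000.0                  # channel sampling rate, Hz
f0, f1 = 300.0, 650.0        # FSK keys, Hz
n_per_symbol = 80            # assumed samples per BD symbol (20 ms at 4 kHz)

def tone_power(segment, f):
    n = np.arange(segment.size)
    return np.abs(np.sum(segment * np.exp(-2j * np.pi * f * n / fs))) ** 2

def energy_detect(y_f):
    bits = []
    for start in range(0, y_f.size - n_per_symbol + 1, n_per_symbol):
        seg = y_f[start:start + n_per_symbol]
        bits.append(1 if tone_power(seg, f1) > tone_power(seg, f0) else 0)
    return bits

# quick self-test on a synthetic two-symbol sequence '10'
n = np.arange(n_per_symbol)
y_test = np.concatenate([np.sin(2 * np.pi * f1 * n / fs),
                         np.sin(2 * np.pi * f0 * n / fs)])
print(energy_detect(y_test))   # -> [1, 0]
```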
To model non-idealities, we assume that the BD attenuates the reflected signal power by \(-20\log_{10}(R_{\text{on}})=6\) dB.
In Fig. 7, two SNRs are defined. The red and the blue x-axes are both SNR in dB scale. The only difference is the definition of the signal power in the SNR. The noise is \(n(t)\sim\mathcal{CN}(0,\sigma_{n}^{2})\), and its power is \(P_{n}=\sigma_{n}^{2}\).
The blue x-axis on the bottom is based on CRS power and the red axis at the top is for the received BD modulated signal power.
The power of the CRS is
\[P_{s1}=E\left[|h_{0}(t)s(t)+R_{\text{on}}h_{1}(t)s(t)x(t)|^{2}\right]\]
which corresponds to the LTE Reference Signal Received Power (RSRP) in the absence of noise.
Hence for the blue x-axis, we have
\[\text{SNR}_{1}=\frac{P_{s1}}{P_{n}}=\frac{E\left[|h_{0}(t)s(t)+R_{\text{on}}h_{ 1}(t)s(t)x(t)|^{2}\right]}{\sigma_{n}^{2}}.\]
The red x-axis on the top treats the backscattered FSK signal as the signal of interest. The backscatter signal SNR of AmBC is defined as
\[\text{SNR}_{2}=\frac{E\left[|R_{\text{on}}h_{1}(t)s(t)x(t)|^{2}\right]}{ \sigma_{n}^{2}}.\]
As Eq. (3) illustrates, the power difference between the two paths is exactly
\[10\log{(\Delta L)} =10\log{E\left[|h_{0}(t)|^{2}\right]}-10\log{E\left[|h_{1}(t)|^{2 }R_{\text{on}}^{2}\right]}\] \[=10\log{L_{0}}-10\log{L_{1}}-20\log{R_{\text{on}}}.\]
Using the MATLAB LTE toolbox, an LTE fading channel model is applied. Doppler frequency shift is not considered in this simulation, although the Doppler effect has a large influence in practice. MIMO channel propagation is also not set up, because the transmitter and receiver antennas are assumed to form a SISO link. The LTE downlink channel estimator estimates the channel based on the CRS signal. No OFDM symbols are interpolated between CRS pilots.
The coherent detector algorithm and the energy detector algorithm are discussed in subsection V-B FSK demodulator. To smooth the simulated BER curves, we repeat the experiments many times. High BER points (\(\text{BER}>0.01\)) use 10000 Monte Carlo experiments, and low BER points (\(\text{BER}\leq 0.01\)) use 100000 Monte Carlo experiments.
The simulation uses the backscatter communication parameters given in subsection VI-A Parameters. Fig. 7 shows the simulation results. The energy detector is always worse than the coherent detector. At low SNR, such as -3 dB, the two methods have similar performance and the BER difference between the two FSK demodulators is small, but at SNR = 5 dB the coherent detector is one order of magnitude better than the energy detector.
### _Backscatter signal synchronization_
A special backscatter frame structure is designed to locate the beginning of a backscatter signal. The backscatter signal is synchronized by three sequences of the 7-bit Barker code. As Fig. 8 shows, one backscatter frame consists of two parts, a synchronization header and a data packet. At the beginning of a packet, two consecutive 7-bit Barker code sequences ('0000110') followed by an inverted 7-bit Barker code ('1111001') compose the synchronization header. The data packet is then appended to the synchronization header.
Between two backscatter packets there is a short period in which no FSK symbol is sent, called the sleep period. During the sleep period, the ambient signal is not shifted and the BD is kept in the 'off' state.
Compared to previous work [11], the clock signal used for synchronization is eliminated in the proposed method. Occasionally, backscatter packets are synchronized incorrectly; in that case, the data bit error rate can be extremely high (over one third). The synchronization header bits are known in advance as part of the backscatter communication protocol. By comparing the known synchronization bits with the demodulated bits, we can evaluate the quality of the data received at the Rx and decide whether synchronization was successful. If this check indicates that synchronization failed, we discard the whole packet.
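A possible implementation of this header search and check is sketched below in Python; the error threshold used to declare synchronization successful is an assumption, not a value specified by the protocol.

```python
import numpy as np

BARKER7 = np.array([0, 0, 0, 0, 1, 1, 0])
SYNC_HEADER = np.concatenate([BARKER7, BARKER7, 1 - BARKER7])   # '0000110' x2 + '1111001'

def find_packet_start(bits, max_header_errors=2):
    """Slide the known 21-bit header over the demodulated bits and accept the
    first offset whose Hamming distance is small enough; returns None when
    synchronization fails, in which case the whole packet is discarded."""
    bits = np.asarray(bits)
    for start in range(len(bits) - len(SYNC_HEADER) + 1):
        window = bits[start:start + len(SYNC_HEADER)]
        if np.count_nonzero(window != SYNC_HEADER) <= max_header_errors:
            return start + len(SYNC_HEADER)     # index where the data packet begins
    return None
```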
### _Measurements_
This measurement validates the aforementioned simulation. The parameters are the same as in subsection VI-A (Parameters).
The transmitter is a Rohde & Schwarz (R&S) SMBV100 signal generator with the LTE signal generation package. The generator emits a standard LTE signal with 50 resource blocks (7.68 MHz bandwidth) at a 486 MHz carrier frequency, with a transmission power of 15 dBm. The frame structure is for a SISO system with the corresponding synchronization signal and pilots for cell ID 3.
The BD node is an in-house designed BD, as shown in Fig. 3. The control signal from the MCU is driven by a RaspberryPi nano, an RP2040-based microcontroller board.
Fig. 8: Backscatter frame format of the proposed backscatter.
Fig. 7: Theoretical BER of AmBC signal, based on coherent and energy detector.
The receiver is a universal software radio peripheral (USRP) connected to a laptop. Some post-processing of the signal is executed on that laptop using MATLAB.
#### V-A1 Wired measurement
This experiment measures over cables, in the absence of the direct path component \(h_{0}(t)\). A circulator routes the signal from the LTE signal generator to the BD and then from the BD to the USRP. Figs. 9 and 10 give the spectrum and spectrogram at the USRP receiver, respectively.
The two symbols are clearly visible in the spectrum, as are their aliased harmonic components. In addition, there is a strong DC component and a component at 2 kHz corresponding to the uniform sampling frequency \(1/T_{slot}\), marked by the \(f_{\text{Nyquist}}\) arrow in Fig. 9. The spectrum was obtained by applying the Fast Fourier Transform directly to the measured channel samples \(\hat{h}[0,t_{s}]\), without compensating for the irregularity of the underlying sampling process. From the spectrogram, which illustrates how the spectrum changes over time, we can clearly read the transmitted symbol sequence '11001010' by observing the power at the frequencies \(f_{0}\) and \(f_{1}\). As illustrated around 0.5 s in Fig. 10, there is a 100 ms sleep period, which appears as a peak at the direct current (DC) component.
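A minimal Python stand-in for this post-processing step (the actual processing was done in MATLAB) simply applies an FFT to the sequence of channel estimates, nominally sampled at \(1/T_{slot}=2\) kHz; the toy input below is a placeholder for the measured samples.

```python
import numpy as np

# Placeholder for the measured CRS-based channel estimates h_hat[0, t_s],
# treated here as if uniformly sampled at 1/T_slot = 2 kHz.
fs_slot = 2_000.0
t = np.arange(4_000) / fs_slot
h_hat = 1.0 + 0.2 * np.exp(2j * np.pi * 300.0 * t)       # toy: DC path + one FSK tone

window = np.hanning(len(h_hat))
spectrum = np.fft.fftshift(np.fft.fft(h_hat * window))
freqs = np.fft.fftshift(np.fft.fftfreq(len(h_hat), d=1.0 / fs_slot))
power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)        # spectrum in dB vs. freqs
```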
The spectrogram of the square-wave FSK signal is shown in Fig. 10. The two frequency keys \(f_{0}\) (300 Hz) and \(f_{1}\) (650 Hz) appear alternately. The peak at 2 kHz is caused by the uniform sampling frequency \(1/T_{slot}\), as in Fig. 9. Other peaks in the spectrum, such as the one at 900 Hz, are caused by aliasing of other spectral components or by harmonics.
#### V-A2 Wireless measurement
Fig. 11 presents the measurement environment in Maarintie 8 on the Aalto University campus. The transmitter antenna, an R&S HK300, is located at the lower left corner of the figure. The BD node is in the middle of the corridor, next to measurement point 9 (at 9 m from the wall). The receiver is a USRP B205 unit and a laptop running an in-house C++ implementation of an LTE receiver, connected via a UDP port to a MATLAB implementation of the AmBC signal detection. The receiver performs real-time decoding of the received AmBC signal.
Differently from the results reported in Fig. 5 of the previous work [11], no external clock is distributed between the transmitter, BD, and receiver.
Fig. 11: Wireless measurement devices and site floor plan.
Fig. 12: Wireless measured performance of AmBC in SNR and BER.
Fig. 10: Wired measurement of an FSK-modulated BD signal in spectrogram.
Fig. 9: Wired measurement of an FSK-modulated BD signal in spectrum.
In particular, the measurement system cannot detect weak signals that fall below the resolution of the analog-to-digital converter (ADC). In Fig. 13, a simulation of the received power of the weak backscatter signal is given as a supplement. The simulation is based on the backscatter path-loss model [16] and completely ignores the impact of walls; it can thus be treated as an upper bound on the actual backscattered power.
Data at some positions are lost due to high BER. If the BER is too high, a likely explanation is that the backscatter frame was not correctly synchronized by the three 7-bit Barker code header sequences. As stated in subsection VI-C, the data sequence is meaningless if the packet is not synchronized. In our measurement, if the BER is higher than one third, the backscatter data packet is discarded.
In Fig. 12, the SNR and BER are measured along the corridor shown in Fig. 11, with a step of 0.2 m, from 2 m to 21 m. The relationship between BER and SNR is roughly as expected: positions with low BER usually correspond to high SNR. The SNR shows a sinusoidal tendency as a function of distance; this periodic pattern in the tunnel-like corridor (6 m to 16 m) is distinct from the other positions, and the BER exhibits the same periodic pattern as the SNR. In room 2536 (distances below 6 m), the SNR starts to deteriorate, and at the corner of the forked corridor (between 16 m and 18 m) the SNR drops steeply.
The received ambient signal power at each measurement position is estimated from the received LTE signal power. The step length of the LTE ambient signal power measurement is 1 m (blue line in Fig. 13). The backscatter signal power is calculated with the Friis transmission equation (red line in Fig. 13). We assume the transmitter-to-BD and BD-to-measurement-position propagation paths are all line of sight. The FSPL model of Eq. (2) is applied when estimating the backscatter signal power, but this FSPL estimate is higher than the real backscatter signal power received at the measurement positions. In practice, the difference between the ambient LTE signal power and the backscatter signal power is even larger than that shown in Fig. 13.
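The following sketch reproduces the flavor of this estimate using the standard two-segment free-space form of the backscatter link budget; the exact path-loss model of [16] may include additional terms, antenna gains are ignored as in the upper-bound simulation, and the distances in the example are assumptions.

```python
import numpy as np

def fspl_db(d_m, f_hz=486e6):
    """Free-space path loss in dB at distance d_m for carrier frequency f_hz."""
    c = 3e8
    return 20 * np.log10(4 * np.pi * d_m * f_hz / c)

def backscatter_power_dbm(p_tx_dbm, d_tx_bd, d_bd_rx, r_on_db=-6.0):
    """Upper-bound estimate: transmit power minus FSPL on the Tx->BD and
    BD->Rx segments, plus the BD reflection loss; walls are ignored."""
    return p_tx_dbm - fspl_db(d_tx_bd) - fspl_db(d_bd_rx) + r_on_db

# Example with assumed distances: 15 dBm LTE source, BD 9 m from the source,
# measurement position 1 m beyond the BD.
print(backscatter_power_dbm(15.0, d_tx_bd=9.0, d_bd_rx=1.0))
```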
The measured LTE signal power (blue line in Fig. 13) approximately obeys FSPL from 8 m to 21 m. Because the BD is placed at 9 m, the simulated backscatter signal power (orange line in Fig. 13) peaks at 9 m and shows a typical FSPL pattern with distance. Comparing the two plots, Fig. 12 and Fig. 13, some noteworthy contrasts can be observed. Around 6 m the LTE signal deteriorates; this ambient signal attenuation can also be seen in Fig. 12, where around 6 m the SNR decreases dramatically and the BER jumps to a high level. A steep drop of the LTE signal power from 1 m to 2 m is believed to be caused by a metal door between room 2045 and room 2536, which lies exactly between the transmitter and the measurement receiver.
## VII Conclusions
In this paper, we proposed a system that uses the LTE cell specific reference signals and channel estimator for receiving backscatter modulated signals. The BD utilized two square waves having different nominal frequencies to perform frequency shift modulation. The proposed receiver was validated using over the air measurements in an indoor environment. Based on our experimental results, we can conclude that the LTE channel estimator offers a great potential to be utilized for receiving backscattered signals in Ambient Internet of Things applications.
## VIII Acknowledgements
This work is in part supported by the European Project Hexa-X under grant 101015956 and Business Finland project eMTC under grant 8028/31/2022.
|
2309.07033 | Human-Robot Co-Creativity: A Scoping Review -- Informing a Research
Agenda for Human-Robot Co-Creativity with Older Adults | This review is the first step in a long-term research project exploring how
social robotics and AI-generated content can contribute to the creative
experiences of older adults, with a focus on collaborative drawing and
painting. We systematically searched and selected literature on human-robot
co-creativity, and analyzed articles to identify methods and strategies for
researching co-creative robotics. We found that none of the studies involved
older adults, which shows the gap in the literature for this often involved
participant group in robotics research. The analyzed literature provides
valuable insights into the design of human-robot co-creativity and informs a
research agenda to further investigate the topic with older adults. We argue
that future research should focus on ecological and developmental perspectives
on creativity, on how system behavior can be aligned with the values of older
adults, and on the system structures that support this best. | Marianne Bossema, Somaya Ben Allouch, Aske Plaat, Rob Saunders | 2023-09-13T15:41:16Z | http://arxiv.org/abs/2309.07033v2 | # Human-Robot Co-Creativity: A Scoping Review
###### Abstract
This review is the first step in a long-term research project exploring how social robotics and AI-generated content can contribute to the creative experiences of older adults, with a focus on collaborative drawing and painting. We systematically searched and selected literature on human-robot co-creativity, and analyzed articles to identify methods and strategies for researching co-creative robotics. We found that none of the studies involved older adults, which shows the gap in the literature for this often involved participant group in robotics research. The analyzed literature provides valuable insights into the design of human-robot co-creativity and informs a research agenda to further investigate the topic with older adults. We argue that future research should focus on ecological and developmental perspectives on creativity, on how system behavior can be aligned with the values of older adults, and on the system structures that support this best.
## I Introduction
The world's population is rapidly aging. According to the United Nations, the number of people aged 60 years or older is expected to more than double by 2050, reaching approximately 2.1 billion [1]. This demographic shift is having significant social, economic, and health implications. In 2019, the World Health Organization published a review of 900 studies, concluding that creative activities can promote health and well-being and help prevent and slow age-related physical and cognitive decline [2]. Here, the term 'creative activities' is referring to forms of personal, everyday creativity, such as making music, drawing, dancing, or crafts. According to Cohen [3], such acts of everyday creativity are fundamental to psychological development and well-being in later life.
Creech et al. [4] present a systematic literature review into creativity and the quality of later life, which highlights the benefits of the collaborative and relational nature of creativity. Co-creativity is also linked to well-being; Zeilig et al. [5] suggest that sharing agency in co-creative activities can empower people with dementia. These studies share the view that co-creativity can foster social connections and create a safe space that facilitates involvement and sharing.
Social robots are playing a growing role in healthcare and well-being [6][7]. There are few examples, however, of creative robot applications for older adults. Social robots offer unique opportunities to support creativity through assistance and social interaction. In addition, technological advancements in generative AI bring new opportunities to suggest tailored content in creative collaborations. There are unanswered questions, however, on how to design appropriate human-robot co-creative systems. A scoping review was conducted, to systematically map the research done in Human-Robot Co-Creativity (HRCC), and to inform a research agenda for HRCC with older adults. We define HRCC as "An interactive system for collaborative creativity between human and robot. It involves the joint effort of two or more embodied and co-present agents, where both take initiative in response to the other(s) and contribute to creative outcomes". This definition is based on the concepts of "Humbots", as described by Lubart et al., [8] and "Mixed-Initiative Creative Interfaces" as introduced by Deterding et al. [9].
In the next section, we begin by presenting theories related to the value-sensitive design of HRCC for older adults. In Section III, we describe the methodology we used for this scoping review. Based on the analysis of selected articles, we document the results in Section IV, followed by a discussion in Section V. In Section VI we provide a conclusion, leading to a research agenda outlined in Section VII.
## II Background
Investigating the prospective role of social robots in co-creative systems bridges the fields of Human-Robot Interaction, Computational Creativity, and Arts & Health. Here, we present theories from these fields in the context of value-sensitive design of HRCC for older adults.
### _Creativity and Values of Older Adults_
Definitions of creativity generally share the common theme that it involves generating something new, valuable, and surprising [10]. There are different approaches, however, to understanding this complex concept. Glaveanu [11] takes an ecological perspective, describing creativity as a phenomenon that emerges through interaction in a social and material environment. Kaufman & Beghetto [12] take a developmental perspective, looking at the individual. Individuals are more likely to be creative when they are given challenging tasks that require new solutions, have a degree of autonomy and control over their work, and can collaborate and communicate effectively with others.
Both ecological and developmental perspectives align with the values that older adults attribute to their creative experiences. In a Dutch study by Groot et al. [13], older participants reported appreciating creative activities for 1) offering an environment where they feel safe, accepted, and free, 2) promoting personal and artistic growth, and 3)
enabling meaningful social interactions (Fig. 1). Based on the study of Groot et al., Liu et al. [14] investigate the relationship between context, mechanisms, and outcomes, and mention 'a welcoming environment' as a consistent underlying mechanism. Liu et al. [14] recommend deepening our understanding of environments and affective atmospheres in art activities with older adults. Groot et al. [13] recommend Participatory Action Design as a research approach to capture the essence of older participants' creative experiences.
In "A Roadmap for Therapeutic Computational Creativity", Pease et al. [15] delve into the connection between Computational Creativity and mental health and well-being. The authors discuss the benefits and risks associated with this connection. They also highlight potential opportunities, such as _casual creators_[16]. Casual creators prioritize the pleasure of the creative process over the end product and offer enjoyable and easily accessible creative experiences that may be valuable for older adults. This kind of experiences may promote "personal and artistic growth", while contributing to "an atmosphere in which people feel safe, accepted, and free" [13]. The roadmap also discusses the concept of the 'third hand', a metaphor for the therapist's role in supporting and encouraging patients' creative processes, without imposing their own ideas or disrupting the patient's autonomy [17]. The researchers recommend collaboration with health professionals, to determine the limitations and possibilities of therapeutic computational creativity [18].
### _Interaction Design for Computational Creativity_
Kaufman & Beghetto [12] mention two main requirements for people to be creative 1) a degree of autonomy and control, and 2) effective communication with others. These requirements present challenges for the design of co-creative systems. While traditional creativity support tools focus on human control, Gemeinboeck & Saunders [19] suggest embodied creative agents that share the world with humans, and act autonomously, beyond their creator's intent. Mixed-Initiative Creative Interfaces [9] are in between, a form of AI-enabled Creativity Support tools, where both humans and the system can take initiative during creative collaboration. This raises questions, e.g., on how agency can be shared and how initiative can be negotiated to support both well-being and mutual creativity.
The requirement of effective communication also poses interaction design challenges. Bray & Bown [20] argue that computational creativity systems are often complex and opaque, limiting visibility and clarity of their conceptual models. Understanding may be improved when users can clearly perceive the system's structure, and develop a mental model of how this structure leads to behavior. This is crucial to facilitate a suitable level of autonomy and control. Dialogues can be expected to contribute to understanding and common ground, either language-based or through creative artifacts. A dialogic approach, as suggested by Bown et al. [21], can enable both human and artificial agents (e.g. social robots) to actively influence the creative process and products, and adapt to the other's behavior.
Social robots offer unique opportunities for embodied interaction, sharing agency, and (non-)verbal communication. They can suggest tailored AI-generated content and support creative exploration. The articles being reviewed shed light on how interaction design challenges may be faced and how solutions may be applied.
## III Method
Six databases were used to conduct the scoping review: ACM, IEEE, Google Scholar, PsycINFO, Pubmed, and Scopus. Keywords were chosen for _'Actors'_ (e.g. human-robot), _'Activities'_ (e.g. co-creativity), and _'Application'_ (e.g. Creativity Support), see Table I. Only conference and journal articles published in English were included. The search results (n=827) were collected in February 2023 and imported into Rayyan [22], where duplicates (n=100) were removed and labels were assigned. Searching and selecting articles was done systematically using PRISMA guidelines [23], currently with a single reviewer (the first author). Based on a first screening of titles and abstracts, articles (n=432) were excluded when they a) did not involve human subjects in evaluating a robotic system, b) described a distinctive context (e.g., business, innovation, teaching, or product development), c) were conference workshop calls or proposals, or d) were found to be duplicates. In a second screening, papers (n=157) were removed that did not show evidence of a 'co-creative agent', here defined as a computational actor involved in building shared creative artifacts, in co-presence with one or more human collaborators [8]. In the last step of the screening, studies that did not involve an embodied intelligent agent were removed (n=113). Doing the exclusion selection in separate steps allowed for acquiring a broader view, and offered the opportunity to also keep studies with non-robotic agents in mind. Two papers were added through forward and backward citation searches, and a final set of 27 articles was used for analysis.
Fig. 1: Values that older adults attribute to creative activities, based on a Dutch nationwide study by Groot et al. [13]. Figure adapted with permission.
### _Analysis of Design Research_
Studies in HRCC are a form of design research, focused on understanding specific interaction design problems. We used the Function-Behavior-Structure (FBS) ontology, as described by Gero & Kannengiesser [24], for analyzing and comparing this kind of study. The ontology is based on the notion that all designs can be represented in a uniform way, and that design systems can be conceptualized in three ontological categories. Function (F) is about 'what the system is for', Behavior (B) covers 'what it does', and Structure (S) describes the components and their relationships, or 'what it consists of'. In addition, we applied a layered framework for interactions between creative collaborators, proposed by Kantosalo et al. [25], to decompose the 'Behavior' of co-creative systems into interaction layers of modalities, styles, and strategies, providing a finer-grained view of 'what a co-creative system does'.
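As an illustration only, each reviewed article can be thought of as a record coded along these dimensions; the sketch below shows one possible (hypothetical) encoding in Python, not the actual entries of Table II.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewedStudy:
    """One article coded along the FBS categories and Kantosalo's interaction
    layers; the example values below are illustrative, not quotes from Table II."""
    citation: str
    function: str                                   # what the system is for
    behavior_strategy: str                          # e.g. "stimulating creativity"
    behavior_style: str                             # "game-based" or "open-ended"
    behavior_modalities: list = field(default_factory=list)
    structure: str = ""                             # robot type and devices

example = ReviewedStudy(
    citation="hypothetical entry",
    function="Creativity Support",
    behavior_strategy="stimulating creativity",
    behavior_style="game-based",
    behavior_modalities=["speech", "tablet drawing"],
    structure="human-like robot + tablet",
)
```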
## IV Results
The FBS ontology [24] and Kantosalo's Interaction Framework for Human-Computer Co-Creativity [25] were used for the analysis of reviewed articles. An overview is presented in Table II.
### _Function_
When looking at the reviewed articles presented in Table II, we can distinguish four categories based on their research focus: 1) Creativity Support 2) Creative Collaboration 3) Art Therapy, and 4) Artistic Work. _Creativity Support_ forms the largest group, with studies investigating factors of social robot behavior that affect human creativity. Studies of _Creative Collaboration_ explored the interaction dynamics, and how the process of collaboration can be facilitated. In the category of _Art Therapy_, the focus was on specific therapeutic requirements, and how to design responsive systems for affective and assistive collaborative painting and drawing. In the category of _Artistic Work_, the focus is on creative encounters between humans and machines, and studies are carried out in the context of the researchers' own artistic practice, mostly in performances involving the audience. Regarding participants and target groups, we found that studies on _Creativity Support_ involved mostly children, while adults and professional artists and designers participated in the _Creative Collaboration_ studies. In the category of _Artistic Work_, the artists themselves also played an important role, as well as the audience.
### _Behavior: Strategies, Styles, Modalities_
All studies in the category of _Creativity Support_ propose the strategy of stimulating human creativity, through various social behaviors of a robot. Creativity demonstration was used to stimulate creativity with children (n=4) and with adults (n=1). The robot demonstrated verbal creativity in storytelling applications and figural creativity in a drawing game. It was found that creativity demonstrations and scaffolding (e.g. asking questions, prompting, and suggestions), as well as the promotion behavior of the robot can contribute to higher levels of human creativity. When mirroring or contrasting robot movements were congruent with user input, this positively affected creativity [28][37]. The studies mostly compared conditions of robot behavior, using pre-defined, validated content. For example, in multiple studies the robot demonstrated creativity by selecting pre-defined suggestions with a validated creativity score, dependent on the condition [26][27][35].
In the category of _Creative Collaboration_, two studies explored expressive robot movements to improve non-verbal communication [38][40]. In the context of collaborative drawing, the effects of direct versus indirect motion paths on collaborative interaction were compared, but the results were inconclusive. The researchers recommend further in-the-field experiments, combining qualitative and quantitative methodologies. An arts-led, process-led approach is proposed by Gomez Cubero et al. [41], to explore how co-creativity emerges through human-robot dialogue and improvisation. The researchers developed custom tools to support collaborative drawing with an industrial robot and put these into practice. In a study involving designers, a mobile robot was introduced for collaborative sketching and generating ideas through 'conceptual shifts' [39]. Using the Sketch-RNN model and the Google Quick, Draw! API, input sketches were mapped to suggestions with visual and semantic similarity. Results showed that the mobile, embodied agent performed better in provoking exploratory thinking and collaborative ideation, compared to a web-based agent. The alignment of human, robot, and machine learning is suggested by Twomey [42] in a study where the robotic system is trained on audience-specific content in the form of children's drawings.
Fig. 2: PRISMA flow diagram for the scoping review.
With _Art Therapy_, the focus is on investigating how a robot can learn to understand and adapt to the creative and emotional expressions of a human interaction partner. Cooney & Menezes [43] propose to generate responsive art for emotion regulation, through robot expressions of either matching or positive emotions. Using wireless electroencephalography (EEG), brain signals were captured and classified based on Russell's valence/arousal model, and then translated into visual features for paintings. Affective image databases were used to train the system, and Deep Convolutional Generative Adversarial Networks (DC-GANs) were used for synthesizing compositions. To balance contingency and artistry, Cooney & Berck [44] made use of visual metaphors that are responsive to perceived emotions. In a follow-up study, Cooney [45] proposes metaphors that connect more to emotional artistic expressions. Personalization was also facilitated through the robot's open questions, letting users add their own tags to describe the content. Another implementation of personalization is suggested by Shaik et al. [46], by letting the system adapt sketches based on verbal feedback and explicit directions from users, who were disabled children.
In the category of _Artistic Work_, Sougwen Chung [47] explores human-machine symbiosis, studying concepts of mimicry, memory, collectivity, and spectrality. For example, mimicry is explored with the robot mimicking the artist's drawing gestures, and for memory, the machine learns the artist's drawing style, with neural nets trained on the artist's drawing collection. Saunders & Gemeinboeck [48] investigate how embodied, creative AI can act as a performer by embedding a group of autonomous robots into the walls of a gallery. The robots are programmed as curious agents, driven to explore their world. By punching on the walls and making holes, they make changes to the environment, communicate their presence, and involve the audience. In another study by Saunders & Gemeinboeck [49], professional dancers and non-humanlike robots were brought together in co-embodied explorations of forms and movements. Sola et al. [50] suggest speech-to-AI-art transformations and created an interface that allowed the audience to tell a co-creative system about their dreams. Based on prompts from the input text, the AI system generated a drawing through latent space navigation using the CLIP model [51]. The industrial robot arm captured the audience's stories in a collective painting that hung down into the atrium of a museum as a cascade of dreams.
When looking at game-based versus open-ended interaction styles, the studies that compared robot behavior in different conditions generally used game-based interaction, e.g., in drawing and storytelling games. This facilitated experimental control when comparing and measuring the effects of social robot behavior. In other studies, open-ended forms of interaction were used, which allows for investigating processes and dynamics, and contributes to ecological validity when researching creative collaboration.
What stands out when looking at the interaction modalities used in the four categories is that speech is mostly used in the categories _Creativity Support_ and _Art Therapy_. Robot speech is used for demonstrating verbal creativity, scaffolding creativity, and prompting creative reflection [27][34]. For art therapy robots, speech is found to be useful as well. It allows complex information to be conveyed in a familiar and intuitive way, without requiring a person to look away from art-making and possibly lose concentration [43]. Speech also enables users to give explicit feedback or directions or ask for assistance, as suggested by Shaik et al. [46]. In the categories _Creative Collaboration_ and _Artistic Work_, human-machine dialogues are more often based on non-verbal communication such as expressive movement, or on the work itself. With _Artistic Work_, we see most examples of ambient interfaces, exploring new forms of human-machine encounters in a spatial setting (Fig. 3).
### _Structure_
Different types of robots and embodiments were used in the reviewed studies (Fig. 3). Human-like robots were used in almost all studies using a _Stimulating creativity_ strategy, combined with a tablet or a computer screen. With the drawing activities in this category, the robot (Jibo, Nao) was not drawing physically, but virtually on the tablet, with separate canvases on the same tablet [26][52][36]. In studies employing _game-based_ interaction styles, screens were used to present the game world. Alves-Oliveira et al. [28][29][30] used a non-anthropomorphic robot object to stimulate creativity in children; YOLO serves as a toy character in a storytelling game. The robot interacts through lights, colors, and movements, while the shape of the robot sets realistic expectations for the robot's capabilities. In the category of _Creative Collaboration_, collaborative robot (cobot) arms were mostly used, together with physical drawing tools. Physical drawing tools were also used with art therapy robots. The Baxter robot used for art therapy [43] can be considered a human-like cobot, with two arms and a screen that can display a face and facial expressions, facilitating non-verbal social communication. With _Artistic Work_, robot arms were used next to custom-made robotic objects, mostly in multi-agent settings. The stage is shared between humans and robots, mostly in performances. In an art installation by Sola et al. [50], the industrial robot arm is behind glass, while the audience can communicate with the robot through a speech interface. In Accomplice, Saunders & Gemeinboeck [48] install robots in their own space behind a wall in a gallery, which they break through as they use the wall as their canvas. Saunders & Gemeinboeck [49] used robotic cubes to explore how human and non-human forms of embodiment can be mapped through movement, and how non-humanlike robotic objects can be perceived as affective agents.
Fig. 3: Per column: 1) Jibo, scaffolding creativity in a construction task [27]; YOLO, a robot toy for storytelling [28]; 2) The robot is present, an arts-led, process-led approach for investigating human-robot dialogues in improvisation [41]; Cobbie, a drawbot for conceptual sketching with designers [39]; 3) Baxter robot used in Art Therapy; The valence/arousal model for expressing matching emotions; 4) Accomplice - Creative robotics and embodied computational creativity [48]; Dream Painter - Bridging Audience Interaction, Robotics, and Creative AI [50]
## V Discussion
This review has several limitations. An important limitation is the fact that the review selection was carried out by a single reviewer, due to time constraints. In addition, there are opportunities to further work out the analysis, for example to explore how machine learning techniques, as part of the structure of a system, align with behavior and function. It turned out that our search results did not include any studies involving older adults. This shows that future research on HRCC with older adults is important. However, it is also a limitation, as we cannot learn from previous findings in HRCC with the target group. We are planning an extended version of this review, with a search query that does include the target group, e.g., looking at gerontechnology for creativity support. This will allow us to involve more reviewers and to address aspects that have been underexposed so far.
## VI Conclusion
Selected articles were structured using the FBS ontology [24] and the Interaction Framework for Human-Computer Co-creativity [25]. The search and selection process (see Section III) resulted in a heterogeneous set of studies, describing robotic systems with various functions, behaviors, and structures.
### _Function_
Studies in the categories of _Creativity Support_ and _Art Therapy_ take a developmental perspective, with the goal to a) stimulate human creativity and b) support art therapy through responsiveness and personalization. Studies in the categories _Creative Collaboration_ and _Artistic Work_ take an ecological perspective, investigating how creativity emerges through interaction. This is a more process-led approach, involving end users and taking into account the social and material environment. As set out in Section II, both ecological and developmental perspectives align with values that older adults attribute to their creative experience, and must be taken into account when defining the functions of HRCC for older adults. An important finding regarding participants is that older adults did not engage in any of the reviewed studies. While Cooney & Menezes [43] thank older adults in their acknowledgments for providing input, they evaluated their system with younger adults. It is not clear why older adults have not yet been involved in HRCC research. Robots and AI-generated content offer opportunities that can be beneficial for this specific target group, which is growing worldwide, and there are specific needs and wishes to be taken into account. That is why we are making a case for investigating HRCC for, and with, the target group of older adults.
### _Behavior_
Evidence shows that robots are capable of demonstrating creativity, and that this social behavior can be designed to stimulate human creativity. Other social behaviors are found to be effective as well, such as mirroring and contrasting user input to promote divergent and convergent thinking. Studies on robots in _Art Therapy_ provide valuable insights into the importance of recognizing, modeling, and synthesizing emotions in drawings and paintings. Here, the emphasis is on tailoring and balancing content to user needs e.g., using personalized visual metaphors. These ideas on how an art therapy robot could behave as a 'third hand' also inform future research in HRCC for older adults. Studies in the category of _Creativity Support_ often used games to structure the interaction, which contributes to experimental control when measuring the effects of robot behavior. However, the majority of studies used open-ended forms of interaction, investigating how dialogues and collaborations develop.
The modality of speech is considered an important channel for transparency and effective communication, promoting autonomy and control. This is emphasized in the categories _Creativity Support_ and _Art Therapy_. Robot speech is used, e.g., to demonstrate verbal creativity, scaffold creativity, and promote creative reflection. User speech input is suggested as a means for explicit feedback, requesting assistance, and personalizing suggested content. The research projects in the category of _Artistic Work_ place creative robots in spatial settings, sometimes with multiple agents, letting artists and audiences contribute to a physical shared space that fosters creativity. Both speech and embodied, spatial interactions are of interest for HRCC with older adults, to contribute to an environment where people feel free and safe.
Fig. 4: Values that older adults attributed to their creative activities in a nationwide Dutch study [13], connected to robot behaviors suggested in the reviewed studies, with the corresponding categories (Table II). These values (e.g. meaningful connections) were attributed in the context of human-human interactions. We suggest investigating if and how human-robot co-creative interactions can be valuable to older adults as well. We propose this mapping of values and behaviors as part of our research agenda (Section VII).
### _Structure_
Results show that in the categories _Creativity Support_ and _Art Therapy_, mostly human-like robots were used. The robot YOLO is an exception, an abstract robotic object that serves as a toy, while the shape of the robot sets realistic expectations for the robot's capabilities [29]. A shared stage for humans and robots, as explored in studies on _Artistic Work_, could be interesting for older adults as well, when designed as an environment fostering creativity, and where people feel free and safe.
## VII Research Agenda
We propose a participatory, value-sensitive design approach for investigating HRCC with older adults. Older adults must be involved throughout the entire process: in identifying opportunities and requirements, developing HRCC activities, and testing hypotheses in both controlled experiments and in-the-field settings. When investigating the design of the system, we propose considering the following aspects, aligned with the FBS framework:
**Function:** Consider both ecological and developmental perspectives on creativity when defining functional requirements for the target group.
**Behavior:** Align values that older adults attribute to creative activities with the opportunities of HRCC (Fig. 4) to investigate how:
1. A robot's social behavior can support and enhance creative experiences for older adults;
2. AI-generated content can be tailored and responsive to specific needs and desires; and,
3. Intuitive dialogues (verbal, non-verbal, through artifacts) can support co-creativity.
**Structure:** Investigate what types of robot and devices fit best and provide opportunities for:
1. Social interaction with older adults;
2. Creative support and exploration; and,
3. Shared creative experiences and spaces where older adults feel free and safe.
## Acknowledgment
This publication is part of the project 'Social robotics and generative AI to support and enhance creative experiences for older adults', with project number 023.019.021 of the research program Doctoral Grant for Teachers which is financed by the Dutch Research Council (NWO).
|
2308.16571 | Document Layout Analysis on BaDLAD Dataset: A Comprehensive MViTv2 Based
Approach | In the rapidly evolving digital era, the analysis of document layouts plays a
pivotal role in automated information extraction and interpretation. In our
work, we have trained MViTv2 transformer model architecture with cascaded mask
R-CNN on BaDLAD dataset to extract text box, paragraphs, images and tables from
a document. After training on 20365 document images for 36 epochs in a 3 phase
cycle, we achieved a training loss of 0.2125 and a mask loss of 0.19. Our work
extends beyond training, delving into the exploration of potential enhancement
avenues. We investigate the impact of rotation and flip augmentation, the
effectiveness of slicing input images pre-inference, the implications of
varying the resolution of the transformer backbone, and the potential of
employing a dual-pass inference to uncover missed text-boxes. Through these
explorations, we observe a spectrum of outcomes, where some modifications
result in tangible performance improvements, while others offer unique insights
for future endeavors. | Ashrafur Rahman Khan, Asif Azad | 2023-08-31T09:12:34Z | http://arxiv.org/abs/2308.16571v1 | # Document Layout Analysis on BaDLAD Dataset: A Comprehensive MViTv2 Based Approach
###### Abstract
In the rapidly evolving digital era, the analysis of document layouts plays a pivotal role in automated information extraction and interpretation. In our work, we have trained MViTv2 transformer model architecture with cascaded mask R-CNN on BaDLAD dataset to extract text box, paragraphs, images and tables from a document. After training on 20365 document images for 36 epochs in a 3 phase cycle, we achieved a training loss of 0.2125 and a mask loss of 0.19. Our work extends beyond training, delving into the exploration of potential enhancement avenues. We investigate the impact of rotation and flip augmentation, the effectiveness of slicing input images pre-inference, the implications of varying the resolution of the transformer backbone, and the potential of employing a dual-pass inference to uncover missed text-boxes. Through these explorations, we observe a spectrum of outcomes, where some modifications result in tangible performance improvements, while others offer unique insights for future endeavors.
Keywords: Document Layout Analysis, ViT, MViTv2, Bengali documents, Data augmentation, Transformer architecture.
## I Introduction
In today's digital age, the exponential growth of textual information has underscored the critical importance of document processing and understanding. Document Layout Analysis (DLA), a fundamental task in the field of computer vision, seeks to unravel the complex structure of documents, enabling automated information extraction and interpretation. At its core, Document Layout Analysis involves decomposing a document into its constituent elements, such as text blocks, paragraphs, images and tables.
However, despite the remarkable strides made in document analysis across various languages, the Bengali language has remained relatively unexplored in this realm. The scarcity of labeled data and dedicated resources has hindered significant advancements in Document Layout Analysis specifically tailored for Bengali documents. In this respect, BaDLAD (Bengali Document Layout Analysis Dataset) is a pioneering effort that has the potential to bridge this very gap and drive the evolution of Bengali document analysis. [1] As we navigate this landscape, we cast our gaze upon various model architectures that have shaped the field of Document Layout Analysis.
Traditionally, the realm of computer vision has seen the application of Convolutional Neural Networks (CNNs) in document layout analysis. These networks excel at feature extraction from image data, enabling the identification of key structural elements within documents. This approach has yielded valuable insights and paved the way for subsequent advancements.
Region-based Convolutional Neural Networks (R-CNNs) extended the capabilities of CNNs by introducing the concept of region proposals. This enabled more precise localization of elements, a crucial aspect in Document Layout Analysis. R-CNNs offered improved accuracy by focusing on the regions of interest, thereby enhancing the understanding of document structure.
While these approaches demonstrated commendable performance, recent years have witnessed a paradigm shift in the field, with the ascent of the transformer architecture. Originally designed for natural language processing tasks, transformers have proven their versatility by redefining various domains, including Image Segmentation and Document Layout Analysis. The transformer architecture's inherent ability to capture contextual information and long-range dependencies has revolutionized image segmentation. This architectural evolution has spurred the development of models such as the MViT transformer, which has emerged as a state-of-the-art solution in the realm of instance segmentation as well as a potential candidate for Document Layout Analysis tasks.
In the context of BaDLAD, a multi-domain Bengali Document Layout Analysis Dataset, the MViT architecture holds exceptional promise. BaDLAD's diverse collection of documents, spanning multiple domains and layouts, aligns seamlessly with MViT's capacity to capture intricate structural nuances. This marriage of data and architecture positions the MViT transformer as a natural fit for the comprehensive analysis and interpretation of Bengali documents.
## II Methodology
### _Model Selection_
Our pursuit of a robust and adept architecture for DLA led us to select the MViTv2-B variant, which has exhibited outstanding performance on the highly regarded COCO dataset. This particular variant, pretrained on IN1k and coupled with the Cascade Mask R-CNN framework, has yielded compelling
results. Notably, it achieves a remarkable mask Average Precision (AP) score of 47.4 when evaluated on COCO, following a rigorous training regimen spanning 36 epochs.
The MViTv2-B variant outperforms the Swin-B model by a significant margin, boasting a marked increase of +2.5 and +2.3 in \(AP^{box}\) and \(AP^{mask}\), respectively. [2] This remarkable performance enhancement is achieved alongside lower computational demands and a more compact model size, signifying an optimal balance between accuracy and resource efficiency. Hence,we have chosen the MViTv2-B model architecture as the cornerstone of our Document Layout Analysis framework.
### _Preprocessing_
Our preprocessing workflow encompassed several crucial steps to ensure the quality and consistency of input data for our Document Layout Analysis framework. Initially, we implemented color normalization using configuration parameters by aligning pixel mean and standard deviation. To establish uniformity in image dimensions, we conducted image resizing and padding to a 1024\(\times\)1024 resolution. However, we refrained from binary colorization to preserve the diverse color palettes essential for accurate image element detection.
### _Augmentation_
To increase the robustness of training, we apply several augmentations to the input images: random brightness, contrast, saturation, and rotation on the training set. These variations simulate real-world conditions and the variability present in the dataset. Collectively, these preprocessing and augmentation steps improve input data quality, contributing to the reliability and versatility of our Document Layout Analysis across a broad spectrum of document layouts and content.
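A hedged sketch of this augmentation list using Detectron2's transform API is given below; the ±5° rotation range follows the discussion in Section V, while the brightness, contrast, and saturation ranges are assumed values.

```python
from detectron2.data import transforms as T

# Training-time augmentations; the +/-5 degree rotation follows Section V-A,
# while the brightness/contrast/saturation ranges below are assumed values.
train_augmentations = [
    T.RandomBrightness(0.9, 1.1),
    T.RandomContrast(0.9, 1.1),
    T.RandomSaturation(0.9, 1.1),
    T.RandomRotation(angle=[-5, 5], expand=False, sample_style="range"),
    T.ResizeShortestEdge(short_edge_length=(1024,), max_size=1024, sample_style="choice"),
]
```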
### _Training_
Our training methodology was designed meticulously, employing a multi-cycle approach to fully harness the potential of the MViTv2-B model for Document Layout Analysis (DLA). Leveraging the Detectron2 framework, we meticulously fine-tuned critical hyperparameters and processes to attain peak performance. [3]
Our training was strategically organized into three consecutive cycles, each spanning 12 epochs for a total of 36 epochs. The model's parameters were efficiently transferred between training cycles, with the final parameters of each 12-epoch run serving as the starting point for the subsequent run. This approach enabled us to explore different training and model parameters and compare them between training phases.
Optimization was achieved through the AdamW optimizer, with an initial learning rate of \(8*10^{-5}\). The learning rate multiplier scheduler was tailored to guide the model's convergence over different iteration ranges. During the initial 50 iterations, a warmup phase was employed with a warmup factor of \(10^{-3}\), starting with a learning rate of \(8*10^{-8}\) and increasing gradually up to \(8*10^{-5}\) after the warmup phase. We started with a slow learning rate with the intention of avoiding overshooting and promoting model stability. Moreover, the learning rate was further decreased to \(0.1\) times and \(0.01\) times the base rate at about \(88\%\) and \(97\%\) of the total iterations, respectively, to make small adjustments in the final iterations and aid the model's convergence.
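The schedule can be summarized by the following PyTorch sketch; the actual training used Detectron2's solver machinery, and `model`, `max_iter`, and the weight-decay value are placeholders.

```python
import torch

def make_optimizer_and_scheduler(model, max_iter):
    """PyTorch sketch of the AdamW + warmup/step-decay schedule described above.
    The weight-decay value is an assumption; max_iter is the total number of
    training iterations across the 36 epochs."""
    opt = torch.optim.AdamW(model.parameters(), lr=8e-5, weight_decay=0.05)

    def lr_mult(it):
        if it < 50:                       # linear warmup from 1e-3 of the base LR
            warmup = 1e-3
            return warmup + (1.0 - warmup) * it / 50
        if it < 0.88 * max_iter:
            return 1.0
        if it < 0.97 * max_iter:
            return 0.1                    # first decay at ~88% of the iterations
        return 0.01                       # second decay at ~97% of the iterations

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=lr_mult)
    return opt, sched
```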
We structured data loading and processing using a batch size of 16 for training. During training, crucial metrics were logged every 20 iterations, and model checkpoints were saved every 2000 iterations to facilitate potential model recovery.
## III Results
After training for a total of 36 epochs in 3 phases, our model achieved a Dice score of 0.90095 on the public test set, with a total loss of 0.2125 and a mask loss of 0.19. The loss metrics decreased gradually throughout training.
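For reference, the Dice score quoted above measures the overlap between predicted and ground-truth masks; a minimal per-mask implementation is sketched below, leaving out the competition's exact per-class aggregation, which we do not reproduce here.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice score between two binary masks; aggregation over instances,
    classes, and the whole test set is intentionally omitted."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)
```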
From figure 1 we can see that the mask loss reached about 0.2 after the first 12 epochs. However, the training loss slowly continued to improve over the training period.
Fig. 1: Loss mask vs Epoch
Fig. 2: Total loss vs Epoch
Fig. 3 summarizes the test-set scores obtained after each phase of training.
## IV Kaggle Competition Results
In the Kaggle competition "DL Sprint 2.0," which focused on Document Layout Analysis using the BaDLAD Dataset, a total of 94 teams participated. Our team, named "Black Quad," demonstrated exceptional performance and secured the championship.
Our approach, titled "Document Layout Analysis on BaDLAD Dataset: A Comprehensive MViTv2 Based Approach," yielded remarkable results. We achieved the highest Dice score in the competition of 0.90396, showcasing the effectiveness of our method in accurately segmenting document layouts.
Furthermore, our team achieved a mean Average Precision (mAP) score of 56.381, which stands out as the highest among all participating teams. This accomplishment underscores the robustness and generalization of our approach across various document layouts and scenarios.
## V Discussion
### _Effect of Rotation and Flip_
We observed that both the training set and the test set contain rotated images. Initially, to accommodate this, we introduced random discrete rotations from the set \(\{0^{\circ},90^{\circ},180^{\circ},270^{\circ}\}\). However, we observed that this augmentation, as well as horizontal or vertical flips, led to poorer results. Since rotated images are rare in the test set, we chose to train our model to focus primarily on upright images. A more comprehensive approach would be to train a separate model to recognize rotations and correct them during preprocessing at inference time.
We did, however, notice a much more prevalent pattern of rotations in the dataset: since many of the documents are scans, they are often tilted at a small angle. To incorporate this variation, we augmented our images with a random rotation in the range \([-5^{\circ},5^{\circ}]\).
### _Sliced Inference_
During our evaluation of the model's inference, both on the training set and on the public test set, we observed that many small features, primarily text-boxes and paragraphs, were not recognized with adequate confidence. This observation initially led us to believe that slicing the input images into overlapping windows and running inference on each of the slices would further improve our results.
However, despite a two- to four-fold increase in inference time, we did not observe any noticeable improvement in the model's capability to recognize smaller instances. Slicing also introduced the problem of recognizing larger features, such as tables or images, in multiple fragments.
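For clarity, the sliced-inference experiment can be sketched as follows; `predictor` stands for a Detectron2 `DefaultPredictor`, and the merging of per-tile detections (the step that fragmented large tables and images) is omitted.

```python
def sliced_inference(image, predictor, tile=1024, overlap=256):
    """Run inference on overlapping tiles of a large page image.
    `predictor` is assumed to behave like detectron2's DefaultPredictor;
    edge handling and merging of duplicate detections are omitted."""
    h, w = image.shape[:2]
    step = tile - overlap
    results = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            patch = image[y:y + tile, x:x + tile]
            out = predictor(patch)["instances"].to("cpu")
            results.append((x, y, out))   # tile offsets map detections back to the page
    return results
```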
### _Resolution of Transformer Backbone_
The multiscale vision transformer backbone of the MViTv2 model can work at different resolutions. [2] The pretrained models are trained at an image resolution of \(224\times 224\). We first trained for 12 epochs on the pretrained MViTv2-B model at \(224\times 224\) resolution. We then trained the model at \(384\times 384\) resolution, initializing weights from the fine-tuned \(224\times 224\) model, and repeated this process for resolution \(512\times 512\), using the weights from the previous step. We observed that, with similar training, the \(224\times 224\) resolution model performs better.
### _Two pass Inference to Detect Missed Text-boxes_
In an attempt to recognize the smaller text-boxes and paragraphs missed during inference, we designed a two-pass inference approach. We noted that overcrowding of instances was a major factor in the model's failure to detect some features. Therefore, to identify those features, we ran inference twice. The text-boxes and paragraphs recognized in the first pass that did not overlap with any images were erased (i.e., replaced with the background colour). The resulting image was then fed to the model for a second inference, and the text-boxes recognized in this pass were added to the model's prediction result. However, this approach was found to perform worse than single-pass inference on the public test set.
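The two-pass scheme can be sketched as follows; the BaDLAD class-id mapping and the background colour are assumptions, the input is assumed to be an RGB array, and the merging of first- and second-pass predictions is left out.

```python
import numpy as np

def two_pass_inference(image, predictor, bg_color=(255, 255, 255)):
    """Sketch of the two-pass scheme: erase text-boxes/paragraphs found in the
    first pass (when they do not overlap an image region), then infer again.
    `image` is an H x W x 3 uint8 array; class ids below are assumed."""
    TEXT_CLASSES = {0, 1}   # e.g. paragraph, text-box (assumed ids)
    IMAGE_CLASS = 2         # e.g. image (assumed id)

    first = predictor(image)["instances"].to("cpu")
    masks = first.pred_masks.numpy()
    classes = first.pred_classes.numpy()
    image_area = masks[classes == IMAGE_CLASS].any(axis=0)

    erased = image.copy()
    for m, c in zip(masks, classes):
        if int(c) in TEXT_CLASSES and not np.any(m & image_area):
            erased[m] = bg_color          # replace with the background colour

    second = predictor(erased)["instances"].to("cpu")
    return first, second                  # merged downstream into one prediction set
```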
### _Effectiveness of Transformer Model_
While most other solutions explored traditional CNN-based models, including R50-FPN and YOLOv8, we explored several transformer-based architectures, including MaskDINO and MViTv2. We found that both models show improved performance after similar training iterations. The MViTv2-B model gave us a balance of performance and resource efficiency, as noted earlier, increasing accuracy compared to traditional approaches without severely increasing inference time or memory consumption.
## VI Conclusion and Future Work
We present an effective scheme to fine-tune the transformer-based MViTv2 model for Bengali Document Layout Analysis. Previously, traditional CNN-based models were applied to the dataset [1]. Our work demonstrates the viability of transformer-based vision models such as MViTv2 for Bengali Document Layout Analysis.
As noted before, handling rotation explicitly, by training a separate lightweight model to recognize rotations, would increase the model's capacity to handle arbitrarily rotated images. Another limitation of the model is its failure to detect small objects, mainly text-boxes. Copy-paste augmentation techniques have been shown to significantly improve a model's capability in this regard [4, 5]. We could not train our model with copy-paste augmentation due to a shortage of time, but we believe incorporating this augmentation would further strengthen our model and increase its robustness.
Fig. 3: Test Set Scores |
2309.15351 | Resonant contribution of the three-body decay process $\bar
B_{s}\rightarrow K^{+}K^{-} P$ in perturbation QCD | We investigate the CP violation in the decay process $\bar B_{s} \rightarrow
\phi(\rho,\omega) P \rightarrow K^{+}K^{-}P$ by considering the interference
effects of $\phi\rightarrow K^{+}K^{-}$, $\rho\rightarrow K^{+}K^{-}$ and
$\omega\rightarrow K^{+}K^{-}$ within the framework of perturbative QCD method
(P refers to $\pi$, K, $\eta$ and $\eta'$ pseudoscalar mesons, respectively).
We analyse the mixings of $\phi-\rho^{0}$, $\phi-\omega$ and $\omega-\rho^{0}$
and provide the amplitudes of the quasi-two-body decay processes. The CP
violation for $\bar B_{s} \rightarrow K^{+}K^{-} P$ decay process is obvious at
the ranges of the three vector mesons interferences. Meanwhile, the localised
CP violation can be found for comparing with the experiment results from
three-body decay process at the LHC in the near future. | Gang Lü, Chang Chang Zhang, Yan-Lin Zhao, Li-Ying Zhang | 2023-09-27T01:47:44Z | http://arxiv.org/abs/2309.15351v1 | Resonant contribution of the three-body decay process \(\bar{B}_{s}\to K^{+}K^{-}P\) in perturbation QCD
###### Abstract
We investigate the CP violation in the decay process \(\bar{B}_{s}\to\phi(\rho,\omega)P\to K^{+}K^{-}P\) by considering the interference effects of \(\phi\to K^{+}K^{-}\), \(\rho\to K^{+}K^{-}\) and \(\omega\to K^{+}K^{-}\) within the framework of perturbative QCD method (P refers to \(\pi\), K, \(\eta\) and \(\eta^{\prime}\) pseudoscalar mesons, respectively). We analyse the mixings of \(\phi-\rho^{0}\), \(\phi-\omega\) and \(\omega-\rho^{0}\) and provide the amplitudes of the quasi-two-body decay processes. The CP violation for \(\bar{B}_{s}\to K^{+}K^{-}P\) decay process is obvious at the ranges of the three vector mesons interferences. Meanwhile, the localised CP violation can be found for comparing with the experiment results from three-body decay process at the LHC in the near future.
## I Introduction
CP violation is a fascinating phenomenon in particle physics that has puzzled us for decades. The Standard Model (SM) of particle physics provides a framework for understanding CP violation, but there are still many unanswered questions [1]. One area of research focuses on the search for new sources of CP violation beyond the Cabibbo-Kobayashi-Maskawa (CKM) matrix. This involves studying rare decays and interactions between particles to look for deviations from the predictions of the Standard Model. Another approach is to study CP violation in different types of particles, such as neutrinos or mesons. Despite these efforts, much remains unknown about CP violation.
As early as 2012, the LHCb Collaboration confirmed the existence of CP violation in studies of some three-body decays of B mesons and found that the local phase space of the \(\bar{B}^{\pm}\to\pi^{+}\pi^{-}\pi^{\pm}\) decay channels exhibited large direct CP violation, which was an intriguing phenomenon at the time [2; 3]. This phenomenon was later explained by intermediate-state resonances between mesons of different isospin. The \(\bar{B}^{\pm}\to\pi^{+}\pi^{-}\pi^{\pm}\) decay process was studied using the \(\rho-\omega\) mixed resonance, and significant CP violation was found at the invariant mass m(\(\pi^{+}\pi^{-}\))=0.77 GeV, coinciding in position and magnitude with the observed local CP violation [4]. There is no doubt that the three-body decays of heavy mesons are more complex than the two-body case, and one reason is that they receive both resonant and non-resonant contributions. Existing experimental results show that CP asymmetry may be more pronounced in some local regions of phase space. For example, LHCb observed large asymmetries in local regions of \(B^{\pm}\to K^{\pm}\pi^{+}\pi^{-}\) and \(B^{\pm}\to K^{\pm}K^{+}K^{-}\): in the region \(0.08<m_{\pi^{+}\pi^{-}}^{2}<0.66\mbox{GeV}^{2}/c^{4}\) and \(m_{K^{\pm}\pi^{\mp}}^{2}<15\mbox{GeV}^{2}/c^{4}\) for \(B^{\pm}\to K^{\pm}\pi^{+}\pi^{-}\) decays, and in the region \(1.2<m_{K^{+}K^{-}\rm low}^{2}<2.0\mbox{GeV}^{2}/c^{4}\) and \(m_{K^{+}K^{-}\rm high}^{2}<15\mbox{GeV}^{2}/c^{4}\) for \(B^{\pm}\to K^{\pm}K^{+}K^{-}\) decays [5]. These pronounced local CP asymmetries are interesting. Currently, the phenomenon of CP asymmetry in three-body decays of \(B_{s}\) mesons remains relatively unexplored, with limited research from both theoretical and experimental perspectives.
\(1.2<m_{K^{+}K^{-}\rm low}^{2}<2.0\mbox{GeV}^{2}/c^{4}\) and \(m_{K^{+}K^{-}\rm high}^{2}<15\mbox{GeV}^{2}/c^{4}\)[5]. These local apparent CP asymmetries are interesting. Currently, the phenomenon of CP asymmetry in the three-body decay process of \(B_{s}\) mesons remains relatively unexplored, with limited research from both theoretical and experimental perspectives.
This paper aims to calculate the CP violation of the \(\bar{B}_{s}\to K^{+}K^{-}P\) decay process within the perturbative QCD method (PQCD). The reason is that the Sudakov factor in PQCD effectively suppresses the non-perturbative contribution and absorbs the non-perturbative part into the universal hadronic wave function [6]. Besides, this method is self-consistent for the two-body non-leptonic decay processes of the B meson and has been shown to be consistent with the large CP violation found in experiment [7]. Indeed, the corresponding two-body decay formalism of the B meson is well established and has been extended to various three-body decay processes, which we can treat with the method of quasi-two-body decays [8; 9]. In recent years, an increasing number of analyses based on precise measurements of branching ratios and CP violation in three-body decay processes have been carried out by BaBar [10], Belle II [11], CLEO [12], and LHCb [13], which provides a great platform to test the Standard Model (SM) and to search for new physics signals. In this paper, we take the quasi-two-body approach to calculate the CP violation of the \(\bar{B}_{s}\to K^{+}K^{-}P\) process under the mixing mechanism of \(\phi\to K^{+}K^{-}\), \(\rho^{0}\to K^{+}K^{-}\) and \(\omega\to K^{+}K^{-}\). The motivation to explore the resonance effect among the three particles arises from the adjacent masses of \(\phi(1020)\), \(\omega(782)\) and \(\rho^{0}(770)\). By incorporating information on \(K^{+}K^{-}\) production and taking into account the constraints imposed by isospin symmetry, the quark model and the OZI rule, it becomes feasible to disentangle amplitudes with isospin \(I=1\) and \(I=0\) components. The \(\phi(1020)\) and \(\omega(782)\) match the isospin \(I=0\) component, while the \(I=1\) component derives from \(\rho^{0}(770)\). The ideal (isospin) field of the intermediate states is transformed into a computable physical field through the application of a unitary matrix in this paper. Additionally, we investigate localized CP violation within the mixed resonance region to facilitate meaningful future comparisons with experimental results.
We present our work in six distinct parts. The mechanism of three vector mesons mixing is introduced in section 2. In Section 3, we initially investigate CP violation arising from the involvement of the mixing mechanism in the decay process \(\bar{B}_{s}\to\phi\) (\(\rho^{0},\omega\)) \(P\to K^{+}K^{-}P\). Subsequently, we present a formalism for local CP violation. In Section 4, we introduce the amplitude formalism within the framework of perturbative QCD (PQCD) method, along with the fundamental functions and associated parameters. Additionally, we provide an evaluation of both the magnitude and integrated form of CP violation. The analysis of data results can be found in Section 5. Finally, we engage in a comprehensive discussion and provide a concise summary of our findings.
## II The mechanism of three vector mesons mixing
Positron-electron pairs annihilate into a photon, which is polarized in the vacuum to form the mesons \(\phi(1020)\), \(\rho^{0}(770)\) and \(\omega(782)\); these mesons can in turn decay into a \(K^{+}K^{-}\) pair. Meanwhile, the momentum can also be transferred through the VMD model [14; 15]. Since the intermediate state particle is an unphysical state, we need to convert it from an isospin field into a physical field through the matrix R [16]. Then we can obtain the physical states of \(\phi\), \(\rho^{0}\) and \(\omega\). It is worth mentioning that there is no \(\phi-\rho^{0}-\omega\) mixing in the physical state, and we neglect the contribution of the higher-order terms [17]. The physical states \(\phi-\rho^{0}-\omega\) can be expressed as linear combinations of the isospin states \(\phi_{I}-\rho_{I}^{0}-\omega_{I}\). The relationship can be represented by the following matrix:
\[\left(\begin{array}{c}\rho^{0}\\ \omega\\ \phi\end{array}\right)=R(s)\left(\begin{array}{c}\rho_{I}^{0}\\ \omega_{I}\\ \phi_{I}\end{array}\right) \tag{1}\]
where
\[R=\left(\begin{array}{ccc}<\rho_{I}|\rho>&<\omega_{I}|\rho>&<\phi_{I}|\rho>\\ <\rho_{I}|\omega>&<\omega_{I}|\omega>&<\phi_{I}|\omega>\\ <\rho_{I}|\phi>&<\omega_{I}|\phi>&<\phi_{I}|\phi>\end{array}\right). \tag{2}\]
The change between the physical field and the isospin field in the intermediate state of the decay process is related by the matrices R. The off-diagonal elements of R present the information of \(\phi-\rho^{0}-\omega\) mixing. Based on the isospin representation of \(\phi_{I}\), \(\rho_{I}\) and \(\omega_{I}\), the isospin vector \(|I,I_{3}>\) can be constructed, where \(I_{3}\) denotes the third component of isospin. The variables i and j are employed to denote the physical state of the particle and the isospin basis vector, respectively. According to the orthogonal normalization relationship, we can derive: \(\sum_{j}|j><j|=\sum_{j_{I}}|j_{I}><j_{I}|=I\), and \(<j\,|i>=<j_{I}|\,i_{I}>=\delta_{ji}\). We use the notation \(F_{V_{i}V_{j}}\) to denote the mixing parameter, where \(V_{i}\) and \(V_{j}\) represent one of the three vector particles. Then, the transformation matrix R can be converted as follows:
\[R=\left(\begin{array}{ccc}1&-F_{\rho\omega}(s)&-F_{\rho\phi}(s)\\ F_{\rho\omega}(s)&1&-F_{\omega\phi}(s)\\ F_{\rho\phi}(s)&F_{\omega\phi}(s)&1\end{array}\right). \tag{3}\]
From the translation of the two representations, the physical states can be written as
\[\phi =F_{\rho\phi}(s)\rho_{I}^{0}+F_{\omega\phi}(s)\omega_{I}+\phi_{I},\] \[\omega =F_{\rho\omega}(s)\rho_{I}^{0}+\omega_{I}-F_{\omega\phi}(s)\phi_{ I}, \tag{4}\] \[\rho^{0} =\rho_{I}^{0}-F_{\rho\omega}(s)\omega_{I}-F_{\rho\phi}(s)\phi_{ I}.\]
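As a quick consistency check, the matrix form (3) indeed reproduces the combinations (4), and R deviates from an orthogonal matrix only at second order in the small mixing parameters, which is the order neglected here. The following Python/SymPy sketch (symbol names chosen freely for illustration) verifies both statements:

```python
import sympy as sp

F_rw, F_rp, F_wp = sp.symbols("F_rho_omega F_rho_phi F_omega_phi")
rho_I, omega_I, phi_I = sp.symbols("rho_I omega_I phi_I")

# Transformation matrix of Eq. (3)
R = sp.Matrix([[1,    -F_rw, -F_rp],
               [F_rw,  1,    -F_wp],
               [F_rp,  F_wp,  1   ]])

# Applying R to the isospin states reproduces the physical states of Eq. (4)
print(R * sp.Matrix([rho_I, omega_I, phi_I]))

# R * R^T equals the identity up to terms quadratic in the small mixing parameters
print(R * R.T)
```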
The relationship between the mixing parameters \(\Pi_{V_{i}V_{j}}\) and \(F_{V_{i}V_{j}}\) can be deduced from the subsequent equation:
\[F_{\rho\omega} =\frac{\Pi_{\rho\omega}}{s_{\rho}-s_{\omega}},\] \[F_{\rho\phi} =\frac{\Pi_{\rho\phi}}{s_{\rho}-s_{\phi}}, \tag{5}\] \[F_{\omega\phi} =\frac{\Pi_{\omega\phi}}{s_{\omega}-s_{\phi}}.\]
The relationship of \(F_{V_{i}V_{j}}\)=\(-F_{V_{j}V_{i}}\) can be found. The inverse propagator of the vector meson, denoted as \(s_{V}\) (\(V=\phi,\rho\), or \(\omega\)), is defined such that \(s_{V}=s-m_{V}^{2}+\mathrm{i}m_{V}\Gamma_{V}\). The variables \(m_{V}\) and \(\Gamma_{V}\) represent the mass and decay rate of
the vector mesons, respectively. Meanwhile, \(\sqrt{s}\) denotes the invariant mass of the \(K^{+}K^{-}\) pairs.
In this paper, the momentum dependence of the mixing parameters \(\Pi_{V_{i}V_{j}}\) of \(V_{i}V_{j}\) mixing is introduced to obtain an explicit s dependence. The mixing parameter \(\Pi_{\rho\omega}=-4470\pm 250\pm 160-i(5800\pm 2000\pm 1100)\)MeV\({}^{2}\) near the \(\rho\) meson was recently determined precisely by Wolfe and Maltman [18; 19; 20]. The mixing parameter \(\Pi_{\omega\phi}=19000+i(2500\pm 300)\)MeV\({}^{2}\) is obtained near the \(\phi\) meson, and the mixing parameter \(\Pi_{\phi\rho}=720\pm 180-i(870\pm 320)\)MeV\({}^{2}\) is also obtained near the \(\phi\) meson [21]. Then we define
\[\widetilde{\Pi}_{\rho\omega}=\frac{s_{\rho}\Pi_{\rho\omega}}{s_{\rho}-s_{ \omega}},\ \ \widetilde{\Pi}_{\rho\phi}=\frac{s_{\rho}\Pi_{\rho\phi}}{s_{\rho}-s_{\phi}}, \ \ \widetilde{\Pi}_{\phi\omega}=\frac{s_{\phi}\Pi_{\phi\omega}}{s_{\phi}-s_{ \omega}}. \tag{6}\]
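For orientation, the momentum dependence of the mixing parameters in Eq. (5) can be evaluated directly from the quoted central values of \(\Pi_{V_{i}V_{j}}\). In the Python sketch below, the vector-meson masses and widths are approximate PDG-like numbers assumed only for illustration (they are not quoted above), and \(\Pi_{\rho\phi}\) is taken equal to \(\Pi_{\phi\rho}\):

```python
import numpy as np

# Approximate vector-meson masses and widths in MeV (assumed PDG-like inputs for illustration)
m   = {"rho": 775.26, "omega": 782.66, "phi": 1019.46}
gam = {"rho": 149.1,  "omega": 8.68,   "phi": 4.25}

# Central values of the mixing parameters quoted in the text, in MeV^2
Pi_rho_omega = -4470.0 - 5800.0j
Pi_rho_phi   =   720.0 -  870.0j      # taken equal to Pi_phi_rho
Pi_omega_phi = 19000.0 + 2500.0j

def s_V(name, s):
    """Inverse propagator s_V = s - m_V^2 + i m_V Gamma_V."""
    return s - m[name]**2 + 1j * m[name] * gam[name]

def F_mix(sqrt_s):
    """Momentum-dependent mixing parameters of Eq. (5) at invariant mass sqrt(s) [MeV]."""
    s = sqrt_s**2
    s_rho, s_omega, s_phi = s_V("rho", s), s_V("omega", s), s_V("phi", s)
    return {"rho-omega": Pi_rho_omega / (s_rho - s_omega),
            "rho-phi":   Pi_rho_phi   / (s_rho - s_phi),
            "omega-phi": Pi_omega_phi / (s_omega - s_phi)}

for sqrt_s in (770.0, 782.0, 1020.0):   # near the rho, omega and phi peaks
    print(sqrt_s, {k: round(abs(v), 4) for k, v in F_mix(sqrt_s).items()})
```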
## III CP violation in \(\bar{B}_{s}\rightarrow\phi\) (\(\rho^{0}\), \(\omega\)) \(P\to K^{+}K^{-}P\) decay process
### The resonance effect from \(V\to K^{+}K^{-}\)
We present decay diagrams (a)-(i) of the \(\bar{B}_{s}\rightarrow\phi\) (\(\rho^{0}\), \(\omega\)) \(P\to K^{+}K^{-}P\) process in Fig.1, aiming to provide a more comprehensive understanding of the mixing mechanism.
In the above decay diagrams, the decay processes depicted in (a), (d), and (g) represent direct decay modes, where \(K^{+}K^{-}\) are produced through \(\phi\), \(\rho^{0}\), and \(\omega\) respectively. The quasi-two-body approach employed in this study is evident from the aforementioned diagrams. Compared to the direct decay processes depicted in diagrams (a), (d), and (g) of Fig.1, the \(K^{+}K^{-}\) pair can also be generated through a distinct mixing mechanism. The black dots in the figure represent the resonance effect between these two mesons, denoted by the mixing parameter \(\Pi_{V_{i}V_{j}}\). Although the contribution from this mixing mechanism is relatively small compared to other diagrams in Fig.1, it must be taken into consideration.
The amplitude of the \(\bar{B}_{s}\rightarrow\phi\)\(\left(\rho^{0},\omega\right)\)\(P\to K^{+}K^{-}P\) decay channel can be characterized in the following manner:
\[A=\left\langle K^{+}K^{-}P\left|H^{T}\right|\bar{B}_{s}\right\rangle+\left\langle K ^{+}K^{-}P\left|H^{P}\right|\bar{B}_{s}\right\rangle, \tag{7}\]
The quantities \(\left\langle K^{+}K^{-}P\left|H^{P}\right|\bar{B}_{s}\right\rangle\) and \(\left\langle K^{+}K^{-}P\left|H^{T}\right|\bar{B}_{s}\right\rangle\) represent the amplitudes associated with penguin-level and tree-level contributions, respectively. The propagator of the intermediate vector meson can be transformed from the diagonal matrix to the physical state after applying the R matrix transformation. Neglecting higher order terms, the amplitudes can be as demonstrated below:
\[\begin{split}\left\langle K^{+}K^{-}P\left|H^{T}\right|\bar{B}_{ s}\right\rangle=&\frac{g_{\phi}}{s_{\phi}}t_{\phi}+\frac{g_{\rho}}{s_{ \rho}s_{\phi}}\widetilde{\Pi}_{\rho\phi}t_{\phi}+\frac{g_{\omega}}{s_{\omega} s_{\phi}}\widetilde{\Pi}_{\omega\phi}t_{\phi}+\frac{g_{\rho}}{s_{\rho}}t_{ \rho}+\frac{g_{\phi}}{s_{\phi}s_{\rho}}\widetilde{\Pi}_{\phi\rho}t_{\rho}\\ &+\frac{g_{\omega}}{s_{\omega}s_{\rho}}\widetilde{\Pi}_{\omega \rho}t_{\rho}+\frac{g_{\omega}}{s_{\omega}}t_{\omega}+\frac{g_{\phi}}{s_{\phi }s_{\omega}}\widetilde{\Pi}_{\phi\omega}t_{\omega}+\frac{g_{\rho}}{s_{\rho}s_ {\omega}}\widetilde{\Pi}_{\rho\omega}t_{\omega},\end{split} \tag{8}\]
\[\begin{split}\left\langle K^{+}K^{-}P\left|H^{P}\right|\bar{B}_{ s}\right\rangle=&\frac{g_{\phi}}{s_{\phi}}p_{\phi}+\frac{g_{\rho}}{s_{ \rho}s_{\phi}}\widetilde{\Pi}_{\rho\phi}p_{\phi}+\frac{g_{\omega}}{s_{\omega} s_{\phi}}\widetilde{\Pi}_{\omega\phi}p_{\phi}+\frac{g_{\rho}}{s_{\rho}}p_{ \rho}+\frac{g_{\phi}}{s_{\phi}s_{\rho}}\widetilde{\Pi}_{\phi\rho}p_{\rho}\\ &+\frac{g_{\omega}}{s_{\omega s_{\rho}}}\widetilde{\Pi}_{\omega \rho}p_{\rho}+\frac{g_{\omega}}{s_{\omega}}p_{\omega}+\frac{g_{\phi}}{s_{\phi }s_{\omega}}\widetilde{\Pi}_{\phi\omega}p_{\omega}+\frac{g_{\rho}}{s_{\rho}s_ {\omega}}\widetilde{\Pi}_{\rho\omega}p_{\omega},\end{split} \tag{9}\]
where the tree-level (penguin-level) amplitudes \(t_{\rho}\left(p_{\rho}\right)\), \(t_{\omega}\left(p_{\omega}\right)\), and \(t_{\phi}\left(p_{\phi}\right)\) correspond to the decay processes \(\bar{B}_{s}\rightarrow\rho^{0}P\), \(\bar{B}_{s}\rightarrow\omega P\) and \(\bar{B}_{s}\rightarrow\phi P\), respectively. Here, \(s_{V}\) represents the inverse propagator of the vector meson V [22; 23; 24]. Moreover, \(g_{V}\) represents the coupling constant derived from the decay process of \(V\to K^{+}K^{-}\) and can be expressed as \(\sqrt{2}g_{\rho k^{+}k^{-}}=\sqrt{2}g_{\omega k^{+}k^{-}}=-g_{\phi k^{+}k^{-} }=4.54\)[25].
The differential parameter for CP asymmetry can be expressed as follows:
\[A_{CP}=\frac{\left|A\right|^{2}-\left|\overline{A}\right|^{2}}{\left|A\right|^ {2}+\left|\overline{A}\right|^{2}}. \tag{10}\]
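Equation (10) is nonzero only if at least two interfering amplitudes differ both in their strong phase (supplied here by the resonant propagators) and in their weak phase (from the CKM factors). A toy SymPy sketch with a single interfering pair, where the magnitude \(r\) and the phases are placeholders rather than the PQCD amplitudes of this work, makes this explicit:

```python
import sympy as sp

r, delta, phi = sp.symbols("r delta phi", real=True)

# Toy two-amplitude interference: relative magnitude r, strong phase delta, weak phase phi.
A    = 1 + r * sp.exp(sp.I * (delta + phi))
Abar = 1 + r * sp.exp(sp.I * (delta - phi))   # the weak (CKM) phase changes sign under CP

A2    = sp.expand_complex(A * sp.conjugate(A))
Abar2 = sp.expand_complex(Abar * sp.conjugate(Abar))

Acp = sp.simplify((A2 - Abar2) / (A2 + Abar2))
print(Acp)   # equivalent to -2*r*sin(delta)*sin(phi) / (1 + r**2 + 2*r*cos(delta)*cos(phi))
```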
### The localised CP violation of \(A_{CP}^{\Omega}\)
In this paper, we perform the integral calculation of A\({}_{CP}\) to facilitate future experimental comparisons. For the decay process \(\bar{B}_{s}\rightarrow\phi P\), the amplitude is given by \(M_{\bar{B}_{s}\rightarrow\phi P}^{\lambda}=\alpha p_{\bar{B}_{s}}\cdot\epsilon^{*}(\lambda)\), where \(p_{\bar{B}_{s}}\) represents the momentum of the \(\bar{B}_{s}\) meson, \(\epsilon\) denotes the polarization vector of \(\phi\) and \(\lambda\) corresponds to its polarization. The parameter \(\alpha\) remains independent of \(\lambda\). Similarly, in the decay process \(\phi\to K^{+}K^{-}\), we can express \(M_{\phi\to K^{-}K^{+}}^{\lambda}=g_{\phi}\epsilon(\lambda)\cdot\left(p_{1}-p_{2}\right)\), where \(p_{1}\) and \(p_{2}\) denote the momenta of the \(K^{+}\) and \(K^{-}\) particles produced from \(\phi\), respectively. Here, the parameter \(g_{\phi}\) represents an effective coupling constant for \(\phi\to K^{+}K^{-}\). Regarding the dynamics of meson decay, it is observed that the polarization vector of a vector meson satisfies \(\sum_{\lambda=0,\pm 1}\epsilon_{\mu}^{\lambda}(p)(\epsilon_{\nu}^{\lambda}(p))^{*}=-(g_{\mu\nu}-p_{\mu}p_{\nu}/m_{V}^{2})\). As a result, we
obtain the total amplitude for the decay process \(\bar{B}_{s}\to\phi P\to K^{+}K^{-}P\)[26; 27; 4]:
\[\begin{split} A&=\alpha p_{\bar{B}_{s}}^{\mu}\frac{ \sum_{\lambda}\epsilon_{\mu}^{\mu}(\lambda)\epsilon_{\nu}(\lambda)}{s_{\phi}}g_ {\phi kk}\left(p_{1}-p_{2}\right)^{\nu}\\ &=\frac{g_{\phi kk}\alpha}{s_{\phi}}\cdot p_{\bar{B}_{s}}^{\mu} \left[g_{\mu\nu}-\frac{\left(p_{1}+p_{2}\right)_{\mu}\left(p_{1}+p_{2}\right)_ {\nu}}{s}\right]\left(p_{1}-p_{2}\right)^{\nu}\\ &=\frac{g_{\phi kk}}{s_{\phi}}\cdot\frac{M_{\bar{B}_{s}\to\phi \pi^{0}}^{\lambda}}{p_{\bar{B}_{s}}\cdot\epsilon^{*}}\cdot\left(\Sigma-s^{ \prime}\right)\\ &=\left(\Sigma-s^{\prime}\right)\cdot\mathcal{A}.\end{split} \tag{11}\]
The high (\(\sqrt{s^{\prime}}\)) and low \(\sqrt{s}\) ranges are defined for calculating the invariant mass of \(K^{-}K^{+}\). By setting a fixed value for \(s\), we can determine an appropriate value for \(s^{\prime}\) that fulfills the equation \(\Sigma=\frac{1}{2}\left(s^{\prime}_{\text{max}}+s^{\prime}_{\text{min}}\right)\), where \(s^{\prime}_{\text{max}}(s^{\prime}_{\text{min}})\) denotes respectively the maximum (minimum) value.
Utilizing the principles of three-body kinematics, we can deduce the local CP asymmetry for the decay \(\bar{B}_{s}\to K^{+}K^{-}P\) within a specific range of invariant mass:
\[A_{CP}^{\Omega}=\frac{\int_{s_{1}}^{s_{2}}\,\,\mathrm{d}s\int_{s_{1}}^{s_{2}^{ \prime}}\mathrm{d}s^{\prime}\left(\Sigma-s^{\prime}\right)^{2}\left(|\mathcal{ A}|^{2}-|\overline{\mathcal{A}}|^{2}\right)}{\int_{s_{1}}^{s_{2}}\,\,\mathrm{d}s \int_{s_{1}}^{s_{2}^{\prime}}\mathrm{d}s^{\prime}\left(\Sigma-s^{\prime}\right) ^{2}\left(|\mathcal{A}|^{2}+|\overline{\mathcal{A}}|^{2}\right)}. \tag{12}\]
Our calculation takes into account the dependence of \(\Sigma=\frac{1}{2}\left(s^{\prime}_{\text{max}}+s^{\prime}_{\text{min}}\right)\) on \(s^{\prime}\). We assume that \(s^{\prime}_{\text{max}}>s^{\prime}>s^{\prime}_{\text{min}}\) represents the integration interval of the high invariant mass of the \(K^{+}K^{-}\) meson pair, and that \(\int_{s^{\prime}_{1}}^{s^{\prime}_{2}}\mathrm{d}s^{\prime}(\Sigma-s^{\prime})^{2}\) represents a factor that depends on \(s^{\prime}\). The correlation between \(\Sigma\) and \(s^{\prime}\) can be easily determined through kinematic analysis, and \(s^{\prime}\) only varies on a small scale. Therefore, we can treat \(\Sigma\) as a constant. This allows us to cancel the factor \(\int_{s^{\prime}_{1}}^{s^{\prime}_{2}}\mathrm{d}s^{\prime}(\Sigma-s^{\prime})^{2}\) in both the numerator and the denominator, so that \(A_{CP}^{\Omega}\) no longer depends on the high invariant mass of the positive and negative particles.
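Once the reduced amplitudes \(\mathcal{A}(s)\) and \(\overline{\mathcal{A}}(s)\) are known, Eq. (12) therefore reduces to a one-dimensional integration over the low invariant mass. The Python sketch below illustrates the procedure with placeholder Breit-Wigner-like amplitudes; the actual PQCD amplitudes of Sec. IV would be inserted in their place:

```python
import numpy as np

def local_acp(amp, amp_bar, s_low, s_high, n=4001):
    """Localized CP asymmetry of Eq. (12) after the common (Sigma - s')^2 factor cancels.
    amp, amp_bar: callables returning the complex reduced amplitudes versus s = m^2(K+K-).
    The grid is uniform, so the integration step cancels in the ratio."""
    s = np.linspace(s_low, s_high, n)
    a2, abar2 = np.abs(amp(s))**2, np.abs(amp_bar(s))**2
    return np.sum(a2 - abar2) / np.sum(a2 + abar2)

# Toy example: a phi-like Breit-Wigner interfering with a flat term whose weak phase flips.
m_phi, gam_phi = 1.01946, 0.00425                                  # GeV, illustrative values
bw = lambda s: 1.0 / (s - m_phi**2 + 1j * m_phi * gam_phi)
amp     = lambda s: bw(s) + 0.3 * np.exp(1j * (0.5 + 0.4))
amp_bar = lambda s: bw(s) + 0.3 * np.exp(1j * (0.5 - 0.4))
print(local_acp(amp, amp_bar, 0.98**2, 1.06**2))
```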
## IV The amplitudes of quasi-two-body decay processes within the framework of perturbative QCD (PQCD)
### Formulation of calculations
The three-body decay process is accompanied by intricate and multifaceted dynamical mechanisms. The perturbative QCD (PQCD) method is known for its efficacy in handling perturbative corrections; it has been successfully applied to two-body non-leptonic decay processes and holds promise for quasi-two-body decay processes as well. In the framework of PQCD, within the rest frame of the heavy B meson, the decay process involves the production of two light mesons with significantly large momenta that exhibit rapid motion. The dominance of hard interactions in this decay amplitude arises because there is insufficient time for exchanging soft gluons with the final-state mesons. Given the high velocity of these final-state mesons, a hard gluon imparts momentum to the light spectator quark within the B meson, resulting in the formation of a rapidly moving final-state meson. Consequently, this hard interaction is described by six-quark operators. The nonperturbative dynamics are encapsulated within the meson wave functions, which can be extracted from experimental measurements. On the other hand, employing perturbation theory
allows for computation of this aforementioned hard contribution. Quasi-two-body decay can be computed by defining the intermediate state of decay.
By employing the quasi-two-body decay method, the total amplitude of \(\bar{B}_{s}\rightarrow\phi\) (\(\rho^{0}\), \(\omega\)) \(\pi^{0}\to K^{+}K^{-}\pi^{0}\) is composed of two components: \(\bar{B}_{s}\rightarrow\phi\) (\(\rho^{0}\), \(\omega\)) \(\pi^{0}\) and \(\phi\) (\(\rho^{0}\), \(\omega\)) \(\to K^{+}K^{-}\). In this study, we illustrate the methodology of the quasi-two-body decay process using the example of \(\bar{B}_{s}\rightarrow\phi\pi^{0}\to K^{+}K^{-}\pi^{0}\), based on the matrix elements involving \(V_{tb}\), \(V_{ts}^{*}\) and \(V_{ub}\), \(V_{us}^{*}\).
\[\begin{split}\sqrt{2}A\left(\bar{B}_{s}\rightarrow\pi^{0}\phi \left(\phi\to K^{+}K^{-}\right)\right)=&\frac{G_{F}p_{\bar{B}_{ s}}\cdot\sum_{\lambda=0,\pm 1}\epsilon(\lambda)g_{\phi}\epsilon^{*}(\lambda) \cdot\left(p_{k^{+}}-p_{k^{-}}\right)}{\sqrt{2}s_{\phi}}\\ &\times\left\{V_{ub}V_{us}^{*}\left[f_{\pi}F_{\bar{B}_{s} \rightarrow\phi}^{LL}(a_{2})+M_{\bar{B}_{s}\rightarrow\phi}^{LL}(C_{2}) \right]\right.\\ &\left.-V_{tb}V_{ts}^{*}\left[f_{\pi}F_{\bar{B}_{s}\rightarrow\phi }^{LL}\left(\frac{3}{2}a_{9}-\frac{3}{2}a_{7}\right)+M_{\bar{B}_{s}\rightarrow \phi}^{LL}\left(\frac{3}{2}C_{8}+\frac{3}{2}C_{10}\right)\right]\right\},\end{split} \tag{13}\]
where \(p_{\bar{B}_{s}}\), \(p_{k^{+}}\) and \(p_{k^{-}}\) are the momenta of \(\bar{B}_{s}\), \(K^{+}\) and \(K^{-}\), respectively. \(C_{i}\) (\(a_{i}\)) are the Wilson coefficients (associated combinations of Wilson coefficients), and \(\epsilon\) is the polarization vector of the vector meson. \(G_{F}\) is the Fermi constant. \(f_{\pi}\) refers to the decay constant of the \(\pi\) meson [28]. Besides, \(F_{\bar{B}_{s}\rightarrow\phi}^{LL}\) and \(M_{\bar{B}_{s}\rightarrow\phi}^{LL}\) represent the factorizable and non-factorizable emission diagrams, while \(F_{ann}^{LL}\) and \(M_{ann}^{LL}\) represent the factorizable and non-factorizable annihilation diagrams. \(LL\), \(LR\), and \(SP\) correspond to three kinds of current structures [6].
The additional representations of the three-body decay amplitudes that necessitate consideration for calculating CP violation through the mixed mechanism in this paper are as follows:
\[\begin{split} 2A\left(\bar{B}_{s}^{0}\rightarrow\,\rho^{0} \left(\rho^{0}\to K^{+}K^{-}\right)\pi^{0}\right)=&\frac{G_{F}p _{\bar{B}_{s}^{0}}\cdot\sum_{\lambda=0,\pm 1}\epsilon(\lambda)g_{\rho} \epsilon^{*}(\lambda)\cdot\left(p_{k^{+}}-p_{k^{-}}\right)}{\sqrt{2}s_{\rho}} \\ &\times\left\{V_{ub}V_{us}^{*}\left[f_{B_{s}}F_{ann}^{LL}(a_{2})+ M_{ann}^{LL}(C_{2})+f_{B_{s}}F_{ann}^{LL^{\prime}}(a_{2})+M_{ann}^{LL^{\prime}}(C_{2}) \right]\right.\\ &-V_{tb}V_{ts}^{*}\left[f_{B_{s}}F_{ann}^{LL}\left(a_{3}+a_{9} \right)\,-f_{B_{s}}F_{ann}^{LR}\left(a_{5}+a_{7}\right)+M_{ann}^{LL}\left(C_{4 }+C_{10}\right)\right.\\ &-M_{ann}^{SP}\left(C_{6}+C_{8}\right)+\left[\pi^{+}\leftrightarrow \rho^{-}\right]+f_{B_{s}}F_{ann}^{LL^{\prime}}\left(a_{3}+a_{9}\right)-f_{B_{s }}F_{ann}^{LR^{\prime}}\left(a_{5}+a_{7}\right)\\ &\left.\left.+M_{ann}^{LL^{\prime}}\left(C_{4}+C_{10}\right)-M_{ann }^{SP^{\prime}}\left(C_{6}+C_{8}\right)+\left[\rho^{+}\leftrightarrow\pi^{-} \right]\right]\right\}.\end{split} \tag{14}\]
\[\begin{split} 2A\left(\bar{B}_{s}^{0}\rightarrow\pi^{0}\omega \left(\omega\to K^{+}K^{-}\right)\right)=&\frac{G_{F}p _{\bar{B}_{s}^{0}}\cdot\sum_{\lambda=0,\pm 1}\epsilon(\lambda)g_{\omega} \epsilon^{*}(\lambda)\cdot\left(p_{k^{+}}-p_{k^{-}}\right)}{\sqrt{2}s_{\omega} }\\ &\times\left\{V_{ub}V_{us}^{*}M_{ann}^{LL}\left(c_{2}\right)-V_{ tb}V_{ts}^{*}\left[M_{ann}^{LL}\left(\frac{3}{2}c_{10}\right)-M_{ann}^{SP}\left( \frac{3}{2}c_{8}\right)+\left[\pi^{0}\leftrightarrow\omega\right]\right]\right\}. \end{split} \tag{15}\]
\[A\left(\bar{B}_{s}^{0}\to K^{0}\phi\left(\phi\to K^{+}K^{-} \right)\right)= \frac{G_{FPB_{s}^{0}}\cdot\sum_{\lambda=0,\pm 1}\epsilon(\lambda)g_{ \phi}\epsilon^{*}(\lambda)\cdot\left(p_{k^{+}}-p_{k^{-}}\right)}{\sqrt{2}s_{ \phi}} \tag{16}\] \[\times\left\{-V_{tb}V_{td}^{*}\left[f_{\phi}F_{B_{s}\to K}^{ LL}\left(a_{3}+a_{5}-\frac{1}{2}a_{7}-\frac{1}{2}a_{9}\right)+f_{K}F_{B_{s}\to\phi} ^{LL}\left(a_{4}-\frac{1}{2}a_{10}\right)\right.\right.\] \[\left.\left.-f_{K}F_{B_{s}\to\phi}^{SP}\left(a_{6}-\frac{1}{2}a_{8} \right)+M_{B_{s}\to K}^{LL}\left(C_{4}-\frac{1}{2}C_{10}\right)+M_{B_{s}\to \phi}^{LL}\left(C_{3}-\frac{1}{2}C_{9}\right)\right.\right.\] \[\left.\left.-M_{B_{s}\to K}^{SP}\left(C_{6}-\frac{1}{2}C_{8} \right)-M_{B_{s}\to\phi}^{LR}\left(C_{5}-\frac{1}{2}C_{7}\right)+f_{B_{s}}F_{ ann}^{LL}\left(a_{4}-\frac{1}{2}a_{10}\right)\right.\right.\] \[\left.\left.-f_{B_{s}}F_{ann}^{SP}\left(a_{6}-\frac{1}{2}a_{8} \right)+M_{ann}^{LL}\left(C_{3}-\frac{1}{2}C_{9}\right)-M_{ann}^{LR}\left(C_{5 }-\frac{1}{2}C_{7}\right)\right]\right\}.\]
\[\sqrt{2}A\left(\bar{B}_{s}^{0}\to K^{0}\rho\left(\rho\to K^{+}K^{-} \right)\right)= \frac{G_{FP}p_{\bar{B}_{s}^{0}}\cdot\sum_{\lambda=0,\pm 1}\epsilon( \lambda)g_{\phi}\epsilon^{*}(\lambda)\cdot\left(p_{k^{+}}-p_{k^{-}}\right)}{ \sqrt{2}s_{\rho}} \tag{17}\] \[\times\left\{V_{ub}V_{ud}^{*}\left[f_{\rho}F_{B_{s}\to K}^{ LL}\left(a_{2}\right)+M_{B_{s}\to K}^{LL}\left(C_{2}\right)\right]-V_{tb}V_{td}^{*} \left[M_{B_{s}\to K}^{LR}\left(-C_{5}+\frac{1}{2}C_{7}\right)\right.\right.\] \[\left.\left.+f_{\rho}F_{B_{s}\to K}^{LL}\left(-a_{4}+\frac{3}{2}a_{7}+ \frac{1}{2}a_{10}+\frac{3}{2}a_{9}\right)-M_{B_{s}\to K}^{SP}\left(\frac{3}{2} C_{8}\right)\right.\right.\] \[\left.\left.+M_{B_{s}\to K}^{LL}\left(-C_{3}+\frac{1}{2}C_{9}+ \frac{3}{2}C_{10}\right)+f_{B_{s}}F_{ann}^{LL}\left(-a_{4}+\frac{1}{2}a_{10}\right)\right.\right.\] \[\left.\left.+f_{B_{s}}F_{ann}^{SP}\left(-a_{6}+\frac{1}{2}a_{8} \right)+M_{ann}^{LL}\left(-C_{3}+\frac{1}{2}C_{9}\right)+M_{ann}^{LR}\left(-C_ {5}+\frac{1}{2}C_{7}\right)\right]\right\}.\]
\[\sqrt{2}A\left(\bar{B}_{s}^{0}\to K^{0}\omega\left(\omega\to K^{+}K^{-} \right)\right)= \frac{G_{F}p_{\bar{B}_{s}^{0}}\cdot\sum_{\lambda=0,\pm 1}\epsilon( \lambda)g_{\omega}\epsilon^{*}(\lambda)\cdot\left(p_{k^{+}}-p_{k^{-}}\right)}{ \sqrt{2}s_{\omega}} \tag{18}\] \[\times\left\{V_{ub}V_{ud}^{*}\left[f_{\omega}F_{B_{s}\to K}^{ LL}\left(a_{2}\right)+M_{B_{s}\to K}^{LL}\left(C_{2}\right)\right]-V_{tb}V_{td}^{*} \left[M_{B_{s}\to K}^{LR}\left(C_{5}-\frac{1}{2}C_{7}\right)\right.\right.\] \[\left.\left.+f_{\omega}F_{B_{s}\to K}^{LL}\left(2a_{3}+a_{4}+2a_{5}+ \frac{1}{2}a_{7}+\frac{1}{2}a_{9}-\frac{1}{2}a_{10}\right)\right.\right.\] \[\left.\left.+M_{B_{s}\to K}^{LL}\left(C_{3}+2C_{4}-\frac{1}{2}C_{9}+ \frac{1}{2}C_{10}\right)+M_{ann}^{LL}\left(C_{3}-\frac{1}{2}C_{9}\right)\right.\right.\] \[\left.\left.-M_{B_{s}\to K}^{SP}\left(2C_{6}+\frac{1}{2}C_{8} \right)+f_{B_{s}}F_{ann}^{LL}\left(a_{4}-\frac{1}{2}a_{10}\right)\right.\right.\] \[\left.\left.+f_{B_{s}}F_{ann}^{SP}\left(a_{6}-\frac{1}{2}a_{8} \right)+M_{ann}^{LR}\left(C_{5}-\frac{1}{2}C_{7}\right)\right]\right\}.\]
\[A\left(\bar{B}_{s}^{0}\rightarrow\eta\phi\left(\phi\to K^{+}K^{-} \right)\right)= \frac{G_{F}p_{\bar{B}_{s}^{0}}\cdot\sum_{\lambda=0,\pm 1}\epsilon( \lambda)g_{\phi}\epsilon^{*}(\lambda)\cdot\left(p_{k^{+}}-p_{k^{-}}\right)}{ \sqrt{2}s_{\phi}} \tag{19}\] \[\times\left\{\frac{\cos\theta}{\sqrt{2}}\left\{V_{ub}V_{us}^{*} \left[f_{n}F_{B_{s}\rightarrow\phi}^{LL}\left(a_{2}\right)+M_{ann}^{LL}\left( C_{2}\right)\right]\right.\right.\] \[-V_{tb}V_{ts}^{*}\left[M_{ann}^{LL}\left(2C_{4}+\frac{1}{2}C_{10} \right)-M_{ann}^{SP}\left(2C_{6}+\frac{1}{2}C_{8}\right)\right.\] \[+\left.\left.f_{B_{s}}F_{ann}^{LL}\left(2a_{3}-2a_{5}-\frac{1}{2} a_{7}+\frac{1}{2}a_{9}\right)\right]+\left[\eta_{n}\leftrightarrow\omega\right]\right\}\] \[-\frac{\sin\theta}{\sqrt{2}}\left\{V_{ub}V_{us}^{*}\left[f_{ \omega}F_{B_{s}\rightarrow\eta_{n}}^{LL^{\prime}}\left(a_{2}\right)+M_{B_{s} \rightarrow\eta_{n}}^{LL^{\prime}}\left(C_{2}\right)\right]\right.\] \[-V_{tb}V_{ts}^{*}\left[f_{\omega}F_{B_{s}\rightarrow\eta_{n}}^{ LL^{\prime}}\left(2a_{3}+2a_{5}+\frac{1}{2}a_{7}+\frac{1}{2}a_{9}\right)\right.\] \[\left.\left.+M_{B_{s}\rightarrow\eta_{n}}^{LL^{\prime}}\left(2C _{4}+\frac{1}{2}C_{10}\right)-M_{B_{s}\rightarrow\eta_{n}}^{SP^{\prime}} \left(2C_{6}+\frac{1}{2}C_{8}\right)\right]\right\}.\]
\[A\left(\bar{B}_{s}^{0}\rightarrow\eta^{\prime}\phi\left(\phi \to K^{+}K^{-}\right)\right)= \frac{G_{F}p_{\bar{B}_{s}^{0}}\cdot\sum_{\lambda=0,\pm 1}\epsilon( \lambda)g_{\phi}\epsilon^{*}(\lambda)\cdot\left(p_{k^{+}}-p_{k^{-}}\right)}{ \sqrt{2}s_{\phi}} \tag{22}\] \[\times\left\{\frac{\sin\theta}{\sqrt{2}}\bigg{\{}V_{ub}V_{us}^{* }\left[f_{n}F_{B_{s}\rightarrow\phi}^{LL}\left(a_{2}\right)+M_{B_{s} \rightarrow\phi}^{LL}\left(C_{2}\right)\right]\right.\] \[-V_{tb}V_{ts}^{*}\left[f_{n}F_{B_{s}\rightarrow\phi}^{LL}\left(2 a_{3}-2a_{5}-\frac{1}{2}a_{7}+\frac{1}{2}a_{9}\right)\right.\] \[\left.+M_{B_{s}\rightarrow\phi}^{LL}\left(2C_{4}+\frac{1}{2}C_{1 0}\right)+M_{B_{s}\rightarrow\phi}^{SP}\left(2C_{6}+\frac{1}{2}C_{8}\right) \right]\right\}\] \[+\cos\theta\left\{-V_{tb}V_{ts}^{*}\left[f_{s}F_{B_{s}\rightarrow \phi}^{LL^{\prime}}\left(a_{3}+a_{4}-a_{5}+\frac{1}{2}a_{7}-\frac{1}{2}a_{9}- \frac{1}{2}a_{10}\right)\right.\right.\] \[\left.\left.+\,M_{B_{s}\rightarrow\phi}^{SP^{\prime}}\left(C_{6} -\frac{1}{2}C_{8}\right)+f_{B_{s}}F_{ann}^{LL^{\prime}}\left(a_{3}+a_{4}-a_{5} +\frac{1}{2}a_{7}-\frac{1}{2}a_{9}-\frac{1}{2}a_{10}\right)\right.\right.\] \[\left.\left.+\,M_{ann}^{LL^{\prime}}\left(C_{3}+C_{4}-\frac{1}{2} C_{9}-\frac{1}{2}C_{10}\right)-f_{B_{s}}F_{ann}^{SP^{\prime}}\left(a_{6}-\frac{1}{2}a_{8}\right)\right.\right.\] \[\left.\left.\left.-M_{ann}^{LR^{\prime}}\left(C_{5}-\frac{1}{2}C_ {7}\right)-M_{ann}^{SP^{\prime}}\left(C_{6}-\frac{1}{2}C_{8}\right)\right]+ \left[\eta_{s}\leftrightarrow\phi\right]\right\}.\]
\[A\left(\bar{B}_{s}^{0}\rightarrow\eta^{\prime}\rho^{0}\left( \rho^{0}\to K^{+}K^{-}\right)\right)= \frac{G_{F}p_{\bar{B}_{s}^{0}}\cdot\sum_{\lambda=0,\pm 1} \epsilon(\lambda)g_{\rho}\epsilon^{*}(\lambda)\cdot\left(p_{k^{+}}-p_{k^{-}} \right)}{\sqrt{2}s_{\rho}} \tag{23}\] \[+V_{ub}V_{us}^{*}\left[f_{B_{s}}F_{ann}^{LL}\left(a_{2}\right)+M_ {ann}^{LL}\left(C_{2}\right)\right]+\left[\rho^{0}\leftrightarrow\eta_{n} \right]\right\}\] \[+\frac{\cos\theta}{\sqrt{2}}\left\{V_{ub}V_{us}^{*}\left[f_{\rho} F_{B_{s}\rightarrow\eta_{n}}^{LL^{\prime}}\left(a_{2}\right)+M_{B_{s} \rightarrow\eta_{n}}^{LL^{\prime}}\left(2C_{2}\right)\right]\right.\] \[\left.\left.-V_{tb}V_{ts}^{*}\left[f_{\rho}F_{B_{s}\rightarrow \eta_{n}}^{LL^{\prime}}\left(\frac{3}{2}a_{7}+\frac{3}{2}a_{9}\right)+M_{B_{s }\rightarrow\eta_{n}}^{LL^{\prime}}\left(\frac{3}{2}C_{10}\right)-M_{B_{s} \rightarrow\eta_{s}}^{SP^{\prime}}\left(\frac{3}{2}C_{8}\right)\right]\right\} \right\}.\]
\[A\left(\bar{B}_{s}^{0}\rightarrow\eta^{\prime}\omega\left( \omega\to K^{+}K^{-}\right)\right)= \frac{G_{F}p_{\bar{B}_{s}^{0}}\cdot\sum_{\lambda=0,\pm 1} \epsilon(\lambda)g_{\omega}\epsilon^{*}(\lambda)\cdot\left(p_{k^{+}}-p_{k^{-}} \right)}{\sqrt{2}s_{\omega}} \tag{24}\] \[+\frac{\cos\theta}{\sqrt{2}}\left\{V_{ub}V_{us}^{*}\left[f_{\omega }F_{B_{s}\rightarrow\eta_{n}}^{LL^{\prime}}\left(a_{2}\right)+M_{B_{s} \rightarrow\eta_{s}}^{LL^{\prime}}\left(C_{2}\right)\right]\right.\] \[-V_{tb}V_{ts}^{*}\left[f_{\omega}F_{B_{s}\rightarrow\eta_{s}}^{LL^ {\prime}}\left(2a_{3}+2a_{5}+\frac{1}{2}a_{7}+\frac{1}{2}a_{9}\right)\right.\] \[\left.\left.+M_{B_{s}\rightarrow\eta_{s}}^{LL^{\prime}}\left(2C_ {4}+\frac{1}{2}C_{10}\right)-M_{B_{s}\rightarrow\eta_{s}}^{SP^{\prime}}\left(2 C_{6}+\frac{1}{2}C_{8}\right)\right]\right\}.\]
where the form factors involving \(\eta_{s}\) are distinguished from those involving \(\eta_{n}\) by a prime in the upper right corner of the corresponding F and M.
### Input parameters
The \(V_{tb}\), \(V_{ts}\), \(V_{ub}\), \(V_{us}\), \(V_{td}\), and \(V_{ud}\) terms in the above equations are elements of the CKM matrix within the framework of the Standard Model. The CKM matrix, whose elements are determined through experimental observations, can be expressed in terms of the Wolfenstein parameters \(A\), \(\rho\), \(\lambda\), and \(\eta\): \(V_{tb}V_{ts}^{*}=-A\lambda^{2}\), \(V_{ub}V_{us}^{*}=A\lambda^{4}(\rho-i\eta)\), \(V_{ub}V_{ud}^{*}=A\lambda^{3}(\rho-i\eta)(1-\frac{\lambda^{2}}{2})\), \(V_{tb}V_{td}^{*}=A\lambda^{3}(1-\rho+i\eta)\). The most recent values of the CKM parameters are \(\lambda=0.22650\pm 0.00048\), \(A=0.790^{+0.017}_{-0.012}\), \(\bar{\rho}=0.141^{+0.016}_{-0.017}\), and \(\bar{\eta}=0.357\pm 0.011\), where \(\bar{\rho}=\rho\left(1-\frac{\lambda^{2}}{2}\right)\) and \(\bar{\eta}=\eta\left(1-\frac{\lambda^{2}}{2}\right)\) [29]. The physical quantities involved in the calculation are presented in the subsequent table.
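For orientation, the CKM combinations entering the amplitudes can be evaluated numerically from the central Wolfenstein parameters quoted above. In the short Python sketch below, the \(V_{tb}V_{ts}^{*}\) entry uses the leading-order value \(-A\lambda^{2}\):

```python
# Central Wolfenstein parameters quoted above
lam, A = 0.22650, 0.790
rho_bar, eta_bar = 0.141, 0.357
rho = rho_bar / (1 - lam**2 / 2)
eta = eta_bar / (1 - lam**2 / 2)

ckm = {
    "Vub Vus*": A * lam**4 * (rho - 1j * eta),
    "Vub Vud*": A * lam**3 * (rho - 1j * eta) * (1 - lam**2 / 2),
    "Vtb Vtd*": A * lam**3 * (1 - rho + 1j * eta),
    "Vtb Vts*": -A * lam**2,                     # leading-order Wolfenstein value
}
for name, value in ckm.items():
    print(f"{name}: {value:.6f}   magnitude {abs(value):.6f}")
```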
## V Analysis of data results
### The direct CP violation from the mixing of three vector mesons
Figure 2: Plot of \(A_{CP}\) as a function of \(\sqrt{s}\) corresponding to the central parameter values of the CKM matrix elements. The solid (dashed) line corresponds to the decay channel \(\bar{B}_{s}\to K^{+}K^{-}\pi\) (\(\bar{B}_{s}\to K^{+}K^{-}K^{0}\)).
We present the plots illustrating the CP violation in the decay processes of \(\bar{B}_{s}\to K^{-}K^{+}P\). These plots are shown in Fig. 2 and Fig. 3, where we investigate the mixing of \(\rho-\omega-\phi\) particles. Fig. 2 and Fig. 3 depict the variation of \(A_{CP}\) as a function of \(\sqrt{s}\), which represents the invariant mass of \(K^{+}K^{-}\). The central parameter values of CKM matrix elements are used to obtain these results. The observed \(CP\) violation in these decay processes provides valuable insights into fundamental physics phenomena such as vector mesons interferences.
The maximum of CP violation from the decay process \(\bar{B}_{s}\to K^{+}K^{-}\pi\) in Fig. 2, with a value of \(-38\%\), occurs at an invariant mass of 1.02 GeV, which corresponds to the mass position of the \(\phi\) meson. Additionally, small peaks are also observed in the invariant mass range of \(\rho^{0}-\omega\). Therefore, it can be concluded that the decay process \(\bar{B}_{s}\to\phi\pi\to K^{+}K^{-}\pi\) plays a significant role in this decay channel. Furthermore, for the decay process \(\bar{B}_{s}\to K^{+}K^{-}K^{0}\), a sharp variation in CP violation is observed when the invariant masses of \(K^{+}K^{-}\) pairs fall within the region around 0.75 GeV, reaching a peak value of \(-70\%\). In this case, the effect arises from the \(\rho^{0}-\omega\) mixing mechanism rather than from the QCD penguin dominated decay \(\bar{B}_{s}\to\phi K^{0}\). Consequently, interference effects are expected to occur within a range near 0.7 GeV-0.8 GeV. It should be noted that there is no tree graph contribution to the \(\bar{B}_{s}\to\phi K^{0}\) decay. However, the mixed resonance effect among \(\phi\), \(\omega\) and \(\rho\) also produces a smaller CP violation peak at the invariant mass position of the \(\phi\) meson.
While the decay process \(\bar{B}_{s}\to K^{+}K^{-}\eta(\eta^{\prime})\) is more intricate, we first consider the decay process \(\bar{B}_{s}\to V\eta(\eta^{\prime})\) involving \(\eta(\eta^{\prime})\). The physical states of the \(\eta\) and \(\eta^{\prime}\) mesons are composed of a mixture of the flavor eigenstates \(\eta_{n}\) and \(\eta_{s}\). Furthermore, there is no contribution from tree graphs in the decay process \(\bar{B}_{s}\to\phi\eta_{s}\); hence, the tree-level amplitude contribution to the decay \(B_{s}\to K^{+}K^{-}\eta(\eta^{\prime})\) from this component of the mixture is negligible. As depicted in Fig. 3, the resonant interplay produces large CP violation in both invariant mass intervals (\(\rho^{0}-\omega\) and \(\phi\)), ultimately leading to the observed behaviour. In the figure, it is evident that the CP violation peak in \(\bar{B}_{s}\to K^{+}K^{-}\eta(\eta^{\prime})\) reaches a magnitude of \(-74\%\) (\(-88\%\)) near 0.8 GeV. This observation allows us to comprehend the trend of CP violation in these decay processes, which is advantageous for our research. Additionally, we can determine the invariant mass of the \(K^{+}K^{-}\) pair at which significant CP violation occurs, providing an opportunity for experimental measurement.
Figure 3: Plot of \(A_{CP}\) as a function of \(\sqrt{s}\) corresponding to the central parameter values of the CKM matrix elements. The solid (dashed) line corresponds to the decay channel \(\bar{B}_{s}\to K^{+}K^{-}\eta\) (\(\bar{B}_{s}\to K^{+}K^{-}\eta^{\prime}\)).
### Numerical results of the localized integrated CP asymmetry
The relationship between CP violation and invariant mass in the decay process, as derived from the preceding section, provides valuable insights into the dynamics of CP violation. However, to comprehensively investigate regional CP violation and to provide a reference for future experiments, we perform a local integration analysis of CP violation within the studied decay processes. Consequently, Table 2 presents the localized CP violation for the aforementioned decay processes.
According to Table 2, the integration range (0.98 GeV-1.06 GeV) corresponds to the threshold of \(V\to K^{+}K^{-}\) decay process. The resonance effect between different particles can lead to more pronounced CP violation phenomena in various energy intervals. However, considering the threshold effect for generating \(K^{+}K^{-}\) meson pairs, we provide the local integral values as shown in Table 2. To compare the similarities and differences between three-particle and two-particle resonance effects, we also present the local integral results of CP value under two-particle resonance in Table 2.
In the \(\bar{B}_{s}\to K^{+}K^{-}\pi^{0}\) decay process, the value of CP violation varies little in the resonance regions above the threshold, whether two-particle or three-particle mixing is considered. Although the mixed resonance contributes a peak value of \(-38\%\) for the \(\bar{B}_{s}^{0}\to K^{+}K^{-}\pi^{0}\) decay process in Fig. 2, the local integral values show minimal variation within a specific range in comparison to the overall resonance interval. The values of \(A_{CP}^{\Omega}\) exhibit a consistent magnitude of approximately 0.124.
The values of \(A_{CP}^{\Omega}\) are small for the contributions from \(\phi-\rho-\omega\) mixing, \(\phi-\rho\) mixing, and \(\phi-\omega\) mixing. However, a significant CP violation of 0.169 can be observed from the contribution of \(\rho-\omega\) mixing. This behavior changes in the decay process \(\bar{B}_{s}\to K^{+}K^{-}K^{0}\) since it involves the QCD penguin dominated decay \(\bar{B}_{s}\to\phi K^{0}\) without any tree-level contribution. In this case, only the decay process involving intermediate states with the \(\rho-\omega\) particles exhibits noticeable CP violation.
The decay process \(\bar{B}_{s}\to K^{+}K^{-}\eta(\eta^{\prime})\) is also special, characterized by the mixing between the \(\eta\) and \(\eta^{\prime}\) mesons. The process \(\bar{B}_{s}\to\phi\eta_{s}\) is a QCD penguin dominated decay without any contribution from a tree diagram, while the process \(\bar{B}_{s}\to\phi\eta_{n}\) receives both tree and penguin contributions. Thus the \(\eta_{s}\)-\(\eta_{n}\) mixing results in a smaller tree contribution for \(\eta\) (\(\eta^{\prime}\)). Consequently, the involvement of \(\phi\) as an intermediate state in the decay process leads to a reduction in the value of \(A_{CP}^{\Omega}\). The CP violation induced by the decay process
involving \(\rho-\omega\) mixing exhibits distinct characteristics, with a maximum value of \(-0.237(-0.240)\) observed for the processes \(\bar{B}_{s}\to K^{+}K^{-}\eta\) (\(\bar{B}_{s}\to K^{+}K^{-}\eta^{\prime}\)), respectively.
Theoretical errors give rise to uncertainties in the results. In general, the major theoretical uncertainties arise from power corrections beyond the heavy quark limit, necessitating the inclusion of \(1/m_{b}\) power corrections. Unfortunately, there exist numerous possible \(1/m_{b}\) power suppressed effects that are typically nonperturbative in nature and therefore not calculable using perturbation theory. Consequently, this scheme introduces additional sources of uncertainty. The first error arises from variations in the CKM parameters, while the second stems from hadronic parameters, such as the shape parameters, form factors, decay constants, and the wave function of the \(B_{s}\) meson. The third error corresponds to selecting appropriate hard scales that characterize the size of next-to-leading order QCD contributions. By employing central values for these parameters, we initially compute numerical results for CP violation and subsequently incorporate errors based on the standard deviation in Table 2. It has been determined that the impact of mixing parameter errors on local CP violation is negligible compared to the overall CP asymmetry; therefore this influence will not be discussed further.
## VI Summary and conclusion
The CP violation in the decay process of \(\bar{B}_{s}^{0}\) meson is predicted through an invariant mass analysis of \(K^{+}K^{-}\) meson pairs within the resonance region, resulting from the mixing of \(\phi\), \(\omega\), and \(\rho\) mesons. We observe a sharp change in CP violation within the resonance regions of these mesons. Local CP violation is quantified by integrating over phase space. For the decay process \(\bar{B}_{s}\to K^{+}K^{-}\pi^{0}\), we find a local CP violation value around \(-0.12\) arising from interference between \(\phi\), \(\omega\), and \(\rho\) mesons. In decays such as \(\bar{B}_{s}\to K^{+}K^{-}K^{0}\), \(\bar{B}_{s}\to K^{+}K^{-}\eta\) and \(\bar{B}_{s}\to K^{+}K^{-}\eta^{\prime}\), CP violations are observed due to contributions from both two-meson mixing and three-meson mixing processes. Particularly involving the \(\rho\) - \(\omega\) mixing, the local CP violation is large. Experimental detection of local CP violation can be achieved by reconstructing the resonant states of \(\phi\), \(\omega\), and \(\rho\) mesons within the resonance regions.
We propose a quasi-two-body approach, namely \(\bar{B}_{s}^{0}\to VP\to K^{+}K^{-}P\), to elucidate the three-body decay mechanism of \(\bar{B}_{s}^{0}\to K^{+}K^{-}P\). During this process, V acts as an intermediate state and undergoes resonance with other particles, ultimately decaying into a \(K^{+}K^{-}\) pair. The three-body decay process of the bottom meson is appropriately formulated using the chain decay of the quasi-two-body approach. We consider the \(B\to RP_{3}\) decay process as a case study for analyzing quasi-two-body decays, where R represents an intermediate resonance state that can further decay into hadrons \(P_{1,2}\), while \(P_{3}\) denotes another final hadron. The process under consideration can be factorized utilizing the narrow width approximation (NWA), so that the branching ratio can be written as \(\mathcal{B}\left(B\to RP_{3}\to P_{1}P_{2}P_{3}\right)=\mathcal{B}\left(B\to RP_{3}\right)\mathcal{B}\left(R\to P_{1}P_{2}\right)\). The effects of the small widths of \(\phi\), \(\rho\), and \(\omega\) in quasi-two-body decay processes into \(KK\) can be safely neglected. Considering the substantial decay width of \(\rho(770)\), it is reasonable to apply a correction; from the QCD factorization approach, the correction factor for the decay process \(B^{-}\rightarrow\rho(770)\pi^{-}\rightarrow\pi^{+}\pi^{-}\pi^{-}\) is at the level of \(7\%\). The parameter \(\eta_{R}\) is introduced to quantify the approximation between \(\Gamma\left(B\to RP_{3}\right)\mathcal{B}\left(R\to P_{1}P_{2}\right)\) and \(\Gamma\left(B\to RP_{3}\to P_{1}P_{2}P_{3}\right)\) [32; 33]. When calculating the CP violation, this constant cancels, thereby exerting no influence on our final results.
Recently, the LHCb experimental group has made significant progress in investigating the three-body decay of B mesons and has obtained noteworthy findings [34]. By analyzing previous experimental data, they have measured
direct CP violation in various decay modes such as \(B^{\pm}\to K^{+}K^{-}K^{\pm}\), \(B^{\pm}\to\pi^{+}\pi^{-}K^{\pm}\), \(B^{\pm}\to\pi^{+}\pi^{-}\pi^{\pm}\), and \(B^{\pm}\to K^{+}K^{-}\pi^{\pm}\). Based on LHCb experiments, it is anticipated that future investigations will primarily focus on exploring the three-body decays of \(\bar{B}_{s}\).
## Acknowledgements
This work was supported by Natural Science Foundation of Henan (Project No. 232300420115).
|
2301.07177 | Holographic fluids: a thermodynamic road to quantum physics | Quantum mechanics, superfluids, and capillary fluids are closely related: it
is thermodynamics that links them. In this paper, the Liu procedure is used to
analyze the thermodynamic requirements. A comparison with the traditional
method of divergence separation highlights the role of spacetime. It is shown
that perfect Korteweg fluids are holographic. The conditions under which a
complex field can represent the density and velocity fields of the fluid, and
where the complex scalar field becomes a wave function of quantum mechanics,
are explored. The bridge between the field and particle representations of a
physical system is holography, and the key to holography is the Second Law of
thermodynamics. | Peter Ván | 2022-12-20T12:45:19Z | http://arxiv.org/abs/2301.07177v2 | # Holographic fluids: a thermodynamic road to quantum physics
###### Abstract.
Quantum mechanics, superfluids and capillary fluids are closely related: it is thermodynamics that connects them. The Liu procedure is applied to analyze the thermodynamic requirements, and the comparison to the traditional method of divergence separation highlights the role of spacetime. It is shown that perfect Korteweg fluids are holographic. The conditions under which a complex field can represent the fluid density and velocity fields, and under which the complex field becomes a wave function of quantum mechanics, are treated. The bridge between field and particle representations of physical systems is holography, and the key to holography is the Second Law.
_"A theory is the more impressive the greater the simplicity of its premises is, the more different kinds of things it relates, and the more extended is its area of applicability." in Albert Einstein: Autobiographical Notes_
## 1. Introduction
### Classical holography
Holography is an expected property of theories of quantum gravity. It states that the volumetric forces on a test mass are equivalent to a lower dimensional formulation on the boundary of the corresponding region [1, 2]. Holography is a principle motivated by black hole thermodynamics. Therefore, in general, a relation to thermodynamics is expected [3]. The concept of entropic force gives a particularly direct connection, where the holographic principle, together with the Unruh effect, leads to gravity, [4]. For Newtonian gravity, the holographic property and the Poisson equation are closely related; the Unruh effect does not play a role, [5].
It is easy to see that the Poisson equation implies holography: the force on a test mass induced by gravitational pressure on a closed surface is equivalent to the bulk gravitational force density integrated on the volume with nonzero mass density. It is best seen as a local identity, without a system of screens, because the force density of a gravitational field, density multiplied by the gradient of the gravitational potential, can also be written as the divergence of a second-order tensor, the pressure of classical gravity:
\[\nabla\cdot\boldsymbol{P}_{grav}=\rho\nabla\phi. \tag{1}\]
Here \(\phi\) is the gravitational potential of the Newtonian theory, \(\rho\) is the mass density and \(\boldsymbol{P}_{grav}=\frac{1}{8\pi G}\left(\nabla\phi\cdot\nabla\phi\,\boldsymbol{I}-2\nabla\phi\nabla\phi\right)\) is the pressure of the gravitational field. \(\boldsymbol{I}\) is the second order identity tensor, \(G\) is the gravitational constant and the central dot denotes contraction.
Moreover, thermodynamic principles extend and explain the above relation of holographic property and the field equation. The Poisson equation of Newtonian
gravity follows from thermodynamic principles when those are applied to a classical scalar field. Both the holographic pressure-force relation, (1), and the Poisson equation emerge in the marginal case of zero dissipation for perfect fluids, [6, 7, 8].
It is a well-known fact in fluid mechanics that ideal, Euler fluids are holographic, too. That is a consequence of elementary thermodynamics, applied to perfect fluids, where the pressure tensor is the thermostatic scalar pressure: \(\boldsymbol{P}=p\boldsymbol{I}\). It is because
\[\nabla p=\rho(\nabla h-T\nabla s)=\rho(\nabla\mu+s\nabla T), \tag{2}\]
where \(h\) and \(s\) are the specific enthalpy and specific entropy, \(\mu\) is the chemical potential and \(T\) is the temperature. Therefore, in Euler fluids, the divergence of the pressure is also a force density, with the specific enthalpy as potential in the case of isentropic processes (or barotropic fluids) and the chemical (Gibbs) potential in the case of isothermal fluids.
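Relation (2) is the Gibbs-Duhem relation written for fields and can be checked for any explicit equation of state. The following SymPy sketch verifies both forms of (2) in one dimension for an ideal gas, which is an illustrative choice of state functions and not required by the argument:

```python
import sympy as sp

x = sp.symbols("x")
Rgas, c_p = sp.symbols("R c_p", positive=True)
T = sp.Function("T")(x)
p = sp.Function("p")(x)

rho = p / (Rgas * T)                           # ideal-gas equation of state (illustrative choice)
h = c_p * T                                    # specific enthalpy
s = c_p * sp.log(T) - Rgas * sp.log(p)         # specific entropy up to an additive constant
mu = h - T * s                                 # chemical (Gibbs) potential

grad_p = sp.diff(p, x)
form1 = rho * (sp.diff(h, x) - T * sp.diff(s, x))
form2 = rho * (sp.diff(mu, x) + s * sp.diff(T, x))
print(sp.simplify(grad_p - form1), sp.simplify(grad_p - form2))   # -> 0 0
```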
One may wonder about possible generalizations. Therefore, it is worth clarifying the concept itself. In the following, a field theory, or a continuum, is called _classical holographic_ if for the second order constitutive tensor field \(P_{i}^{j}\) and for the constitutive scalar field \(\phi\), the following identity is valid

\[\partial_{j}P_{i}^{j}=\rho\partial_{i}\phi,\qquad\nabla\cdot\boldsymbol{P}-\rho\nabla\phi=0. \tag{3}\]
A field is called constitutive if it is a function of the constitutive state space, that is, of the field variables and their derivatives. The thermodynamic theory determines its exact meaning, and it is flexible in a well-determined sense. For example, for Euler fluids, the constitutive state space is the same as the thermodynamic state space and is spanned by the specific entropy and the specific volume of the fluid, \((s,\mathrm{v})\). However, the usual thermodynamic variable transformations can be applied: e.g. the right-hand side of (2) shows a Legendre transformation, and temperature and density, \((T,\rho)\), can be a different choice. Remarkably, the definition (3) is formulated for nonrelativistic field theories.
The equation (2), when combined with a balance of momentum, gives the Friedmann form of the Euler equation, [9]:
\[\rho\boldsymbol{\dot{v}}+\nabla p=-\rho\nabla\varphi,\quad\to\quad\boldsymbol{\dot{v}}=-\nabla(\varphi+h)+T\nabla s=-\nabla(\varphi+\mu)-s\nabla T. \tag{4}\]
Here the momentum balance of the fluid is given in the Lagrangian form, where the overdot denotes the comoving, substantial time derivative. Therefore, in the case of barotropic fluids, the last term of the right-hand side is zero, and the holographic property, (2), implies that the partial differential equations of fluid motion are given in the form of a Newtonian equation of a point-mass. However, the potential is not fixed; it is state space dependent. If the density distribution is not given, (4) is coupled to the continuity equation, to the conservation of the mass:
\[\dot{\rho}+\rho\nabla\cdot\boldsymbol{v}=0. \tag{5}\]
Furthermore, one cannot avoid dealing with velocity as a field. The construction is similar to the idea of pilot waves without quantum mechanics.
### Quantum fluids
The quantum-hydro correspondence goes back to the reformulation of the Schrodinger equation
\[i\hbar\frac{\partial\psi}{\partial t}+\frac{\hbar^{2}}{2m}\Delta \psi-V(\boldsymbol{x},t)\psi=0, \tag{6}\]
to an evolution equation of the wave function, a complex scalar field \(\psi\). Here \(i\) is the imaginary unit, \(m\) is the mass of a quantum particle, and \(\hbar\) is the reduced
Planck constant. The above equation can be transformed to a fluid form with the Madelung transformation, separating the amplitude, \(R\), and the phase, \(S\), of the wave function:
\[\psi=Re^{i\frac{m}{\hbar}S}. \tag{7}\]
Then the real and imaginary parts of (6) give the continuity equation and the Bernoulli equation of irrotational flow, [10], with \(\rho=R^{2}=|\psi|^{2}\) as the density and \(S\) as the velocity potential, \(\boldsymbol{v}=\nabla S\). A fluid theory emerges, with the balance of mass, (5), and the balance of momentum
\[\rho\boldsymbol{\dot{v}}+\nabla\cdot\boldsymbol{P_{Q}}=\boldsymbol{0}, \tag{8}\]
as evolution equations. However, this is a very particular fluid because the pressure tensor has the following form, [11]:
\[\boldsymbol{P}_{Q}=-\left(\frac{\hbar}{2m}\right)^{2}\rho\nabla\frac{\nabla \rho}{\rho}. \tag{9}\]
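The statement above can be verified symbolically. The SymPy sketch below substitutes the Madelung form (7) into the Schrodinger equation (6) in one space dimension and confirms that the equation is exactly a combination of the continuity equation and a Bernoulli-type equation whose extra gradient term is the Bohm potential introduced below:

```python
import sympy as sp

x, t = sp.symbols("x t")
hbar, m = sp.symbols("hbar m", positive=True)
R = sp.Function("R")(x, t)        # amplitude of the wave function
S = sp.Function("S")(x, t)        # velocity potential
V = sp.Function("V")(x, t)        # external potential

psi = R * sp.exp(sp.I * m * S / hbar)                                # Madelung form, Eq. (7)
rho, v = R**2, sp.diff(S, x)

schrodinger = (sp.I * hbar * sp.diff(psi, t)
               + hbar**2 / (2 * m) * sp.diff(psi, x, 2) - V * psi)   # lhs of Eq. (6) in 1D

continuity = sp.diff(rho, t) + sp.diff(rho * v, x)                   # mass balance, Eq. (5)
bernoulli = (sp.diff(S, t) + v**2 / 2 + V / m
             - hbar**2 / (2 * m**2) * sp.diff(R, x, 2) / R)          # Bernoulli equation with
                                                                     # the quantum (Bohm) term
combination = sp.exp(sp.I * m * S / hbar) * (-m * R * bernoulli
                                             + sp.I * hbar / (2 * R) * continuity)

print(sp.simplify(sp.expand(schrodinger - combination)))            # -> 0
```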
It is easy to see that this pressure is holographic in the above sense because its divergence will be
\[\nabla\cdot\boldsymbol{P}_{Q}=\rho\nabla\phi_{B}=-\frac{\hbar^{2}}{4m^{2}} \rho\nabla\left(\frac{\Delta\rho}{\rho}-\frac{\nabla\rho\cdot\nabla\rho}{2 \rho^{2}}\right). \tag{10}\]
\(\phi_{B}\) is the Bohm potential, usually written as
\[\phi_{B}=-\frac{\hbar^{2}}{2m^{2}}\frac{\Delta R}{R}, \tag{11}\]
with \(R=\sqrt{\rho}=|\psi|\). Therefore, the holographic relation (10) connects the Bohmian and pilot-wave interpretations of quantum mechanics to a particular fluid form, where the pressure tensor is a nonlinear function of the density, the gradient of the density and its second gradient. This correspondence of hydrodynamics and single-particle quantum mechanics is fascinating and has inspired several extensions (see, e.g., Jammer in this respect [12], and Jackiw and his coworkers toward a reformulation of quantum field theories, [13, 14]).
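The holographic relation (10) is a pointwise differential identity and can be checked directly. The following one-dimensional SymPy sketch (the three-dimensional case is analogous) confirms it:

```python
import sympy as sp

x = sp.symbols("x")
hbar, m = sp.symbols("hbar m", positive=True)
rho = sp.Function("rho")(x)

# One-dimensional quantum pressure of Eq. (9): P_Q = -(hbar/2m)^2 rho d/dx(rho'/rho)
P_Q = -(hbar / (2 * m))**2 * rho * sp.diff(sp.diff(rho, x) / rho, x)

# Bohm potential in the density form appearing in Eq. (10)
phi_B = -hbar**2 / (4 * m**2) * (sp.diff(rho, x, 2) / rho
                                 - sp.diff(rho, x)**2 / (2 * rho**2))

# Holographic identity (10), div P_Q = rho grad phi_B, here in one dimension
print(sp.simplify(sp.diff(P_Q, x) - rho * sp.diff(phi_B, x)))   # -> 0
```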
From a continuum point of view, the (9) pressure represents a perfect Korteweg fluid. Korteweg fluids were proposed as extensions of the static theory of capillarity of Van der Waals, [15], where the static part of the pressure tensor depends on the first, and also the second spatial derivatives of the density in an isotropic manner [16]:
\[\boldsymbol{P}_{K}(T,\rho,\nabla\rho,\nabla^{2}\rho) =\left(p-\alpha\Delta\rho-\beta(\nabla\rho)^{2}\right)\boldsymbol{I}-\delta\nabla\rho\circ\nabla\rho-\gamma\nabla^{2}\rho,\] \[P^{ij}(T,\rho,\partial_{i}\rho,\partial_{ij}\rho) =\left(p-\alpha\partial_{k}^{\;k}\rho-\beta\,\partial_{k}\rho\partial^{k}\rho\right)\delta^{ij}-\delta\,\partial^{i}\rho\partial^{j}\rho-\gamma\,\partial^{ij}\rho. \tag{12}\]
Here \(\alpha,\beta,\gamma,\delta\) are density and temperature-dependent material parameters, like the thermostatic pressure \(p\). The equation was also given in index notation, where the spatial indices are \(i,j,k=1,2,3\) and upper-lower identical ones indicate summation. The coefficients in the above formula cannot have arbitrary values; the compatibility with the Second Law of thermodynamics restricts the functional forms. The thermodynamic analysis of Sobrino and, later on, an analogous but more general analysis of Dunn and Serrin revealed, [17, 18], that the pressure of perfect Korteweg fluids can be expressed with the help of the specific Helmholtz free energy,
\(f(T,\rho,\nabla\rho)\), as
\[\textbf{{P}}_{S}=\left(\rho^{2}\partial_{\rho}f-\rho\nabla\cdot\left(\rho\partial _{\nabla\rho}f\right)\right)\textbf{{I}}-\nabla\rho\circ\partial_{\nabla\rho}f. \tag{13}\]
Here the lower indexed \(\rho\) and \(\nabla\rho\) denote partial derivatives, e.g. \(\partial_{\rho}f=\frac{\partial f}{\partial\rho}(T,\rho,\nabla\rho)\). The thermodynamically compatible Korteweg pressure has the holographic property as well, because at constant temperature
\[\nabla\cdot\textbf{{P}}_{S}=\rho\nabla\phi_{S}=\rho\nabla\left(\partial_{\rho} (\rho f)-\nabla\cdot\partial_{\nabla\rho}(\rho f)\right). \tag{14}\]
Therefore the functional derivative of the free energy density, \(\rho f\), is the potential of the mechanical force field. This quantity is most frequently interpreted as a generalised, gradient-dependent chemical potential, \(\mu_{S}=\frac{\delta\rho f}{\delta\rho}=\partial_{\rho}(\rho f)-\nabla\cdot\partial_{\nabla\rho}(\rho f)\). The holographic property of Korteweg fluids was recognised independently by several other authors, e.g. [19, 20], and later became a crucial part of the variational principle motivated phase field approaches, [21]. However, the pressure (9) cannot be written in the Sobrino form (13). Therefore, it is seemingly incompatible with thermodynamics and contradicts the Second Law.
In quantum mechanics and quantum field theories, the complex field represents a probabilistic fluid, and the density field, \(\rho\), is a probability density of the position of a quantum particle with mass \(m\). In capillarity phenomena, for Korteweg fluids, the density is a real mass density. The theories of macroscopic quantum systems, in particular superfluidity (and superconductivity) in the \(\Psi\) theories of the Ginzburg field equations, are formulated both for complex fields and for hydrodynamic ones (see, e.g. [22, 23, 24]). Both the quantum and hydro forms are useful and important, and \(\rho=\left|\Psi\right|^{2}\) represents a real mass (or charge) density. The related Gross-Pitaevskii, Ginzburg-Sobyanin and logarithmic Bialynicki-Birula-Mycielski equations and their combinations represent a family of nonlinear Schrodinger equations of the form
\[i\hbar\frac{\partial\Psi}{\partial t}+\frac{\hbar^{2}}{2m}\Delta\Psi-\Phi_{sf }(|\psi|,\textbf{{x}},t)\Psi=0, \tag{15}\]
where \(\Phi_{sf}\) is a density dependent potential function. The particular forms are modelling experimental properties of superfluid He around the \(\lambda\) phase transition [25, 26, 27, 28, 29]. These fluids are holographic by definition, and the particle-like pilot-wave equation of motion is characterised by a combination of the Bohmian and the superfluid potentials.
Therefore, according to the above mentioned examples, Euler fluids and perfect Korteweg fluids are holographic classical field theories connected to complex field representations of superfluids and Madelung representation of quantum mechanics. The following scheme represents the usual relation.
\[\boxed{\text{quantum mechanics}}\ \rightarrow\ \boxed{\text{superfluids}}\ \rightarrow\ \boxed{\text{capillary fluids}}\]
where the arrows are interpreted as analogies, thermodynamic compatibility is not an issue, and the holographic property looks accidental.
As we have seen, for Newtonian gravity, for Euler and Korteweg fluids, the origin of classical holographic property is connected to the Second Law, to nonequilibrium thermodynamics. The relation to the Second Law is well expected in the case of simple Euler fluids in local equilibrium and can be understood in capillarity phenomena, where a thermodynamic potential is allowed to depend on the gradient
of density as well. The purpose of this paper is to analyse the reverse relation:
\[\boxed{\text{capillary fluids}}\implies\boxed{\text{superfluids}}\implies\boxed{\text{quantum mechanics}},\]
and classify the related conditions. We will see that here the relations - from a mathematical point of view - are not analogies: superfluids are particular Korteweg fluids, and Madelung fluids are particular superfluids.
There is a conceptual question in the background. How could one connect thermodynamics to gravity and also, more fundamentally, to quantum mechanics? Can a theory be understood as a result of emergent, collective behaviour of particulate matter connected to individual particles (Schrodinger equation) and pure fields (gravity)?
The fundamental balances and the Second Law inequality are universal and independent of the material's structure. The methodology of nonequilibrium thermodynamics is general and applicable to both particulate matter and fields. If the analysis aims to separate the universal and the structure-dependent aspects, then thermodynamics is prior to statistical physics. A particular material structure, e.g. material composed of rigid particles with vacuum interactions, introduces additional mathematical conditions which can and must be compatible with the more general universal considerations. One of the most remarkable consequences and a justification of the thermodynamic methodology is that the nondissipative part of the field equations emerges in an Euler-Lagrange form, as functional derivatives, without any variational principles, [30].
The organisation of the paper is the following. In the second and third sections, the nonequilibrium thermodynamics of Korteweg fluids is treated with the Liu procedure in an Eulerian, laboratory frame of reference. The spacetime aspects are best shown in this way, and the covariance of the final form, the entropy production, is then apparent. The fourth section presents the classical, heuristic treatment in a Lagrangian, comoving frame. There the Gibbs relation and the divergence separation are much simpler, and the comparison to the Liu procedure is instructive to justify their proper usage and meaning. The fifth section specifies the previous results towards quantum mechanics and analyses the conditions under which a complex field, a wave function, can represent a Korteweg fluid. It is shown that the Bohm potential is a consequence of the wave function representation and, at the same time, of multiparticle separability: the relation between the multiplicative representation of probabilistic independence and the additive representation of thermodynamic independence. Finally, the discussion highlights further aspects of the theoretical framework and some consequences. It is argued that there is a novel thermodynamic method of quantisation that works without a Hamiltonian structure of the evolution equations and is applicable to dissipative evolution equations as well.
## 2. Nonequilibrium thermodynamics of Korteweg fluids
### Spacetime aspects: balances and thermodynamics
The objectivity of classical continua is a long-discussed subject. The absolute time of Galilean relativistic spacetime separates space and time, and therefore four-dimensional aspects, the four-quantities, are not apparent. Hence the mathematical representation of an objective, frame-independent formulation of nonrelativistic theories is debated and controversial. Correctly considering spacetime aspects in extensions of classical nonequilibrium thermodynamics is crucial.
There are two aspects of objectivity: the problem of frame-independent physical quantities and the problem of constitutive relations. In both cases, realising that fundamental balances are four-divergences is helpful to identify four quantities and formulate objectivity requirements, [32, 33], but without a clear geometrical concept of Galilean relativistic spacetime, the transformation rule based approach can be misleading, [34]. The usual transformation rule based definition of objectivity, [35], excludes momentum and velocity from the thermodynamic state space. This is a blocking problem because thermodynamics is best formulated in a comoving material frame with comoving quantities. Without a covariant approach, the thermodynamics of fields cannot be formulated in nonrelativistic spacetime. However, it is possible to obtain frame-independent results despite the frame-dependent framework if some rules are respected.
The correct mathematical representation of the absolute time of the Galilean relativistic spacetime is crucial. It cannot be embedded in the absolute four-dimensional spacetime, e.g. [36, 31]. Let us emphasise again: fluids are frame independent, and their theories can be formulated in a frame-independent form, including the Gibbs relation of local equilibrium and the entropy production itself, [37, 38]. The Gibbs relation is not invariant but covariant: relative velocity appears due to the frame transformation formulas. However, the required unusual mathematical tools can be avoided by considering some simple rules:
1. Spacetime derivatives are Galilean four-covectors. Therefore, when changing reference frames, their timelike part transforms, but their spacelike part is invariant2. Therefore the substantial, comoving time derivative is the partial time derivative transformed to a comoving reference frame, and the spatial derivative is invariant. Footnote 2: A simple explanation with relative quantities is the following. The covector maps a vector to an invariant scalar. Let us assume that the transformation rule of a four-vector \(A\) is Galilean, therefore in a reference frame \(K\) it has the time and spacelike components \(A\overset{K}{\prec}(a,\boldsymbol{a})\), and in the other reference frame \(K^{\prime}\), moving with a relative velocity \(\boldsymbol{v}\) related to \(K\), \(A\overset{K^{\prime}}{\prec}(a,\boldsymbol{a}-a\boldsymbol{v})\). The timelike part does not change, but the spacelike part is Galilean transformed. If the time and spacelike components of a four-covector, \(B\), in the reference frames \(K\) and \(K^{\prime}\) are represented by \(B\overset{K}{\prec}(b,\boldsymbol{b})\) and \(B\overset{K^{\prime}}{\prec}(b^{\prime},\boldsymbol{b}^{\prime})\), respectively, then the transformation rules for the covector components follow from the invariance of \(B\cdot A\): \[B\cdot A=b^{\prime}a^{\prime}+\boldsymbol{b}^{\prime}\cdot\boldsymbol{a}^{\prime}=b^{\prime}a+\boldsymbol{b}^{\prime}\cdot(\boldsymbol{a}-a\boldsymbol{v})=(b^{\prime}-\boldsymbol{b}^{\prime}\cdot\boldsymbol{v})a+\boldsymbol{b}^{\prime}\cdot\boldsymbol{a}=ba+\boldsymbol{b}\cdot\boldsymbol{a}.\] Therefore \(\boldsymbol{b}^{\prime}=\boldsymbol{b}\), the spacelike part of the four-covector does not transform, and the transformation rule of the timelike part is \(b^{\prime}=b+\boldsymbol{b}\cdot\boldsymbol{v}\).
2. Energy is not a scalar but a component of a higher-order tensor in the Galilean relativistic spacetime. The so-called total energy is the energy in a given reference frame, and internal energy, the difference of the total energy and the kinetic energy, is the comoving form of the energy, analogously
to the well-known special relativistic rules, see, e.g. [39]. In a Galilean relativistic spacetime, the quadratic kinetic energy is part of the Galilean transformation rule [38].
3. Balances are four divergences. The Second Law is a constrained inequality. If the constraints for the entropy inequality are balances or their spatial derivatives, then the result will be objective.
4. Relative velocity is the spatial part of the four-velocity vector. The timelike part of the nonrelativistic four-velocity is constant because of the absolute time; therefore spatial derivatives of the relative velocity in an inertial frame are absolute. Hence, if the objective four-velocity can be a state variable, a relative velocity to an inertial reference frame can be a state variable, too. The concept of Noll forbids velocity-dependent state spaces, and proper spacetime concepts do not. It is the most notable difference in our approaches. We will see that the laboratory frame calculations of section 3 and the comoving frame calculations of section 4 lead to the same objective, reference frame independent result. Footnote 3: Remarkably, Noll himself joined the critics of Noll objectivity, complained about the insufficient formulation and argued that the concept must be developed: the Cartesian product of space and time is not a correct representation of spacetime in nonrelativistic physics [40, 41, 42].
In special relativistic thermodynamics, four quantities, including four-velocity, cannot be avoided, and then the simple transition of nonrelativistic thermodynamic theory with comoving representation is problematic. For example, the paradox of temperature transformations refers precisely to the distinction of comoving and covariant concepts and transformation properties of the energy, see, e.g. [43, 44, 45, 46]. The resolution of the paradox is based on the careful separation of velocity concepts, and the usage of spacetime quantities [47, 48].
In the following calculations, the usual relative quantities are used. However, two independent derivations are presented: in the first one, velocity is part of the constitutive state space, and in the second one, it is not, and Noll objectivity is respected. Also, the two derivations use different methods of Second Law analysis. First, Korteweg fluids are represented in the Eulerian inertial reference frame, and the entropy production will be calculated with the help of the Liu procedure. This way, the presentation of the thermodynamic methodology is more transparent. Then Korteweg fluids are represented in a Lagrangian frame, and entropy production is calculated by divergence separation. There velocity is not part of the constitutive state space, and the rules of Noll objectivity are respected. In the second case, the calculation is more straightforward but more heuristic. One obtains the same, apparently objective result in both ways. Both derivations are instructive. When interpreting and explaining the results, one can get a firm grasp of the methodology and clearer insight into the interplay of spacetime aspects and Second Law requirements. The constructive nature of the methodology is demonstrated, too.
### Balances and state spaces
In this section, the formulas are presented both with and without indices. The index notation is best considered abstract; it does not refer to Cartesian coordinates but clarifies the tensorial properties and contractions of higher-order tensors. Upper and lower indices are distinguished, and identical indices denote contractions.
The fundamental balances of mass, momentum and energy are first given in a local form from the point of view of an external, inertial observer. The balance of mass is
\[\partial_{t}\rho+\partial_{i}(\rho v^{i})=0,\qquad\partial_{t}\rho+\nabla\cdot( \rho\boldsymbol{v})=0. \tag{16}\]
Here \(\partial_{t}\) is the partial time derivative. \(\boldsymbol{v}\) is the barycentric velocity; the single component fluid is comoving with the mass, and there is no diffusion flux. The balance of momentum, the Cauchy equation, is
\[\partial_{t}(\rho v^{i})+\partial_{j}\left(P^{ij}+\rho v^{i}v^{j}\right)=0^{i},\qquad\partial_{t}(\rho\boldsymbol{v})+\nabla\cdot(\boldsymbol{P}+\rho\boldsymbol{v}\circ\boldsymbol{v})=\boldsymbol{0}, \tag{17}\]
where \(\rho v^{i}\) is the momentum density and \(P^{ij}\) is the pressure tensor, the conductive current density of the momentum. The pressure is a constitutive quantity, a function of the constitutive state space.
The balance of energy is
\[\partial_{t}(\rho e_{T})+\partial_{i}(q_{T}^{i}+\rho e_{T}v^{i})=0,\qquad \partial_{t}\rho e_{T}+\nabla\cdot(\boldsymbol{q}_{T}+\rho e_{T}\boldsymbol{ v})=0. \tag{18}\]
Here \(e_{T}\) is the specific energy; therefore, \(\rho e_{T}\) is the energy density. One can see that the total current density of the energy in a laboratory frame, \(q_{T}^{i}+e_{T}v^{i}\), is written as a sum of the conductive and convective current densities, \(q_{T}^{i}\), and \(\rho e_{T}v^{i}\), respectively.
In the substantial form of the balances, the convective parts of the current densities are eliminated. The overdot denotes the substantial time derivative of the corresponding physical quantity, defined as \(\dot{}=d_{t}=\partial_{t}+v^{i}\partial_{i}\). With comoving time derivatives and with the mass balance, the velocity-dependent parts can be eliminated, and the balances are obtained as
\[\dot{\rho}+\rho\partial_{i}v^{i} =0,\qquad\quad\dot{\rho}+\rho\nabla\cdot\boldsymbol{v}= 0, \tag{19}\] \[\rho\dot{v}^{i}+\partial_{j}P^{ij} =0^{i},\qquad\rho\dot{\boldsymbol{v}}+\nabla\cdot\boldsymbol{P}= \boldsymbol{0},\] (20) \[\rho\dot{e}_{T}+\partial_{i}q_{T}^{i} =0,\qquad\rho\dot{e}_{T}+\nabla\cdot\boldsymbol{q}_{T}= 0. \tag{21}\]
It is worth emphasising that both the local and the substantial balances are spacetime four-divergences. The balance of internal energy is obtained by subtracting the balance of kinetic energy from that of the total energy. The specific internal energy is \(\mathfrak{u}=e_{T}-\frac{v^{2}}{2}\), therefore
\[\rho\dot{\mathfrak{u}}+\partial_{i}q^{i}=-P^{ij}\partial_{i}v_{j},\qquad\rho \dot{\mathfrak{u}}+\nabla\cdot\boldsymbol{q}=-\boldsymbol{P}:\nabla \boldsymbol{v}. \tag{22}\]
Here the conductive current density of the internal energy, the heat flux, is defined as \(q^{i}=q_{T}^{i}-P^{ij}v_{j}\).
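As a cross-check of this reduction, the following sympy sketch verifies in one space dimension that the internal energy balance (22), with \(q=q_{T}-Pv\), follows from the substantial momentum and energy balances (20)-(21); the one-dimensional restriction and the symbol names are only illustrative.

```python
import sympy as sp

x, t = sp.symbols('x t')
rho = sp.Function('rho')(x, t)   # mass density
v   = sp.Function('v')(x, t)     # velocity
eT  = sp.Function('e_T')(x, t)   # specific total energy
qT  = sp.Function('q_T')(x, t)   # conductive total energy flux
P   = sp.Function('P')(x, t)     # pressure

Dt = lambda f: sp.diff(f, t) + v*sp.diff(f, x)   # substantial time derivative

# time derivatives expressed from the substantial balances (20) and (21)
dv_t  = -sp.diff(P, x)/rho - v*sp.diff(v, x)
deT_t = -sp.diff(qT, x)/rho - v*sp.diff(eT, x)

u = eT - v**2/2          # specific internal energy
q = qT - P*v             # heat flux, the conductive internal energy current

residual = rho*Dt(u) + sp.diff(q, x) + P*sp.diff(v, x)   # should vanish by (22)
residual = residual.subs({sp.diff(v, t): dv_t, sp.diff(eT, t): deT_t})
print(sp.simplify(residual))     # -> 0
```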
The closure of the system is complete if the constitutive functions, the pressure tensor and the heat flux, \(P^{ij}\) and \(q^{i}\), are given. They are restricted by the Second Law of thermodynamics and by material symmetries. The thermodynamic restrictions are determined with the help of the entropy balance. The entropy density, \(\rho s\) and the entropy flux, \(J^{i}\), the conductive part of the local entropy current density, are constitutive quantities. The local and substantial forms of the entropy balance are:
\[\partial_{t}(\rho s)+\partial_{i}(J^{i}+\rho sv^{i}) \geq 0,\qquad\partial_{t}(\rho s)+\nabla\cdot(\boldsymbol{J}+\rho s \boldsymbol{v})\geq 0, \tag{23}\] \[\rho\dot{s}+\partial_{i}J^{i} \geq 0,\qquad\rho\dot{s}+\nabla\cdot\boldsymbol{J}\geq 0. \tag{24}\]
The entropy balance is a conditional inequality. It is subject to constraints determined by the field variables, the physical model of the continuum. In our case,
the constraints are the simplest and most general ones. They are the fundamental balances.
In the following, we distinguish three kinds of state spaces. The constitutive functions, \(\mathbf{q},\mathbf{P},s\) and \(\mathbf{J}\) are defined on the _constitutive state space (CSS)_. The primary constitutive function is the specific entropy (or the entropy density). The entropy inequality, the constitutive state space and the constraints determine _thermodynamic state space (TSS)_, which is a reduction of CSS, and the _process direction space (PDS)_, which is an extension of CSS. For Korteweg fluids, the CSS is spanned by the specific internal energy, mass density and their gradients, by the velocity and its gradient and by the second spatial derivative of the density: \((e,\nabla e,\rho,\nabla\rho,\nabla^{2}\rho,\mathbf{v},\nabla\mathbf{v})\). For Fourier-Navier-Stokes fluids, TSS is spanned by the specific internal energy and the mass density, \((e,\rho)\). For Korteweg fluids, the TSS is larger, the specific entropy also depends on the density gradient, \((e,\rho,\nabla\rho)\). In the simplest cases, the PDS is spanned by the time and space derivatives of the fields in the constitutive state space that are already not in the constitutive state space. Hence for Korteweg fluids it is spanned by \((\partial_{t}e,\partial_{t}\nabla e,\nabla^{2}e,\partial_{t}\rho,\partial_{t} \nabla\rho,\partial_{t}\nabla^{2}\rho,\nabla^{3}\rho,\partial_{t}\nabla \circ\mathbf{v},\partial_{t}\nabla^{2}\circ\mathbf{v})\).
The starting point of the analysis is the constitutive state space, the domain of constitutive functions. The process direction space and the thermodynamic state space are obtained through the Second Law analysis. In the following sections, we will calculate the thermodynamic requirements with the help of both the local and substantial forms, with the Liu procedure and the more heuristic divergence separation, respectively. The result will be the same.
## 3. Second Law compatible Korteweg fluids: Liu procedure
The inequality of the Second Law is obtained with the entropy balance, (23), considering physical conditions as constraints. In our case, these constraints are the fundamental balances, (16), (17) and (18).
If the functional form of a constitutive function is given, like the Korteweg pressure (12), then the Second Law can be used to test its thermodynamic compatibility. However, the Second Law can be used constructively, and one can obtain the most general form of the constitutive functions that is allowed considering the Second Law and the given constraints. The generality of the conditions determines the generality of the result. If the conditions are only the basic balances, then the constitutive functions are universal, independent of material composition, and depend only on the representative capability of the constitutive state space.
For Korteweg fluids, where the constitutive state space is second order weakly nonlocal in the mass density, the gradient of the mass balance, (25), is a further constraint to the entropy inequality,
\[\partial_{it}\rho+\partial_{ij}(\rho v^{j})=0_{i}, \tag{25}\]
That is the most important speciality of the thermodynamic methodology in the case of weakly nonlocal state spaces [49, 50]. The Liu procedure is a method that represents the Second Law inequality as a linear algebraic problem in the process direction space. The PDS vectors can take any values; in particular, they can be both positive and negative, depending on the related initial and boundary value problems. For example, equation (25) above defines a linear algebraic condition on the \(\partial_{it}\rho\) and \(\partial_{ij}v\) process direction space vectors. Further differentiation of the constraint does not lead to more conditions because the higher derivatives are out
of the process direction space. Whether a constraint's derivative is a constraint depends on the constitutive state space. (25) is a constraint because the second gradient of the mass density is in the CSS.
Then, with the \(\lambda,\Lambda^{i},\Gamma_{i},\gamma\) Lagrange-Farkas multipliers, and with the constraints (16), (25), (17) and (18), the starting point of the Liu procedure is the following form of the entropy inequality
\[0\leq \partial_{t}(\rho s)+\partial_{i}\left(J^{i}+\rho sv^{i}\right)-\lambda\left[\partial_{t}\rho+\partial_{i}(\rho v^{i})\right]-\Lambda^{i}\left[\partial_{it}\rho+\partial_{ij}(\rho v^{j})\right]- \tag{26}\] \[\Gamma_{i}\left[\partial_{t}(\rho v^{i})+\partial_{j}\left(P^{ij}+\rho v^{i}v^{j}\right)\right]-\gamma\left[\partial_{t}(\rho e)+\partial_{i}(q^{i}+\rho ev^{i})\right].\]
The formula gives
\[(s+\partial_{\rho}s)\partial_{t}\rho+\rho\partial_{\partial_{i} \rho}s\partial_{ti}\rho+\rho\partial_{\partial_{ij}\rho}s\partial_{tij}\rho+ \rho\partial_{v^{i}}s\partial_{t}v^{i}+\rho\partial_{\partial_{i}v^{j}}s \partial_{tj}v^{i}+\rho\partial_{e}s\partial_{t}e+\] \[\rho\partial_{\partial_{i}e}s\partial_{ti}e+\] \[+\partial_{\rho}J^{i}\partial_{i}\rho+\partial_{\partial_{j} \rho}J^{i}\partial_{ij}\rho+\partial_{\partial_{jk}\rho}J^{i}\partial_{ijk} \rho+\partial_{v^{j}}J^{i}\partial_{i}v^{j}+\partial_{\partial_{kv}{}^{j}}J^{i }\partial_{jk}v^{i}+\partial_{e}J^{i}\partial_{i}e+\] \[\partial_{\partial_{j}e}J^{i}\partial_{ij}e+\] \[+\partial_{i}(sv^{i})-\lambda\left(\partial_{t}\rho+\rho\partial _{i}v^{i}+v^{i}\partial_{i}\rho\right)-\] \[-\Lambda^{i}\left(\partial_{it}\rho+\rho\partial_{ij}v^{j}+v^{j} \partial_{ij}\rho+\partial_{i}\rho\partial_{j}v^{j}+\partial_{j}\rho\partial_ {i}v^{j}\right)-\] \[-\Gamma_{i}\left(\rho\partial_{t}v^{i}+v^{i}\partial_{t}\rho+\rho v ^{i}\partial_{j}v^{j}+\rho v^{j}\partial_{j}v^{i}+v^{j}v^{i}\partial_{j}\rho+ \partial_{\rho}P^{ij}\partial_{j}\rho+\partial_{\partial_{k}\rho}P^{ij} \partial_{jk}\rho+\right.\] \[+\partial_{\partial_{kl}\rho}P^{ij}\partial_{jkl}\rho+\partial_{ v^{k}}P^{ij}\partial_{j}v^{k}+\partial_{\partial_{l}v^{k}}P^{ij}\partial_{jl}v^{k}+ \partial_{e}P^{ij}\partial_{j}e+\partial_{\partial_{k}e}P^{ij}\partial_{jk}e \right)-\] \[-\gamma\left(\rho\partial_{t}e+e\partial_{t}\rho+e\partial_{i}v^{ i}+v^{i}\partial_{i}e+\rho_{\rho}q^{i}\partial_{i}\rho+\partial_{\partial_{j} \rho}q^{i}\partial_{ji}\rho+\right.\] \[\left.+\partial_{\partial_{jk}\rho}q^{i}\partial_{ijk}\rho+ \partial_{v^{j}}q^{i}\partial_{i}v^{j}+\partial_{\partial_{kv}{}^{j}}q^{i} \partial_{ik}v^{j}+\partial_{e}q^{i}\partial_{i}e+\partial_{\partial_{j}e}q^{ i}\partial_{ji}e\right)\geq 0.\]
Regrouping the terms according to the process direction space components one can get the inequality in the following form
\[(s+\rho\partial_{\rho}s-\lambda-\Gamma_{i}v^{i}-\gamma e)\partial_{ t}\rho+(\rho\partial_{\partial_{i}\rho}s-\Lambda^{i})\partial_{ti}\rho+\rho \partial_{\partial_{ij}\rho}s\partial_{tij}\rho+(\rho\partial_{v^{i}}s-\rho \Gamma_{i})\partial_{t}v^{i}+\] \[+\rho\partial_{\partial_{i}v^{j}}s\partial_{tj}v^{i}+\rho( \partial_{e}s-\gamma)\partial_{t}e+\rho\partial_{\partial_{i}e}s\partial_{ti}e+\] \[+(\partial_{\partial_{jk}\rho}J^{i}-\Gamma_{l}\partial_{\partial_ {jk}\rho}P^{li}-\partial_{\partial_{jk}\rho}q^{i})\partial_{ijk}\rho+\] \[+(\partial_{\partial_{k}v^{j}}J^{i}-\Gamma_{l}\partial_{\partial_ {kv}v^{i}}P^{lj}-\partial_{\partial_{kv}v^{i}}q^{j}\partial_{ik}v^{j})\partial_ {jk}v^{i}\] \[+(\partial_{\partial_{je}e}J^{i}-\Gamma_{l}\partial_{\partial_{ie}e }P^{lj}+\gamma\partial_{\partial_{j}e}q^{i})\partial_{ij}e+\] \[+\partial_{\rho}J^{i}\partial_{i}\rho+\partial_{\partial_{j}\rho}J ^{i}\partial_{ij}\rho+\partial_{v^{j}}J^{i}\partial_{i}v^{j}+\partial_{e}J^{i} \partial_{i}e+\] \[+\partial_{i}(sv^{i})-\lambda\left(\rho\partial_{i}v^{i}+v^{i} \partial_{i}\rho\right)-\] \[-\Lambda^{i}\left(\rho\partial_{ij}v^{j}+v^{j}\partial_{ij}\rho+ \partial_{i}\rho\partial_{j}v^{j}+\partial_{j}\rho\partial_{i}v^{j}\right)-\] \[-\Gamma_{i}\left(\rho v^{i}\partial_{j}v^{j}+\rho v^{j}\partial_{j}v ^{i}+v^{j}v^{i}\partial_{j}\rho+\partial_{p}P^{ij}\partial_{j}\rho+\partial_{ \partial_{k}\rho}P^{ij}\partial_{jk}\rho+\right.\] \[\left.+\partial_{v^{k}}P^{ij}\partial_{j}v^{k}+\partial_{e}P^{ij} \partial_{j}e\right)-\] \[-\gamma\left(e\partial_{i}v^{i}+v^{i}\partial_{i}e+\partial_{\rho} q^{i}\partial_{i}\rho+\partial_{\partial_{j}\rho}q^{i}\partial_{ji}\rho+\partial_{v^{j}}q^{i} \partial_{i}v^{j}+\partial_{e}q^{i}\partial_{i}e\right)\geq 0.\]
The coefficients of the process direction space vectors must be zero, and one obtains the following Liu equations:
\[\partial_{t}\rho\ :\quad s+\rho\partial_{\rho}s-\lambda-\Gamma_{i}v^{i}- \gamma e =0, \tag{27}\] \[\partial_{ti}\rho:\qquad\qquad\qquad\rho\partial_{\partial_{i}\rho }s-\Lambda^{i} =0^{i},\] (28) \[\partial_{tij}\rho:\qquad\qquad\qquad\partial_{\partial_{ij}\rho }s =0^{ij},\] (29) \[\partial_{t}v^{i}\ :\qquad\qquad\qquad\partial_{v^{i}}s-\rho\Gamma_{i} =0_{i},\] (30) \[\partial_{tj}v^{i}:\qquad\qquad\qquad\partial_{\partial_{j}v^{i}} s =0^{j}_{i},\] (31) \[\partial_{t}e\ :\qquad\qquad\qquad\partial_{e}s-\gamma =0,\] (32) \[\partial_{ti}e:\qquad\qquad\qquad\partial_{\partial_{i}e}s =0^{i},\] (33) \[\partial_{ijk}\rho:\qquad\qquad\qquad\partial_{\partial_{(kj} \rho}J^{i)} =\Gamma_{l}\partial_{\partial_{(kj}\rho}Pl^{li)}+\gamma\partial_{ \partial_{(kj}\rho}q^{i)},\] (34) \[\partial_{jk}v^{i}:\qquad\qquad\qquad\partial_{\partial_{(k}v^{i }}J^{j)} =\Gamma_{l}\partial_{\partial_{(k}v^{i}}P^{lj)}+\gamma\partial_{ \partial_{(k}v^{i}}q_{j)}+\frac{\rho}{2}\Lambda^{l}(\delta^{j}_{l}\delta^{k}_{ i}+\delta^{k}_{l}\delta^{j}_{i}),\] (35) \[\partial_{ij}e:\qquad\qquad\qquad\partial_{\partial_{(j}e}J^{i)} =\Gamma_{l}\partial_{\partial_{(i}e}P^{lj)}+\gamma\partial_{ \partial_{(j}e}q^{i)}. \tag{36}\]
Here the parentheses around indices denote the symmetric part of the tensor, e.g. \(A^{(ij)}=(A^{ij}+A^{ji})/2\). Due to (29), (31) and (33), the entropy density does not depend on the partial derivatives \(\partial_{ij}\rho\), \(\partial_{j}v^{i}\) and \(\partial_{i}e\). Then (27), (28), (30) and (32) connect the Lagrange-Farkas multipliers and the entropy derivatives as
\[\rho\partial_{\rho}s=\lambda+\Gamma_{i}v^{i}+\gamma e-s,\qquad\partial_{v^{i} }s=\Gamma_{i},\quad\partial_{e}s=\gamma,\quad\rho\partial_{\partial_{i}\rho} s=\Lambda^{i}. \tag{37}\]
Let us observe that the symmetry of the coefficients influences the Liu equations and should be explicitly considered in (35). The solution of the (34)-(36) system of equations leads to the following entropy flux:
\[J^{i}=\ \partial_{e}sq^{i}+\frac{\rho^{2}}{2}\left(\partial_{\partial_{i}\rho}s \partial_{j}v^{j}+\partial_{\partial_{j}\rho}s\partial_{j}v^{i}\right)+ \partial_{v^{j}}sP^{ji}+\mathfrak{J}^{i}(\rho,\partial_{i}\rho,v^{i},e). \tag{38}\]
Here the residual entropy flux, \(\mathfrak{J}^{i}\), is not restricted; it can be an arbitrary function. Then the Liu equations are completely solved, and the functional form of the entropy density and the entropy flux is restricted. Finally, we obtain the entropy production, the dissipation inequality as follows:
\[0\leq\sigma_{s}=q^{i}\partial_{i}(\partial_{e}(\rho s))+P^{ij} \partial_{i}\left(\partial_{v^{j}}s\right)+\\ +\partial_{j}v^{j}\left[s+e\partial_{e}s-\rho\partial_{\rho}s+ \frac{\rho^{2}}{2}\partial_{i}\left(\partial_{\partial_{i}\rho}s\right) \right]+\partial_{j}v^{i}\left[\frac{\rho^{2}}{2}\partial_{i}\left(\partial_{ \partial_{j}\rho}s\right)\right].\]
Let us assume that the residual, local part of the entropy flux, \(\mathfrak{J}^{i}\), is zero and define the internal energy as the difference of total and kinetic energies, \(\mathfrak{u}:=e-v^{2}/2\). Both conditions are usual and also natural. The second one is justified by the expected covariance, too.
The Lagrange-Farkas multipliers can be identified with thermodynamic intensive quantities through the partial derivatives of the entropy:
\[\frac{1}{T}:=\partial_{\mathfrak{u}}s=\partial_{e}s,\quad\frac{p}{T}:=\partial _{\mathfrak{v}}s=-\rho^{2}\partial_{\rho}s,\quad\frac{A^{i}}{T}:=\partial_{ \partial_{i}\rho}s, \tag{39}\]
where \(\mathfrak{v}=1/\rho\) is the specific volume, the thermostatic temperature and pressure are \(T\), and \(p\), and \(A^{i}\) is a convenient notation to recover a traditional form of the
Gibbs relation:
\[d\mathsf{u}=Tds+\frac{p}{\rho^{2}}d\rho-A^{i}d\partial_{i}\rho. \tag{40}\]
Let us define the _homogeneous chemical potential_, \(\mu_{h}\), for the weakly nonlocal continuum as usual, by a homogeneous Gibbs-Duhem relation, \(\mu_{h}:=\mathsf{u}-Ts+\frac{p}{\rho}\).
Then the Lagrange-Farkas multipliers are
\[\gamma=\frac{1}{T}=\frac{\partial s}{\partial\mathsf{e}}=\frac{\partial s}{\partial\mathsf{u}},\quad\Gamma_{i}=-\frac{v_{i}}{T}=\frac{\partial s}{\partial\mathsf{u}}\frac{\partial\mathsf{u}}{\partial v^{i}},\quad\lambda=-\frac{\mu_{h}-v^{2}/2}{T},\quad\Lambda^{i}=\rho\frac{A^{i}}{T}. \tag{41}\]
The second equality shows that the momentum balance is not a constraint in a comoving frame. The third formula is related to the Galilean transformation of the chemical potential from a comoving to an external reference frame, whose relative velocity to the material is \(v^{i}\). Both are the consequences of the combination of Liu procedure and the internal energy; see [38]. We can see that the Lagrange-Farkas multipliers of the local laboratory frame balances are the entropic intensive quantities in the laboratory frame, as expected.
Summarising the Second Law restrictions for the specific entropy and the entropy flux, one obtains:
\[s =s\left(\mathsf{u},\rho,\partial_{i}\rho\right), \tag{42}\] \[J^{i} =\frac{1}{T}\left(q^{i}+\frac{\rho^{2}}{2}\left(A^{i}\partial_{j }v^{j}+A^{j}\partial_{j}v^{i}\right)\right). \tag{43}\]
Here the conductive current density of the internal energy emerged as the difference between the total energy flux and the current density of the kinetic energy, \(q^{i}=q^{i}_{T}-v_{j}P^{ji}\).
Finally, the dissipation inequality, that is, the entropy production rate, is written as
\[0\leq\sigma_{s}=q^{i}\partial_{i}\frac{1}{T}-\left(P^{ij}-\left( p+\frac{T\rho^{2}}{2}\partial_{k}\frac{A^{k}}{T}\right)\delta^{i}_{j}-\frac{T \rho^{2}}{2}\partial_{j}\frac{A^{i}}{T}\right)\frac{\partial_{i}v^{j}}{T}, \tag{44}\]
where the first part of the quadratic expression is the thermal part of the entropy production, and the second is the mechanical part. The thermal part is the usual one, the heat flux multiplied by the gradient of the reciprocal temperature. The second term is the product of the velocity gradient and the viscous pressure, that is, the difference of the pressure tensor, \(P^{ij}\), and the thermostatic pressure. If the entropy is independent of the density gradient, then the latter simplifies to the usual scalar pressure, \(p\).
However, the thermal part of the entropy production is somehow incomplete. One may expect a clean separation of mechanical and thermal effects only if the thermal part of the dissipation disappears when the entropy flux is zero. Therefore we require that the entropy flux be parallel to the coefficient of the temperature gradient and rewrite the entropy production in the following form:
\[0\leq\sigma_{s}=\left(q^{i}+\frac{\rho^{2}}{2}\left(A^{i} \partial_{j}v^{j}+A^{j}\partial_{j}v^{i}\right)\right)\partial_{i}\frac{1}{T}-\] \[\left(P^{i}_{j}-\left(p+\frac{\rho^{2}}{2}\partial_{k}A^{k} \right)\delta^{i}_{j}-\frac{\rho^{2}}{2}\partial_{j}A^{i}\right)\frac{\partial _{i}v^{j}}{T}. \tag{45}\]
The final form of the entropy balance is
\[\partial_{t}(\rho s)+\partial_{i}\left(\frac{q^{i}-q^{i}_{ThK}}{T}+\rho sv ^{i}\right)=(q^{i}-q^{i}_{ThK})\partial_{i}\frac{1}{T}-\left(P^{i}_{j}-(P_{ThK} )^{i}_{j}\right)\frac{\partial_{i}v^{j}}{T}\geq 0, \tag{46}\]
where
\[q^{i}_{ThK}=-\frac{\rho^{2}}{2}\left(A^{i}\partial_{j}v^{j}+A^{j}\partial_{j} v^{i}\right),\quad\text{and}\quad(P_{ThK})^{i}_{j}=\left(p+\frac{\rho^{2}}{2} \partial_{k}A^{k}\right)\delta^{i}_{j}+\frac{\rho^{2}}{2}\partial_{j}A^{i} \tag{47}\]
are the Korteweg heat flux and the Korteweg pressure tensor, respectively. If the heat flux, \(q^{i}\), and the pressure tensor, \(P^{i}_{j}\), are equal to the Korteweg ones, then the entropy production rate density is zero, and there is no dissipation. Then the Korteweg fluid is perfect. The Korteweg heat flux is the interstitial working in the terminology of Dunn and Serrin; however, our expression here is different from theirs. We will discuss the reason for the difference in the next section. The thermodynamic fluxes and forces are identified accordingly and given in Table 1.
It is remarkable that the Korteweg pressure can then be written in terms of derivatives of the internal energy, because according to the Gibbs relation (40):
\[A^{i}=\left.\frac{\partial\mathsf{u}}{\partial(\partial_{i}\rho)}\right|_{s, \rho},\qquad p=\rho^{2}\left.\frac{\partial\mathsf{u}}{\partial\rho}\right|_{s,\partial_{i}\rho}. \tag{48}\]
Therefore the mechanical part of the dissipation is connected to isentropic processes as expected.
The linear solution of the inequality assumes that the thermodynamic fluxes are proportional to the thermodynamic forces; that is, the thermal flux and the viscous pressure are proportional to the gradient of the reciprocal temperature and to the velocity gradient, respectively. For isotropic materials this leads to four material parameters: the heat conduction coefficient, \(\lambda\), and the bulk, shear and rotational viscosities, \(\eta_{v}\), \(\eta\) and \(\eta_{r}\), respectively:
\[\kappa^{i}=\lambda\partial_{i}\frac{1}{T},\qquad\Pi^{ij}=-\eta_{v}\partial_{k }v^{k}\delta^{ij}-\eta(\partial^{i}v^{j}+\partial^{j}v^{i}-2\partial_{k}v^{k} \delta^{ij}/3)-\eta_{r}(\partial^{i}v^{j}-\partial^{j}v^{i}). \tag{49}\]
The linearization is analogous to the Fourier-Navier-Stokes system.
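A small numerical sketch of the closure (49): decomposing a velocity gradient into spherical, deviatoric and antisymmetric parts, building the viscous pressure, and checking that the corresponding mechanical entropy production is non-negative. The numerical values of the gradient, the viscosities and the temperature are illustrative assumptions.

```python
import numpy as np

eta_v, eta, eta_r, T = 0.5, 1.0, 0.2, 300.0       # illustrative material data
grad_v = np.array([[0.10, 0.30, 0.00],
                   [0.00, -0.20, 0.10],
                   [0.40, 0.00, 0.05]])           # components of partial_i v^j

I = np.eye(3)
div_v = np.trace(grad_v)
dev   = 0.5*(grad_v + grad_v.T) - div_v/3*I       # traceless symmetric part
asym  = 0.5*(grad_v - grad_v.T)                   # antisymmetric part

# viscous pressure tensor Pi according to (49)
Pi = -eta_v*div_v*I - 2*eta*dev - 2*eta_r*asym

sigma_mech = -np.tensordot(Pi, grad_v)/T          # mechanical entropy production
assert sigma_mech >= 0.0                          # required by the Second Law
```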
## 4. Thermodynamic constraints of Korteweg fluids: divergence separation
In this section, we derive the previous results with the simpler but more heuristic methodology of classical irreversible thermodynamics, with the help of _divergence separation_. The method was introduced by Eckart and later used together with the hypothesis of local equilibrium [51, 52]; it is essentially a heuristic identification of the entropy flux, breaking up the bulk and surface terms of the entropy
\begin{table}
\begin{tabular}{c||c|c} & Thermal & Mechanical \\ \hline Force & \(\partial_{i}\frac{1}{T}\) & \(\frac{\partial_{i}v^{j}}{T}\) \\ \hline Flux & \(\kappa^{i}=q^{i}+\frac{\rho^{2}}{2}\left(A^{i}\partial_{j}v^{j}+A^{j}\partial_{ j}v^{i}\right)\) & \(\Pi^{i}_{j}=P^{i}_{j}-\left(p+\frac{\rho^{2}}{2}\partial_{k}A^{k}\right)\delta^{ i}_{j}-\frac{\rho^{2}}{2}\partial_{j}A^{i}\) \\ \end{tabular}
\end{table}
Table 1. Adiabatic system of thermodynamic forces and fluxes.
balance. In this section, the abstract index notation of the previous section will be changed back to the usual invariant notation of fluid mechanics and continuum mechanics with bold letters for vectors and tensors and nabla for spatial derivation.
The first step is to determine the thermodynamic state space, the variables of the entropy density, or, like the previous section, the variables of the specific entropy function. The hypothesis of local equilibrium is best expressed by the Gibbs relation of specific quantities. In our case, with gradients in the thermodynamic state space, it is better called the _hypothesis of weakly nonlocal equilibrium_, and we have already introduced the related Gibbs relation in (40), with a convenient representation of the partial derivatives. With the nabla notation, it is written as
\[\mathrm{d}\mathsf{u}=T\mathrm{d}s+\frac{p}{\rho^{2}}\mathrm{d}\rho-\boldsymbol {A}\cdot\mathrm{d}\nabla\rho. \tag{50}\]
A straightforward consequence is the Gibbs relation for the densities of internal energy and entropy:
\[\mathrm{d}(\rho\mathsf{u})=T\mathrm{d}(\rho s)+\mu_{h}\mathrm{d}\rho-\rho \boldsymbol{A}\cdot\mathrm{d}\nabla\rho, \tag{51}\]
where \(\mu_{h}=\mathsf{u}+p/\rho-Ts\) is the homogeneous chemical potential. A complete homogeneous relation requires the density scaling of the gradient term as well
\[\mathrm{d}(\rho\mathsf{u})=T\mathrm{d}(\rho s)+\mu\mathrm{d}\rho-\boldsymbol {A}\cdot\mathrm{d}(\rho\nabla\rho). \tag{52}\]
Here \(\mu=\mu_{h}+\boldsymbol{A}\cdot\nabla\rho\), according to the complete Euler homogeneity of the thermodynamic potential. Apparently, the integrated extensive quantities of a corresponding homogeneous thermodynamic body, with a homogeneous gradient, lead to shape dependence: thermodynamic relations of integrated extensive quantities of thermodynamic bodies with finite volume are not well defined. We do not pursue that direction but remark that thermodynamic relations are best introduced locally, [53]. From (51) it follows that
\[\mu_{h}=\left.\frac{\partial(\rho\mathsf{u})}{\partial\rho}\right|_{\rho s, \nabla\rho},\qquad\rho\boldsymbol{A}=-\left.\frac{\partial(\rho\mathsf{u})}{ \partial\nabla\rho}\right|_{\rho s,\rho}. \tag{53}\]
With the method of divergence separation, the entropy balance is determined by calculating the time derivative of the entropy and then introducing the constraints directly. It is most convenient with the help of substantial forms because then the momentum balance is not necessary, and only the balance of mass and internal energy (19) and (22) are used. Then the material time derivative of the specific
entropy follows as
\[\rho\dot{s} =\rho\frac{\dot{\textbf{u}}}{T}-\frac{p}{T\rho}\dot{\rho}+\rho\frac{ \boldsymbol{A}}{T}\cdot\frac{d}{dt}\nabla\rho=\] \[=-\frac{\nabla\cdot\boldsymbol{q}+\boldsymbol{P}:\nabla\textbf{v} }{T}+\frac{p}{T}\nabla\cdot\boldsymbol{v}-\rho\frac{\boldsymbol{A}}{T}\cdot \nabla(\rho\nabla\cdot\boldsymbol{v})-\rho\nabla\rho\cdot\nabla\boldsymbol{v }\cdot\frac{\boldsymbol{A}}{T}=\] \[=-\nabla\cdot\frac{\boldsymbol{q}}{T}+\boldsymbol{q}\cdot\nabla \frac{1}{T}-\frac{\nabla\boldsymbol{v}}{T}:(\boldsymbol{P}-p\boldsymbol{I})- \frac{\boldsymbol{A}}{T}\cdot\nabla\frac{\rho^{2}}{2}\nabla\cdot\boldsymbol{v }-\nabla\frac{\rho^{2}}{2}\cdot\nabla\boldsymbol{v}\cdot\frac{\boldsymbol{A} }{T}-\] \[\qquad\qquad\qquad-\boxed{\rho^{2}\frac{\boldsymbol{A}}{T}\cdot \frac{\nabla\nabla\cdot\boldsymbol{v}+\nabla\cdot(\nabla\boldsymbol{v})}{2}}= \tag{54}\] \[=-\nabla\cdot\frac{\boldsymbol{q}}{T}+\boldsymbol{q}\cdot\nabla \frac{1}{T}-\frac{\nabla\boldsymbol{v}}{T}:\left(\boldsymbol{P}-p\boldsymbol{I }+\boldsymbol{A}\nabla\frac{\rho^{2}}{2}\boldsymbol{I}+\nabla\frac{\rho^{2}}{2 }\boldsymbol{A}\right)-\] \[\qquad-\nabla\cdot\left(\frac{\rho^{2}}{2T}\left[\nabla \boldsymbol{v}\cdot\boldsymbol{A}+\boldsymbol{A}\nabla\cdot\boldsymbol{v} \right]\right)+\left(\frac{\rho^{2}}{2T}\left[\nabla\boldsymbol{v}\cdot \boldsymbol{A}+\boldsymbol{A}\nabla\cdot\boldsymbol{v}\right]\right)\cdot \nabla\frac{1}{T}+\] \[\qquad+\frac{\nabla\boldsymbol{v}}{T}:\left(\nabla\cdot\left[ \frac{\rho^{2}}{2}\boldsymbol{A}\right]\boldsymbol{I}+\nabla\left[\frac{\rho^{2 }}{2}\boldsymbol{A}\right]\right)=\] \[=-\nabla\cdot\left(\frac{\boldsymbol{q}}{T}+\frac{\rho^{2}}{2T} \left[\nabla\boldsymbol{v}\cdot\boldsymbol{A}+\boldsymbol{A}\nabla\cdot \boldsymbol{v}\right]\right)+\] \[\qquad+\left(\boldsymbol{q}+\frac{\rho^{2}}{2}\left[\nabla \boldsymbol{v}\cdot\boldsymbol{A}+\boldsymbol{A}\nabla\cdot\boldsymbol{v} \right]\right)\cdot\nabla\frac{1}{T}-\] \[\qquad-\frac{\nabla\boldsymbol{v}}{T}:\left[\boldsymbol{P}- \left(p+\frac{\rho^{2}}{2}\nabla\cdot\boldsymbol{A}\right)\boldsymbol{I}-\frac {\rho^{2}}{2}\nabla\boldsymbol{A}\right]\geq 0. \tag{55}\]
Here \(\boldsymbol{I}\) denotes the second-order identity tensor, and also the following formula was used:
\[\frac{d}{dt}\nabla\rho=-\nabla\rho\cdot\nabla\boldsymbol{v}-\nabla\big{(} \rho\nabla\cdot\boldsymbol{v}\big{)}. \tag{56}\]
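The identity (56) is purely kinematic. The following sympy sketch verifies it in one space dimension using only the mass balance; the restriction to one dimension is for illustration only.

```python
import sympy as sp

x, t = sp.symbols('x t')
rho = sp.Function('rho')(x, t)
v   = sp.Function('v')(x, t)

drho_t = -sp.diff(rho*v, x)                        # mass balance (16): partial_t rho
lhs = sp.diff(drho_t, x) + v*sp.diff(rho, x, 2)    # d/dt of grad(rho), with (16) inserted
rhs = -sp.diff(rho, x)*sp.diff(v, x) - sp.diff(rho*sp.diff(v, x), x)   # eq. (56)
print(sp.simplify(lhs - rhs))                      # -> 0
```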
One can identify the entropy flux and the entropy production rate density as well, and then the entropy balance is
\[\rho\dot{s}+\nabla\cdot\left(\frac{\boldsymbol{q}}{T}+\frac{\rho^ {2}}{2}\left[\nabla\boldsymbol{v}\cdot\frac{\boldsymbol{A}}{T}+\frac{ \boldsymbol{A}}{T}\nabla\cdot\boldsymbol{v}\right]\right)\] \[= \nabla\frac{1}{T}\cdot\left(\boldsymbol{q}+\frac{\rho^{2}}{2} \left[\nabla\boldsymbol{v}\cdot\boldsymbol{A}+\boldsymbol{A}\nabla\cdot \boldsymbol{v}\right]\right)-\] \[\frac{\nabla\boldsymbol{v}}{T}:\left[\boldsymbol{P}-\left(p+ \frac{\rho^{2}}{2}\nabla\cdot\boldsymbol{A}\right)\boldsymbol{I}-\frac{\rho^{ 2}}{2}\nabla\boldsymbol{A}\right]\geq 0. \tag{57}\]
That is identical to the form obtained in (46) with the help of the Liu procedure in the previous section.
\begin{table}
\begin{tabular}{l|c|c} & Thermal & Mechanical \\ \hline Forces & \(\nabla\left(\frac{1}{T}\right)\) & \(-\frac{\nabla\boldsymbol{v}}{T}\) \\ \hline Fluxes & \(\boldsymbol{q}+\frac{\rho^{2}}{2}\left[\nabla\boldsymbol{v}\cdot\boldsymbol{A}+\boldsymbol{A}\nabla\cdot\boldsymbol{v}\right]\) & \(\boldsymbol{P}-\left(p+\frac{\rho^{2}}{2}\nabla\cdot\boldsymbol{A}\right)\boldsymbol{I}-\frac{\rho^{2}}{2}\nabla\boldsymbol{A}\) \\ \end{tabular}
\end{table}
Table 2. Thermodynamic fluxes and forces of Korteweg fluids: thermal version.
The linear isotropic solution of the inequality gives the Korteweg version of the Fourier-Navier-Stokes system of equations, with reversible contributions of Korteweg pressure and Korteweg heat flux, (47). The complete form of the constitutive functions of viscous, heat-conducting Korteweg fluids is the following
\[\boldsymbol{q} =-\frac{\rho^{2}}{2}\left[\nabla\boldsymbol{v}\cdot\boldsymbol{A}+ \boldsymbol{A}\nabla\cdot\boldsymbol{v}\right]+\lambda\nabla\frac{1}{T}, \tag{58}\] \[tr\boldsymbol{P}/3 =p+\frac{2\rho^{2}}{3}\nabla\cdot\boldsymbol{A}-\eta_{v}\nabla \cdot\boldsymbol{v},\] (59) \[\boldsymbol{P^{0s}} =\frac{\boldsymbol{P}+\boldsymbol{P}^{T}}{2}-\frac{tr \boldsymbol{P}}{3}\boldsymbol{I}=\frac{\rho^{2}}{2}(\nabla\boldsymbol{A})^{0s} -2\eta(\nabla\boldsymbol{v})^{0s}=\] \[=\frac{\rho^{2}}{4}\left(\nabla\boldsymbol{A}+(\nabla\boldsymbol{ A})^{T}-\frac{2\nabla\cdot\boldsymbol{A}}{3}\boldsymbol{I}\right)-\eta\left( \nabla\boldsymbol{v}+(\nabla\boldsymbol{v})^{T}-\frac{2\nabla\cdot \boldsymbol{v}}{3}\boldsymbol{I}\right),\] (60) \[\boldsymbol{P^{a}} =\frac{\boldsymbol{P}-\boldsymbol{P}^{T}}{2}=\frac{\rho^{2}}{2}( \nabla\boldsymbol{A})^{A}-2\eta_{r}(\nabla\boldsymbol{v})^{A}=\frac{\rho^{2}} {4}\left(\nabla\boldsymbol{A}-(\nabla\boldsymbol{A})^{T}\right)-\eta_{r}\left( \nabla\boldsymbol{v}-(\nabla\boldsymbol{v})^{T}\right) \tag{61}\]
Here the upper indices, \(\boldsymbol{0s}\) and \(\boldsymbol{a}\) denote the traceless symmetric and antisymmetric parts of the corresponding second-order tensors. The heat conduction coefficient, \(\lambda\), is related to the Fourier heat conduction coefficient, \(\lambda_{F}=\frac{\lambda}{T^{2}}\). \(\eta_{v},\eta,\eta_{r}\) are the bulk, shear and rotational viscosities. The second law requires that the viscosities and the heat conduction coefficient be non-negative.
There are no cross effects; the scalar, vector and deviatoric, and antisymmetric tensorial components are independent, according to the representation theorems of isotropic tensors. Remarkably, zero dissipation does not require zero heat flux and shear stress in Korteweg fluids. One expects that the fluid pressure relaxes to the perfect Korteweg pressure, like normal viscous fluids in equilibrium. The last equation is also remarkable: in the case of Navier-Stokes fluids, the rotational viscosity ensures that the antisymmetric part of the pressure relaxes to zero [52]. In the case of Korteweg fluids, it is not necessary.
The Liu procedure of the previous section and the divergence separation above lead to the same results. The two methods are compatible.
### Holographic perfect fluids
A Korteweg fluid is called perfect if its entropy production is zero because of its material properties. That happens if the thermal flux, \(\kappa^{i}\), and the viscous pressure, \(\Pi^{ij}\), are zero. Therefore the thermodynamically compatible current density of the internal energy and the thermodynamically compatible pressure tensor of perfect Korteweg fluids are
\[\boldsymbol{q}=\boldsymbol{q}_{ThK}=-\frac{\rho^{2}}{2}\left( \boldsymbol{A}\nabla\cdot\boldsymbol{v}+(\nabla\boldsymbol{v})\cdot\boldsymbol {A}\right), \tag{62}\] \[\boldsymbol{P}=\boldsymbol{P}_{ThK}=\left(p+\frac{\rho^{2}}{2} \nabla\cdot\boldsymbol{A}\right)\boldsymbol{I}+\frac{\rho^{2}}{2}\nabla \boldsymbol{A}. \tag{63}\]
If the entropy is independent of the density gradients, then the pressure tensor reduces to the Pascal pressure of Euler fluids, and the heat flux is zero:
\[\boldsymbol{P}_{\text{Euler}}=-T\rho^{2}\partial_{\rho}s(\mathsf{u},\rho) \boldsymbol{I}=\rho^{2}\partial_{\rho}\mathsf{u}(s,\rho)\boldsymbol{I}=p \boldsymbol{I},\qquad\boldsymbol{q}=\boldsymbol{0}. \tag{64}\]
Remarkably, the thermostatic Pascal pressure is a different function for isentropic, isoenergetic and isothermal processes. In that last case, the partial derivative of the
Helmholtz free energy is to be considered: \(p(T,\rho)=\rho^{2}\partial_{\rho}f(T,\rho)\), where \(f=\mathfrak{u}-Ts\) is the specific free energy.
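As an illustration of this remark, the following sympy sketch evaluates both definitions for an assumed ideal-gas-like internal energy function (the concrete form of \(\mathsf{u}(s,\rho)\) is an assumption, not taken from this paper): the isentropic and the isothermal definitions both reproduce the state function \(p=R_{s}\rho T\), but as partial derivatives of different potentials in different variables.

```python
import sympy as sp

s, rho, T = sp.symbols('s rho T', positive=True)
c_v, R_s = sp.symbols('c_v R_s', positive=True)      # specific heat and gas constant

u = c_v*sp.exp(s/c_v)*rho**(R_s/c_v)                 # assumed u(s, rho), ideal-gas-like
T_of_s = sp.diff(u, s)                               # T = du/ds at fixed rho
p_isentropic = sp.simplify(rho**2*sp.diff(u, rho))   # p = rho^2 du/drho at fixed s

s_of_T = sp.solve(sp.Eq(T, T_of_s), s)[0]            # invert T(s, rho)
f = sp.simplify(u.subs(s, s_of_T) - T*s_of_T)        # Helmholtz free energy f(T, rho)
p_isothermal = sp.simplify(rho**2*sp.diff(f, rho))   # p = rho^2 df/drho at fixed T

print(sp.simplify(p_isentropic - R_s*rho*T_of_s))    # -> 0, i.e. p = R_s*rho*T(s, rho)
print(sp.simplify(p_isothermal - R_s*rho*T))         # -> 0, i.e. p = R_s*rho*T
```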
Perfect Korteweg fluids are classically holographic, in the sense of (3), if some thermodynamic conditions are fulfilled. One can see that when calculating the divergence of the Korteweg pressure:
\[\nabla\cdot\boldsymbol{P}_{ThK}=\rho\nabla\left(\mu_{h}+\nabla\cdot\left( \rho\boldsymbol{A}\right)\right)+\rho s\nabla T=\rho\nabla(h+\nabla\cdot \left(\rho\boldsymbol{A}\right))-\rho T\nabla s. \tag{65}\]
Here both the nonlocal enthalpy, \(\Phi_{h}=h+\nabla\cdot\left(\rho\boldsymbol{A}\right)\), and the nonlocal chemical potential, \(\Phi_{\mu}=\mu_{h}+\nabla\cdot\left(\rho\boldsymbol{A}\right)\), are mechanical potentials, for homogeneous entropy or temperature fields, respectively. In both cases, the potential is a functional derivative. For the nonlocal enthalpy, the "Lagrangian" is the internal energy density, \(\rho\mathfrak{u}(s,\rho,\nabla\rho)\):
\[\Phi_{h}=h+\nabla\cdot\left(\rho\boldsymbol{A}\right)=\left.\frac{\partial(\rho\mathfrak{u})}{\partial\rho}\right|_{s,\nabla\rho}-\nabla\cdot\left.\frac{\partial(\rho\mathfrak{u})}{\partial\nabla\rho}\right|_{s,\rho}=\left.\frac{\delta(\rho\mathfrak{u})}{\delta\rho}\right|_{s}, \tag{66}\]
where the specific enthalpy, \(h=\mathfrak{u}+p/\rho\). It follows from (51) that for the nonlocal chemical potential, the "Lagrangian" is the free energy density, \(\rho f(T,\rho,\nabla\rho)=\rho(\mathfrak{u}-Ts)\):
\[\Phi_{\mu}=\mu_{h}+\nabla\cdot\left(\rho\boldsymbol{A}\right)=\left.\frac{ \partial(\rho f)}{\partial\rho}\right|_{T,\nabla\rho}-\nabla\cdot\left.\frac{ \partial(\rho f)}{\partial\nabla\rho}\right|_{T,\rho}=\left.\frac{\delta(\rho f )}{\delta\rho}\right|_{T}. \tag{67}\]
In the case of local equilibrium, if the thermodynamic potentials are independent of the density gradient, (65) reduces to (2) of Euler fluids. The Euler-Lagrange form, the functional derivative, emerged independently of any variational principle. That is in sharp contrast with phase field approaches to Korteweg fluids, where functional derivatives are combined with fluid mechanics [54], and the above formula is a basic assumption.
A straightforward consequence of classical holography is that the momentum balance of a Korteweg fluid is like an equation of motion of a point mass in a force field given by a scalar potential:
\[\rho\dot{\boldsymbol{v}}+\nabla\cdot\boldsymbol{P}_{ThK}=\boldsymbol{0}\quad \Longleftrightarrow\quad\dot{\boldsymbol{v}}=-\nabla\Phi. \tag{68}\]
However, the similarity can be deceptive because \(\Phi\) is not a fixed function of spacetime; it is not an external potential; it depends on the density distribution and its derivatives and is best seen as a characteristic, particular self-force of the fluid. It can be interpreted as a local force on a test mass, but the density and energy fields of the continuum determine the force field itself. The continuity equation is not the only coupling, because the energy balance is coupled to the mechanical system of equations even in the case of seemingly purely mechanical, homoentropic processes of perfect fluids. This is because the conductive part of the energy current density is not zero but is given by (62).
### On the uniqueness of the entropy production and the interstitial working
It was already mentioned that the interstitial working of Dunn and Serrin (and of Sobrino, who did not introduce a name for the same concept), (13), is similar to the perfect Korteweg heat flux, (62), but is not the same. Their calculations are based on the Helmholtz free energy as thermodynamic potential, and the entropy production is calculated implicitly. The difference in the formulas is due to the different treatment of the constraints. If one does not consider the symmetry of the second derivative of the velocity field, as was done in (35), or if, instead of the symmetric, boxed formula of (54), only \(\nabla\nabla\cdot\boldsymbol{v}\) is used in the calculations, one obtains the Korteweg pressure (13) and also the corresponding interstitial working. The mistake is best seen in formula (13) of [17]; it is less apparent in Dunn and Serrin's elasticity-motivated finite deformation calculation. For homothermal or homoentropic processes of perfect Korteweg fluids, the difference in the pressure term is a total divergence; therefore, one recovers the same holographic properties.
Another aspect of the interstitial working is the difference between the entropy production rate densities (44) and (45). In the first formula, the heat flux, the multiplier of the gradient of the reciprocal temperature in the thermal term, is the current density of the internal energy; the interstitial working is missing. In the second formula, the heat flux is parallel to the entropy flux. In the first case, the perfect fluid pressure is temperature and temperature-gradient dependent; thermal and mechanical processes cannot be separated. The second choice is also reasonable from a physical point of view: one expects that the heat flux is parallel to the entropy current density, and therefore the temperature gradient drives the thermal interaction. Remarkably, the same decision must be made whenever the thermal interaction is not zero. The simplest case is thermodiffusion: there the heat flux parallel to the entropy flux is called the true heat flux in chemical engineering [55, 56]. We will analyse the consequences of this apparent ambiguity in a forthcoming publication, [57].
## 5. Field or particle: superfluids and quantum mechanics
It was already mentioned in the introduction that quantum mechanics and quantum field theories, both nonrelativistic and relativistic, can be reformulated in a fluid form. The relation to Korteweg fluids is also known: the Bohm potential defines a particular chemical potential; therefore, the quantum fluid is a particular Korteweg fluid that has a complex field formulation, too. The key is the Madelung transformation; starting from the Schrodinger, Klein-Gordon or Dirac equations, one can obtain a fluid form. Both fluid mechanics and quantum mechanics are field theories in the sense that both the wave function and the density-velocity fields are defined in spacetime. Nevertheless, the Schrodinger equation is a theory of a point mass, but fluid mechanics is not.
The holographic property of isentropic perfect fluids is only a partial explanation. There are two additional characteristics of quantum mechanics that should be represented in the fluid formulation:
* Quantum mechanics is a probabilistic theory.
* The field equations of independent particles are independent; they are additively separated.
The probability density of a two-particle system is represented by \(\rho(\boldsymbol{x}_{1},\boldsymbol{x}_{2},t)\), and for independent particles, the probability density is the product of the individual probability densities, \(\rho(\boldsymbol{x}_{1},\boldsymbol{x}_{2},t)=\rho_{1}(\boldsymbol{x}_{1},t)\rho_{2}(\boldsymbol{x}_{2},t)\). For normal fluids, the density of a mixture is the sum of the component densities; hence a multicomponent quantum fluid is not like an ordinary one. However, the energy of the quantum field is additively separated for individual particles. That is usually a requirement on the Hamiltonian of the multiparticle system. For a fluid theory, it must be a requirement on the density dependence of the mass-specific internal energy:
\[\mathsf{u}(\rho_{1}(\boldsymbol{x}_{1},t)\rho_{2}(\boldsymbol{x}_{2},t))= \mathsf{u}(\rho_{1}(\boldsymbol{x}_{1},t))+\mathsf{u}(\rho_{2}(\boldsymbol{ x}_{2},t)) \tag{69}\]
If \(\mathsf{u}\) is continuously differentiable, then there is a unique solution of this functional equation, namely \(\mathsf{u}(\rho)=k\ln(\rho)\), with a constant \(k\) parameter, [58, 59]. This
is fundamental to the relationship between statistical mechanics and equilibrium thermodynamics. In statistical mechanics, the additivity of thermodynamic entropy is the requirement; in quantum mechanics, it is the additivity of evolution equations. We will see that the additivity of the isentropic weakly nonlocal specific internal energy is our best choice to formulate a similar requirement. We require that the Korteweg fluid be isotropic, so that the internal energy depends only on the magnitude of the density gradient vector, that is, on \((\nabla\rho)^{2}\). Because the gradient of a multiparticle density is \(\nabla\rho(\boldsymbol{x}_{1},\boldsymbol{x}_{2},t)=(\nabla_{\boldsymbol{x}_{1}}\rho,\nabla_{\boldsymbol{x}_{2}}\rho)(\boldsymbol{x}_{1},\boldsymbol{x}_{2},t)\), the additivity requirement can be formulated as
\[\mathfrak{u}(\rho,(\nabla\rho)^{2})=\mathfrak{u}(\rho_{1}\rho_{2},(\rho_{2}\nabla_{1}\rho_{1})^{2}+(\rho_{1}\nabla_{2}\rho_{2})^{2})=\mathfrak{u}(\rho_{1},(\nabla_{1}\rho_{1})^{2})+\mathfrak{u}(\rho_{2},(\nabla_{2}\rho_{2})^{2}) \tag{70}\]
It is easy to see that a solution of this functional equation is
\[\mathfrak{u}(\rho,(\nabla\rho)^{2})=k\ln\rho+\frac{K}{2}\frac{(\nabla\rho)^{2 }}{\rho^{2}}, \tag{71}\]
where \(k\) and \(K\) are constants. It is more remarkable that the above solution is unique among continuously differentiable functions, [20]. If one expects the above formula to be the specific energy, the energy per unit mass, then it cannot change if \(\rho\) is rescaled; therefore, the logarithm cannot represent the energy density of a particle with mass \(m\), and in this sense it is not distinguished. Therefore, let us introduce a slightly more general form as the internal energy of the Korteweg fluid, with a separation of the additive, gradient-dependent term, \(\mathfrak{u}_{F}\), and another one representing the usual local internal energy, \(\mathfrak{u}_{T}\):
\[\mathfrak{u}_{qf}(s,\rho,\nabla\rho)=\mathfrak{u}_{F}+\mathfrak{u}_{T}=\frac{ K}{2}\frac{(\nabla\rho)^{2}}{\rho^{2}}+\mathfrak{u}_{T}(s,\rho), \tag{72}\]
We will see that \(\mathfrak{u}_{qf}\) is the thermodynamical potential of quantum fluids. According to the Gibbs relation (50), one obtains the partial derivatives of the internal energy as
\[T=\frac{\partial\mathfrak{u}_{T}}{\partial s},\quad p=\rho^{2}\frac{\partial \mathfrak{u}_{T}}{\partial\rho}-K\frac{(\nabla\rho)^{2}}{\rho},\quad\boldsymbol {A}=K\frac{\nabla\rho}{\rho^{2}}. \tag{73}\]
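Before proceeding, a quick symbolic check of the separability statement: the logarithmic-plus-gradient form (71) indeed satisfies the two-particle additivity requirement (70). The following sympy sketch verifies this with one-dimensional gradients per particle; the symbol names are illustrative.

```python
import sympy as sp

k, K = sp.symbols('k K')
x1, x2 = sp.symbols('x1 x2')
rho1 = sp.Function('rho1')(x1)
rho2 = sp.Function('rho2')(x2)

u = lambda rho, grad2: k*sp.log(rho) + K*grad2/(2*rho**2)   # the form (71)

rho   = rho1*rho2                                           # product density
grad2 = (rho2*sp.diff(rho1, x1))**2 + (rho1*sp.diff(rho2, x2))**2

lhs = u(rho, grad2)
rhs = u(rho1, sp.diff(rho1, x1)**2) + u(rho2, sp.diff(rho2, x2)**2)
print(sp.simplify(sp.expand_log(lhs - rhs, force=True)))    # -> 0
```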
Then the corresponding heat flux (interstitial working) and pressure tensor of perfect Korteweg fluid are
\[\boldsymbol{q}_{qf} =\frac{K}{2}\left(\nabla\rho\nabla\cdot\boldsymbol{v}+(\nabla \boldsymbol{v})\cdot\nabla\rho\right), \tag{74}\] \[\boldsymbol{P}_{qf} =\left(\rho^{2}\left.\frac{\partial(\rho\mathfrak{u}_{T})}{ \partial\rho}\right|_{s}+\frac{K}{2}\Delta\rho\right)\boldsymbol{I}+K\left( \frac{\nabla\rho\circ\nabla\rho}{\rho}-\frac{\nabla^{2}\rho}{2}\right). \tag{75}\]
The divergence of the pressure becomes
\[\nabla\cdot\boldsymbol{P}_{qf}=\rho\nabla\left[\left.\frac{\partial(\rho \mathfrak{u}_{T})}{\partial\rho}\right|_{s}+\frac{K}{2}\left(\frac{\nabla\rho \cdot\nabla\rho}{\rho^{2}}-2\frac{\Delta\rho}{\rho}\right)\right]-\left.\frac {\partial(\rho\mathfrak{u}_{T})}{\partial s}\right|_{\rho}\nabla s. \tag{76}\]
Here the last term in the parenthesis, the nonlocal part of the potential, can be written as
\[\frac{K}{2}\left(\frac{\nabla\rho\cdot\nabla\rho}{\rho^{2}}-2\frac{\Delta\rho }{\rho}\right)=2K\frac{\Delta R}{R}, \tag{77}\]
where \(R=\sqrt{\rho}\) and if \(K=\frac{\hbar^{2}}{2m^{2}}\) then one can identify the Bohm potential (11). It is the functional derivative of the internal energy density \(\rho\mathsf{u}_{qf}=\rho(\mathsf{u}_{F}+\mathsf{u}_{T})\). More exactly, it is a partial functional derivative with constant specific entropy:
\[\delta_{\rho}(\rho\mathsf{u}_{qf})|_{s}=\left.\frac{\partial(\rho\mathsf{u}_{ qf})}{\partial\rho}\right|_{s,\nabla\rho}-\nabla\cdot\left.\frac{\partial(\rho \mathsf{u}_{F})}{\partial\nabla\rho}\right|_{\rho}=\left.\frac{\partial(\rho \mathsf{u}_{T})}{\partial\rho}\right|_{s}+2K\frac{\Delta R}{R}, \tag{78}\]
Also, because
\[\left.\frac{\partial(\rho\mathsf{u})}{\partial\rho}\right|_{s,\nabla\rho}= \left.\frac{\partial(\rho\mathsf{u})}{\partial\rho}\right|_{\rho s,\nabla\rho} +Ts. \tag{79}\]
one obtains
\[\delta_{\rho}(\rho\mathsf{u}(s,\rho,\nabla\rho))|_{\rho s}=\delta_{\rho}(\rho\mathsf{u}(s,\rho,\nabla\rho))|_{s}-Ts, \tag{80}\]
because \(s=\rho s/\rho\), exchanging the variables. Therefore the functional derivative of the internal energy density can be written as
\[\delta_{\rho}(\rho\mathsf{u}_{qf})|_{s}=\delta_{\rho}(\rho\mathsf{u}_{qf})|_{ \rho s}+Ts=\left.\frac{\partial(\rho\mathsf{u}_{T})}{\partial\rho}\right|_{ \rho s}+2K\frac{\Delta R}{R}+Ts, \tag{81}\]
and the holographic property of the quantum fluid is expressed like an Euler fluid,
\[\nabla\cdot\textbf{{P}}_{qf}=\rho\nabla\delta_{\rho}(\rho\mathsf{u}_{qf})|_{s} -\rho T\nabla s=\rho\nabla\delta_{\rho}(\rho\mathsf{u}_{qf})|_{\rho s}+\rho s \nabla T. \tag{82}\]
Therefore quantum fluids are holographic in two particularly important situations, for homoentropic and homothermal materials (with homogeneous entropy or temperature distributions, respectively); the respective potentials differ only in the thermal part, which is either the specific enthalpy or the chemical potential of the thermal part of the fluid.
### Wave function representation
The hydrodynamic, the Bohmian and the pilot-wave formulations of quantum mechanics are derived starting from the field equations of the quantum theory, from the Schrodinger, Klein-Gordon, Dirac equations [10, 60] and also from field equations of quantum field theory, see [14]. We have shown in the introduction on the example of the Schrodinger equation that the connection is based on the transformation of the variables by an appropriate version of the Madelung transformation:
\[\Psi=Re^{i\frac{S}{S_{0}}} \tag{83}\]
Here \(\rho=R^{2}\) is the probability density of the particle in a given position, and \(S\) is the velocity potential, \(\textbf{{v}}=\nabla S\). Then, substituting the amplitude-phase representation of the wave function into the quantum field equations: the imaginary part of the quantum dynamics results in the conservation of probability density, the continuity equation (5) and the real part gives the Bernoulli equation of the energy balance of a potential flow, with conserved vorticity. Therefore the four-component classical field of \(\rho,\textbf{{v}}\) can be represented by two scalar fields and the well-known complex field equations. Starting from the quantum side, the continuum equations appear as mere "interpretations", and hydrodynamics is an "analogy" because the Bohm potential or the quantum pressure is very special, and several aspects of the hydrodynamic form, e.g. the viscosity are apparently not physical. Also, if recognised, the classical holographic property looks ad hoc and special.
From the point of view of Korteweg fluids, the conditions are clear. If momentum balance can be transformed to the Bernoulli form, then it can also be represented by
a complex scalar function. If a perfect, nondissipative Korteweg fluid is holographic, has a homogeneous temperature or homogeneous entropy field, and its velocity field is rotation free, then the velocity has a potential, \(\boldsymbol{v}=\nabla S\), and one obtains the Bernoulli equation:
\[\rho\boldsymbol{\dot{v}}+\nabla\cdot\boldsymbol{P}_{K}=\rho\left(\partial_{t} \boldsymbol{v}+\boldsymbol{v}\cdot\nabla\boldsymbol{v}+\nabla\Phi\right)=\rho \nabla\left(\partial_{t}S+\frac{\nabla S\cdot\nabla S}{2}+\Phi\right)=0, \tag{84}\]
because of (65) and \(\boldsymbol{v}\cdot\nabla\boldsymbol{v}=\nabla(\boldsymbol{v}^{2})/2\), if \(\nabla\times\boldsymbol{v}=0\). Then the Bernoulli equation expresses conserved specific energy (energy per unit mass) along a streamline
\[\partial_{t}S+\frac{\nabla S\cdot\nabla S}{2}+\Phi(\rho,\nabla\rho)=const. \tag{85}\]
The continuity equation becomes
\[\dot{\rho}+\rho\nabla\cdot\boldsymbol{v}=\partial_{t}R^{2}+\nabla(R^{2}\nabla S )=0. \tag{86}\]
Therefore multiplying (86) by \(\frac{1}{2R}e^{i\frac{S}{S_{0}}}\) and (85) by \(\frac{iR}{S_{0}}e^{i\frac{S}{S_{0}}}\) and adding the formulas one can separate the time derivative of the wave function as
\[\partial_{t}\psi+\frac{1}{2R}\left(2\nabla R\cdot\nabla S+R\Delta S+i\frac{R} {S_{0}}(\nabla S)^{2}\right)\psi+\frac{i}{S_{0}}\Phi\psi=0. \tag{87}\]
Then one may recognise the Laplacian of the wave function in the parenthesis, and one obtains
\[\partial_{t}\psi-\frac{iS_{0}}{2}\Delta\psi+\frac{i}{S_{0}}\left(\Phi+\frac{S _{0}^{2}}{2}\frac{\Delta R}{R}\right)\psi=0. \tag{88}\]
The natural unit of the velocity potential for a particle with mass \(m\) is \(S_{0}:=\frac{\hbar}{m}\). Also multiplying the above equation by \(i\hbar\), one obtains the evolution equation of the perfect Korteweg fluid in the form of a complex scalar field
\[i\hbar\partial_{t}\psi+\frac{\hbar^{2}}{2m}\Delta\psi-m\left(\Phi+\frac{\hbar^ {2}}{2m^{2}}\frac{\Delta R}{R}\right)\psi=0 \tag{89}\]
This is valid for any Korteweg potential \(\Phi\), obtained in (66). Moreover, one can see above that the Bohm potential form is separated naturally; therefore, if the gradient dependence comes from the additive energy form, (72), then one gets the pure thermal part in the parenthesis. However, for a quantum fluid, the parameter \(K\) can depend on the entropy and on spacetime; here \(S_{0}\) must be constant, otherwise the wave function interpretation is lost.
Finally, it is worth formulating the quantum fluid form of the wave function evolution, with the internal energy (72) and \(K=\frac{\hbar^{2}}{m^{2}}\) for homoentropic fluids:
\[i\hbar\partial_{t}\psi+\frac{\hbar^{2}}{2m}\Delta\psi-mh(\rho,s)\psi=0 \tag{90}\]
Here, from (82) one gets \(h=\left.\frac{\partial(\rho u_{T})}{\partial\rho}\right|_{s}\), the specific thermal enthalpy. We have obtained the complex field equation of superfluids, the generalisation of Ginzburg's \(\Psi\) theory. If \(h\) (or \(\mu\)) is a fixed field, independent of the thermodynamic state variables, then (90) is the Schrodinger equation of a particle with mass \(m\). For quantum fluids, \(m\) is a fixed parameter denoting the total mass of the fluid in the system. That can be arbitrary, like in the case of superconductivity, [24].
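The algebra that leads from (85) and (86) to (88) can also be checked mechanically. The following is a minimal SymPy sketch of this check (our own illustration, not part of the original derivation), restricted to one spatial dimension and to a constant \(S_{0}\); the symbol names are ours.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
S0 = sp.Symbol('S_0', positive=True)   # constant velocity-potential unit, S_0 = hbar/m
R = sp.Function('R')(x, t)             # amplitude, rho = R**2
S = sp.Function('S')(x, t)             # velocity potential, v = dS/dx
Phi = sp.Function('Phi')(x, t)         # Korteweg potential

psi = R * sp.exp(sp.I * S / S0)        # Madelung transformation (83)

# Left-hand side of (88), written in one spatial dimension
lhs = (sp.diff(psi, t)
       - sp.I * S0 / 2 * sp.diff(psi, x, 2)
       + sp.I / S0 * (Phi + S0**2 / 2 * sp.diff(R, x, 2) / R) * psi)

# Continuity equation (86) and Bernoulli equation (85), combined as in (87)
continuity = sp.diff(R**2, t) + sp.diff(R**2 * sp.diff(S, x), x)
bernoulli = sp.diff(S, t) + sp.diff(S, x)**2 / 2 + Phi
rhs = (continuity / (2 * R) + sp.I * R / S0 * bernoulli) * sp.exp(sp.I * S / S0)

# The difference vanishes identically: (88) holds exactly when (85) and (86) hold
print(sp.simplify(sp.expand(lhs - rhs)))   # prints 0
```

The check mirrors the derivation above: the real part of the identity is the continuity equation and the imaginary part is the Bernoulli equation.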
## 6. Summary
Quantum mechanics, superfluids and capillary fluids represent very different aspects of reality: the equation of motion of a particle in microphysics, low-temperature perfect fluids with extraordinary properties, and fluids with surface effects. In this paper, we have shown that the similarity of their theories is based on a universal background and that they can be treated uniformly in the framework of nonequilibrium thermodynamics. It was shown that the property that connects particles and fields, classical holography, is a consequence of the Second Law of thermodynamics.
Classical holography is the property that surface and bulk forces are interchangeable, because the divergence of the pressure equals a force density generated by a potential. It was shown that this particular holographic property is the consequence of the Second Law of thermodynamics in the marginal, nondissipative case of perfect fluids. Moreover, the potential has a particular form; it is a partial variational derivative of the internal energy density with respect to the density. The result was derived first with the rigorous Liu procedure, then with the help of divergence separation, a more heuristic and transparent method of classical irreversible thermodynamics. The latter derivation demonstrates that nonequilibrium thermodynamics can be extended from local equilibrium to _weakly nonlocal equilibrium_.
Then the conditions of a complex scalar field representation were explored. A general nonlinear Schrodinger-type equation emerges, in which various models of superfluids, like the Gross-Pitaevskii and Ginzburg-Sobyanin equations, and the Schrodinger equation of a single particle are members of the Korteweg fluid family. The wave function representation was connected to a particular form of the internal energy, with a natural probabilistic interpretation, due to the unique additive form of the internal energy.
The conditions that separate the different Korteweg fluids are enumerated below, each point representing the next level of modelling conditions from the most general with more and more special ones:
1. _Mass, momentum and energy are conserved._ It is a universal condition, independent of whether a particle-based or a field-based representation of continua is used.
2. _Second Law, entropy balance._ One does not expect the Second Law to be directly applied to microscopic systems. The previous sections showed that the Second Law nontrivially restricts the perfect fluid equations of state. A perfect, ideal system without dissipation is a marginal situation from the point of view of the Second Law. Moreover, according to our understanding of the atomic-nuclear-subnuclear hierarchical structure of matter, there is no reason to assume that there is no submicroscopic level below the Schrodinger equation. It is a universal restriction because it does not depend on a particular fluid structure. In this respect, looking at a fluid as only a set of particles is misleading. The Second Law is expected to be valid both for the field part and the particle part of the material, and also for both of them together. The Second Law is independent of the usual artificial separation of matter into particles and vacuum fields.
3. _The fluid energy is sensitive to density inhomogeneities._ Therefore, our continuum is weakly nonlocal in mass density; its energy depends on the spatial derivatives of the density. The holographic property is based only on
these general conditions. For Korteweg fluids, the energy depends on first-order derivatives, but the pressure tensor and the equivalent potential field depend on second derivatives. Capillarity phenomena and surface tension follow from the static solution, assuming thermal and mechanical equilibrium. The definition of the perfect fluid follows from the Second Law, too.
The methods can be generalized to higher-order nonlocalities and also to other fields. An instructive example is a first-order weakly nonlocal scalar field that turns out to be Newtonian gravity when Second Law restrictions are considered [6, 7].
4. _The fluid is perfect._ The definition follows from the condition of conserved entropy, when this conservation follows from the thermodynamic material properties. The Second Law is then valid in the marginal case. This is the requirement of zero dissipation, as is expected in microscopic theories. Perfect fluids are almost holographic.
5. _The entropy or the temperature is homogeneous._ This condition ensures that a perfect fluid is strictly holographic and that mechanics can be separated from thermodynamics. This is valid for homoentropic or homothermal fluids, when the entropy or the temperature is homogeneous. Then the thermal part of the evolution either has not started or is already finished, and the mechanical potential in the momentum balance is the specific enthalpy or the specific Gibbs free energy (the chemical potential), respectively. The last terms of (65) are invisible if holography is postulated and not derived.
6. _Vorticity of the fluid is conserved._ Then there is a velocity potential, and a wave function, a complex scalar field, can represent the fluid fields.
7. _The Planck constant is constant._ \(K\) in (71) is constant, instead of being a state function. Probabilistic separability is valid, and the nonlinear Schrodinger equation is obtained.
8. _Density independent potential._ \(h\) does not depend on the thermodynamic state variables; it is a fixed function of time and position. All of the above conditions are valid simultaneously only for single-particle quantum mechanics, where the Schrodinger equation is responsible for the system evolution.
The first three conditions specify the thermodynamic system, namely Korteweg fluids, a family of fluid theories that incorporates the Fourier-Navier-Stokes system of equations, among others. Capillarity phenomena and volcanic lava flow are physical systems for which Korteweg fluids are a good model.
The fourth and fifth conditions ensure that the fluid is strictly holographic; therefore, the representation of the forces governing the motion of the fluid fields is dimensionally reduced. If the thermodynamic potential, the internal energy density \(\rho\mathsf{u}\), is independent of the density and only spacetime dependent, then one obtains an equation of motion of a point mass. If the thermodynamic potential is density-dependent, then we obtain a pilot wave theory [61, 62]. It becomes a de Broglie-Bohm theory if the potential is the Bohm potential.
The last three conditions separate various aspects of quantum systems. With the seventh condition, one obtains theories of superfluids, where the nonlinear Schrodinger equation represents both the evolution of a wave function and the evolution of the mass density.
Finally, if all conditions are valid, one obtains the Schrodinger equation of a point mass, and the spacetime-dependent chemical potential becomes the potential energy when multiplied by the particle's mass.
The above scheme of hierarchically arranged conditions outlines a uniform theoretical approach to various continuum and field theories.
## 7. Discussion
### Universal thermodynamics
Thermodynamics is considered a theory with limited validity. However, the presented research results are based on the Second Law and could hardly be explained without it. Our fundamental assumption is that the Second Law is universal, independent of material structure. Therefore, it agrees with any statistical model, where the existence of entropy and the validity of the fundamental balances are part of the theory. The fact that gravity and quantum mechanics, the universal theories of physics, can be treated in this framework is the best argument to reconsider the above-mentioned limited validity.
Naturally, one may look for particular (and particulate) statistical models and explanations of the presented ideas: thermodynamics is an emergent theory. One may expect different microscopic mechanisms, modelling Korteweg fluids, superfluids and classical quantum mechanics as emergent theories. Therefore, none of them could be an explanation for the uniform treatment. It looks impossible to find a _common_ microscopic background for gravity on cosmological and galactic scales and quantum mechanics of an electron. What conceptual background can explain the uniform thermodynamic structure of these physical systems? As far as we see, it is the stability of matter. The Second Law and, in particular, the role of entropy can be understood as theoretical tools to formulate evident and expected stability properties of equilibrium in any theory. It is easy to recognise the mathematical conditions of the Liapunov theorem when postulating entropy as maximal and increasing quantity. A complete rigorous proof requires the definition of the physical system, but the analogy itself could be fruitful in mathematics, see [63], and insightful in physics [64, 65]. In our particular case, the form of entropy production indicates that the perfect Korteweg fluid defines the equilibrium. In a dissipative framework, it should be asymptotically stable, a straightforward prediction from a physical point of view and a task to find boundary conditions and function spaces for rigorous mathematical statements.
### Holographic principle
It is an expected property of quantum gravity and a source of inspiration, like in the case of AdS-CFT correspondence, and also for hydrodynamics, see [66]. The concept itself originated in black hole thermodynamics, and the connection between thermodynamics and holography has been treated many times by several authors. For Newtonian gravity, it is somehow trivial, but the formulation is simple and has a straightforward explanation: it is reduced to the observation that the gravitational force density can be transformed to a pressure tensor due to the field equation, the Poisson equation. That concept is the _classical holographic property_, (3). As we have seen, it is valid for perfect Korteweg fluids due to the Second Law, independently of any particular interaction. In the analogous parallel thermodynamic treatment, the Poisson equation is the perfect Newtonian gravity; therefore, the justification of classical holographic property is the same, [6].
The connection to thermodynamics is different from that of the entropic force concept of Verlinde, [4], and its connection to quantum mechanics is independent of the concept of entanglement entropy (see [67]).
Any direct connection of our classical holographic concept, (3), to quantum gravity or to string theory could be overly exaggerated; however, the logical relations are remarkable. In our case, the holographic principle is not a condition but a consequence. Also, in our case, the simple background comes with a direct and plausible interpretation. It is the critical aspect of the particle-field duality in quantum mechanics: quantum systems can be represented and modelled from both conceptual points of view, an aspect that is somehow overshadowed by the historical black hole origin.
### Dissipation and variational principles
One does not expect that the evolution equations of non-equilibrium thermodynamics are derived from variational principles (contrary to, e.g. [68]). The variational formulation is connected to perfect materials without dissipation. In our analysis, the mechanical potential that embodies the holographic principle has the form of a (partial) functional derivative where the "Lagrangian" is a thermodynamic potential. We emphasise that the variational form, the functional derivative, emerged without any variational principle: it was the consequence of the Second Law analysis.
There are other ways to get similar results. For example, a Poisson bracket structure is applied in [69]. Also, given a functional derivative, one can find a suitable variational justification. However, then the starting point is an ideal system, and the dissipative part must be added to deal with the real world. That way, the theoretical concepts and mathematical structures are doubled, and the uniform origin cannot be recognised.
It is also remarkable that classical holography results in two kinds of dissipation. One is connected to the fluid flow, to the linear solution of the entropy inequality, with viscosity and heat conduction. Another form is connected to the streamlines, to the Newtonian form of the evolution equation, the point mass representation, (68), with damping and friction. In the second case, the evolution equation with a damping term can be written as
\[\dot{\mathbf{v}}=-\nabla\Phi-\beta\mathbf{v}. \tag{91}\]
It is not a simple property if the potential, \(\Phi\), depends on the density or the gradient of the density, like in pilot-wave hydrodynamics. Then it can be a potential, [61, 62], where the damping of the individual droplet motion is well expected and must be counterbalanced by the active excitation background. If we are at the Schrodinger level, then \(\Phi\) is the Bohm potential, and (91) is equivalent to a dissipative version of the Schrodinger equation, the Schrodinger-Langevin-Kostin equation, [70, 71, 72]. It is a fruitful idea; a possible application is cooling with coherent control [73].
### Quantum mechanics: interpretations and extensions
Some remarks cannot be avoided regarding the foundations of quantum mechanics. As mentioned in the introduction, there is an enormous amount of literature on the various reformulations, including the hydrodynamic one. It is hard to distinguish between scientific and speculative arguments. For example, the speculative completeness argument of the Copenhagen school disqualified the alternative approaches as mere "interpretations". Therefore, no one is looking beyond quantum physics. Moreover, the connections of the interpretations are not really analysed. Stochastic [74, 75],
pilot-wave and Bohmian [11, 76], etc., approaches reproduce quantum mechanics to an extent and have encountered various difficulties. One may wonder whether the search for connections could be helpful when looking for new predictions.
From the general thermodynamic point of view of the recent analysis, single-particle quantum mechanics is a very special Korteweg fluid in a large family of theories and models of various natural physical systems, where the transitions are well-defined and justified from a mathematical and a physical point of view. Some known aspects of quantum mechanics appear from a new perspective. One of them is objectivity and frame independence, treated in the following subsection. Also, some extensions of quantum mechanics are well motivated, like the logarithmic Schrodinger equation of Bialynicki-Birula and Mycielski, [28], or a complex potential representing a mass source term [77], not to mention the various theories of superfluids. The concept of superfluidity at cosmological scales, [78], or applied to the quark-gluon plasma [79, 80], is also somehow natural from the point of view of perfect Korteweg fluids.
Our following remark is that fluid mechanics is far from competing with the well-developed operator formalism and the related Copenhagen interpretation. Nevertheless, the simplicity of fluid models is surprising, as well as the fact that Hilbert space operators and integration by projector measures can be substituted, even in a restricted sense, by fluid mechanics.
Finally, the thermodynamic road to quantum mechanics is not related to a Hamiltonian structure of the evolution equations. It is also clear that it is a road and not a jump: one should respect the second law when investigating classical, e.g. Bohmian, mechanics and must be consistent regarding the constraints of the classical evolution equations when introducing additive gradient energy contributions. The thermodynamic method is a way of quantisation. It is a novel and genuine one. One can test it with any classical continuum theories, including dissipative ones.
### Relativistic theory
Our analysis is Galilean relativistic (nonrelativistic). As we have emphasised in subsection 2.1, spacetime aspects of continuum theories are mostly hidden but essential. It is straightforward to prove that the final evolution equations of Korteweg fluids, also the dissipative ones, are not only Galilean covariants but also independent of reference frames. Comparing the Liu procedure and the divergence separation method highlights the hidden aspects of Galilean covariance when expressed with relative quantities. Also, a complete Galilean covariant treatment is straightforward; see [38]. However, without further ado, the thermodynamic method cannot be generalised to special or general relativistic theories. Thermodynamics is based on the separation of space and timelike evolution, and the separation is based on the definition of comoving quantities, which requires the concept of the fluid's velocity field. That problem also appears with absolute time, in the Galilean relativistic theory, see [81, 82, 38]. In the case of dissipative relativistic fluids, entropy production based arguments are not enough, and instabilities appear [83, 84]. The application of Liu procedure without any further ado cannot clarify the problem [85].
However, starting from the quantum mechanical side, the fluid mechanical forms of the fundamental quantum mechanical evolution equations, like the Klein-Gordon or the Dirac equations, show that, at least for the marginal case of perfect fluid dynamics, the thermodynamic conditions can be interpreted.
### Gradient theories of classical continua
There are several theoretical approaches to obtain weakly nonlocal evolution equations of classical continua. The thermodynamic analysis of Korteweg fluids has been the subject of research in various frameworks for over fifty years. There are several methods in the literature, mainly fixing the entropy flux in advance and therefore requiring the introduction of additional concepts like the mentioned interstitial working, the balance of self-equilibrated forces, multipolarity or virtual powers, see [18, 86, 87, 88, 89]. In this paper, a constructive, Second Law based methodology was used, without extra conditions beyond the extension of the constitutive state space, [49]. All of the previous analyses obtain the same pressure of perfect Korteweg fluids as Sobrino, (13), because the symmetry condition is not apparent without the Liu procedure. It is also remarkable that in [90, 91, 92, 93, 94] the Liu procedure is applied to the analysis of various Korteweg fluid systems, but the holographic property was not recognised, because the Liu equations cannot be solved, due to the particular treatment of the constitutive state spaces, see [50].
Our result regarding the perfect fluid is unique; there is no freedom to add an extra divergence term. The choice of parallel heat and entropy flux is a general aspect of non-equilibrium thermodynamics. Moreover, the dissipative part, the solution of the entropy inequality with linear constitutive equations, can be analysed by the maximal entropy production method of Rajagopal, [95]. Then, for Korteweg fluids, one can obtain a rigorous solution, [96].
The gradient expansions of classical local equilibrium theories can be obtained without detailed thermodynamic analysis. However, Korteweg fluids are only one of the theories where thermodynamic conditions lead to significant improvement. This was the case for the variational procedures of phase field theories, too, [97, 98]. However, in the case of sophisticated static equilibrium, like for generalised continua (see [99]), or for distinguishing various dissipative effects in complicated geometries of non-Fourier heat conduction (see [100]), one cannot distinguish between the different theories without precision numerical calculations and dedicated experiments. The challenge is to control numerical dissipation and to distinguish between the numerical and the physical one, [101, 102]. Therefore, the direct connection of Korteweg fluids to multiple phenomena with a large amount of experimental data is exceptionally favourable for benchmarking and testing the various methodologies.
## 8. Acknowledgement
The work was supported by the grants National Research, Development and Innovation Office - FK134277. The support of TKP is acknowledged. The authors thank Robert Kovacs and Matyas Szucs for valuable discussions.
The research reported in this paper and carried out at BME has been supported by the NRDI Fund (TKP2020 NC, Grant No. BME-NCS) based on the charter of bolster issued by the NRDI Office under the auspices of the Ministry for Innovation and Technology.
|
2303.00075 | Q-Map: Quantum Circuit Implementation of Boolean Functions | Quantum computing has gained attention in recent years due to the significant
progress in quantum computing technology. Today many companies like IBM, Google
and Microsoft have developed quantum computers and simulators for research and
commercial use. The development of quantum techniques and algorithms is
essential to exploit the full power of quantum computers. In this paper we
propose a simple visual technique (we call Q-Map) for quantum realisation of
classical Boolean logic circuits. The proposed method utilises concepts from
Boolean algebra to produce a quantum circuit with minimal number of quantum
gates. | Hassan Hajjdiab, Ashraf Khalil, Hichem Eleuch | 2023-02-28T20:47:31Z | http://arxiv.org/abs/2303.00075v2 | ###### Abstract
Quantum computing has gained attention in recent years due to the significant progress in quantum computing technology. Today many companies like IBM, Google and Microsoft have developed quantum computers and simulators for research and commercial use. The development of quantum techniques and algorithms is essential to exploit the full power of quantum computers. In this paper we propose a simple visual technique (we call Q-Map) for quantum realisation of classical Boolean logic circuits. The proposed method utilises concepts from Boolean algebra to produce a quantum circuit with minimal number of quantum gates.
Q-Map: Quantum Circuit Implementation of Boolean Functions
**Hassan Hajjdiab\({}^{1,*}\), Ashraf Khalil\({}^{2}\), Hichem Eleuch\({}^{3,4,5}\)**
\({}^{1}\) Computer Science and Software Engineering Department, Concordia University, Montreal, Quebec, Canada; Email: [email protected]
\({}^{2}\) College of Technological Innovation, Zayed University, Abu Dhabi, UAE; Email: [email protected]
\({}^{3}\) Department of applied physics and astronomy, University of Sharjah, Sharjah, UAE; Email: [email protected]
\({}^{4}\) College of Arts and Sciences, Abu Dhabi University, Abu Dhabi 59911, UAE
\({}^{5}\) Institute for Quantum Science and Engineering, Texas A&M University, College Station, TX 77843, USA
*Author to whom correspondence should be addressed
## Introduction
The advancement of quantum computing hardware and software motivates researchers to develop quantum algorithms in areas such as cryptography, image processing, algorithms, finance [10, 13, 6, 15] and many other areas. One main advantage of quantum computers compared to classical computers is their processing power: a quantum computer can process certain computationally expensive tasks exponentially faster than a classical computer. While classical search algorithms require \(O(n)\) queries, the quantum search algorithm proposed by Grover [8] uses \(O(\sqrt{n})\) for an unsorted list of \(n\) items. Shor [23] proposed a quantum algorithm to factor an integer \(n\) with time complexity polynomial in \(\log n\). At this point, there is no classical algorithm that can solve integer factorisation in polynomial time. The RSA cryptographic system [20] is based on prime number factorisation, and thus with quantum computers an RSA encrypted message can be decrypted in polynomial time. Hallgren [9] presented a polynomial-time quantum algorithm to solve the Pell-Fermat equation [3] (also known as Pell's equation). In classical |
2309.15345 | Simulation of noisy Clifford circuits without fault propagation | The design and optimization of a large-scale fault-tolerant quantum computer
architecture relies extensively on numerical simulations to assess the
performance of each component of the architecture. The simulation of
fault-tolerant gadgets, which are typically implemented by Clifford circuits,
is done by sampling circuit faults and propagating them through the circuit to
check that they do not corrupt the logical data. One may have to repeat this
fault propagation trillions of times to extract an accurate estimate of the
performance of a fault-tolerant gadget. For some specific circuits, such as the
standard syndrome extraction circuit for surface codes, we can exploit the
natural graph structure of the set of faults to perform a simulation without
fault propagation. We propose a simulation algorithm for all Clifford circuits
that does not require fault propagation and instead exploits the mathematical
structure of the spacetime code of the circuit. Our algorithm, which we name
adjoint-based code (ABC) simulation, relies on the fact that propagation
forward is the adjoint of propagation backward in the sense of Proposition 3
from [14]. We use this result to replace the propagation of trillions of
fault-configurations by the backward propagation of a small number of Pauli
operators which can be precomputed once and for all. | Nicolas Delfosse, Adam Paetznick | 2023-09-27T01:30:03Z | http://arxiv.org/abs/2309.15345v1 | # Simulation of noisy Clifford circuits without fault propagation
###### Abstract
The design and optimization of a large-scale fault-tolerant quantum computer architecture relies extensively on numerical simulations to assess the performance of each component of the architecture. The simulation of fault-tolerant gadgets, which are typically implemented by Clifford circuits, is done by sampling circuit faults and propagating them through the circuit to check that they do not corrupt the logical data. One may have to repeat this fault propagation trillions of times to extract an accurate estimate of the performance of a fault-tolerant gadget. For some specific circuits, such as the standard syndrome extraction circuit for surface codes, we can exploit the natural graph structure of the set of faults to perform a simulation without fault propagation. We propose a simulation algorithm for all Clifford circuits that does not require fault propagation and instead exploits the mathematical structure of the spacetime code of the circuit. Our algorithm, which we name adjoint-based code (ABC) simulation, relies on the fact that propagation forward is the adjoint of propagation backward in the sense of Proposition 3 from [14]. We use this result to replace the propagation of trillions of fault-configurations by the backward propagation of a small number of Pauli operators which can be precomputed once and for all.
At the core of the architectures of fault-tolerant quantum computers are quantum error correction codes such as surface codes [15, 35, 19], Floquet codes [26, 33, 22] or quantum LDPC codes [38, 9, 34, 32, 41]. Universal fault-tolerant quantum computing requires defining a set of logical operations by way of (physical) quantum circuits. This may, for instance, include idle gates, lattice surgery [28], magic state distillation and state injection circuits [7]. Characterizing and optimizing all these circuits, sometimes called gadgets, for a given specification of qubits, gate set, connectivity and noise model requires substantial numerical simulation.
A typical scenario is that you have a Clifford circuit implementing a fault-tolerant gadget and you want to estimate the failure rate of this gadget for different noise parameters. We consider the standard circuit-noise model [15]. Each circuit operation is followed by a random Pauli error acting on its support and measurement outcomes are flipped with some probability. The standard approach to estimating the performance of a Clifford circuit proceeds with the following steps.
1. Sample circuit faults according to the noise model.
2. Propagate these faults through the circuit using the Gottesman-Knill algorithm [24] to determine their effect on the measurement outcomes and the output qubits.
3. Run some classical post-processing based on the measurement outcomes flipped by the faults.
4. Determine if the faults lead to a failure of the gadget.
The classical post-processing may include computation of syndrome data, execution of a decoder, or computing parities of measurement outcomes based on which the gadget performs post-selection. Figure 1 shows an example of computation of the syndrome using this approach. This simulation is typically repeated a large number of times to generate enough data to obtain a good estimate of the failure rate of the gadget. Say, for example, that we want to probe a three-parameter noise model. If we select only ten values for each noise parameter and if for each triple of values we need a billion samples to reach a sufficiently small error bar for the corresponding data point, this results in one trillion repetitions of the previous steps.
The dominant cost of this simulation is generally running the decoder and propagating Pauli faults through the circuits. Low-complexity decoders have been designed [19, 13] and fast decoder implementations are available [27, 43]. In what follows, we propose a simulation protocol that provides the same estimate of the failure rate of a Clifford gadget with circuit-noise without any fault propagation.
The basic idea of our ABC simulation is to leverage the spacetime code structure [14]; see also [3, 25]. More specifically, Proposition 3 from [14] (replicated below in Proposition 1) allows us to replace the (forward) propagation of Pauli faults through a Clifford circuit by the backward propagation of stabilizer generators of the spacetime code. As a result, instead of propagating faults trillions of times, we only need to precompute the backward propagation of the spacetime generators once. Similarly, we precompute the backward propagation of some logical operators to determine if the protocol fails. Figure 2 illustrates our propagation-free simulation method.
The paper is organized as follows. Our notations and assumptions are described in Section 1 and we briefly review the correction of circuit faults based on the outcome code [14] in Section 2. The standard simulation protocol is reviewed in Section 3 and our ABC simulation protocol without fault propagation is presented in Section 4. Finally, we discuss the application of the ABC simulation strategy to the simulation of large circuit in Section 5.
## 1 Noisy Clifford circuits
We consider Clifford circuits made with unitary Clifford gates and measurements of Pauli operators. We follow the assumptions and notations of [14], which we briefly review now.
We consider a circuit acting on \(n\) qubits with depth \(\Delta\). Denote by \(\mathcal{P}_{n}\) the set of \(n\)-qubit Pauli operators and by \(\overline{\mathcal{P}}_{n}\) its quotient by the phase operators \(\{\pm I,\pm iI\}\). A configuration of faults in the circuit is represented by a _fault operator_, which is a Pauli operator \(F\in\overline{\mathcal{P}}_{n(\Delta+1)}\) acting on \(n(\Delta+1)\) qubits. We can think of a fault operator as a Pauli operator acting on qubits placed on half-integer times steps of the circuit and indexed by pairs \((\ell+0.5,q)\) where \(\ell\in\{0,1,\ldots,\Delta\}\) is a level of the circuit and \(q\in\{1,\ldots,n\}\) is a qubit.
A _circuit-noise model_ for a circuit \(\mathcal{C}\) is defined to be a probability distribution, denoted \(\mathbb{P}_{\mathcal{C}}\) over the set of fault operators. Such a noise model includes Pauli faults and measurement outcomes flips that can be represented by a Pauli error before and after a measurement.
We assume that the circuit contains \(n_{m}\) measurements. Each run of the circuit produces an _outcome bit-string_\(o\in\mathbb{Z}_{2}^{n_{m}}\) whose \(i\)th component is the outcome of the \(i\)th measurement. Based on the outcome bit-string, a _logical outcome_\(\ell\in\mathbb{Z}_{2}^{n_{\ell}}\) is computed by applying a binary matrix \(\mathbf{M}_{\ell}\) to \(o\), that is \(\ell^{T}=\mathbf{M}_{\ell}o^{T}\) where \(\mathbf{M}_{\ell}\in M_{n_{\ell},n_{m}}(\mathbb{Z}_{2})\). Each component of the vector \(\ell\) is a logical bit which is
Figure 1: A Clifford circuit made with Pauli measurements and CZ gates. This circuit implements the measurement of the stabilizer generators \(Z_{1}Z_{2}\) and \(Z_{2}Z_{3}\) of the repetition code (repeated twice) on the top three qubits using the two bottom qubits as ancillas. The white circles indicate the locations of potential faults. In the absence of faults, the outcome bit-string \(o\in\mathbb{Z}_{2}^{12}\) satisfies the following checks: (i) \(o_{1}+o_{2}+o_{4}+o_{6}=0\), (ii) \(o_{2}+o_{3}+o_{5}+o_{7}=0\), (iii) \(o_{1}+o_{2}+o_{6}+o_{11}=0\), (iv) \(o_{2}+o_{3}+o_{7}+o_{12}=0\). If any of these checks is violated, we know that a fault must have occurred in the circuit. This allows us to detect or correct some circuit faults. (a) A fault configuration represented by a Pauli operator \(F\), called a fault operator. We can think of \(F\) as a Pauli operator acting on qubits placed on the spacetime locations of the circuit (white circles). (b) The cumulant \(\overrightarrow{F}\) is obtained by propagating the faults of \(F\) through the circuit. The propagation through measurements is trivial and the propagation through a unitary gate is obtained by conjugating the input faults by the gate. By inspecting the cumulant \(\overrightarrow{F}\), one can determine whether \(F\) flips the outcomes \(o_{i}\) and then compute the checks. Indeed, the outcome \(o_{i}\) is flipped iff \(\overrightarrow{F}\) anti-commutes with the measured operator \(o_{i}\) at the time step right before the measurement. In this example, \(F\) flips the outcomes \(o_{6}\) and \(o_{8}\). The syndrome, that is the value of the four checks (i), (ii), (iii), (iv), is \((1,0,1,0)\).
Figure 2: The standard approach to compute the syndrome of a set of faults is through fault-propagation as shown in Figure 1. This figure illustrates our ABC simulation scheme that does not require fault-propagation. The syndrome is computed using the backpropagation of the checks. The backpropagation of the checks (i), (ii), (iii) and (iv) of Figure 1 is represented in (a), (b), (c) and (d). They are obtained by placing the measured operators (involved in the check) right before they are measured and by backpropagating them [14]. If \(\overleftarrow{G}\) denotes the backpropagated operator corresponding to a check, then value of the check is non-trivial iff \(F\) anti-commutes with \(\overleftarrow{G}\). By inspecting the commutation between \(F\) and the operators (a), (b), (c), (d), we recover the syndrome \((1,0,1,0)\) of \(F\) without needing \(\overrightarrow{F}\).
obtained by taking the parity some measurement outcomes1. For example, the outcome of a logical \(X\) measurement in the surface code is obtained by measuring all the qubits of a logical patch in the \(X\) basis and then taking the parity of the measurement outcomes along a line of qubits supporting a logical \(X\) operator [18]. In the presence of noise this logical outcome bit must be corrected by the decoder.
Footnote 1: These outcomes are indicated by the corresponding row of \(\mathbf{M}_{\ell}\).
The _effect_ of a fault operator \(F\) is defined to be the pair \(\operatorname{eff}(F)=(f,E)\) where \(f\in\mathbb{Z}_{2}^{n_{m}}\) represents the measurement outcome flips induced by \(F\) and \(E\in\overline{\mathcal{P}}_{n}\) is the residual error on the qubits at the end of the circuit when \(F\) occurs. Recall that \(f_{j}=1\) iff \(F\) leads to a flip of the outcome of the \(j\)th measurement of the circuit. We use the notation \(\operatorname{eff}_{m}(F)=f\) and \(\operatorname{eff}_{q}(F)=E\) for the effect on measurement outcomes and the effect on qubits.
## 2 Correction of circuit faults using the outcome code
In [14], we proved that the outcome bit-string belongs to a linear code (up to a relabelling of the measurement outcomes) that we call the _outcome code_ and we explain how to correct circuit faults using this code. This leads to a general correction protocol including a broad class of fault tolerant gadgets for stabilizer codes [23], surface codes [15, 19], color codes [6] or Floquet codes [26].
The correction of circuit faults based on the outcome code works as follows. After extracting the outcome bit-string \(o\), compute its syndrome \(s\in\mathbb{Z}_{2}^{n_{s}}\) by applying a binary matrix \(\mathbf{M}_{s}\in M_{n_{s},n_{o}}(\mathbb{Z}_{2})\) to \(o\), that is \(s^{T}=\mathbf{M}_{s}o^{T}\). The matrix \(\mathbf{M}_{s}\) can be efficiently generated using Algorithm 1 of [14]. In the absence of faults, the syndrome is trivial. If the faults corresponding to a fault operator \(F\) occur, we obtain the syndrome \(s^{T}=\mathbf{M}_{s}f^{T}\) where \(f=\operatorname{eff}_{m}(F)\), which depends only on \(F\). Moreover, \(F\) flips some logical outcomes. The indicator vector of the flipped logical bits is the vector \(\bar{f}\) given by \(\bar{f}^{T}=\mathbf{M}_{\ell}f^{T}\).
The syndrome matrix corresponding to the four checks of Figure 1 and Figure 2 is
\[\mathbf{M}_{s}=\begin{pmatrix}1&1&0&1&0&1&0&0&0&0&0&0\\ 0&1&1&0&1&0&1&0&0&0&0&0\\ 1&1&0&0&0&1&0&0&0&0&1&0\\ 0&1&1&0&0&0&1&0&0&0&0&1\end{pmatrix}. \tag{1}\]
Here, we have \(n_{m}=12\) and \(n_{s}=4\). Each column of this matrix corresponds to an outcome bit and each row is the indicator vector of a check. For example, the first row defines the check \(o_{1}+o_{2}+o_{4}+o_{6}=0\). Using this circuit, one can measure a logical \(Z\) operator for the repetition code. The logical outcome is simply \(o_{8}\) and the corresponding logical matrix is
\[\mathbf{M}_{\ell}=\begin{pmatrix}0&0&0&0&0&0&0&1&0&0&0&0\end{pmatrix}. \tag{2}\]
Moreover, the effect of the fault \(F\) from Figure 1 on measurement outcomes is
\[f=\begin{pmatrix}0&0&0&0&0&1&0&1&0&0&0&0\end{pmatrix} \tag{3}\]
because \(F\) induces a flip of the outcomes \(o_{6}\) and \(o_{8}\). The syndrome of \(F\) is \(s=(1,0,1,0)\) and the logical effect is \(\bar{f}=(1)\). Note that this example is intended for illustration purposes. The circuit is not fault tolerant.
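The numbers in this example are easy to reproduce with a few lines of binary linear algebra. The snippet below is only an illustration of Eqs. (1)-(3); the variable names are ours.

```python
import numpy as np

# Syndrome matrix of Eq. (1): rows are the checks (i)-(iv), columns the outcomes o_1..o_12
M_s = np.array([[1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0],
                [0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0],
                [1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0],
                [0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1]])

# Logical matrix of Eq. (2): the logical outcome is o_8
M_l = np.array([[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]])

# Effect of the fault operator F of Figure 1 on the outcomes, Eq. (3): o_6 and o_8 are flipped
f = np.array([0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0])

s = M_s @ f % 2        # syndrome     -> [1 0 1 0]
f_bar = M_l @ f % 2    # logical flip -> [1]
print(s, f_bar)
```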
A _decoder_\(D\) is used to correct the logical outcome of the circuit. It takes as an input the syndrome \(s\) and returns a correction \(D(s)=\bar{f}^{\prime}\) to apply to the logical outcome \(\ell\), _i.e._ replacing \(\ell\) by \(\ell+\bar{f}^{\prime}\). We say a _failure_ occurs if the logical outcome \(\ell+\bar{f}^{\prime}\) after decoding is incorrect due to the presence of faults in the execution of circuit, that is iff \(\bar{f}^{\prime}\neq\bar{f}\).
Our goal is to design a Monte-Carlo simulation to estimate the _failure rate_\(\mathbb{P}_{\mathcal{C}}(D(s)\neq\bar{f})\) of the circuit. We refer to such a simulation as a _circuit-noise simulation_.
For simplicity, we focus on failures to recover the logical outcome but our simulation protocol without fault propagation can be generalized to failures induced by residual errors on the output qubits of the circuit. In that case, the decoder may also apply a correction to the output qubits.
## 3 Circuit-noise simulation based on fault propagation
Here, we review the standard circuit-noise simulation protocol based on the propagation of faults through the circuit. The pseudo-code is provided in Algorithm 1. This is a detailed version of the procedure discussed in introduction. As explained earlier, we may have to repeat this process many times.
Following [14], denote by \(\overrightarrow{F}\) the cumulant of a fault operator \(F\). It is the fault operator whose component \(\overrightarrow{F}_{\ell+0.5}\in\overline{\mathcal{P}}_{n}\) after level \(\ell\) is the result of all the faults occurring during the first \(\ell\) levels of the circuit propagated through the first \(\ell\) levels of unitary gates. The cumulant can be computed by conjugating faults through unitary gates using the standard stabilizer simulation algorithm [23].
```
input : A Clifford circuit \(\mathcal{C}\), a noise model \(\mathbb{P}_{\mathcal{C}}\), a syndrome matrix \(\mathbf{M}_{s}\), a logical matrix \(\mathbf{M}_{\ell}\), a decoder \(D\), an integer \(n_{\text{sample}}\). output : A Monte-Carlo estimation of the failure rate of the circuit \(\mathbb{P}_{\mathcal{C}}(D(s)\neq\bar{f})\).
1 Initialize \(n_{\text{fail}}=0\).
2 for \(i=1,2,\ldots,n_{\text{sample}}\) do
3 Sample a fault operator \(F\) according to the circuit-noise distribution \(\mathbb{P}_{\mathcal{C}}\).
4 Compute the cumulant \(\overrightarrow{F}\).
5 Compute the effect \(f=\text{eff}_{m}(F)\) using \(\overrightarrow{F}\).
6 Compute the syndrome \(s\) using \(s^{T}=\mathbf{M}_{s}f^{T}\).
7 Compute the logical flips \(\bar{f}\) using \(\bar{f}^{T}=\mathbf{M}_{\ell}f^{T}\).
8 If \(D(s)\neq\bar{f}\), do \(n_{\text{fail}}\gets n_{\text{fail}}+1\).
9 return \(\frac{n_{\text{fail}}}{n_{\text{sample}}}\).
```
**Algorithm 1** Standard circuit-noise simulation
For simplicity, we separate the computation of the cumulant \(\overrightarrow{F}\) and the computation of its effect \(f\) on measurement outcomes. In practice, we do not need to store the whole cumulant in memory to obtain \(f\). It is enough to compute the levels of the cumulant \(\overrightarrow{F}_{\ell+0.5}\) sequentially which reduces the memory cost of the simulation. In this note, we ignore the memory cost because it is not the bottleneck of the simulation and it is not a clear differentiator between the algorithms discussed here.
Suppose that we have a fast decoder and consider the cost of the other steps of this simulation. We provide the worst-case computational complexity of steps 3 to 7 of Algorithm 1 in two settings:
(i) for a general circuit and (ii) for a sparse circuit made with unitary gates and measurements supported on a bounded number of qubits. We compute the worst-case complexity as a function of the number of qubits \(n\), the depth \(\Delta\), the number of measurements \(n_{m}\), the number of syndrome bits \(n_{s}\) and the number of logical outcome bits \(n_{\ell}\). Many noise models produce low-weight fault operators \(F\). If this is the case, we can speed up some of the steps of the simulation, and we then provide the complexity as a function of the weight \(|F|\) of the sampled fault operator. Table 1 summarizes our results.
_Generation of the matrices \(\mathbf{M}_{s}\) and \(\mathbf{M}_{\ell}\)._ In this work, we assume that the syndrome matrix \(\mathbf{M}_{s}\) and the logical matrix \(\mathbf{M}_{\ell}\) are given as an input with the circuit. An algorithm generating a syndrome matrix, that is checks of the outcome code, is described in [14]. For general circuit, it runs in \(O(n^{4}\Delta)\) bits operations. Its complexity is reduced to \(O(n\Delta)\) for LDPC spacetime codes and \(O(n)\) for periodic LDPC spacetime codes. The logical matrix \(\mathbf{M}_{\ell}\) can be obtained by backpropagating the logical operators measured at the end of the circuit.
_Complexity of step 3 - Sampling \(F\)._ The cost of sampling \(F\) depends on the details of the circuit-noise distribution \(\mathbb{P}_{\mathcal{C}}\). For most popular noise models such as the phenomenological model or the circuit-level noise model of [15], the complexity of generating a random fault operator \(F\) is linear in the volume of the circuit, that is \(O(n\Delta)\).
_Complexity of step 4 - Computation of \(\overrightarrow{F}\)._ The standard approach to compute the cumulant \(\overrightarrow{F}\) of \(F\) is through fault propagation. Assume that a unitary gate \(U\) acting on \(w\) qubits is represented by a \(2w\times 2w\) binary matrix that stores the conjugation \(UPU^{-1}\) of the Pauli operators \(P=X_{q}\) and \(Z_{q}\) acting on a qubit \(q\) of the support of \(U\). Conjugating a general Pauli fault \(Q\) through this gate is equivalent to applying this matrix to the binary representation of the fault \(Q\). This can be done in \(O(w^{2})\) operations. As a result, the worst-case computational complexity of the computation of the cumulant with this approach is \(O(n^{2}\Delta)\) bit operations for a general circuit (at most \(O(n^{2})\) per level). For a sparse circuit, the complexity drops to \(O(n\Delta)\) bit operations. The computation of the cumulant of a low-weight fault operator with this method is not significantly faster because a single fault may rapidly spread to many qubits after propagating through a few levels of the circuit. Even though \(F\) is sparse, it is generally not the case for its cumulant. Better scaling is achieved in some cases such as circuits implemented exclusively with Pauli measurements, or fault-tolerant circuits designed to avoid spreading faults.
_Complexity of step 5 - Computation of \(f\)._ Given \(\overrightarrow{F}\), the effect \(f=\operatorname{eff}_{m}(F)\) can be obtained with a worst-case complexity of \(O(n\Delta)\) bit operations for a general circuit. More precisely, the \(j\)th bit \(f_{j}\) of \(f\) is \(1\) iff \(F\) induces a flip of the outcome of the \(j\)th measurement of the circuit. Let \(S_{j}\) be the measured operator and let \(\ell_{j}\) be the level of this measurement. Then, \(f_{j}\) is obtained as the commutator of \(\overrightarrow{F}_{\ell_{j}-0.5}\) with the \(j\)th measured operator \(S_{j}\) because the outcome of the measurement of \(S_{j}\) is flipped iff the faults accumulated before this measurement (\(\overrightarrow{F}_{\ell_{j}-0.5}\)) anti-commute with \(S_{j}\). This corresponds to Lemma 3 of [14], but we include it here because it is also used later on. Recall that \(\eta_{\ell_{j}-0.5}(S_{j})\) is the fault operator obtained by placing the operator \(S_{j}\) right before level \(\ell_{j}\).
**Lemma 1** (Lemma 3 of [14]).: _Let \(F\) be a fault operator. The faults corresponding to \(F\) induce a flip of the measurement of \(S_{j}\) iff \([\overrightarrow{F},\eta_{\ell_{j}-0.5}(S_{j})]=1\)._
In other words, we have \(f_{j}=[\overrightarrow{F},\eta_{\ell_{j}-0.5}(S_{j})]\). Recall that following [14], we use the notation \([P,Q]=0\) if \(P\) and \(Q\) commute and \(1\) if they anti-commute.
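In a binary symplectic representation, where a Pauli operator is stored as a pair of bit-vectors \((x,z)\), this commutator is a simple parity. The sketch below is our own illustration of the convention (it is not code from [14]):

```python
import numpy as np

def commutator(p, q):
    """[P, Q]: 0 if the Pauli operators commute, 1 if they anti-commute.

    p and q are binary symplectic vectors (x_1..x_N | z_1..z_N), where
    N = n*(Delta+1) is the number of spacetime locations of the circuit.
    """
    N = len(p) // 2
    px, pz = p[:N], p[N:]
    qx, qz = q[:N], q[N:]
    return int(px @ qz + pz @ qx) % 2

# Single-location example: X and Z anti-commute, X commutes with itself
X = np.array([1, 0])   # (x | z) = (1 | 0)
Z = np.array([0, 1])
print(commutator(X, Z), commutator(X, X))   # 1 0
```

With this convention, the bit \(f_{j}\) of Lemma 1 is the commutator of \(\overrightarrow{F}\) with \(\eta_{\ell_{j}-0.5}(S_{j})\); the ABC simulation of the next section evaluates the same kind of parity, but with the raw fault operator \(F\) and a precomputed back-propagated operator as arguments.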
The worst-case complexity of the computation of \(f\) remains unchanged for a sparse circuit or for a low-weight fault operator \(F\) because \(\overrightarrow{F}\) typically has large weight.
_Complexity of steps 6 and 7 - Computation of \(s\) and \(\bar{f}\)._ For a general circuit, the computation of the syndrome \(s\) requires applying a \(n_{m}\times n_{s}\) binary matrix to \(f\) which can be done with a worst-case complexity of \(O(n_{m}n_{s})\). Again, this subroutine is not significantly faster for sparse circuits or for low-weight fault operators. Similarly, the computation of the logical flips \(\bar{f}\) can be done with a complexity of \(O(n_{m}n_{\ell})\) bit operations in the worst case. However, the number of logical outcome bits \(n_{\ell}\) is often small.
Putting things together, we obtain a worst-case complexity in \(O(n^{2}\Delta^{2})\) dominated by the syndrome computation. For some circuits, the syndrome and the logical flips can be computed more efficiently (in \(O(n\Delta)\)) and then the worst-case complexity of the simulation is in \(O(n^{2}\Delta)\), or \(O(n\Delta)\) for sparse circuits, dominated by the computation of the cumulant \(\overrightarrow{F}\) through fault propagation.
## 4 ABC simulation: Circuit-noise simulation without fault propagation
For some specific circuits, such as the standard syndrome extraction circuit for surface codes, we can bypass some of the steps of this simulation by directly computing the syndrome without fault propagation. This is because the standard surface code circuit [39] has a natural graph structure where each vertex corresponds to the measurement of an ancilla qubit in the circuit. This is less obvious for other surface code circuits like the measurement-based circuits of [10] or [21] or for Floquet codes [26].
In what follows, we propose a general algorithm to perform circuit-noise simulations without fault propagation; see Algorithm 2. This can be seen as a hypergraph generalization of the graph-based simulation technique for surface codes. We also simplify the computation of the syndrome and the logical flips, resulting in a more favorable worst-case complexity for the simulation of arbitrary Clifford circuits. Table 1 compares the complexity of Algorithm 1 and Algorithm 2. At a high level, these algorithm are similar. One can think of Algorithm 2 as obtained by removing steps 4 and 5 from Algorithm 1. In Table 1, we use the step numbers of Algorithm 1 (step 4, 5, 6, 7) to compare these two strategies even though step 4 and 5 are not present in Algorithm 2.
### Case of general circuits
In this section, we propose a circuit-noise simulation algorithm without fault-propagation for general Clifford circuits. The procedure is described is Algorithm 2.
The key ingredient to remove the fault propagation is the following result from [14]. It relates the accumulator \(F\mapsto\overrightarrow{F}\) and the back-accumulator \(F\mapsto\overleftarrow{F}\). Recall that \(\overleftarrow{F}\) is the fault operator defined in the same way as \(\overrightarrow{F}\) but by propagating faults backward through the circuit.
**Proposition 1** (Adjoint of the cumulant).: _[Proposition 3 of [14]] For all fault operators \(F,G\) of a circuit \(\mathcal{C}\), we have_
\[[\overrightarrow{F},G]=[F,\overleftarrow{G}].\]
The accumulator \(F\mapsto\overrightarrow{F}\) and the back-accumulator can \(F\mapsto\overleftarrow{F}\) are linear operators acting on the space of Pauli operators on \(n(\delta+1)\) qubits. This Pauli group is isomorphic with \(\mathbb{Z}_{2}^{2n(\delta+1)}\) (ignoring the global phase), and it is equipped with the symplectic inner product defined by \([P,Q]=0\) if \(P\) and \(Q\) commute and \(1\) if they anti-commute. Proposition 1 states that the back-accumulator
is the adjoint of the accumulator. This is because of this key property for our simulation algorithm that we name it adjoint-based code (ABC) simulation.
For any vector \(u\in\mathbb{Z}_{2}^{n_{m}}\), define the operator
\[F(u)=\prod_{j=1}^{n_{m}}\eta_{\ell_{j}-0.5}(S_{j}^{u_{j}}) \tag{4}\]
also used in [14]. Therein, \(S_{j}\) is the \(j\)th measured operator and \(\ell_{j}\) is the level of the circuit at which this operator is measured. With this notation, Proposition 1 leads to the following result.
**Corollary 1**.: _Let \(F\) be a fault operator with effect \(f=\operatorname{eff}_{m}(F)\) on measurement outcomes. If \(u\in\mathbb{Z}_{2}^{n_{m}}\), then_
\[(f|u)=[F,\overleftarrow{F(u)}] \tag{5}\]
Therein, \((x|y)=\sum_{i}x_{i}y_{i}\pmod{2}\) is the standard binary inner product between two binary vectors.
Proof.: By Lemma 1, we have \(f_{j}=[\overrightarrow{F},\eta_{\ell_{j}-0.5}(S_{j})]\). Using the standard properties of the commutator
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Algorithm & Assumption & Precomp. & Comp. of \(\overrightarrow{F}\) & Comp. of \(f\) & Comp. of \(s\) & Comp. of \(\bar{f}\) \\ & & & (Step 4) & (Step 5) & (Step 6) & (Step 7) \\ \hline Naive & General circuit & None & \(O(n^{2}\Delta)\) & \(O(n\Delta)\) & \(O(n_{m}n_{s})\) & \(O(n_{m}n_{\ell})\) \\ \hline Naive & Sparse circuit & None & \(O(n\Delta)\) & \(O(n\Delta)\) & \(O(n_{m}n_{s})\) & \(O(n_{m}n_{\ell})\) \\ \hline ABC sim. & General circuit & \(O((n_{s}+n_{\ell})n\Delta)\) & None & None & \(O(n_{m}n_{s})\) & \(O(n_{m}n_{\ell})\) \\ \hline ABC sim. & LDPC ST code & \(O(n_{\ell}n\Delta)\) & None & None & \(O(|F|)\) & \(O(|F|n_{\ell})\) \\ \hline ABC sim. & LDPC ST code & \(O(n)\) & None & None & \(O(|F|)\) & \(O(|F|n_{\ell})\) \\ & + periodic & & & & & \\ \hline \end{tabular}
\end{table}
Table 1: Worst-case computational complexity of the main steps of the circuit-noise simulation for different classes of circuit with a naive simulation (Algorithm1) and with ABC simulation (Algorithm 2). We consider general Clifford circuits, sparse Clifford circuits made with circuit operations acting on a bounded number of qubits, Clifford circuits with an LDPC spacetime (ST) code and periodic circuits with an LDPC ST code. The worst-case complexity is computed as a function of the number of qubits \(n\), the circuit depth \(\Delta\), the number of measurements \(n_{m}\), the number of syndrome bits \(n_{s}\) and the number of logical outcome bits \(n_{\ell}\). Our approach is favorable when the sample size is large or when the number of Pauli faults \(|F|\) in the circuit is small. We do not include the cost of sampling (step 3) in this table because it is the same for all algorithms.
(see Section 3.3 of [14]), we find
\[(u|f) =\sum_{j=1}^{n_{m}}u_{j}f_{j} \tag{6}\] \[=\sum_{j=1}^{n_{m}}u_{j}[\overrightarrow{F},\eta_{\ell_{j}-0.5}(S_ {j})]\] (7) \[=\sum_{j=1}^{n_{m}}[\overrightarrow{F},\eta_{\ell_{j}-0.5}(S_{j}^ {u_{j}})]\] (8) \[=[\overrightarrow{F},\prod_{j=1}^{n_{m}}\eta_{\ell_{j}-0.5}(S_{j}^ {u_{j}})]\] (9) \[=[\overrightarrow{F},F(u)] \tag{10}\]
and applying Proposition 1, we reach \([F,\overleftarrow{F(u)}]\).
By definition, any bit \(b\) of the syndrome \(s\) or of the logical flips \(\bar{f}\) can be written as \(b=(u|f)\) for some vector \(u\in\mathbb{Z}_{2}^{n_{m}}\) (\(u\) is a row of \(\mathbf{M}_{s}\) or \(\mathbf{M}_{\ell}\)). As a result, one can directly compute any bit of \(s\) or \(\bar{f}\) without fault propagation and even without computing the effect \(f\). Instead, we precompute \(\overleftarrow{F(u)}\) for each of the \(n_{s}+n_{\ell}\) rows of the matrices \(\mathbf{M}_{s}\) and \(\mathbf{M}_{\ell}\). The worst-case complexity of this precomputation grows as \(O((n_{s}+n_{\ell})n^{2}\Delta)\) for a general Clifford circuit and is \(O((n_{s}+n_{\ell})n\Delta)\) for a sparse circuit. Once the precomputation is done, each syndrome bit is obtained by computing a commutator with an operator \(\overleftarrow{F(u)}\) acting on at most \(n(\Delta+1)\) qubits.
With this approach, the cost of the computation of \(\overrightarrow{F}\) and \(f\) is removed and the worst-case complexity of computing \(s\) and \(\bar{f}\) remains respectively \(O(n_{m}n_{s})\) and \(O(n_{m}n_{\ell})\) bit operations.
```
input : A Clifford circuit \(\mathcal{C}\), a noise model \(\mathbb{P}_{\mathcal{C}}\), a syndrome matrix \(\mathbf{M}_{s}\), a logical matrix \(\mathbf{M}_{\ell}\), a decoder \(D\), the precomputed operators \(\overleftarrow{F(u)}\) for each row \(u\) of \(\mathbf{M}_{s}\) and \(\mathbf{M}_{\ell}\), an integer \(n_{\text{sample}}\).
output : A Monte-Carlo estimation of the failure rate of the circuit \(\mathbb{P}(D(s)\neq\bar{f})\).
1 Initialize \(n_{\text{fail}}=0\).
2 for \(i=1,2,\ldots,n_{\text{sample}}\) do
3  Sample a fault operator \(F\) according to the circuit-noise distribution \(\mathbb{P}_{\mathcal{C}}\).
4  Compute the syndrome \(s\). The \(j\)th bit of \(s\) is \(s_{j}=[F,\overleftarrow{F(u)}]\) where \(u\) is the \(j\)th row of the matrix \(\mathbf{M}_{s}\).
5  Compute the logical flips \(\bar{f}\). The \(k\)th bit of \(\bar{f}\) is \(\bar{f}_{k}=[F,\overleftarrow{F(u)}]\) where \(u\) is the \(k\)th row of the matrix \(\mathbf{M}_{\ell}\).
6  If \(D(s)\neq\bar{f}\), do \(n_{\text{fail}}\gets n_{\text{fail}}+1\).
7 return \(\frac{n_{\text{fail}}}{n_{\text{sample}}}\).
```
**Algorithm 2** ABC simulation: Circuit-noise simulation without fault propagation
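To make the inner loop of Algorithm 2 concrete, the following is a minimal Python sketch (our illustration, not code from the paper). It assumes Pauli operators on the \(n(\Delta+1)\) qubits are stored in the binary symplectic representation \((x|z)\), so that the commutator \([P,Q]\) of two Pauli operators is their symplectic inner product modulo 2; the callables `sample_fault` and `decoder`, and the arrays `back_ops_s`, `back_ops_l` holding the precomputed operators \(\overleftarrow{F(u)}\) row-wise, are assumed to be provided.

```python
import numpy as np

def commutator(p, q):
    """Symplectic inner product mod 2: 0 if the two Paulis commute, 1 otherwise.
    p, q are binary vectors of length 2*m laid out as (x | z)."""
    m = len(p) // 2
    return (p[:m] @ q[m:] + p[m:] @ q[:m]) % 2

def abc_sample_failure_rate(sample_fault, back_ops_s, back_ops_l, decoder, n_samples, rng):
    """Monte-Carlo estimate of P(D(s) != f_bar) without fault propagation.

    sample_fault(rng)      -> binary symplectic vector F drawn from the circuit-noise model
    back_ops_s, back_ops_l -> rows are the precomputed operators F(u) for the syndrome
                              matrix and the logical matrix, respectively
    """
    n_fail = 0
    for _ in range(n_samples):
        F = sample_fault(rng)
        # Step 4: each syndrome bit is a commutator with a precomputed operator.
        s = np.array([commutator(F, row) for row in back_ops_s])
        # Step 5: same for the logical flips.
        f_bar = np.array([commutator(F, row) for row in back_ops_l])
        # Step 6: count a failure when the decoder misses the logical flips.
        if not np.array_equal(decoder(s), f_bar):
            n_fail += 1
    return n_fail / n_samples
```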
### Case of an LDPC spacetime code
Assume now that the spacetime code of the circuit is LDPC. Recall that the spacetime code is defined by the stabilizer generators \(\overleftarrow{F(u)}\) used to compute the syndrome bits [14]. Because this code is LDPC, each qubit belongs to at most \(O(1)\) of the operators \(\overleftarrow{F(u)}\) used to compute \(s\). Then, we can compute \(s\) using the relation (5) in \(O(|F|)\) bit operations in the worst case. The same argument does not apply to the computation of the bits of \(\bar{f}\) because the corresponding back-accumulated operator can have large weight, however \(n_{\ell}\) is often a small constant making the computation of \(\bar{f}\) inexpensive.
If, in addition, the circuit we simulate is obtained by repeating a constant-depth circuit periodically, then the precomputation of the operators \(\overleftarrow{F(u)}\) used to compute \(s\) and \(\bar{f}\) can be done in \(O(n)\) bit operations.
## 5 Application to the simulation of large noisy Clifford circuits
In this section we argue that, using ABC simulation, the simulation of a noisy Clifford circuit with large depth acting on many logical qubits is not significantly more expensive than the simulation of a single logical operation of this circuit.
To assess the performance of a large noisy Clifford circuit, it is common to simulate a small piece of this circuit. For instance, to understand the performance of a quantum error correction code for building a quantum memory, we would like to run the quantum error scheme until a logical error occurs to estimate the lifetime of a quantum state. Instead, we often simulate a single or a small number of logical cycles and we estimate the logical error rate per logical cycle [35, 20]. Other pieces of fault-tolerant quantum computing circuits have been simulated such as a lattice surgery operation on two logical qubits [42] and recently up to four logical qubits [4]. This removes the need to propagate faults through a long circuit, making the simulation easier. However, the results extrapolated from the performance of these small subcircuits of a larger circuit are not as accurate as a simulation of the whole lifetime of an encoded quantum state because (i) estimating the logical error rate of \(T\) cycles by multiplying the logical error rate of a single logical cycle by \(T\) is a rough approximation, (ii) the residual noise at the end of a logical cycle may affect the performance of the subsequent correction cycles and we cannot observe this phenomenon if we simulate a single logical cycle, (iii) the noise model may change and the noise rate may increase during the execution of a large circuit.
As an example, assume that each logical qubit is encoded in a patch of qubits for a code equipped with a fault-tolerant lattice surgery operation to perform logical Clifford gates and an efficient decoder (think of a surface code or a Floquet code patch for example). Consider a circuit \(\mathcal{C}\) starting with the fault-tolerant preparation of all the logical qubits in the state \(|\bar{0}\rangle\), followed by \(\Delta\) layers of lattice surgery operations2 and ending with the logical measurement of all the logical qubits. This circuit produces an \(N\)-bit logical outcome. The probability of an error on this logical outcome bit-string after decoding is some constant \(\varepsilon\in[0,1]\) that we want to estimate.
Footnote 2: Suppose that most logical qubits are part of a lattice surgery operation and few logical qubits are idle.
A standard way to estimate \(\varepsilon\) is to compute the logical error rate \(\varepsilon^{\prime}\) of a single lattice surgery circuit \(\mathcal{C}^{\prime}\) acting on two logical qubits and to multiply \(\varepsilon^{\prime}\) by the number of lattice surgery operations. Because there are about \(\frac{N\Delta}{2}\) lattice surgery operations in the entire circuit \(\mathcal{C}\), each of them has a logical error rate \(\varepsilon^{\prime}\) of the order of \(\frac{2\varepsilon}{N\Delta}\). To get a sufficiently small error-bar on our estimate of \(\varepsilon^{\prime}\), we
must sample of the order of \(\frac{N\Delta}{2\varepsilon}\) fault configurations in \(\mathcal{C}^{\prime}\). Suppose that we use \(\frac{15N\Delta}{\varepsilon}\) samples, so that we observe an average of 30 logical errors. Each fault configuration is obtained by generating a random single-qubit or two-qubit Pauli fault after each circuit operation3 of \(\mathcal{C}^{\prime}\). Overall, we need to produce \(\frac{15N\Delta|\mathcal{C}^{\prime}|}{\varepsilon}\) random Pauli faults (single-qubit or two-qubit) where \(|\mathcal{C}^{\prime}|\) is the number of single-qubit and two-qubit operations of the lattice surgery circuit.
Footnote 3: We assume that the circuit is made with single-qubit and two-qubit operations.
Consider now the number of random Pauli faults needed to estimate \(\varepsilon\) by simulating the entire circuit \(\mathcal{C}\). Because the noise rate \(\varepsilon\) of the entire circuit is much higher than the noise rate of a single lattice surgery operation, we can achieve a similar error-bar as the previous strategy using only \(\frac{30}{\varepsilon}\) samples. However, each sample requires generating a random Pauli fault for each operation of the entire circuit \(\mathcal{C}\), that is about \(\frac{N\Delta|\mathcal{C}^{\prime}|}{2}\) operations. Overall, the total number of random single-qubit or two-qubit Pauli faults needed to observe an average of 30 logical errors is again \(\frac{15N\Delta|\mathcal{C}^{\prime}|}{\varepsilon}\), just like in the previous case.
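The following snippet makes this counting argument explicit for some assumed illustrative values of \(N\), \(\Delta\), \(|\mathcal{C}^{\prime}|\) and \(\varepsilon\) (the numbers are ours, chosen only to show that both strategies require the same total number of random Pauli faults):

```python
N, Delta, C_prime_ops, eps = 1000, 500, 20_000, 1e-2  # assumed illustrative values

# Strategy 1: simulate a single lattice surgery circuit C'.
eps_prime = 2 * eps / (N * Delta)          # logical error rate per surgery operation
samples_1 = 30 / eps_prime                 # ~30 observed logical failures
faults_1 = samples_1 * C_prime_ops         # one Pauli fault per circuit operation

# Strategy 2: simulate the entire circuit C (about N*Delta/2 surgery operations).
samples_2 = 30 / eps
faults_2 = samples_2 * (N * Delta / 2) * C_prime_ops

print(faults_1, faults_2)                  # both equal 15*N*Delta*|C'|/eps
```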
The issue with the naive simulation method based on fault propagation is that one needs to propagate these faults through the entire circuit. Our simulation method without fault propagation removes this obstacle. For this strategy to work, we need a decoder that can be executed efficiently for the whole circuit. This can be achieved with a sliding window decoder [15] which can be parallelized as proposed in [36, 37].
The price to pay for the ABC simulation of the whole circuit using Algorithm 2 is the pre-computation of the backpropagated operators \(\overleftarrow{F(u)}\) corresponding to the syndrome bits and the logical outcome bits. As discussed previously (see Table 1), this cost is not significant in many cases because the operators \(\overleftarrow{F(u)}\) associated with syndrome bits are often sparse and there are only \(N\) operators \(\overleftarrow{F(u)}\) associated with logical bits in \(\mathcal{C}\). Moreover, we can use the fact that the circuit \(\mathcal{C}\) is made with the same subcircuits repeated many times (fault-tolerant preparation, lattice surgery and logical measurements) to speed up the backpropagation of the \(F(u)\).
## 6 Conclusion
We proposed an ABC simulation algorithm for noisy Clifford circuits that removes the need for fault propagation and opens the way to the simulation of large noisy Clifford circuits. In particular, our approach is ideal for direct simulation of long sequences of fault-tolerant logical operations on many logical qubits, which stands in contrast to rough extrapolation based on composition of small subcircuits. It could, for instance, provide accurate estimates of the performance of large circuits based on lattice surgery with surface codes or Floquet codes. Prime candidates include the plethora of Floquet codes recently introduced [1, 11, 31, 5, 40, 16, 44, 17, 12].
ABC simulation is compatible with, and may be combined with, variance reduction techniques such as [8, 2, 29, 30]. In the low noise-rate regime, this may speed up fault sampling and further reduce the number of required samples overall.
|
2309.13734 | Prompting and Fine-Tuning Open-Sourced Large Language Models for Stance
Classification | Stance classification, the task of predicting the viewpoint of an author on a
subject of interest, has long been a focal point of research in domains ranging
from social science to machine learning. Current stance detection methods rely
predominantly on manual annotation of sentences, followed by training a
supervised machine learning model. However, this manual annotation process
requires laborious annotation effort, and thus hampers its potential to
generalize across different contexts. In this work, we investigate the use of
Large Language Models (LLMs) as a stance detection methodology that can reduce
or even eliminate the need for manual annotations. We investigate 10
open-source models and 7 prompting schemes, finding that LLMs are competitive
with in-domain supervised models but are not necessarily consistent in their
performance. We also fine-tuned the LLMs, but discovered that fine-tuning
process does not necessarily lead to better performance. In general, we
discover that LLMs do not routinely outperform their smaller supervised machine
learning models, and thus call for stance detection to be a benchmark for which
LLMs also optimize for. The code used in this study is available at
\url{https://github.com/ijcruic/LLM-Stance-Labeling} | Iain J. Cruickshank, Lynnette Hui Xian Ng | 2023-09-24T19:36:17Z | http://arxiv.org/abs/2309.13734v2 | # Use of Large Language Models for Stance Classification
###### Abstract
Stance detection, the task of predicting an author's viewpoint towards a subject of interest, has long been a focal point of research. Current stance detection methods predominantly rely on manual annotation of sentences, followed by training a supervised machine learning model. This manual annotation process, however, imposes limitations on the model's ability to fully comprehend the stances in the sentence and hampers its potential to generalize across different contexts. In this study, we investigate the use of Large Language Models (LLMs) for the task of stance classification, with an absolute minimum use of human labels. We scrutinize four distinct types of prompting schemes combined with LLMs, comparing their accuracies with manual stance determination. Our study reveals that while LLMs can match or sometimes even exceed the benchmark results in each dataset, their overall accuracy is not definitively better than what can be produced by supervised models. This suggests potential areas for improvement in the stance classification for LLMs. The application of LLMs, however, opens up promising avenues for unsupervised stance detection, thereby curtailing the need for manual collection and annotation of stances. This not only streamlines the process but also paves the way for expanding stance detection capabilities across languages. Through this paper, we shed light on the stance classification abilities of LLMs, thereby contributing valuable insights that can guide future advancements in this domain. The code used in this study is made available at [https://anonymous.4open.science/r/LLM-Stance-Labeling/README.md](https://anonymous.4open.science/r/LLM-Stance-Labeling/README.md).
1Army Cyber Institute
2101 New South Post Road
Highland Falls, NY 10996
2Carnegie Mellon University
5000 Forbes Road
Pittsburgh, PA
[email protected], [email protected]
## Introduction
Identifying and classifying an individual's stance towards a particular entity is a pivotal challenge in the realm of computational social science research. Stance detection entails the automated prediction of an author's viewpoint or stance towards a subject of interest, often referred to as the "target" (Alturayeif, Luqman, and Ahmed, 2023). Typically, a stance towards a subject is categorized as "Agree", "Disagree", or "Neutral". However, the labels representing stance can vary based on the specific target or context. Essentially, a stance mirrors an individual's perspective toward a specific topic or entity. Stance detection is used in downstream tasks like fake news detection, opinion surveys, and rumor detection (Kucuk and Can, 2022).
While the concept of stance might seem straightforward, detecting and classifying it involves unique challenges. First, the definitions of stance for labeling purposes can be ambiguous. For instance, previous studies have indicated discrepancies in stance definitions across various benchmark stance detection data sets. This inconsistency raises questions about the transferability of models trained on these data sets (Ng and Carley, 2022; Allaway and McKeown, 2023). Additionally, understanding stance is inherently context-dependent, as it represents an opinion about a specific entity. Without the appropriate context, comprehending the stance becomes nearly impossible. Consequently, these challenges hamper the broad applicability of any stance detection model, making stance classification an enduring challenge.
At the same time, recent developments in Large Language Models (LLM) have enabled breakthroughs in complex language understanding. In particular, through _prompting_ LLMs, researchers have been able to use LLMs to solve several complex language tasks (Brown et al., 2020; Schmidt et al., 2017). This paradigm of prompting a large pre-trained model even works in settings with few or no labeled data to solve classification problems (Liu et al., 2023; Brown et al., 2020; Zhao et al., 2021; Wei et al., 2023). Thus, it is possible for an LLM, with a suitable prompt to classify language by an ambiguous and complex label. As such, a few recent works have investigated the applicability of ChatGPT to classify stances on a couple of benchmark data sets (Zhang, Ding, and Jing, 2022; Mets et al., 2023; Aiyappa et al., 2023). These works produced mixed results for the performance of ChatGPT and only considered task-based prompting, thus it is not clear if LLMs and prompt engineering could be used for stance classification more broadly.
In this paper, we investigate the following question: How well do Large Language Models with prompt engineering perform at stance classification, without fine-tuning? We utilize five publicly available data sets in this study and query different LLMs with four different prompting schemes of increasing contextual information. We also perform our testing with minimal use of labels (only few-shot prompting) to explore how useful these models and prompting are in a more real-world setting where one does not have labels for their stance classification problem already. Despite LLMs and
prompt engineering showing promise for some data sets, our surprising conclusion is that the underlying task still remains a challenge. Indeed, we find that LLMs do not perform significantly better than current supervised machine learning stance classification models, and present artifacts like inconsistent outputs, opening up areas for future research on improving the stance detection capabilities of LLMs.
## Related Research
Previous work on stance detection has focused on the construction of supervised machine-learning models for the task. A commonly used machine learning classifier is the Support Vector Machine Lai et al. (2018); Elfardy and Diab (2016), which has performed well in the SemEval-2016 stance detection competition Mohammad et al. (2016). Supervised models that use neural network architectures are also popular. These include the use of convolutional neural networks Wei et al. (2016), or recurrent neural networks Zarella and Marsh (2016), sometimes enhanced with textual entailment Zhao and Yang (2020) and data augmentation for improved accuracies Kawinitranon and Singh (2021). Most recent work, however, has focused on multi-task learning objectives and transfer learning from transformer-based neural networks Alturayeif et al. (2023); Yang et al. (2019); Zhao and Yang (2020). Despite the typically stronger in-domain performance of these models, they often struggle to generalize to new data or other targets of stances, and are often of little use to real-world practitioners due to these generalizability shortcomings Ng and Carley (2022); Alturayeif et al. (2023).
While most work in stance detection has focused on supervised machine learning techniques, there are also some unsupervised techniques. Unsupervised learning methods make use of the idea of language homogenity for label classification Zhang et al. (2023). In that aspect, graph networks are a popular technique. Zhang et al. (2023) used graph neural networks to formulate homogeneous and heterogenous information for a user on Twitter, inferring the stance based on past tweets and tweets of neighbors. Another technique is label propagation based on the user interaction network or relationships Weber et al. (2013), to propagate stance labels based on existing knowledge. The interaction network can also be divided into partitions, with the stance of each partition then interpreted Pick et al. (2022). Darwish et al. (2020) first projected a large set of Twitter user data to a low-dimensional space before clustering the interaction network into multiple partitions. While these methods do not rely on having a set of stance labels to train a model, they often rely on very specific scenarios for their use (i.e., a social media website that has explicit behavioral links between users, which can be used to create a network) and strong assumptions about the language or behavior of users to infer stance (i.e., use of a certain word or hashtag always conveys a certain stance).
Recently, there has been an increase in research focusing on the concept of zero-shot stance detection. Notably, Allaway and McKeown (2023) discuss a variety of techniques for zero-shot stance detection and present an adaptation of the SemEval2016 dataset Zarella and Marsh (2016), along with their own VAST dataset, specifically designed for this purpose. They highlight the multitude of ways stance characterization can be achieved in a zero-shot manner. According to Allaway and McKeown (2023), there are three primary paradigms of zero-shot stance detection: topic, language, and genre. For each paradigm, a model is typically trained on all data, except for one element of the paradigm which is reserved for the zero-shot test. For instance, in zero-shot topic stance detection, a model is trained on data from all topics except one, which is then used for evaluation. Allaway and McKeown (2023) found that all models perform less optimally in the zero-shot setting compared to a fully supervised setting.
In terms of using LLMs for the stance detection task, current work has focused on just ChatGPT, with mixed results. Zhang et al. (2022) found that with just an instruction-based prompt (although they hint at the use of reasoning in a prompt in their paper), ChatGPT could produce better results on the SemEval2016 benchmark data set than supervised models. However, Aiyappa et al. (2023) examined the stance detection task using the ChatGPT model, observing that while there is a boost in performance, there could be potential contamination of data through its massive trained dataset, therefore making the evaluation unreliable. Additionally, Mets et al. (2023) probed the usability of ChatGPT as a zero-shot classifier, using only a task-based prompt, for stance detection on a custom data set on the topic of immigration in various language news articles. They found that ChatGPT performed close to the best supervised model, but was ultimately inferior for the stance classification task. Finally, given the similarities between stance and sentiment, it is also worth mentioning that Kheiri and Karimi (2023) investigated all of the OpenAI model offerings on the task of sentiment classification with a couple of different prompts and different benchmark data sets and found that GPT models, especially if fine-tuned, can significantly outperform any other models for sentiment classification. Overall, it is still unclear if LLMs, especially with prompt engineering and without the use of fine-tuning on labeled data, can perform the task of stance classification.
With the advent of LLMs for natural language tasks, the new discipline of prompt engineering has also come into being. Prompt engineering is the discipline of finding the right ways to provide input to models -- or 'prompt' them -- to get the best outputs Schmidt et al. (2023); Ramlochan (2023). While this is a fast-changing discipline, with new findings about prompt engineering constantly emerging, there are a couple of techniques that have emerged for use with LLMs in particular. The first is few-shot prompting, which is when you give a few examples of what you want the LLM to do as part of the prompt White et al. (2023); Brown et al. (2020); Wei et al. (2023). This is different from fine-tuning the LLM or low-shot learning, as no training (i.e., adjusting of model weights) is performed when the examples are given; the examples are only given as part of the context of the task Brown et al. (2020). While this prompting technique does consistently produce improved outputs, there are still possible instabilities in the technique which
can be caused by things like the ordering of the examples [13, 14]. Another prompting technique that has consistently improved the output of LLMs is Chain-of-Thought Reasoning [21, 15]. In this prompting scheme, the LLM is typically asked to explain its reasoning and to work through answering a prompt step-by-step. Answering prompts in such a process usually improves outcomes and helps to prevent undesirable behaviors like hallucination, where the model produces a plausible-looking answer that is incorrect [21, 15]. This technique has been used in an iterative, chatting format to improve implicit sentiment classification in previous research [21]. Thus, while the best means of interacting with LLMs is an open research question, there are certain prompting techniques, like few-shot prompting, that can elicit better outcomes from LLMs.
## Methodology
In this section, we review the benchmark data sets for the stance classification task and describe the prompting techniques with an LLM to produce stance classification results.
### Data Sets
We used a total of five publicly available, benchmark data sets, which have been manually annotated: covid-lies, election2016, phemerumors, semeval2016, and wtwt. These five data sets are of similar properties: they are constructed out of sentences which are Twitter posts and are written in the English language. The targets that are in the data sets range from misconceptions to elections to tragedies, which means the definition of stance varies between them. For example, in covid-lies and phemerumors the stance is about whether the statement supports or denies a rumor, while in semeval2016 and election2016 the stance is about an opinion of the target. Table 1 lists the data sets that are used, the targets that are present in the data sets, and the highest reported accuracy as found with the original papers. We followed the same data set handling procedures described in [20] in terms of standardizing labels for evaluation (but not for prompting).
### Prompting for Stance Classification
In order to investigate the use of LLMs and prompting for the task of stance classification, we used four different prompting schemes. The prompting schemes are hierarchical in nature such that each prompting scheme incorporates more information from the previous scheme. The following figure, Figure 1, displays the general, overarching prompting scheme and what elements are available to the LLM within each, individual prompting scheme, while actual examples of each prompt scheme are provided in the Appendix.
The following list details each of the prompting schemes we used to classify stance for each of the data sets; a minimal prompt-construction sketch follows the list.
1. **Task-only:** In the task-only prompt, we adopt a zero-shot learning prompting method, providing only the task (e.g. 'Classify the following statement... '). For this, we provide different classification outcomes depending on the data set, but they generally follow an 'AGREE', 'DISAGREE', or 'NEUTRAL' format.
2. **Context:** In this prompting scheme we add in contextual information about what the statement is and the target of the stance classification to the task of classifying the stance of the statement (e.g. 'The following statement is a [context]. Classify the following statement toward [target] '). This prompting scheme is what the previous works that investigated using ChatGPT for stance labeling used [13, 14, 15].
3. **Context + FSP:** For this prompting scheme, we now utilize few-shot prompting [1] and provide some examples of the stance being classified for a statement and entity. For this prompting scheme, we keep the context provided in the context scheme, to include the target for each of the few-shot examples.
4. **Context + FSP + Reasoning:** Lastly, we further enhance the prompts with reasoning. In this prompting scheme, we provide a reason for why each few-shot example was classified as the stance that it was classified as and we further prompt the LLM to provide its reasoning for why it classifies a statement with a particular stance by '... and the reasoning for the classification in the form of:'stance: STANCE, reason: REASON' '. With this prompt, we seek to leverage some of the benefits of Chain-of-Thought prompting [21] by forcing the LLM to consider its reasoning as part of performing the task.
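A minimal sketch of how these four schemes can be assembled from plain string templates is shown below. The wording, field names, and example values are our illustrative assumptions; the study built its actual prompts programmatically with Langchain.

```python
def build_prompt(statement, target, context, labels, examples=None, with_reasoning=False):
    """Assemble one of the four prompting schemes from its building blocks."""
    label_str = ", ".join(labels)
    lines = []
    if context:                       # Context schemes
        lines.append(f"The following statement is {context}.")
    if examples:                      # Few-shot prompting (FSP) schemes
        for ex in examples:
            shot = f"statement: {ex['statement']} -> stance: {ex['stance']}"
            if with_reasoning and "reason" in ex:
                shot += f", reason: {ex['reason']}"
            lines.append(shot)
    task = f"Classify the stance of the following statement toward {target} as one of: {label_str}."
    if with_reasoning:
        task += " Answer in the form 'stance: STANCE, reason: REASON'."
    lines.append(task)
    lines.append(f"statement: {statement}")
    return "\n".join(lines)

# Example: the 'Context + FSP' scheme for a semeval2016-style item.
prompt = build_prompt(
    statement="We should all do our part for the planet.",
    target="climate change",
    context="expressing an opinion about an entity",
    labels=["FOR", "AGAINST", "NEUTRAL"],
    examples=[{"statement": "Climate action now!", "stance": "FOR"}],
)
```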
## Results
In this section, we present the results of using an LLM with prompting to perform stance classification. We begin by describing our test setup and then go on to present results for each of the benchmark data sets.
### Experimental Set Up
In this section, we detail the LLMs we investigated, how the prompting schemes were constructed for each data set, and the hardware used for testing.
LLMsGiven the results in [1] and [15], we opted to use only local, open-sourced LLMs for this investigation, as it is possible closed LLMs, like those from OpenAI, may have data contamination issues with the benchmark data sets. In particular, we used a number of encoder-decoder and decoder-only models available on HuggingFace [22]. For the decoder-only models, we attempted to use GPT-NeoX [1], Falcon -7B and -40B [1], MPT-7b [1], and Llama-2 -7B and -13B [16]. Unfortunately, for the decoder-only models, we found they could not reliably produce stance classifications under any of the prompting schemes. In many cases, they would return nonsensical responses such as elements of the prompt, blank spaces, or an attempt to explain the task. As
such, we do not present results from the decoder-only models. For the encoder-decoder models, we experimented with Flan-UL2 and Flan-Alpaca-GPT4-T5, which are both T5-based models and currently state of the art for these types of models [14, 15].
For each of these models, we employ HuggingFace's AutoTokenizer1 and pipeline classes to run the models [13]. We set each of the models to only provide the most probable output (e.g., by setting temperature=0) and a max_length of 1000 tokens.
Footnote 1: [https://huggingface.co/docs/transformers/v4.33.0/en/](https://huggingface.co/docs/transformers/v4.33.0/en/)
model_doc/auto/transformers.AutoTokenizer
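A minimal sketch of this setup with the `transformers` library is given below. The model id and the use of `do_sample=False` for deterministic, most-probable-output decoding are our assumptions for illustration; the paper states that temperature was set to 0 and `max_length` to 1000 tokens.

```python
from transformers import AutoTokenizer, pipeline

model_id = "google/flan-ul2"  # assumed HuggingFace model id for Flan-UL2 (a ~20B model; needs large GPUs)
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
    "text2text-generation",   # encoder-decoder (T5-style) models
    model=model_id,
    tokenizer=tokenizer,
    max_length=1000,
    do_sample=False,          # greedy decoding, i.e. only the most probable output
)

output = generator("Classify the stance of: 'I support this policy.' toward 'the policy'.")
print(output[0]["generated_text"])
```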
**Prompt Details.** For each of the data sets, we altered the prompts slightly, based on the context of the data set. The following table, Table 2, summarizes how the prompting schemes were adjusted in the methodology for each data set. For the few-shot prompting (FSP) scheme, we included 5 examples (shots) that were taken from the data set at random. These examples were the same for every statement of that data set. We used the Python Package Langchain to programmatically construct these prompts for the tests 2.
Footnote 2: [https://www.langchain.com/](https://www.langchain.com/)
**Hardware.** All of the tests were run on a computer with Ubuntu 22.04 Linux, an x64 CPU with 40 cores, 376 GB of RAM, and two NVIDIA A6000 GPUs.
**Evaluation.** For evaluation, we report the unweighted, macro-F1 accuracy metric following previous work [1]. This macro-F1 score adjusts for the proportion of each class label type, for there is an imbalance of class labels in some of our data sets.
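For instance, with scikit-learn (not necessarily the library used in the study; the label strings below are illustrative) the unweighted macro-F1 can be computed as follows:

```python
from sklearn.metrics import f1_score

y_true = ["FOR", "AGAINST", "NEUTRAL", "AGAINST", "FOR"]
y_pred = ["FOR", "NEUTRAL", "NEUTRAL", "AGAINST", "AGAINST"]

# 'macro' averages the per-class F1 scores without weighting by class frequency,
# which compensates for the label imbalance present in some of the data sets.
score = f1_score(y_true, y_pred, average="macro")
print(f"macro-F1 = {score:.2f}")
```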
### Experimental Results
The results for different prompting schemes and LLMs for the different benchmark data sets are presented in Table 3. For each of the combinations, we ran the test three times and report the average result; we found that the LLM outputs could vary slightly between runs, especially when there was no context provided as part of the prompt.
From the testing, we only found that an LLM with prompting could outperform the benchmark, supervised models on only two of five of the benchmark data sets. That said, the LLM plus prompting results do come close, often to within 0.05 or less of the supervised benchmark results. Additionally, when compared to previous zero-shot stance detection results from [1], the LLMs perform significantly better on the semeval2016 data set than the zero-shot models (which still were able to train on some of the topics as part of the zero-shot topic evaluation).
We also note that the inclusion of context into the prompt was the single factor that most increased the performance of the LLMs; including context into the prompt always increased performance in stance classification. This result
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Data Set** & **Event** & **Unweighted F1-score** & **Number of Examples** \\ \hline
**covid-lies** & misconceptions towards COVID-19 pandemic & 0.502 & 3,196 \\ \hline
**election2016** & 2016 US Presidential elections & 0.55 & 2,378 \\ \hline
**phemerumors** & tragedies (unrest, disasters, hostage, plane crash) & 0.33 & 2,859 \\ \hline
**semeval2016** & atheism, climate change, feminism, Hillary Clinton, abortion & 0.69 & 2,814 \\ \hline
**wtwt** & Company mergers and acquisitions & 0.62 & 32,409 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of data sets used with our descriptions of the events and best-reported unweighted, macro F1-score from the original data set.
Figure 1: Overarching Prompting Scheme for Stance Classification. Text highlights indicate the information available for each of the prompting schemes. Purple is the prompt with the task, green provides the addition of context, blue provides few-shot examples, and red adds reasoning to the examples. Each of the bracketed words indicates verbiage that would vary depending on the data set.
makes sense, as the definition of stance relies on context, such as the target of the stance. Whereas, the inclusion of few-shot examples and reasoning did not always improve performance across all of the benchmark data sets. This may be a result of which examples were selected for the few shots as it is known that example selection can affect few-shot prompting performance [23, 10]. Finally, we note that the larger T-5 model generally performed better than the smaller one, despite the smaller one being trained on newer data sets (i.e., Alpaca-GPT4 which are data generated by GPT-4 in response to Alpaca prompts), which argues in favor of the general consensus that larger models are more capable.
Additionally, during testing, we noted that while the LLM was given explicit instructions about the output to return, there were occasionally variances in this output. For example, the model would occasionally return responses like 'For', 'for', '"FOR"', or 'The stance is FOR' for the stance label of 'FOR'. While this can be addressed by a relatively simple post-processing script, and while we inspected the outcomes of all of the runs of the models for inconsistencies in outputs and did not find anything that could not be easily addressed, this is still an issue that needs to be accounted for when using LLMs for stance classification.
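A post-processing function of the kind described might look as follows (a sketch only; the exact normalization rules used in the study are not specified, and the label set and fallback label here are our assumptions):

```python
import re

LABELS = ["FOR", "AGAINST", "NEUTRAL"]

def normalize_stance(raw_output, labels=LABELS, default="NEUTRAL"):
    """Map a free-form LLM response onto one of the allowed stance labels."""
    text = re.sub(r"[^A-Za-z ]", " ", raw_output).upper()  # strip quotes, punctuation
    for label in labels:
        if re.search(rf"\b{label}\b", text):
            return label
    return default  # fall back if no label can be recovered

for raw in ["For", "for", "'FOR'", "The stance is FOR"]:
    print(raw, "->", normalize_stance(raw))
```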
Along with the inconsistencies in output formats, we also found that the LLMs did not always provide meaningful reasons when confronted with the context + FSP + reason prompt. The models would occasionally recycle a reason from the few-shot examples or would not output a reason at all. Once again, this can be handled by a simple post-processing script, but it is another issue with using LLMs for stance classification.
### Stance vs Sentiment Classification
In order to see if we could improve the performance of the LLMs, we also investigated a change in the verbiage of the prompts. Instead of asking for the 'stance' of a statement toward a target, we instead asked for the 'sentiment' of a statement toward a target. Sentiment analysis is a close cousin of the stance detection task. It analyzes the attitudes expressed in a text in the form of a polarity (e.g. positive, negative, neutral) and has been used to understand the beliefs of people through their online writing (i.e., news, blogs) [11]. It is also a more broadly researched task with many more tools and data sets available, including data sets that are included in the pre-training of the T5 models. Our prompt generation setup and evaluation are the same as for the stance classification task, except for replacing the word "stance" with "sentiment", and we only looked at the two data sets whose stance definitions are closest to the definition of sentiment, semeval2016 and election2016, and only with the Flan-Alpaca-GPT4-T5 model. The results of this investigation are presented in Table 4.
From these results, we can clearly see that using the term 'sentiment,' despite sentiment classification being more familiar to the LLMs, actually decreased performance. As a result, we did not attempt to use a directed sentiment classification prompt as a proxy for stance classification for the
\begin{table}
\begin{tabular}{|p{85.4pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Data Set** & **Context** & **Stance Options** & **Target** & **Target Context** \\ \hline
**covid-lies** & about COVID or Coronavirus & supports, denies, neutral, unrelated & belief & is true \\ \hline
**election2016** & about politics & for, against, neutral & politician & N/A \\ \hline
**phemerumors** & commenting on whether a rumor is true & supports, denies, neutral & rumor & is true \\ \hline
**semeval2016** & expressing an opinion about an entity & for, against, neutral & entity & N/A \\ \hline
**wtwt** & that may be commenting on a corporate merger & for, against, neutral, unrelated & event & happening \\ \hline \end{tabular}
\end{table}
Table 2: Summary of the prompt differences between each of the benchmark data sets. For each data set, we used slightly different target, context, and stance labels options in order to accommodate the different purposes of stance classification between the data sets.
\begin{table}
\begin{tabular}{|p{85.4pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}|} \hline \multicolumn{5}{|c|}{**Data Set**} \\ \hline
**Prompting Scheme** & **LLM** & **Phenerumors** & **covid-lies** & **semeval2016** & **election2016** & **wtwt** \\ \hline Task Only & Flan-Alpaca-GPT4-T5-3B & 0.27 & 0.29 & 0.41 & 0.42 & 0.44 \\ Context & Flan-Alpaca-GPT4-T5-3B & 0.35 & 0.37 & 0.57 & 0.49 & 0.47 \\ Context and FSP & Flan-Alpaca-GPT4-T5-3B & 0.41 & 0.45 & 0.57 & 0.46 & 0.5 \\ Context and FSP and Reason & Flan-Alpaca-GPT4-T5-3B & 0.35 & 0.49 & 0.61 & 0.58 & 0.49 \\ \hline Task Only & Flan-UL2 & 0.31 & 0.31 & 0.42 & 0.45 & 0.42 \\ Context & Flan-UL2 & 0.37 & 0.35 & 0.66 & 0.55 & 0.57 \\ Context and FSP & Flan-UL2 & **0.44** & 0.36 & 0.65 & 0.57 & 0.56 \\ Context and FSP and Reason & Flan-UL2 & 0.32 & 0.41 & 0.64 & **0.59** & 0.53 \\ \hline Benchmark & & 0.33 & **0.5** & **0.69** & 0.55 & **0.62** \\ \hline \end{tabular}
\end{table}
Table 3: Average unweighted F1-scores for each prompting scheme, LLM, and data set combination. The benchmark results are given in the final row. The highest scoring result for each data set is bolded.
other, less sentiment-like data sets. This result also indicates that LLMs perceive stance and sentiment differently and that the classification of stance is a different task for an LLM than the classification of sentiment.
## Discussion
Large Language Models have become prominent and widely adopted due to their accessibility and capabilities to perform a plethora of natural language tasks. The use of LLMs has already become a mainstay for language-based tasks like summarization, question-answering, and translation. In our work, we examined the use of LLMs for stance prediction with different prompting schemes. We observed that the use of Large Language Models (LLMs) supplemented with adequate prompting produced results that were comparable to fully supervised models. These models also outperformed previous work looking at zero-shot models, and so in more demanding environments than previous zero-shot stance detection tests (previous zero-shot stance modeling would only hold out a particular topic or genre and train on the rest [11], whereas the LLMs tested in this research did not train on any of the data whatsoever). This is a significant finding as it underscores the potential of LLMs in conjunction with effective prompting to achieve high performance in stance prediction tasks, often matching or even surpassing more resource-intensive supervised models.
Furthermore, while supervised machine learning models generally infer the target of the sentence themselves before deciding on a stance, in our investigation of LLMs, we provided contextual information that indicates things like the stance targets. The inclusion of contextual information in the prompts consistently improved the results. This information also increases the ability of the stance prediction to work across varied targets, removing the need to construct stance data sets pertaining to each target, and thereby improve generalizability. This technique can be enhanced with past work on Target-Stance-Extraction, which automatically extracts the target and corresponding stance from a sentence, reducing the need for large-scale target annotations [13]. Overall, context-based prompts seemed to provide the models with a broader perspective of the information, enabling them to make more accurate stance predictions. This finding aligns with the notion that providing a richer context can aid in more nuanced understanding and better task performance as well as the definition of stance requiring context.
Further, LLMs are typically able to understand multiple languages, which facilitates multilingual stance classification, removing the need to collate and annotate datasets across multiple languages and find native speakers to do so. This opens up opportunities to analyze large sets of data with varied languages, such as opinion expression on social media.
Interestingly, we also found that different types of LLMs performed differently on the task. Specifically, encoder-decoder models, such as the T-5 model, demonstrated successful performance on the stance prediction task, in a zero-shot setting and with no additional model training. On the other hand, decoder-only models were not able to perform the task as effectively under the same scenario. This discrepancy in performance might be due to the inherent architecture of these models. Encoder-decoder models are inherently designed to understand the context and generate relevant output, making them well-suited for tasks like stance detection. Decoder-only models, however, might require additional fine-tuning on stance detection tasks before they can effectively carry out the task.
Our research also highlighted that few-shot prompting did not always improve performance. The reason for this is not entirely clear, but it's likely that the selection of samples for the few-shots may not have been optimal. These samples were selected at random and remained the same for every statement classified, which may have affected the effectiveness of few-shot prompting.
In conclusion, our findings underscore the versatility of LLMs in stance prediction tasks, particularly when used with context-informed prompting and an encoder-decoder architecture. Further research could explore fine-tuning strategies for decoder-only models and optimal sample selection for few-shot prompting in stance detection tasks, potentially unlocking new avenues for their application.
**Stance detection as an LLM Benchmark.** Large Language Models (LLMs) are built and assessed on a range of benchmarks, from common sense reasoning and reading comprehension to mathematical reasoning [23]. However, one language task not represented in current LLM benchmarking is complex language classification, such as stance classification. The task of stance classification is a crucial benchmark to investigate, as many policy formulations depend on understanding public opinion, for instance, opinions towards climate change policies [24]. Misclassification of sentence stances can lead to incorrect interpretations of opinion slant. For downstream analyses that study aggregated stances towards topics to understand public reaction and formulate policy, incorrect classification can result in erroneous interpretations and policy mismatches [25].
Given the importance of stance classification to the wider society and the results of this study, we would like to propose stance classification be considered as a future benchmarking task for LLMs.
\begin{table}
\begin{tabular}{|l|l|l|} \hline & \multicolumn{2}{c|}{**Data Set**} \\ \hline
**Prompt** & **semeval2016** & **election2016** \\ \hline
**Task Only** & 0.36 & 0.4 \\ \hline
**Context** & 0.45 & 0.45 \\ \hline
**Context and FSP** & 0.5 & 0.44 \\ \hline
**Context and FSP and Reason** & 0.46 & 0.43 \\ \hline
\end{table}
Table 4: Results of attempting to use sentiment toward a target as a proxy for stance classification, since the models had been pre-trained on a sentiment classification task. In each case, this prompting did not improve performance.
**Limitations.** As in all studies, several limitations nuance our work. Our data sets are premised on manual annotations, which could be subject to inconsistent annotations and varied sentence interpretations Ng and Carley (2022). For example, there is a sentence about Michael Essien having Ebola, "@xx no he hasn't. The man himself confirmed not true @MichaelEssien", that was annotated as a neutral stance whereas it should be a stance _against_ the claim that Michael Essien had contracted Ebola.
Additionally, while we attempted to consider a wide range of open-source models and possible configurations for those models, we were constrained by computational resources and time from using every possible permutation of open-source models for the task of stance classification. We believe that we have tested a representative sample of offerings, however, it is possible that a certain LLM, perhaps due to pre-training data or even architectural differences, may actually perform uncharacteristically in regards to the models tested in this study.
**Broader Perspectives & Ethical Considerations.** In all research involving Large Language Models (LLMs), it is crucial to acknowledge and address potential ethical implications. A primary concern arises from the datasets used to pre-train LLMs, from which they acquire their language capabilities and knowledge base. These datasets may harbor inherent biases or offensive content Schaul et al. (2023). Although this study makes no attempt to exploit or introduce any form of bias, it is conceivable that these biases might inadvertently permeate the analysis performed using LLMs. This underlines the importance of diligently scrutinizing the data used to train LLMs, especially when they are employed in socially significant tasks such as stance classification, where bias can have profound implications.
Another ethical consideration pertains to the environmental impact of running these computationally intensive models. It's indisputable that LLMs consume more energy compared to their smaller counterparts. Thus, their use in tasks like stance classification is associated with a tangible energy cost. However, it is also crucial to balance this against the alternative scenario, which involves continuous human effort for labeling data, a process that is both labor-intensive and time-consuming due to the generalizability problem inherent in stance classification.
Looking ahead, as we strive to create more sustainable and efficient computational models, one potential avenue could be leveraging LLMs to distill smaller, more energy-efficient models for production purposes. This could significantly decrease the energy demand, making stance classification tasks more environmentally sustainable, while still benefiting from the superior performance of LLMs. As our understanding of LLMs continues to evolve, it is paramount to remain vigilant about these ethical considerations and strive towards more responsible and sustainable practices.
Finally, as with any classification effort of text, such efforts could be used for text-based censorship. For example, it's possible that our research could be used to identify, at scale, comments and users that are presenting a certain stance toward a target and then remove those users or comments. We believe, however, that the benefits of being able to more correctly classify stances of text comments outweigh the potential for misuse, and that the same precautions used to prevent misuse with the classification of texts more broadly can also be applied to this work.
## Conclusion
Stance classification is a crucial task that contributes significantly to discerning the author's perspective towards a particular event. Despite the demonstrated proficiency of Large Language Models (LLMs) in numerous natural language tasks, their performance varies and does not consistently surpass state-of-the-art models Kocon et al. (2023). In this study, we have illuminated the potential of LLMs, particularly when combined with effective prompting, in the realm of stance classification. However, it's important to note that they do not definitively outperform existing supervised methods.
Stance classification, due to the intricacies of language expression and the context-dependent nature of stance, continues to pose a formidable challenge. Yet, the utilization of LLMs for stance classification offers promising opportunities. Notably, it permits the adaptation of stance classification outputs without requiring extensive human annotation, thereby enabling the application of stance classification techniques in a diverse array of contexts beyond those that the original datasets were designed for.
This study serves to enhance our understanding of the stance classification capabilities of LLMs and proposes a pathway for future advancements. The findings underscore the need for improving both prompting schemes and LLM models, using stance classification as a benchmark. As we navigate this path, we anticipate pushing the boundaries of what's possible in stance classification, ultimately contributing to more nuanced and effective natural language processing applications.
Acknowledgments.The research for this paper was supported in part by the Center for Informed Democracy and Social-cybersecurity (IDeaS) and the Center for Computational Analysis of Social and Organizational Systems (CASOS) at Carnegie Mellon University. This work was also conducted within the Cognitive Security Research Lab at the Army Cyber Institute at West Point and supported in part by the Office of Naval Research (ONR) under Support Agreement No. USMA 20057. The views and conclusions are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department of Defense, the U.S. Army, or the U.S. Government.
|
2309.16894 | A structure-preserving particle discretisation for the Lenard-Bernstein
collision operator | Collisions are an important dissipation mechanism in plasmas. In
one-dimensional modelling, a commonly used collision operator is the
Lenard-Bernstein operator, or its modified energy- and momentum-conserving
counterpart. When approximating such operators numerically, it is important to
respect their structure in order to satisfy the laws of thermodynamics. It is,
however, challenging to discretise such operators in a structure-preserving way
when considering particle methods. In this work, we present a macro-particle
discretisation of the Lenard-Bernstein collision operator that is energy and
momentum preserving. | Sandra Jeyakumar, Michael Kraus, Matthew Hole, David Pfefferlé | 2023-09-28T23:39:26Z | http://arxiv.org/abs/2309.16894v2 | # A structure-preserving particle discretisation for the Lenard-Bernstein operator
###### Abstract
Collisions are an important dissipation mechanism in plasmas. In one-dimensional modelling, a commonly used collision operator is the Lenard-Bernstein operator, or its modified energy- and momentum-conserving counterpart. When approximating such operators numerically, it is important to respect their structure in order to satisfy the laws of thermodynamics. It is, however, challenging to discretise such operators in a structure-preserving way when considering particle methods. In this work, we present a macro-particle discretisation of the Lenard-Bernstein collision operator that is energy and momentum preserving.
###### Contents
* 1 Introduction
* 2 The conservative Lenard-Bernstein operator
* 3 Semi-discrete operator
* 4 Numerical results
* 5 Conclusion
* A Time-evolution of cumulants
## 1 Introduction
Structure-preserving numerical methods aim at preserving certain properties of a system of equations exactly at the discrete level. Some examples for properties of interest are symmetries and conservation laws, Lagrangian or Hamiltonian structure, or compatibility with the laws of thermodynamics. Preserving such structures is typically found to be advantageous for accuracy and robustness of numerical schemes, especially for strongly nonlinear problems and long-time simulations (Hairer & Wanner, 2006). This has also been recognised in plasma physics, and the last decade has seen vivid efforts towards the development of structure-preserving algorithms for problems such as magnetohydrodynamics, the Vlasov-Poisson and the Vlasov-Maxwell system (see e.g. Morrison (2017) and references therein). So far, most work focused on dissipation-less systems, with dissipative systems, such as collisional kinetic systems, being considered only more recently. However, dissipative effects, although often weak, are important for the correct simulation of physical behaviour over long simulation times. Sometimes, the neglect of dissipative effects can cause numerical problems, e.g., when small structures emerge that cannot be resolved by the computational mesh. In many cases, these structures are unphysical because dissipation would prevent their emergence in the first place. Thus, the inclusion of dissipation is important not only for physical correctness but also because it can aid numerical robustness.
Work on the structure-preserving discretisation of Vlasov-like equations has mainly focused on particle-based methods. In recent years, many authors worked on the ideal (non-dissipative) part of the problem, including Chen _et al._ (2011), Markidis & Lapenta (2011), Squire _et al._ (2012), Evstatiev & Shadwick (2013), Qin _et al._ (2016), Burby (2017), Kraus _et al._ (2017) (GEMPIC), Zhang & Gamba (2017), Campos Pinto _et al._ (2022).
After the discretisation of the ideal problem was well understood, focus shifted towards the structure-preserving discretisation of the collisional (dissipative) part. While early work focused on grid-based methods (Hirvijoki & Adams, 2017; Kraus & Hirvijoki, 2017), structure-preserving discretisations for collision operators _with particles_ have been considered lately (Hirvijoki _et al._, 2018; Carrillo _et al._, 2020; Hirvijoki, 2021). In the first work, the authors consider an approach where the weights of the marker particles are varied, instead of their velocities. In the latter two works, the authors use finite-sized marker particles to discretise the Landau operator. An alternative approach is that of Tyranowski (2021), which treats the collisions as a stochastic process, effectively modelling their underlying microscopic behaviour rather than the resultant macroscopic effects modelled by various collision operators.
The aim of this work is to provide a proof-of-concept for an alternative approach to structure-preserving particle methods for collisions, specifically for the Lenard-Bernstein collision operator (Lenard & Bernstein (1958)). We study a conservative version of the Lenard-Bernstein operator and detail a particle-based energy- and momentum-preserving discretisation for it. Numerical examples and some convergence results for the one-dimensional case will also be shown.
The structure of the paper is as follows. In Section 2, we detail the derivation of the conservative Lenard-Bernstein operator. In Section 3, the semi-discretisation of the operator is presented, and Section 4 shows several numerical tests and examples. Finally, the paper is concluded with a discussion of current and future work.
## 2 The conservative Lenard-Bernstein operator
The Lenard-Bernstein collision operator (Lenard & Bernstein (1958)) is
\[C[f](v)=\nu\frac{\partial}{\partial v}\left(\frac{\partial f}{\partial v}+vf \right), \tag{1}\]
where \(f:\mathbb{R}^{n}\times[0,\infty)\rightarrow\mathbb{R}\) is the single-particle distribution function, \(v\in\mathbb{R}^{n}\) is the velocity and \(\nu\) is the collision frequency, which is assumed to be constant in time. In most applications, such a collision operator is coupled to the Vlasov-Poisson or Vlasov-Maxwell equations, and so the distribution function would also depend on the position variables. Here, however, we will ignore this dependency as we study the collision operator independently of the ideal dynamics and this operator acts purely in velocity-space. The Lenard-Bernstein operator is applicable in velocity dimensions \(n=1,2,3\), though collision operators which describe more physics effects, such as the Landau operator, may be preferred in two and three-dimensions in order to allow, for example, interchange of momentum between different components. The steady-state solution to \(\partial_{t}f=C[f](v)\) for this operator is an \(n\)-dimensional Gaussian distribution.
This operator preserves mass density, the zeroth moment of the distribution function, but does not preserve momentum density nor energy density, which are the first and second moments respectively. In order to enforce conservation of these quantities, we follow Kraus (2013) (see also Filbet & Sonnendrucker (2003)) and modify the operator through an expansion as follows,
\[C[f]=\nu\frac{\partial}{\partial v}\left(\frac{\partial f}{\partial v}+A_{1}f +A_{2}vf\right). \tag{2}\]
In the following, we will see that the coefficients \(A_{n}\) are functions of the moments of the distribution function \(f\). In general, preserving \(k\) moments of the distribution function will require an expansion including \(k\) terms in the operator.
The coefficients are then computed by requiring the conservation conditions on the differential equation
\[\partial_{t}f=C[f](v). \tag{3}\]
Specifically, conservation of the \(k\)-th moment requires the following condition to hold:
\[\int v^{k}C[f]\mathrm{d}v=\nu\int v^{k}\left[\frac{\partial}{\partial v}\left( \frac{\partial f}{\partial v}+A_{1}f+A_{2}vf\right)\right]\mathrm{d}v=0. \tag{4}\]
Integrating this by parts, we obtain the following condition:
\[\nu\left[v^{k}\left(\frac{\partial f}{\partial v}+A_{1}f+A_{2}vf\right)\right]_{-\infty}^{+\infty}-k\nu\int v^{k-1}\left(\frac{\partial f}{\partial v}+A_{1}f+A_{2}vf\right)\mathrm{d}v=0. \tag{5}\]
Without loss of generality, we assume that \(f\) and \(\partial f/\partial v\) approach zero as \(v\rightarrow\pm\infty\), so that the first term in equation (5) is zero and we obtain the following condition:
\[\int v^{k-1}\left(\frac{\partial f}{\partial v}+A_{1}f+A_{2}vf\right)\mathrm{ d}v=0 \tag{6}\]
for \(k=1,2.\) Integrating the first term by parts once again, this equation becomes:
\[\int\left[(k-1)v^{k-2}-A_{1}v^{k-1}-A_{2}v^{k}\right]f\mathrm{d}v=0, \tag{7}\]
where the assumption that \(f\) approaches zero as \(v\) tends to infinity has been utilised once more. Writing the moments as \(M_{n}[f]=\int v^{n}f\,\mathrm{d}v\), we obtain the following conditions:
\[(k-1)M_{k-2}=A_{1}M_{k-1}+A_{2}M_{k},\quad k=1,2. \tag{8}\]
These conditions provide a linear system of equations that can be solved for the coefficients \(A_{1}\), \(A_{2}\):
\[\begin{split} A_{1}M_{0}+A_{2}M_{1}&=0,\\ A_{1}M_{1}+A_{2}M_{2}&=M_{0}.\end{split} \tag{9}\]
The solution to the system of equations in (9) is:
\[A_{1}=\frac{-M_{0}M_{1}}{M_{0}M_{2}-M_{1}^{2}}=\frac{u}{u^{2}-\varepsilon}, \tag{10}\] \[A_{2}=\frac{M_{0}^{2}}{M_{0}M_{2}-M_{1}^{2}}=-\frac{1}{u^{2}-\varepsilon}, \tag{11}\]
where \(nu\) and \(n\varepsilon\) are the momentum and energy density, respectively, and are related to the moments as follows:
\[n=M_{0}=\int f\mathrm{d}v,\quad nu=M_{1}=\int vf\mathrm{d}v,\quad n\varepsilon =M_{2}=\int v^{2}f\mathrm{d}v. \tag{12}\]
Let us note that here, \(n\), \(u\), and \(\varepsilon\) are just constants. However, in the general Vlasov-case, these quantities have a spatial dependency. Upon inserting the expressions for \(A_{1},A_{2}\) into (2), we obtain the following operator:
\[C[f](v)=\nu\frac{\partial}{\partial v}\left(\frac{\partial f}{ \partial v}+\frac{v-u}{\varepsilon-u^{2}}f\right), \tag{13}\]
which can be seen as a conservative version of the Lenard-Bernstein operator (1). This is the same operator as the one obtained by Filbet & Sonnendrucker (2003).
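As a quick sanity check of this derivation (not part of the paper; a minimal Python/NumPy sketch on a velocity grid, with illustrative parameter values), one can compute the moments of a Maxwellian numerically, solve the linear system (9), and verify both the closed-form coefficients (10)-(11) and that the conservative operator (13) annihilates a Maxwellian whose mean and variance match \(u\) and \(\varepsilon-u^{2}\):

```python
import numpy as np

# Velocity grid and a Maxwellian with (illustrative) mean u and temperature T,
# so that M0 = n, M1 = n*u, M2 = n*(T + u^2), i.e. eps - u^2 = T.
v = np.linspace(-10.0, 10.0, 4001)
dv = v[1] - v[0]
n_true, u_true, T_true = 1.0, 0.5, 1.2
f = n_true / np.sqrt(2 * np.pi * T_true) * np.exp(-((v - u_true) ** 2) / (2 * T_true))

M0, M1, M2 = (dv * np.sum(v ** k * f) for k in range(3))
u, eps = M1 / M0, M2 / M0

# Coefficients from the linear system (9): A1*M0 + A2*M1 = 0, A1*M1 + A2*M2 = M0
A1, A2 = np.linalg.solve([[M0, M1], [M1, M2]], [0.0, M0])
assert np.isclose(A1, u / (u ** 2 - eps)) and np.isclose(A2, -1.0 / (u ** 2 - eps))

# Conservative Lenard-Bernstein operator (13) with nu = 1, central differences;
# the result vanishes up to finite-difference error for a matching Maxwellian.
flux = np.gradient(f, dv) + (v - u) / (eps - u ** 2) * f
C = np.gradient(flux, dv)
print("max |C[f_M]| =", np.abs(C).max())
```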
## 3 Semi-discrete operator
In order to discretise the Lenard-Bernstein operator in velocity-space, we need to introduce a second representation of the distribution function. The particle-representation with Dirac delta distributions, which is usually used to solve the ideal part, is not differentiable and thus cannot be used to evaluate the collisional part. Previous works regularised the collision operator by using finite sized particles (Carrillo _et al._ (2020); Hirvijoki (2021)). Here, we explore a different approach based on finite element or spline spaces of sufficient regularity.
The particle distribution function is given by
\[f_{p}(v,t)=\sum_{\alpha}w_{\alpha}\delta(v-v_{\alpha}(t)), \tag{14}\]
where \(\{v_{\alpha}(t)\}_{\alpha=1}^{N}\) are the particle velocities, which evolve over time, \(w_{\alpha}\) are constant particle weights, and \(N\) is the number of particles. As the particle distribution function \(f_{p}\) is non-differentiable, we use an \(L^{2}\) projection of \(f_{p}\) onto a set of differentiable basis functions \(\{\varphi_{j}\}_{j=1}^{M}\) for \(M\ll N\) as follows:
\[f_{s}(v,t)=\sum_{i}\varphi_{i}(v)f_{i}(t)=\sum_{i,j}\varphi_{i}(v)\,\mathbb{M }_{ij}^{-1}\sum_{\alpha}w_{\alpha}\varphi_{j}(v_{\alpha}(t)), \tag{15}\]
where \(\{f_{i}(t)\}\) are the coefficients of the projected distribution function, \(f_{s}\), expressed in the basis \(\{\varphi_{i}\}\), and \(\mathbb{M}_{ij}=\int\varphi_{i}\varphi_{j}\mathrm{d}x\) are the elements of the corresponding mass matrix \(\mathbb{M}\). The projected representation of the distribution function, \(f_{s}(v)\), will be used for the evaluation of the collision operator where differentiability is required. This type of projection also offers the benefits of smoothing in the solution for appropriately-chosen basis functions \(\{\varphi_{j}\}\).
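To make the projection step concrete, the following minimal Python/NumPy sketch (not the authors' Julia package) assembles the mass matrix and right-hand side of equation (15) and evaluates the projected distribution and its derivative at the particle positions. For brevity it uses a smooth Gaussian basis in place of the B-spline basis; any sufficiently regular basis illustrates the idea.

```python
import numpy as np

# Smooth basis {phi_j}: Gaussians on equally spaced centres (a stand-in for
# the B-spline basis used in the paper).
centres = np.linspace(-8.0, 8.0, 33)
h = centres[1] - centres[0]
phi = lambda v: np.exp(-0.5 * ((v[:, None] - centres[None, :]) / h) ** 2)
dphi = lambda v: -(v[:, None] - centres[None, :]) / h ** 2 * phi(v)

# Particles sampled from a shifted normal, with uniform weights w_alpha = 1/N.
rng = np.random.default_rng(0)
N = 10_000
v_p = rng.normal(2.0, 1.0, N)
w = np.full(N, 1.0 / N)

# Mass matrix M_ij = int phi_i phi_j dv, approximated by quadrature on a grid.
vq = np.linspace(-12.0, 12.0, 2401)
dvq = vq[1] - vq[0]
Pq = phi(vq)                       # (n_quad, M)
mass = Pq.T @ Pq * dvq             # (M, M)

# Right-hand side b_j = sum_alpha w_alpha phi_j(v_alpha); coefficients f_i.
b = phi(v_p).T @ w
coeffs = np.linalg.solve(mass, b)

# Projected distribution f_s and its derivative, evaluated at the particles.
f_s = phi(v_p) @ coeffs
df_s = dphi(v_p) @ coeffs
```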
To construct the semi-discretisation of the conservative Lenard-Bernstein operator, we return to its form in Equation (2). We will discretise this equation first, and then derive the discrete coefficients \(A_{1}\) and \(A_{2}\) in terms of the discrete momentum and energy density.
To discretise the conservative collisional dynamics,
\[\frac{\partial}{\partial t}f=\nu\frac{\partial}{\partial v}\left(\frac{ \partial f}{\partial v}+A_{1}f+A_{2}vf\right), \tag{16}\]
we apply a deterministic particle method (see Chertock (2017) for a review). Typically, deterministic particle methods are formulated for first order transport-type problems. They can, however, be adapted to the context of diffusion problems as shown by Degond & Mustieles (1990). Following this approach, we rewrite equation (16):
\[\frac{\partial}{\partial t}f+\frac{\partial}{\partial v}\left(a(v,f)f\right)=0,\quad a(v,f)=-\nu\left(\frac{1}{f}\frac{\partial f}{\partial v}+A_{1}+A_{2}v\right). \tag{17}\]
This equation is approximately solved in terms of the particle distribution function (14), where the particle velocities \(\{v_{\alpha}\}\) satisfy the following ordinary differential equations:
\[\dot{v}_{\alpha}=a(v_{\alpha},f)=-\nu\left(\frac{1}{f(v_{\alpha})}\frac{\partial f}{\partial v}(v_{\alpha})+A_{1}+A_{2}v_{\alpha}\right). \tag{18}\]
Let us note that the first term is not well defined for the particle representation \(f_{p}\). Instead, we use the projection shown in equation (15) and replace both instances of the distribution function with the projected distribution function \(f_{s}\), whose regularity, inherited from the choice of basis, makes all terms well defined:
\[\dot{v}_{\alpha}=a(v_{\alpha},f_{s})=-\nu\left(\frac{1}{f_{s}(v_{\alpha})}\frac{\partial f_{s}}{\partial v}(v_{\alpha})+A_{1}+A_{2}v_{\alpha}\right). \tag{19}\]
The final step in obtaining the semi-discrete system of equations is to compute the coefficients \(A_{1}\) and \(A_{2}\). This will be done analogously to the continuous case of section 2, by imposing the conservation conditions on the discrete momentum and energy1:
Footnote 1: Here, we have chosen to preserve the discrete moments in the particle-basis but it is also possible to derive a different scheme by imposing the conservation conditions on the projected moments.
\[\frac{d}{dt}\sum_{\alpha}v_{\alpha}=-\nu\sum_{\alpha}\left[\frac{1}{f_{s}(v_{\alpha})}\frac{\partial f_{s}}{\partial v}(v_{\alpha})+A_{1}+A_{2}v_{\alpha}\right]=0, \tag{20}\] \[\frac{1}{2}\frac{d}{dt}\sum_{\alpha}v_{\alpha}^{2}=-\nu\sum_{\alpha}\left[\frac{v_{\alpha}}{f_{s}(v_{\alpha})}\frac{\partial f_{s}}{\partial v}(v_{\alpha})+A_{1}v_{\alpha}+A_{2}v_{\alpha}^{2}\right]=0. \tag{21}\]
Upon introduction of the discrete mass, momentum, and energy densities,
\[n_{h}=\sum_{\alpha}1,\quad n_{h}u_{h}=\sum_{\alpha}v_{\alpha},\quad n_{h}\varepsilon_{h}=\sum_{\alpha}v_{\alpha}^{2}, \tag{22}\]
we obtain a linear system of equations which can be solved to find the discrete \(A_{1}\), \(A_{2}\):
\[\begin{split} A_{1}n_{h}+A_{2}n_{h}u_{h}&=-\sum_{ \alpha}\frac{f_{s}^{\prime}(v_{\alpha})}{f_{s}(v_{\alpha})},\\ A_{1}n_{h}u_{h}+A_{2}n_{h}\varepsilon_{h}&=-\sum_{ \alpha}v_{\alpha}\frac{f_{s}^{\prime}(v_{\alpha})}{f_{s}(v_{\alpha})}.\end{split} \tag{23}\]
The solutions to this linear system are as follows:
\[\begin{split} A_{1}&=\frac{1}{n_{h}\varepsilon_{h} -n_{h}u_{h}^{2}}\sum_{\alpha}(u_{h}v_{\alpha}-\varepsilon_{h})\frac{f_{s}^{ \prime}(v_{\alpha})}{f_{s}(v_{\alpha})},\\ A_{2}&=\frac{1}{n_{h}\varepsilon_{h}-n_{h}u_{h}^{2} }\sum_{\alpha}(u_{h}-v_{\alpha})\frac{f_{s}^{\prime}(v_{\alpha})}{f_{s}(v_{ \alpha})},\end{split} \tag{24}\]
and this concludes the construction of the semi-discrete collision operator.
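A compact sketch of the resulting right-hand side is given below (again Python/NumPy for illustration only; the reference implementation is the authors' Julia package). It solves the 2x2 system (23) for the discrete coefficients and returns the particle velocity derivatives (19); by construction, the sums \(\sum_{\alpha}\dot{v}_{\alpha}\) and \(\sum_{\alpha}v_{\alpha}\dot{v}_{\alpha}\) vanish to round-off, so discrete momentum and energy are conserved.

```python
import numpy as np

def collision_rhs(v_p, f_s, df_s, nu=1.0):
    """Particle velocity derivatives (19), with A1, A2 from the system (23).

    v_p  : particle velocities, shape (N,)
    f_s  : projected distribution evaluated at the particles (must be > 0)
    df_s : derivative of the projected distribution at the particles
    """
    n_h = float(len(v_p))                  # sum_alpha 1
    nhu_h = v_p.sum()                      # n_h * u_h
    nheps_h = (v_p ** 2).sum()             # n_h * eps_h
    g = df_s / f_s                         # logarithmic derivative at particles

    lhs = np.array([[n_h, nhu_h], [nhu_h, nheps_h]])
    rhs = np.array([-g.sum(), -(v_p * g).sum()])
    A1, A2 = np.linalg.solve(lhs, rhs)

    # Minus sign: particles follow the transport (diffusion-velocity) form (17).
    return -nu * (g + A1 + A2 * v_p)
```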
## 4 Numerical results
In this section, we present numerical experiments for the one-dimensional conservative Lenard-Bernstein operator using B-spline basis functions of arbitrary order for projecting the particle distribution function. The implementation is based on the Julia programming language (Bezanson _et al._, 2017), and the package is available publicly (Kraus _et al._, 2023).
### Convergence of the semi-discrete operator
We demonstrate convergence properties of the semi-discretisation under projection onto a B-spline basis and particle sampling. First, we investigate if the semi-discretisation indeed preserves the Maxwellian as an equilibrium solution under projection. To test this, we compute the projection of an exact Maxwellian onto the spline basis as follows:
\[\int\sum_{j}f_{j}\varphi_{j}(v)\varphi_{i}(v)\mathrm{d}v=\int f_{M}(v)\varphi_ {i}(v)\mathrm{d}v, \tag{25}\]
where \(f_{j}\) are the coefficients of the spline for which we are solving, and \(f_{M}(v)=1/\sqrt{2\pi}\exp(-v^{2}/2)\) is the Maxwellian of mean \(\mu=0\) and variance \(\sigma^{2}=1\). Rearranging this expression, we obtain the following spline coefficients for the projected Maxwellian:
\[f_{j}=\mathbb{M}_{ij}^{-1}\int f_{M}(v)\varphi_{i}(v)\mathrm{d}v, \tag{26}\]
where \(\mathbb{M}_{ij}=\int\varphi_{i}\varphi_{j}\mathrm{d}v\) are the elements of the mass matrix \(\mathbb{M}\) as before. The spline-projected representation of the Maxwellian distribution \(f_{M}(v)\) is then given by \(f_{s,M}(v)=\sum_{j}f_{j}\varphi_{j}(v)\) with the derived coefficients. We use this projected Maxwellian to compute the right-hand side of equation (19), as follows
\[\dot{v}_{\alpha}=-\nu\left(\frac{1}{f_{s,M}(v_{\alpha})}\frac{\partial f_{s,M}}{\partial v}(v_{\alpha})+A_{1}+A_{2}v_{\alpha}\right), \tag{27}\]
where the coefficients \(A_{1}\) and \(A_{2}\) are computed using the projected Maxwellian distribution \(f_{s,M}\) in (24). The \(L^{2}\) norm of the time derivative of the particle velocities, \(\|\dot{v}_{\alpha}\|_{2}\), can then
be used to check if the Maxwellian is an equilibrium solution under projection, as this norm should approach zero with increasing spline resolution in such case. We compute this quantity for a range of spline resolutions, using a sample of \(N=100,000\) particles from a normal distribution where the sample is strictly used for evaluation of equation (27) and never for projection. Figure 1 shows the convergence of \(\|\dot{v}_{\alpha}\|_{2}\) with an increasing number of splines, computed using cubic spline basis functions. As expected, the norm of the time derivative approaches zero with an increasing number of splines at a rate which corresponds to the order of the splines used (cubic splines have order \(k=4\)).
Secondly, we demonstrate convergence of the semi-discretisation under particle sampling. Here, we instead compute the sample variance of the particle velocity time derivatives, i.e. \(1/N\sum\dot{v}_{\alpha}^{2}\), keeping the spline resolution fixed and varying the number of particles in the sample. Here, we directly project the particles to compute the spline representation of \(f\), as per equations (15) and (19). The results of this are shown in Figure 2, and we observe that the sample variance converges at a rate slightly above \(1/N\), which is the expected convergence rate for the sample variance generated by statistical sampling methods.
These two results demonstrate that the Maxwellian distribution remains an equilibrium solution under semi-discretisation, and the method demonstrates the expected convergence properties with both number of splines and particles.
### Relaxation of a shifted normal distribution
In the first example, we initialise the distribution function with a standard normal distribution shifted to the right by \(\mu=2\), obtaining a distribution with mean \(\mu=2\) and variance \(\sigma^{2}=1\) as shown in the following equation:
\[f(v,t=0)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(v-2)^{2}}. \tag{28}\]
Figure 1: Convergence of the \(L^{2}\) norm of the particle velocity time derivative, \(\|\dot{v}_{\alpha}\|_{2}\), when computed using a true Maxwellian \(f\), against the number of splines. The dashed line shows the reference curve \(y=x^{-4}\).
The particles are then initialised by independent and identically distributed (iid) sampling from this distribution, for \(N=1000\) particles. The spline distribution is initialised by \(L^{2}\) projecting the initial particle distribution onto the spline basis, with the spline coefficients computed as per Equation (15). A cubic-spline basis of 41 elements was used, with equally spaced knots on the velocity domain \(v\in[-10,10]\). The time integration is performed using the implicit midpoint scheme, which is of second order. A time-step of \(\Delta t=8\times 10^{-4}\) and a collision frequency of \(\nu=1\) were used. The simulation was run until a final normalised time of \(t=1\).
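For reference, a minimal Python sketch of one implicit-midpoint step (solved by fixed-point iteration) is shown below; the actual simulations use the authors' Julia implementation, and in practice the projection of the particles onto the spline basis is recomputed inside the right-hand side at every evaluation.

```python
import numpy as np

def implicit_midpoint_step(v, rhs, dt, tol=1e-12, max_iter=50):
    """One step of v_{n+1} = v_n + dt * rhs((v_n + v_{n+1}) / 2).

    `rhs` maps particle velocities to their time derivatives, e.g. a wrapper
    that projects the particles onto the basis and evaluates equation (19).
    """
    v_new = v + dt * rhs(v)                      # explicit Euler predictor
    for _ in range(max_iter):
        v_next = v + dt * rhs(0.5 * (v + v_new))
        if np.max(np.abs(v_next - v_new)) < tol:
            return v_next
        v_new = v_next
    return v_new
```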
In the initial condition, shown on the left of Figure 3, we observe slight variations in the spline-projected distribution function due to the sampling error of the particles. In the final distribution, we observe the expected behaviour of the projected distribution approaching an exact normal with the initial variations smoothed out, and due to the momentum conservation, the mean of the distribution stays at the same value (\(v=2\)). Here, the particle momentum and the energy are conserved up to machine precision. The evolution of the energy and momentum error are shown in Figure 4. The evolution of the entropy, \(S=\int f\log fdv\), is shown in Figure 5 (normalised by its initial value), and we observe the monotonic growth of the entropy over the course of the simulation as expected.
We also observe that the method works well even for small numbers of particles. Results for the same simulation using a sample of \(N=200\) particles instead are shown in Figure 6. The particle energy and momentum are again conserved up to machine precision in relative error, with similar behaviour as shown in Figure 4.
### Relaxation of a bi-Maxwellian distribution
Next, we consider a double Maxwellian distribution as initial condition. Each peak is a standard normal distribution which has been shifted from the origin by \(v=\pm 2\) as per
Figure 2: Convergence of the sample variance of the particle velocity time derivative, \(1/N\sum\dot{v}_{\alpha}^{2}\), with the number of particles, shown on a logarithmic scale. The dashed line represents the reference curve \(y=1/N\).
Figure 4: Energy and momentum conservation during the simulation, for the initial condition of a shifted normal distribution.
Figure 5: Evolution of the normalised entropy, i.e. \(S/S(t=0)\) where \(S=\int f\log f\,\mathrm{d}v\), during the simulation.
the following equation:
\[f(v,t=0)=\frac{1}{\sqrt{2\pi}}\left(e^{-\frac{1}{2}(v-2)^{2}}+e^{-\frac{1}{2}(v+2) ^{2}}\right), \tag{29}\]
and the sampled distribution is shown in Figure 7 on the left. The sample for the particle distribution function is again of size \(N=1000\) particles. We use the same numerical setup as the previous example, and the final distribution which is obtained after time integration until \(t=2\) is shown in Figure 7 on the right. Again, the obtained result is a Gaussian with equal mean and variance to the initial condition, and the particle energy is conserved up to machine precision in relative error. The particle momentum is conserved up to a relative error on the order of \(10^{-14}\). The behaviour of the two quantities over time is oscillatory and similar to Figure 4. The normalised entropy again evolves monotonically, with behaviour similar to Figure 5.
### Relaxation of a uniform distribution
In the last example, we initialise the distribution function with a uniform distribution shifted and scaled to the interval \(v\in[-2,2]\), i.e.:
\[f(v,t=0)=\begin{cases}\frac{1}{4},\text{ if }v\in[-2,2],\\ 0,\text{ else.}\end{cases} \tag{30}\]
A sample of \(N=200\) particles is taken from this distribution, and all other parameters are kept the same. Figure 8 shows the initial and final distributions obtained in the
Figure 6: The initial and final distributions when the initial condition is chosen to be a normal distribution of mean \(\mu=2\) and variance \(\sigma^{2}=1\), for a sample of \(N=200\) particles.
simulation, demonstrating again the expected result that the initial distribution relaxes to a normal distribution. The resultant particle distribution function retains the same mean up to a relative error on the order of \(10^{-14}\) (which is equivalent to momentum being preserved at this level). The energy is preserved to a relative error on the order of \(10^{-15}\). We note that the method performs as well here as it does in the other examples despite this being a more challenging case, as the uniform distribution is discontinuous and therefore not amenable to being represented using B-spline basis functions.
### Remarks
It is important to ensure that the chosen velocity domain for a simulation is sufficiently large that no particle leaves the domain at any time. There is no sensible method for returning a particle to the simulation domain, as the true velocity-space domain for this problem is infinite. In practice, once a particle leaves the domain, the spline-projected distribution evaluates to zero at that particle's velocity, so the evaluation of (19) becomes undefined due to division by zero.
## 5 Conclusion
In this work, we have presented the initial development of structure-preserving particle-based approaches for the simulation of collision operators. This has been done specifically for a conservative version of the Lenard-Bernstein operator, with the derivation of an energy- and momentum-preserving particle discretisation. We have demonstrated the
Figure 7: The initial and final distribution functions when the initial condition is chosen to be a bi-Maxwellian, in both particle and spline bases, for \(N=1000\) particles.
convergence properties of the semi-discretisation under the projection used, as well as particle sampling. Numerical examples have been shown for the one-dimensional case, demonstrating the viability of the method and its conservative behaviour. The method is implemented in the Julia language, and is available publicly at (Kraus _et al._, 2023). Our method can be coupled to any Vlasov-Poisson or Vlasov-Maxwell particle solver, and future research will detail such a coupling and its benefits.
Currently, we are also exploring similar discretisations for the Landau collision operator using the metriplectic formulation, and the results will be reported in a follow-up paper.
## Appendix A Time-evolution of cumulants
A useful property of the steady-state solution of the Lenard-Bernstein operator, a normal distribution, is that its cumulants of order three and above are all zero. Cumulants are closely related to moments: they are defined through the cumulant generating function, which is the natural logarithm of the moment generating function of the distribution. The cumulant generating function for a normally-distributed random variable \(X\sim N(\mu,\sigma^{2})\) is given by
\[K(s)=\log\mathbb{E}\left[\exp sX\right]=\mu s+\frac{1}{2}\sigma^{2}s^{2},\]
Figure 8: The initial and final distribution functions for the case of a uniform initial condition, with the initial condition shown on the left and the final result shown on the right. A sample of \(N=200\) particles is used.
where the cumulants are the coefficients of the Taylor expansion in \(s\), \(\kappa_{n}=K^{(n)}(0)\). In this instance, the cumulant generating function has no terms at order three and above, implying that the corresponding cumulants of the normal distribution are zero. We can also see that the first and second cumulants are simply the mean and variance, respectively2. For ease of computation, the third and fourth cumulants can be related to central moments of the random variable (those centred around the mean) through the following relations:
Footnote 2: This is true in general for all probability distributions which have well-defined first and second moments, not only for the normal distribution.
\[\kappa_{3}(X) =\mathbb{E}\left[(X-\mathbb{E}(X))^{3}\right],\] \[\kappa_{4}(X) =\mathbb{E}\left[(X-\mathbb{E}(X))^{4}\right]-3\left(\mathbb{E} \left[(X-\mathbb{E}(X))^{2}\right]\right)^{2},\]
where \(\kappa_{3}(X)\) and \(\kappa_{4}(X)\) are the third and fourth cumulants, respectively. In the discrete setting, the behaviour of the discretised cumulants can act as a quantitative check of how close the solution is to the known solution.
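For a particle sample, these relations give a direct estimator of the first four cumulants; a short Python sketch (illustrative only) is:

```python
import numpy as np

def sample_cumulants(v):
    """First four cumulants of a particle sample, from central moments."""
    mu = v.mean()
    c = v - mu
    m2, m3, m4 = (c ** 2).mean(), (c ** 3).mean(), (c ** 4).mean()
    return mu, m2, m3, m4 - 3.0 * m2 ** 2   # kappa_1 ... kappa_4

# For a large normal sample, kappa_3 and kappa_4 fluctuate around zero.
rng = np.random.default_rng(1)
print(sample_cumulants(rng.normal(2.0, 1.0, 100_000)))
```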
In fact, the time-evolution of the cumulants can be solved analytically in the case where the coefficients \(A_{k}\) of the Lenard-Bernstein collision operator are held fixed. Let the moment-generating function be the Wick-rotation of the Fourier transform of the distribution:
\[M(s;t)=\int e^{sv}f(v,t)dv=\mathbb{E}(e^{sX_{t}})=e^{K(s;t)}.\]
With this, moments of the distribution function are the Taylor coefficients of the moment-generating function \(M_{k}(t)=\partial_{s}^{(k)}M(0;t)=\mathbb{E}(X_{t}^{k})\). If we write the Lenard-Bernstein collision operator as
\[C[f]=\nu\partial_{v}(\partial_{v}f+\sum_{k=0}^{p}A_{k}v^{k}f),\]
then, after integrating (and neglecting boundary terms originating from integration by parts), the relaxation equation \(\partial_{t}f=C[f]\) becomes a PDE for \(M(s;t)\), namely
\[\nu^{-1}\partial_{t}M=s^{2}M-s\sum_{k=0}^{p}A_{k}\partial_{s}^{(k)}M,\]
where the substitution \(\int e^{sv}v^{k}fdv=\partial_{s}^{(k)}M\) was exploited. By setting \(s=0\) we see that the zeroth-moment is conserved,
\[\frac{d}{dt}M_{0}=0\iff M_{0}(t)=m_{0},\]
which can be attributed to the collision operator being a divergence.
The construction above is fairly general in the sense that the number of terms in the collision operator is capped by an arbitrary \(p\). There are two distinct questions to address analytically.
1. For arbitrary \(p\), the steady-state moment-generating function \(M_{\infty}(s)=\lim\limits_{t\to\infty}M(s;t)\) can be inferred from the ordinary differential equation (ODE) \[\sum_{k=0}^{p}A_{k}M_{\infty}^{(k)}=sM_{\infty}.\]
In particular if \(p=1\), we have a first-order ODE
\[M^{\prime}_{\infty}=\frac{s-A_{0}}{A_{1}}M_{\infty}\iff K^{\prime}_{\infty}= \frac{s-A_{0}}{A_{1}},\]
together with the requirement that \(K_{\infty}(0)=\ln 1=0\). The solution is the cumulant-generating function of a Gaussian with mean \(\mu=-A_{0}/A_{1}\) and variance \(\sigma^{2}=1/A_{1}\):
\[K_{\infty}(s)=\mu s+\frac{1}{2}\sigma^{2}s^{2}.\]
2. In the case where \(p=1\), normalising time to the variance divided by the collision frequency, \(t\mapsto\nu t/\sigma^{2}\), the relaxation equation in terms of the cumulant-generating function is \[\partial_{t}K+s\partial_{s}K=\mu s+\sigma^{2}s^{2},\] which is a linear non-homogeneous first-order PDE. We apply the method of characteristics. Let the curves \(s(\tau)\) and \(t(\tau)\) and \(\kappa(\tau)=K(s(\tau),t(\tau))\) be such that \[\frac{dt}{d\tau} =1,\] \[\frac{ds}{d\tau} =s(\tau),\] \[\frac{d\kappa}{d\tau} =\frac{d}{d\tau}K(s(\tau),t(\tau))=\partial_{s}K\frac{ds}{d\tau}+\partial_{t}K\frac{dt}{d\tau}=\mu s(\tau)+\sigma^{2}s(\tau)^{2},\] with initial conditions \(s(0)=s_{0}\), \(t(0)=0\), and \(\kappa(0)=K(s_{0},0)=\mu s_{0}+\frac{1}{2}\sigma^{2}s_{0}^{2}+R(s_{0})\). The solutions are \[t(\tau) =\tau,\] \[s(\tau;s_{0}) =s_{0}e^{\tau},\] \[\kappa(\tau;s_{0}) =\mu s_{0}e^{\tau}+\frac{1}{2}\sigma^{2}s_{0}^{2}e^{2\tau}+R(s_{0}).\] Inverting \(s(\tau;s_{0})\) and \(t(\tau;s_{0})\), we obtain the time-evolution of the cumulant-generating function as \[\tau =t,\] \[s_{0}(s,t) =se^{-t},\] \[K(s,t) =\mu s+\frac{1}{2}\sigma^{2}s^{2}+R(se^{-t})=K_{\infty}(s)+R(se^{-t}).\] As time tends to infinity the initial "excess" cumulants are lost: \[\lim_{t\to\infty}R(se^{-t})=R(0)=0.\] We are now in a position to determine the time-evolution of the individual cumulants by differentiating with respect to \(s\): \[\partial_{s}K(s,t) =\mu+\sigma^{2}s+R^{\prime}(se^{-t})e^{-t},\] \[\partial_{s}^{2}K(s,t) =\sigma^{2}+R^{\prime\prime}(se^{-t})e^{-2t},\] \[\partial_{s}^{(k)}K(s,t) =R^{(k)}(se^{-t})e^{-kt},\quad k>2.\]
By evaluating at \(s=0\), we have
\[K_{1}(t) =\mu+R_{1}e^{-t},\] \[K_{2}(t) =\sigma^{2}+R_{2}e^{-2t},\] \[K_{k}(t) =R_{k}e^{-kt},\quad k>2,\]
where \(R_{1}=\partial_{s}K(0,0)-\mu\), \(R_{2}=\partial_{s}^{2}K(0,0)-\sigma^{2}\) and \(R_{k}=\partial_{s}^{(k)}K(0,0)=R^{(k)}(0)\) for \(k>2\) are the initial residual cumulants. This shows that the decay rate of the \(k^{th}\) cumulant is proportional to \(k\), namely that the decay rate scales linearly with the order of the cumulant.
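The closed-form solution above can also be verified symbolically; the following short SymPy check (not part of the paper) confirms that \(K(s,t)=K_{\infty}(s)+R(se^{-t})\) satisfies the normalised relaxation equation and that the \(k\)-th cumulant decays like \(e^{-kt}\):

```python
import sympy as sp

s, t, mu, sigma = sp.symbols("s t mu sigma", real=True)
R = sp.Function("R")

# Proposed solution of  dK/dt + s*dK/ds = mu*s + sigma**2*s**2
K = mu * s + sp.Rational(1, 2) * sigma ** 2 * s ** 2 + R(s * sp.exp(-t))

residual = sp.diff(K, t) + s * sp.diff(K, s) - (mu * s + sigma ** 2 * s ** 2)
print(sp.simplify(residual))          # 0

# Third cumulant: d^3 K / ds^3 at s = 0 carries a factor exp(-3*t).
kappa3 = sp.diff(K, s, 3).subs(s, 0)
print(sp.simplify(kappa3))            # exp(-3*t) * R'''(0) (in SymPy notation)
```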
We check this scaling numerically in our method by fixing the \(A_{1}\) and \(A_{2}\) coefficients in equation (19), instead of solving the system of equations (23). Initialising the simulation with a double Maxwellian distribution, as in the second example, for \(N=1000\) particles, \(M=41\) splines of order 4, and a timestep of \(\Delta t=5\times 10^{-3}\), we fix the coefficients to \(A_{1}=0\) and \(A_{2}=1\) (i.e. \(A_{0}=0\) and \(A_{1}=1\) in the notation of this appendix). The evolution of the higher order cumulants is shown in Figure 9. We observe that the cumulants decay at the predicted rate, until they reach the level of accuracy supported by the chosen resolution (approximately \(10^{-2}\) for a particle resolution of \(N=1000\)). The saturation point of the cumulants corresponds to the solution reaching equilibrium.
|
2309.07999 | The Dependence of Gamma-Ray Burst Jet Collimation on Black Hole Spin | Gamma-Ray Bursts are the most luminous events in the Universe, and are
excellent laboratories to study extreme physical phenomena in the cosmos.
Despite a long trajectory of progress in understanding these highly energetic
events, there are still many observed features that are yet to be fully
explained. Observations of the jet opening angle of long gamma-ray bursts
(LGRBs) suggest that LGRB jets are narrower for those GRBs at higher redshift.
This phenomenon has been explained in the context of collimation by the stellar
envelope, with denser (lower metallicity) stars at higher redshifts able to
collimate the jet more effectively. However, until now, the dependence of jet
opening angle on the properties of the central engine has not been explored. We
investigate the effect of black hole spin on the jet collimation angle for a
magnetically launched jet, using the General Relativistic Radiation
Magnetohydrodynamical (GRRMHD) code \nubhlight. We present 3D results for a
range of spin values. The simulations show that higher spinning black holes
tend to create narrower jets. If indeed LGRB progenitors in the early universe
are able to produce black hole central engines with higher spin, this could
account for at least some of the observed jet opening angle-redshift
correlation. | Valeria U. Hurtado, Nicole M. Lloyd-Ronning, Jonah M. Miller | 2023-09-14T19:22:40Z | http://arxiv.org/abs/2309.07999v2 | # The Dependence of Gamma-Ray Burst Jet Collimation on Black Hole Spin
###### Abstract
Gamma-Ray Bursts are the most luminous events in the Universe, and are excellent laboratories to study extreme physical phenomena in the cosmos. Despite a long trajectory of progress in understanding these highly energetic events, there are still many observed features that are yet to be fully explained. Observations of the jet opening angle of long gamma-ray bursts (LGRBs) suggest that LGRB jets are narrower for those GRBs at higher redshift. This phenomenon has been explained in the context of collimation by the stellar envelope, with denser (lower metallicity) stars at higher redshifts able to collimate the jet more effectively. However, until now, the dependence of jet opening angle on the properties of the central engine has not been explored. We investigate the effect of black hole spin on the jet collimation angle for a magnetically launched jet, using the General Relativistic Radiation Magnetohydrodynamical (GRRMHD) code \(\nu\)bhlight. We present 3D results for a range of spin values. The simulations show that higher spinning black holes tend to create narrower jets. If indeed LGRB progenitors in the early universe are able to produce black hole central engines with higher spin, this could account for at least some of the observed jet opening angle-redshift correlation.
Gamma-Ray Bursts -- Black Holes -- Cosmology
## 1 Introduction
Gamma-Ray Bursts (GRBs) are the most powerful events in the Universe, releasing energies on the order of \(10^{52}\) erg in a typical event (for reviews see, e.g. Piran, 2004; Zhang & Meszaros, 2004; Meszaros, 2006; Gehrels et al., 2009; D'Avanzo, 2015; Levan et al., 2016). They are bursts of gamma-rays that can last from less than a second to several thousands of seconds depending on the progenitor. GRBs lasting 2 seconds or less are thought to come from the merger of two neutron stars; see, e.g. Lee & Ramirez-Ruiz (2007); Berger (2014). If the gamma-ray emission lasts from a few seconds up to hundreds of seconds, the burst is thought to come from a hydrogen-deficient star that collapses into a black hole (BH)1, often termed a collapsar (Woosley & Bloom, 2006; Woosley & Heger, 2012). Our motivation for this work comes from observational results of the latter, so-called long gamma-ray bursts (LGRBs). However, our results apply to any GRB black hole-accretion disk central engine.
Footnote 1: There also exist viable GRB models with a magnetar central engine (Usov, 1992; Duncan & Thompson, 1992; Thompson, 1994; Zhang & Meszaros, 2001) but here we consider only a black hole central engine.
A LGRB begins with a star whose core collapses into a BH. This, along with the remaining gas from the collapse that is gravitationally bound to the newly formed BH and able to form an accretion disk, constitutes the LGRB central engine (MacFadyen et al., 2001; Woosley & Heger, 2006). As general relativistic frame-dragging causes more rapid rotation near the black hole, magnetic fields present in the disk will be wound up along the spin axis of the black hole. As a result, a powerful Poynting flux is generated along this axis and a highly relativistic jet is launched through the so-called Blandford-Znajek (BZ) mechanism (Blandford & Znajek, 1977a; MacDonald & Thorne, 1982). This is an efficient way to form and power a jet, assuming there exists enough angular momentum and magnetic flux in the system. The BZ mechanism does not depend on the mass accretion rate (at least not until the disk reaches a MAD state), but instead, again, on the rotation of the BH and the presence of magnetic flux. For additional detailed discussion of BZ jets in the context of GRBs, see Lee et al. (2000); Lee and Ramirez-Ruiz (2002); Tchekhovskoy and McKinney (2012); Lei et al. (2017); Lloyd-Ronning et al. (2019).
Gamma-Ray Bursts still pose many mysteries - from the nature of their progenitors to the details of the particle acceleration and radiation mechanisms in their relativistic jets - and multiple studies continue to uncover novel behaviors in these energetic events. In one such example, Lloyd-Ronning et al. (2019) found a statistically significant anti-correlation between jet opening angle and redshift (with narrower jets at higher redshifts) in a large sample of LGRBs (we note the presence of this correlation has also been suggested, although not to the same statistical significance, in Lloyd-Ronning et al. 2002; Yonetoku et al. 2005; Lu et al. 2012; Laskar et al. 2014, 2018). Conservatively accounting for the presence of any potential selection effects or observational biases that may produce the correlation (using the well-tested and vetted non-parametric Lynden-Bell (Lynden-Bell, 1971) and Efron-Petrosian (Efron and Petrosian, 1992) methods), they found that jet beaming angle goes to \(\theta_{j}\propto(1+z)^{-0.75\pm 0.25}\).
In a following study, Lloyd-Ronning et al. (2020) provided potential reasons as to why we may see a difference in jet beaming angle over cosmic redshift. They specifically looked into how some of the properties of the progenitor star might affect the jet opening angle. While looking at the luminosity requirements to launch a successful jet, Lloyd-Ronning et al. (2020) found that the anti-correlation between jet opening angle and redshift can be explained by an evolution of the progenitor envelope over cosmic time. At higher redshifts, lower metallicity, denser progenitors can more effectively collimate the jet as it traverses the stellar envelope and cocoon region. This is quantitatively consistent with evolving IMF models reported in the literature (e.g. Dave, 2008).
However, the role of the central engine in producing this correlation has yet to be explored. In particular, the BH-accretion disk dynamics are not yet well understood in relation to the jet geometry. There have been many works, beginning with McKinney and Gammie (2004), exploring the relationship between jet power (with an eye to AGN luminosity) and black hole and disk conditions (Tchekhovskoy and McKinney, 2012; Tchekhovskoy et al., 2012). However, less attention has been paid to opening angle. How this property depends on redshift is an additional aspect necessary to understanding the opening angle-redshift anti-correlation, and LGRBs in general. For instance, a progenitor at higher redshift would potentially lose less angular momentum over the lifetime of the star, due to its lower metallicity compared to progenitors at lower redshifts (assuming angular momentum loss comes from radiation-driven stellar winds, which are stronger at higher metallicity). This could eventually lead to a remnant with higher angular momentum as well. Therefore, a progenitor that collapses at higher redshift with higher angular momentum (compared to one at a lower redshift) may produce a more highly spinning BH with a narrower jet.
We study this possibility using a General Relativistic Radiation Magnetohydrodynamical (GRRMHD) code, \(\nu\)bhlight. With \(\nu\)bhlight we simulate a BH with an accretion disk and a jet in 3 dimensions and let it evolve in time. Our goal is to consider the degree of collimation of a Blandford-Znajek jet (Blandford and Znajek, 1977; MacDonald and Thorne, 1982) as a function of the spin of the black hole. Recently Narayan et al. (2022) performed a detailed examination of jet power and geometry for magnetically arrested disks; this differs from our work here, in that we study non-arrested (SANE) disks.
This letter is organized as follows: In §2, we describe the GRMHD code \(\nu\)bhlight, and the setup we used in our simulations. In §3, we present the results of our 3D simulations for three different spin values of the black hole. We show that, measured in a number of different ways, _more highly spinning black holes tend to produce narrower jets_. In §4, we summarize our findings and discuss both the caveats and the implications of our results in the context of our current understanding of long gamma-ray bursts.
## 2 Numerical Tools
For our simulations we use \(\nu\)bhlight (read as "nublight"). \(\nu\)bhlight is a General Relativistic Radiation Magnetohydrodynamics (GRRMHD) code, built on top of the bhlight(Ryan et al., 2015), grmonty(Dolence et al., 2009), and HARM(Gammie et al., 2003) codes. In this work, we disable the radiation transport and solve only the equations of general relativistic ideal magneto-hydrodynamics (GRMHD). These include conservation of Baryon number:
\[\partial_{t}(\sqrt{-g}\rho_{o}u^{t})\;+\;\partial_{i}(\sqrt{-g}\rho_{o}u^{i}) \;=\;0, \tag{1}\]
conservation of energy and momentum,
\[\partial_{t}[\sqrt{-g}\,(T_{\nu}^{t}+\rho_{o}u^{t}\delta_{\nu}^{t})\,]\;+\;\partial_{i}[\sqrt{-g}\,(\,T_{\nu}^{i}+\rho_{o}u^{i}\delta_{\nu}^{t}\,)\,]=\] \[\sqrt{-g}\,T_{\lambda}^{\kappa}\,\Gamma_{\nu\kappa}^{\lambda}\quad\text{for }\nu=0,1,\ldots,3, \tag{2}\]
and conservation of magnetic flux,
\[\partial_{t}\,(\sqrt{-g}\,B^{i}\,)\,-\,\partial_{j}\,[\sqrt{-g}\,(b^{j}u^{i} -b^{i}u^{j}\,)\,]\,=\,0, \tag{3}\]
given the energy-momentum tensor defined as
\[T_{\nu}^{\mu}\;=\;(\rho_{o}+u+P+b^{2}\,)\,u^{\mu}u_{\nu}\,+\,\delta_{\nu}^{\mu}\,(\,P\,+1/2\,b^{2}\,)\,-\,b^{\mu}\,b_{\nu}, \tag{4}\]
for metric \(g_{\mu\nu}\), rest-mass density \(\rho_{o}\), fluid four velocity \(u^{\mu}\), internal energy density \(u\), pressure \(P\), and Christoffel connection \(\Gamma_{\beta\gamma}^{a}\). In the equations above, we also make use of the magnetic field components \(B^{i}=\) *\(F^{it}\) of the Maxwell tensor *\(F^{\mu\nu}=b^{\mu}u^{\nu}-b^{\nu}u^{\mu}\), and the magnetic field four-vector \(b^{\mu}\). Here and throughout the text we use units of \(G=c=1\) and standard Einstein index notation, where Greek indices range from 0 to 3 for spacetime indices and roman indices range from 1 to 3 for spatial indices only. We also subtract off Equation (1) from the zeroth component of Equation (2) to remove the rest energy from the energy conservation law.
The fluid equations (2) and (4) must be closed by an _equation of state_. In this work we use the standard ideal gas equation of state
\[P=(\Gamma-1)u \tag{5}\]
with \(\Gamma=5/3\) corresponding to an ionized gas. All simulations performed assume a stationary Kerr black hole background (Kerr, 1963). The \(\nu\)bhlight GRMHD treatment includes the industry standard treatments for codes of the HARM family, including the radially logarithmic quasi-spherical grid in horizon penetrating coordinates, as first described in McKinney & Gammie (2004), the WENO reconstruction first described in Tchekhovskoy et al. (2007), the primitive variable recovery scheme described in Mignone & McKinney (2007), and the drift-frame artificial atmosphere treatment described in Ressler et al. (2015).
Figure 1: 3D simulations showing a plot of density for 3 simulation runs, all at time 1100 M\({}_{\rm BH}\) out of 2000 M\({}_{\rm BH}\). On the top is the simulation corresponding to a BH of spin 0.5, in the middle is a BH of spin 0.7, and on the bottom a BH of spin 0.99. All simulations show a frontal view of the black hole - accretion disk - jet (bhadj) system. The blue and purple hues correspond to the least dense material (jet) in the system, while green and yellow hues to the densest (accretion disk).
Figure 2: 3D simulations of the BH - accretion disk - jet system, all at time 1100 M\({}_{\rm BH}\) out of 2000 M\({}_{\rm BH}\). On the top is a BH with spin 0.5 (lower spin fraction), in the middle is a BH of spin 0.7, and on the bottom a BH of spin 0.99 (highest spin fraction). \(\beta>1\) indicates material dominated by gas pressure (accretion disk - in yellow/orange) and \(\beta<1\) indicated material dominated by magnetic field energy (jet - pink/purple).
There is a long history of GRMHD simulations in the community. For a recent discussion of issues and methods, see Porth et al. (2019).
We run three otherwise identical simulations that differ only in the black hole spin, \(a_{bh}=0.5\), 0.7, 0.99 (the maximum corresponds to \(a_{bh}=1\)). We begin each simulation with a torus in hydrostatic equilibrium (Fishbone & Moncrief, 1976) with an inner radius of \(r_{in}=6GM_{BH}/c^{2}\) and a radius of maximum pressure of \(r_{max,P}=12GM_{BH}/c^{2}\).
Because the ideal gas equation of state (EOS) has no fundamental energy scale, the disk-accretion problem becomes scale free, and our problem setup is valid for a large range of black hole masses and for accretion rates roughly corresponding to a maximum rate of \(\sim 1\) in dimensionless units (one \(M_{BH}/(GM_{BH}/c^{3})\)). We thread our initial torus with a single poloidal magnetic field loop such that the ratio of gas to magnetic pressure
\[\beta=\frac{P_{\rm gas}}{P_{B}} \tag{6}\]
is 100 at the radius of maximum pressure, where the magnetic pressure is
\[P_{B}=\frac{B^{2}}{2}. \tag{7}\]
We use a resolution of N\({}_{\rm r}\) x N\({}_{\rm\theta}\) x N\({}_{\rm\phi}\) = 256 x 192 x 128.
Molecular viscosity in the disk is negligible. As the disk evolves, the magnetorotational instability (MRI) (Velikhov, 1959; Balbus & Hawley, 1991) enhances the magnetic field strength and drives turbulence, which in turn provides turbulent viscosity, enabling angular momentum transport and accretion (Shakura & Sunyaev, 1973). This same magnetic field drives a jet through the BZ mechanism (Blandford & Znajek, 1977a). For this flow structure to be trustworthy, the fastest growing mode of the MRI must be captured. We use the MRI quality factor defined in Miller et al. (2019) and find that our quality factor is \(Q_{mri}\approx 10\) for the lifetime of all simulations. We run our simulations for a duration of \(t=10^{4}\,GM_{\rm BH}/c^{3}\), long enough for the accretion flow to reach a quasi-stationary state.
## 3 Results
Figure 1 shows the density profiles of the black hole-accretion disk-jet system for our three different 3D simulation runs. On the top panel is the simulation corresponding to a BH of spin 0.5, the middle to a BH of spin 0.7, and on the bottom a BH of spin 0.99. The left column shows poloidal slices, while the right shows equatorial.
While jet structure is more challenging to discern in plots of density, the accretion disks in these simulations clearly differ. For the BH with 0.99 spin (bottom of Figure 1), the accretion disk appears to be much larger in size (about twice the size) than in the simulation with 0.5 spin, for instance. The structure of the accretion disk can strongly affect jet opening angle, as previously mentioned. A larger, puffier accretion disk may provide further collimation of the jet, since the jet will have to spend more time traveling through the accretion disk material. This could lead to a narrower jet opening angle. We note that, importantly, the initial conditions of the disk depend on the black hole spin (Fishbone & Moncrief, 1976). For example, for a Keplerian velocity profile in the disk, higher initial black hole spins will force a slightly thicker accretion disk in the Fishbone-Moncrief setup we employ (see, e.g., their Figure 2). We discuss this further in §4.
To obtain a better view of the jet structure, we calculated \(\beta\) for all three simulations in Figure 2, where \(\beta>1\) indicates material dominated by gas pressure (shown in yellow/orange colors representing the gas in the accretion disk) and \(\beta<1\) indicates material dominated by magnetic field energy (shown in pink/purple hues representing the jet). Thus, the accretion disk would be the area dominated by gas pressure, and the jet would be the material dominated by magnetic field pressure. The top of figure 2 shows the BH of 0.5 spin, the middle a BH of spin fraction of 0.7, and the bottom panel shows the BH of spin 0.99. From a first glance, one can see that the jet for the BH of spin 0.99 seems to be much narrower than for the BH of spin 0.5. Qualitatively, the higher-spin calculation shows a more collimated jet than the lower spin runs.
We also examine the magnetization parameter
\[\sigma=B^{2}/(2\rho c^{2}), \tag{8}\]
which is the ratio of the magnetic energy density to the rest mass energy density. As opposed to \(\beta_{\rm plasma}\), this ratio better represents the jet's structure and edge due to the system no longer being Maxwellian. We plot isocontours of \(\sigma=1\) in a poloidal slice for all three runs at four different times in Figure 3. Each panel is a snapshot in time of \(\sigma\) for all 3 of the BH spins. The blue curve corresponds to the BH of spin 0.5, the yellow curve corresponds to the BH of spin 0.7, and the purple line to the BH of spin 0.99. The panels correspond to sequential times in the simulation: the top-left panel is at the earliest time, the top-right at a slightly later time, and so on. As with Figure 2, more highly spinning black holes have more collimated jets.
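As an aside on post-processing, the two diagnostics used here reduce to simple pointwise operations on the simulation output; a hypothetical Python sketch (array names are placeholders, not \(\nu\)bhlight's actual dump format) is:

```python
import numpy as np

def beta_and_sigma(rho, P_gas, bsq):
    """Plasma beta (6) and magnetization sigma (8) in G = c = 1 units.

    rho   : rest-mass density
    P_gas : gas pressure
    bsq   : b_mu b^mu, i.e. twice the magnetic pressure (7)
    """
    beta = P_gas / (0.5 * bsq)      # > 1 : gas-pressure dominated (disk)
    sigma = bsq / (2.0 * rho)       # > 1 : magnetically dominated (jet)
    return beta, sigma

# Example: select the jet region of a snapshot as sigma > 1.
# beta, sigma = beta_and_sigma(rho, P_gas, bsq); jet_mask = sigma > 1.0
```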
To fully quantify this trend, we calculate the average disk angle \(\theta\) weighted by \(\sigma\) (\(\overline{\theta}_{\sigma}\)),
\[\overline{\theta}_{\sigma}=\frac{\int\sqrt{g}\;\theta\,\sigma\;dx^{1}\,dx^{2}\,dx^{3}}{\int\sqrt{g}\;\sigma\;dx^{1}\,dx^{2}\,dx^{3}}, \tag{9}\]
where \(\sqrt{g}\) is the square root of the determinant of the metric, \(\theta\) is the angle measured from the equator of the simulation, and \(dx^{1}\), \(dx^{2}\), \(dx^{3}\) the measure. This acts as a proxy for jet opening angle: a larger \(\overline{\theta}_{\sigma}\) means a narrower jet beaming angle. The result of this calculation is shown in Figure 4. This plot shows the values of \(\overline{\theta}_{\sigma}\) as a function of spin. For the BH of spin 0.5 we obtained a value of \(\overline{\theta}_{\sigma}=1.23\), for the BH of spin 0.7: \(\overline{\theta}_{\sigma}=1.28\), and for the BH of spin 0.99: \(\overline{\theta}_{\sigma}=1.35\). While the difference between jets may seem small, it follows the general trend seen across all of the 3D simulations: there is a difference in jet collimation as a function of spin. More specifically, more highly spinning BHs lead to narrower jet opening angles.
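A discrete version of equation (9) is straightforward to evaluate on the simulation grid; the following Python sketch (one possible discretisation, folding the two hemispheres together with \(|\theta|\); array names are placeholders) illustrates the calculation:

```python
import numpy as np

def sigma_weighted_angle(sqrt_g, theta, sigma, dx1, dx2, dx3):
    """Sigma-weighted mean angle from the equator, a discrete form of eq. (9).

    sqrt_g : metric volume factor on the grid, shape (Nr, Ntheta, Nphi)
    theta  : polar angle measured from the equator, same shape
    sigma  : magnetization, same shape
    A larger value means magnetic energy is concentrated near the poles,
    i.e. a narrower jet.
    """
    w = sqrt_g * sigma * dx1 * dx2 * dx3
    return np.sum(np.abs(theta) * w) / np.sum(w)
```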
## 4 Discussion & Conclusions
Using the GRMHD code \(\nu\)bhlight we have investigated the dependence of the jet opening angle on the black hole spin. Our primary result is that more rapidly spinning black holes create narrower jets. We can see this by eye from the snapshots of our 3D numerical solutions (Figures 1 and 2), and it is borne out by our more detailed analysis of the contours of both plasma \(\beta\) and \(\sigma\) (Figures 3 and 4). We see this pattern track across simulations and spins. We emphasize again the scale-free nature of our simulations; as such, these results should apply to a wide range of black hole-disk systems including those relevant to short GRBs and even supermassive black hole-disk systems.
As noted in §3, it is important to consider that our initial disk setup based on Fishbone & Moncrief (1976)
Figure 3: Plots of 3D simulation sigma (\(\sigma\)) analysis in different times of the simulation. Each plot is a snapshot in time of \(\sigma\) for all 3 of the BH spins. The blue curve corresponds to the BH of spin 0.5, the yellow curve corresponds to the BH of spin 0.7, and the purple line to the BH of spin 0.99. The top left plot is at the earliest time, and the bottom right is at a later time in the simulation. This plot indicates the trend that a more highly spinning BH leads to a narrower jet opening angle and vice versa.
depends on black hole spin in a nuanced way, with slightly thicker initial disks for more highly spinning black holes. This may contribute to the "narrowness" of the jet at later times in our simulation. That said, the set-up - a torus in hydrostatic equilibrium - is a solution to the relativistic Euler equations and therefore provides a physical framework for this result in some sense. That is, an initially thicker disk may lead to a narrower jet, but this is physically motivated and a reflection of the balance between magnetic, gravitational and gas pressures (not only initially but as the system evolves). Additionally, consider that a more rapidly rotating black hole will have a horizon radius (and innermost stable circular orbit) that is closer to the black hole. This allows magnetic flux to be brought into a narrower region near the black hole (i.e. a smaller polar angle) and may also contribute to the qualitative physical explanation for more rapidly spinning black holes producing more narrow jets.
Nonetheless, these results are broadly consistent with the analysis for magnetically arrested disks in Narayan et al. (2022), which help further connect the GRB central engine and jet properties, and may help us better understand the underlying progenitor(s) of GRBs. The original motivation for this study was an attempt to understand the observation that GRB jet opening angles are narrower at higher redshifts. If we ignore other effects that may lead to this correlation (such as the stellar envelope collimation described above and in Lloyd-Ronning et al. (2020)), our results suggest that GRB central engines are more rapidly rotating in the early universe relative to low-redshift GRBs. Qualitatively this is consistent with our current view of stellar evolution: massive stars in the early universe are expected to have lower metallicity and therefore less radiation-driven winds. As a result, they experience less angular momentum loss, which may lead to a more rapidly rotating central engine upon collapse, and therefore a more collimated jet. This explanation, of course, ignores the many intricacies of angular momentum transport/loss as the star collapses - it is still unclear to what extent we can connect the angular momentum of the progenitor at the end of its life with that of the black hole-disk system. However, these results provide a promising avenue to further explore the central engine's imprint on the jet physics.
## 5 Acknowledgments
J.M.M. would like to thank J. Dolence, B. Ryan, P. Mullen for many helpful discussions, and the Institute for Nuclear Theory at the University of Washington for its kind hospitality and stimulating research environment. N.M.L-R. would like to thank Roseanne Cheng, Gibwa Musoke, and Sera Markoff for elucidating conversations. V.U.H would like to thank Kelly Holley-Bockelmann, Dina Stroud, Lauren Campbell, and the Fisk-Vanderbilt Master's to PhD Bridge Program for their mentorship over the years. This work was supported through the Laboratory Directed Research and Development program under project number 20220564ECR at Los Alamos National Laboratory (LANL). LANL is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No.
Figure 4: Plot of the average sigma weighted angle as a function of spin \(\overline{\theta}_{\sigma}\). The angle \(\theta\) is measured from the equator to the ‘edge’ of the jet. The blue dot corresponds to the value of the 0.5 BH spin, the orange to the 0.7 BH spin, and green to the 0.99 BH spin. The pattern shown in this plot shows a wider jet opening angle for a more slowly spinning BH, and a narrower jet opening angle for the more rapidly spinning BH.
89233218CNA000001). This research used resources provided by the Los Alamos National Laboratory Institutional Computing Program. This research was supported in part by the INT's U.S. Department of Energy grant No. DE-FG02- 00ER41132, and by the U.S. Department of Energy, Office of Science, Office of Workforce Development for Teachers and Scientists (WDTS) under the Science Undergraduate Laboratory Internships Program (SULI). This work is approved for unlimited release with LA-UR-23-30405.
|